University-Related Research Parks

A university-related research park is a cluster of technology-based organizations (consisting primarily of private-sector research companies but also of selected federal and state research agencies and not-for-profit research foundations) that locate on or near a university campus in order to benefit from its knowledge base and research activities. A university is motivated to develop a research park by the possibility of financial gain associated with technology transfer, the opportunity to have faculty and students interact at the applied level with research organizations, and a desire to contribute to regional economic growth. Research organizations are motivated by the opportunity for access to eminent faculty and their students and university research equipment, as well as the possibility of fostering research synergies.

Research parks are an important infrastructure element of our national innovation system, yet there is no complete inventory of these parks, much less an analysis of their success. The following figures and tables, derived from research funded by the National Science Foundation, provide an initial look at the population of university-related research parks and factors associated with park growth.

Park creation

The oldest parks are Stanford Research Park (Stanford University in California, 1951) and Cornell Business and Technology Park (Cornell University in New York, 1952). Even though by the 1970s there was general acceptance of the concept of a park benefiting both research organizations and universities, park creation slowed at this time because a number of park ventures failed and an uncertain economic climate led to a decline in total R&D activity. The founding of new parks increased in the 1980s in response to public policy initiatives that encouraged additional private R&D investment and more aggressive university technology transfer activities. Economic expansion in the 1990s spurred another wave of new parks.


Wide distribution

States with the most university research activity have the largest number of parks, but this has not been a simple cause-and-effect relationship. State and university leadership has historically been a critical motivating factor for developing parks.


Key characteristics

Most parks are related to a single university and are located within a few miles of campus, but are not owned or operated by the university. About one-half of the parks were initially established with public funds. As parks have grown, the technologies represented at parks have expanded, and incubator facilities have been established. Park size varies considerably. Research Triangle Park (Duke University, North Carolina State University, and the University of North Carolina; 1959) has 37,000 employees on a 6,800-acre site. Research and Development Park (Florida Atlantic University, 1985) has 50 employees on a 52-acre site.

Selected Characteristics of University-Related Research Parks

Percentage of parks formally affiliated with multiple universities: 6%
Percentage of parks owned and operated by a university: 35.4%
Percentage of parks on or adjacent to a university campus: 24.6%
Distance (miles) from a park to a university campus: mean 5.7; range 0 to 26
Percentage of parks located in distressed urban areas or abandoned public-sector areas: 11%
Percentage of parks initially funded with public money: 50.4%
Percentage of parks with a single dominant technology: 37.7%
Distribution of dominant technologies among parks with a dominant technology:
    Bioscience: 48.5%
    Information technology: 42.4%
    All other technologies: 9.1%
Percentage of parks with an incubator facility: 62.3%
Park size: mean 2,740 employees (range 30 to 37,000); mean 552 acres (range 6 to 6,800)

Growth factors

Many park directors associate park employment growth with park success, and the following table compares the growth rates of parks having certain characteristics with the average rate for all parks. Parks with a single dominant technology, located very close to campus, and managed by private-sector organizations are the fastest-growing parks. The fastest-growing newer park is the University of Arizona Science and Technology Park (1995), which has been adding an average of more than 1,100 employees per year. The fastest-growing of the older parks is Research Triangle Park, which has been adding an average of almost 950 employees per year since its founding in 1959.

Park Characteristics that Affect Annual Park Growth, Measured in Terms of Park Employees, Since Date of Establishment, Averaged over the Population of University-Related Research Parks

Annual rate of park growth, averaged over the population of university-related research parks: 13.0% per year
Parks with a single dominant technology: grow 3.2% faster than the average, per year
Off-campus parks (evaluated at the mean distance from the university): grow 3.7% slower than the average, per year
Parks that are university-owned and -operated: grow 6.7% slower than the average, per year
An incubator facility: has no effect on park growth
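
For readers who want to see how these differentials combine, the short Python sketch below applies them to the reported 13.0 percent average. Treating the differentials as simple additive adjustments is our own simplification for illustration, not the estimation method used in the underlying study.

```python
# Illustrative only: combine the average growth rate with the reported
# characteristic-specific differentials, treated here as simple additive
# adjustments. The additive treatment is an assumption made for illustration.

BASELINE = 13.0  # average annual employment growth, percent per year

ADJUSTMENTS = {
    "single dominant technology": +3.2,
    "off-campus (at mean distance)": -3.7,
    "university-owned and -operated": -6.7,
    "incubator facility": 0.0,  # reported as having no effect
}

def estimated_growth(characteristics):
    """Return an illustrative annual growth rate, in percent per year."""
    return BASELINE + sum(ADJUSTMENTS[c] for c in characteristics)

# A privately managed park focused on a single dominant technology:
print(f"{estimated_growth(['single dominant technology']):.1f}% per year")  # 16.2
# A university-owned park at the mean off-campus distance:
print(f"{estimated_growth(['off-campus (at mean distance)', 'university-owned and -operated']):.1f}% per year")  # 2.6
```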

A House with No Foundation

Many of the forensic techniques used in courtroom proceedings, such as hair analysis, fingerprinting, the polygraph, and ballistics, rest on a foundation of very weak science, and virtually no rigorous research to strengthen this foundation is being done. Instead, we have a growing body of unreliable research funded by law enforcement agencies with a strong interest in promoting the validity of these techniques. This forensic “science” differs significantly from what most of us consider science to be.

In the normal practice of science, it is hoped that professional acculturation reduces worries about the hopes, expectations, and biases of individual researchers to a functional minimum. To this degree, science is based on trust, albeit a trust that is defensible as reasonably warranted in most contexts. Nothing undermines the conditions supporting this normal trust like partisanship. This is not to say that partisanship does not exist in some form in most or all of the practice of science by humans, even if it is limited to overvaluing the importance of one’s own research agenda in the grand scheme of things. But science is a group activity whose individual outputs are the product of human hopes and expectations operating within a social system that has evolved to emphasize the testing of ideas and aspirations against an assumed substrate of objective external empirical fact.

The demands of the culture of science—ranging from the mental discipline and methodological requirements that constitute an important part of the scientific method to the various processes by which scientific work is reviewed, critiqued, and replicated (or not)—tend to keep human motivation-produced threats to validity within acceptable bounds in individuals; and the broad group nature of science ensures that, through the bias cancellation that results from multiple evaluation, something like progress can emerge in the long run. However, in contexts where partisanship is elevated and work is insulated from the normal systems of the science culture for checking and canceling bias, the reasons to trust on which science depends are undermined. Nowhere is this more likely to be a serious problem than in a litigation-driven research setting, because virtually no human activity short of armed conflict or dogmatic religious controversy is more partisan than litigation. In litigation-driven situations, few participating experts can resist the urge to help their side win, even at the expense of the usual norms of scientific practice. Consider something as simple as communication between researchers who are on different sides of litigation. Although there is no formal legal reason for it, many such researchers cease communicating about their differences except through and in consultation with counsel. What could be more unnatural for normal researchers? And what purpose does such behavior serve other than to ensure that scientific differences are not resolved but exacerbated?

These concerns apply not only to research undertaken for use in a particular case, but also to research undertaken for use in unspecified cases to come, as long as the litigation interest of the sponsoring party is sufficiently clear. This is what differentiates litigation-driven research from much other interest-driven research. For instance, in research directed toward Food and Drug Administration (FDA) approval of drugs, drug companies are interested not only in positive findings but also in the discovery of dangers that might require costly compensation in the future or cause their drug to be less competitive in the marketplace. In other words, built-in incentives exist to find correct answers. In addition, the research will have to be conducted according to protocols set by the FDA, and it will be reviewed by a community of regulators who are technically competent and, at least in theory, appropriately skeptical. By contrast, in much litigation-driven research, there is a single unambiguous desired result, and the research findings will be presented to a reviewing community (judges and juries) that typically is not scientifically literate. These circumstances are more like the ground conditions relating to industry-sponsored research in regard to food supplements and tobacco, two areas of notoriously problematic claims.

Our attention is focused on an area that does not appear to figure prominently in most examinations of the problems of litigation-driven research: law enforcement-sponsored research relevant to the reliability of expert evidence in criminal cases, evidence that virtually always is proffered on behalf of the government’s case. Of primary concern is research directly focused on the error rates of various currently accepted forensic identification processes, which have not been subject to any formal validity testing.

Illusion of infallibility

Many forces combine to raise special concerns in such areas. From the perspective of prosecution and law enforcement, any such research can result only in a net loss because in these areas there has been a carefully fostered public perception of near-infallibility. Research revealing almost any error rate under common real-world conditions undermines the aura. In addition, data that can show deficiencies in individual practitioners threaten that individual’s continued usefulness as an effective witness. The combined effects of these two kinds of findings can potentially result in increased numbers of acquittals in cases where other evidence of a defendant’s guilt is weak. Valid or not, however, such testimony is extremely useful to a prosecutor who is personally convinced of the guilt of the defendant (which, given the partisan nature of the litigation process, is virtually every prosecutor) and is willing to use whatever the law allows in an effort to convince the jury of the same thing. Consequently, research results calling into question the validity of such expertise, or defining its error rates, are threatening because they undermine a powerful tool for obtaining convictions and also threaten the status and livelihood of the law enforcement team members who practice the putative expertise.

It is not surprising, therefore, to discover that until recently such research was rare, especially in regard to forensic science claims that predated the application of the Frye test (requiring that the bases of novel scientific evidence be generally accepted in some relevant scientific community before it can be admitted into evidence). Such evidence had never been considered “novel” and therefore had never been confronted with any validity inquiry in any court. Even in regard to expert evidence that had been reviewed as novel, the review often consisted of little more than making sure that there was at least some loosely defined “scientific” community that would vouch for the accuracy of the claimed process.

The winds of change began to blow with the Supreme Court’s Daubert decision, of course, although it was several years before the first significant Daubert challenge to prosecution-proffered expertise was heard, and there is still reason to believe that substantial resistance exists among the judiciary to applying Daubert and its important descendant Kumho Tire to prosecution-proffered expertise as rigorously as they have been applied to the expert proffers of civil plaintiffs. Nevertheless, there have been some successful challenges, most notably in regard to handwriting identification expertise; and the potential for challenges in other areas has made law enforcement, particularly the Federal Bureau of Investigation (FBI), seek research that could be used to resist such challenges.

After a century of oversold and under-researched claims, suddenly there is interest in doing research. However, certain aspects of that research give reason to believe that it must be received with caution. Various strategies appear to have been adopted to ensure that positive results will be exaggerated and negative results will be glossed over, if not withheld. These include the following: placing some propositions beyond the reach of empirical research, using research designs that cannot generate clear data on individual practitioner competence, manufacturing favorable test results, refusing to share data with researchers wishing to conduct reanalyses or further analyze the data, encouraging overstated interpretations of data in published research reports, making access to case data in FBI files contingent on accepting a member of the FBI as a coauthor, and burying unfavorable results in reports where they are least likely to be noticed—coupled with an unexplained disclaimer that the data cannot be used to infer the false positive error rate that they plainly reveal.

The clearest example of the first strategy is the claim of fingerprint examiners that their technique has a “methodological error rate” of zero and that any errors that occur are therefore lapses on the part of individual examiners. Because the technique can never be performed except through the subjective judgment of human fingerprint examiners, it is impossible to test the claimed division of responsibility for error empirically. The claim is thereby rendered unfalsifiable.

No human activity short of armed conflict or dogmatic religious controversy is more partisan than litigation.

To see the second strategy at work, one need only examine the FBI-sponsored studies of the performance of handwriting identification examiners. These studies, led by engineer Moshe Kam, were supposed to compare the performance of ordinary persons and of document examiners in the identification of handwriting by giving them the task of comparing samples of handwriting. Instead of designing a test that would do this directly by, for example, giving all test takers a common set of problems with varied difficulty, Kam et al. adopted a roundabout design that randomly generated sorting tasks out of a large stockpile of known handwriting. Consequently, each individual test taken by each individual participant, expert or nonexpert, differed from every other test. In some, hard tasks may have predominated and in others trivial tasks. This meant that, given a large enough number of such tests administered to both the expert and the lay group, one might infer that the aggregate difficulty of the set of tests taken by each group was likely to be similar, but evaluation of the performance of any individual or subset of individuals was undermined. This unusual test design is inferior to more straightforward designs for most research purposes, but it is superior in one respect: It makes it impossible to identify individual scores and thus to expose unreliable examiners. Thus, any such people remained useful prosecution expert witnesses. This contrasts with research led by Australians Bryan Found and Doug Rogers, which posed similar research questions but was designed in a way that allowed them to discover that a considerable range of skill existed among professional examiners, some of whom were consistently more inaccurate than others.

The third strategy reflects the notion that, left to their own devices, those having a point to make using data rather than a genuine question to ask of data are tempted to design studies that produce seemingly favorable results but that actually are often meaningless and misleading. In one study of fingerprint identification, conducted during the pretrial phase of the first case in generations to raise serious questions about the fundamental claims of fingerprint identification experts, the FBI sent sample prints to crime laboratories around the country. The hope was to show that all labs reached the same conclusion on the same set of prints. When half a dozen labs did not reach the “correct” decisions, those labs, and only those labs, were sent annotated blow-ups of the prints and were asked to reconsider their original opinions. Those labs got the message and changed their minds. This supposedly proved that fingerprint examiners are unanimous in their judgments. A second study was designed to prove the assumption that fingerprints are unique. This study compared 50,000 fingerprint images to each other and then calculated the probability that two prints selected at random would appear indistinguishably alike. In a comment on the study written for a statistical journal, David H. Kaye explains the errors in the study’s design and analysis, which led to a considerable overstatement of the conclusions that its data can support. Kaye attributes the problems in the research to its being “an unpublished statistical study prepared specifically for litigation.” He concludes by suggesting that the study provides “a lesson about probabilities generated for use in litigation: If such a probability seems too good to be true, it probably is.”
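
The kind of pitfall Kaye warns about is easy to appreciate in general terms. The Python sketch below is not a reconstruction of the FBI study or of Kaye's analysis; it simply shows, for an assumed and purely hypothetical per-pair probability, how many pairwise comparisons lurk inside a 50,000-print database and why database-wide coincidences can be far more likely than the per-pair number suggests.

```python
# Illustrative only: the per-pair probability below is a hypothetical value,
# not a figure from the FBI study or from Kaye's commentary. The point is the
# sheer number of pairwise comparisons in a 50,000-print database.

import math

n_prints = 50_000
n_pairs = n_prints * (n_prints - 1) // 2   # about 1.25 billion distinct pairs

p_pair = 1e-9  # assumed chance that one random pair looks indistinguishable

expected_coincidences = n_pairs * p_pair
p_at_least_one = 1 - math.exp(-expected_coincidences)  # Poisson approximation

print(f"distinct pairs:              {n_pairs:,}")                  # 1,249,975,000
print(f"expected coincidental pairs: {expected_coincidences:.2f}")  # ~1.25
print(f"P(at least one coincidence): {p_at_least_one:.2f}")         # ~0.71

# Even a one-in-a-billion per-pair probability implies roughly one expected
# coincidence somewhere in the database, which is why probabilities computed
# from database comparisons need careful interpretation before being offered
# as proof of uniqueness.
```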

The Kam handwriting studies also reflect the reluctance of many forensics researchers to share data, an obvious departure from standard scientific practice. Kam et al. have generated four data sets on government grants: three from the FBI and one from the Department of the Army. Repeated requests for the raw data from those studies, for purposes of further analysis, have been denied, despite the fact that the youngest of the data sets is now more than three years old and hence well beyond the usual two-year presumptive period of exclusive use. Moreover, there is serious criticism of the application of even this time-bound model of exclusive use of data relevant to public policy, especially when generated through government grants. Had the research been sponsored by almost any other federal agency, data sharing would have been required.

Until fundamental changes occur in forensic science research, courts would be well advised to regard the findings with a large grain of salt.

As for encouraging overstatement to produce sound bites useful for litigation, Kam’s first (nonpilot) study offers an example: It claims that it by itself “laid to rest . . . the debate over whether professional document examiners possess a skill that is absent in the general population.” (It didn’t.) Or consider Sargur Srihari and colleagues’ claim that their computer examination of a database of about 1,500 handwriting exemplars established the validity of the claim of document examiners that each and every person’s handwriting is unique. If one tracks Srihari et al.’s reports of the research from early drafts to final publication, the claims for uniqueness grow stronger, not more tempered, which is not the typical progression of drafts in scientific publishing. Finally, Srihari’s claims became the subject of a substantial publicity campaign on behalf of this first study to “prove” the uniqueness claim on which handwriting identification expertise had stood for a century. All this despite the simple fact that in the study itself, not all writings were found to be distinguishable from each other.

The FBI has apparently had a policy requiring coauthorship with an FBI employee as a condition of access to data derived from their files (at least by researchers not considered committed friends of the FBI). This policy has been in place at least since the early 1990s, when William Thompson of the University of California at Irvine and a coauthor were denied access to DNA case data unless they accepted such a condition. This practice undermines the normal process of multiple studies driven by multiple research interests and perspectives.

An example of the likely effects of such a “friends-only” regime may be seen in a recent study by Max Houck and Bruce Budowle in the Journal of Forensic Sciences. (Houck is a former examiner for the FBI laboratory, who recently joined the faculty of West Virginia University; and Budowle is still with the FBI.) That study dealt with an analysis of 170 hair comparisons done at the FBI laboratory between 1996 and 2000. In each case, a questioned hair sample from a real case had been compared microscopically to a hair sample from a known human source to determine whether they were sufficiently similar that they might have come from the same person. Subsequently, the same samples were subjected to mitochondrial DNA comparison. The authors stated that the purpose of the study was to “use mtDNA results to assess the performance of microscopic analysis.” Perhaps the most central question in such a study concerns how often a questioned hair actually comes from the known source when the human examiner declares that they are “associated”; that is, consistent in their characteristics. Of the 80 hairs in the set that had been declared associated, nine (11 percent) were found by mtDNA analysis to be nonmatches. However, this result was buried in a single paragraph in the middle of the paper, followed by this statement: “These nine mtDNA exclusions should not be construed as a false positive rate for the microscopic method or a false exclusion rate for mtDNA typing: it (sic) displays the limits of the comparison of the hairs examined in this sample only and not for any hairs examined by any particular examiner in any one case.” In making this statement, the authors equate the epistemic value of the results of subjective human evaluation and the results of mtDNA analysis on the question of common origin. In other words, all techniques are equal and no study should have any bearing on our evaluation of future cases in court.
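
To give a sense of the statistical weight of that buried figure, the short calculation below treats the 9 exclusions among 80 “associated” hairs as a binomial sample and computes an approximate 95 percent confidence interval (Wilson score method). The choice of method is ours, offered only as a rough illustration; the 9-of-80 result itself is the study’s.

```python
# Rough illustration: treat the 9 mtDNA exclusions among 80 microscopically
# "associated" hairs as a binomial sample and compute an approximate 95%
# Wilson score interval for the underlying exclusion rate.

import math

def wilson_interval(successes, n, z=1.96):
    """Approximate 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

exclusions, associated = 9, 80
low, high = wilson_interval(exclusions, associated)

print(f"observed exclusion rate: {exclusions / associated:.1%}")  # 11.2%
print(f"approximate 95% CI:      {low:.1%} to {high:.1%}")        # about 6% to 20%
```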

What next?

Thus, we have seen favorable findings declared to “end all debate” on a question, whereas unfavorable findings are declared to have “no implications” beyond the pages of the study reporting them. Both of these extremes are seen infrequently in contexts other than research done with one eye on upcoming litigation.

We make no claim that the above examples are the result of any systematic review of the literature. They are merely instances we encountered as we labored down in our little corner of the forensic science mine, where we have for years examined reliability issues in regard to various forensic identification claims. However, enough canaries have died in our corner of the mine to suggest that such law enforcement-sponsored research should be approached with caution.

What does that suggest for the future? First, it suggests that the circumstances in the criminal justice system that tend to distort such research deserve attention as part of any larger inquiry into the problems of litigation-driven research. Second, it suggests that any efforts that bring more independent researchers working under more independent conditions into the forensic knowledge-testing process should be encouraged. As to the judicial consumers of such research, it is unlikely that, in an adversarial system, anything official can or will be done about the phenomenon, especially when the research enters the legal process during pretrial hearings, where the usual rules of evidence are themselves inapplicable. And thus until fundamental changes occur in the research environment that creates litigation-directed forensic science research, courts would be well advised to regard the findings of such research with a large grain of salt.

U.S. Oil Dependence Remains a Problem

War and terrorism have changed a lot about how we think about oil markets. But one thing they haven’t changed during the past 14 years is the fact that excessive dependence on oil in our domestic energy mix exposes us to potentially serious economic and security risks. And they have not changed the importance of taking action to cut oil consumption in the U.S. economy.

In 1989, I argued that the Organization of Petroleum Exporting Countries (OPEC) would be able to control energy prices by the mid-1990s unless policy actions were taken to prevent it. Price control would enable OPEC to extract rents–charge a lot more for its oil than it cost to produce–that would be a drag on the economies of oil consumers. Moreover, since the bulk of OPEC production is in the Middle East, the famous political instability of that region would expose consumers to price shocks. Those risks did not depend on how much OPEC oil a consumer imported; world oil markets would ensure that all consumer nations experienced the same high and potentially volatile prices.

The main lesson of the intervening years is that cutting demand is the essential policy.

That is pretty much what happened. As oil markets tightened during the 1990s economic boom, a reasonably well-disciplined OPEC became the world’s swing producer, or market regulator. As such, OPEC, and especially Saudi Arabia, could adjust production to exert a strong influence on world prices. Not surprisingly, two wars in the Persian Gulf and terrorism at home produced the expected price excursions, which happily did not last too long.

OPEC’s position is unlikely to change anytime soon. Growth in oil consumption will resume as the world economy recovers. Developing nations will stake an increasing claim on oil; China, for example, has swung from a small oil exporter in 1989 to an importer of almost 2 million barrels per day. On the supply side, some growth in oil production outside OPEC is likely, but the Middle East retains the dominant share of oil reserves on which future production will be based. For years to come, OPEC seems likely to remain the global swing producer. The key question is how OPEC will go about setting prices.

For the past few years, OPEC pricing policy has been fairly benign, successfully holding the per-barrel price in the $22 to $28 range. That price is intended to keep OPEC’s revenue high without creating so much competition from new sources as to erode its market control. The oil revenues enrich governing powers and families, while making life tolerable for the general population. Luxury for the few and calm for the many has been more than enough reason to keep the oil flowing during the procession of political crises that have beset the region. It’s why turmoil in oil markets has been relatively short-lived.

Currently, however, OPEC’s comfortable position has become precarious. Control of the world’s second-largest oil reserve–Iraq’s–is up for grabs. Terrorists seem eager to destabilize governments in the Middle East as well as undermine U.S. influence there. It’s too soon to predict how the political situation will play out, but the range of possibilities is broad. One current view is that the fall of the regime of Saddam Hussein is but the first step toward more democratic, market-driven societies throughout the region. Another is that the more radical elements will seize control and use oil as a weapon to redress grievances.

How will the political resolution affect oil markets? Clearly, the latter outcome would only increase the risks posed by excessive dependence on oil. Unfortunately, it’s not clear that the happier outcome would do much to reduce those risks. A more stable and globally integrated Middle East has much to recommend it, of course. Democracies presumably don’t start oil wars any faster than they start other kinds, and more open markets would encourage the investments needed to keep OPEC oil production growing. Yet even the most democratic of swing producer governments is unlikely to volunteer to sell its oil for less than the markets will bear; at least, none does so today. So it’s not unreasonable to expect that even the best political outcome would leave oil consumers paying a high price for increasing use of OPEC oil. Nor is it unreasonable to expect a less-than-optimal outcome in the troubled Middle East.

All of the above suggest that we face much the same problem we did 14 years ago: how to reduce our oil dependence. The main lesson of the intervening years is that cutting demand is the essential policy. Since 1989, non-OPEC production has been reasonably constant, which is no small success. Looking ahead, developments in the former Soviet Union and in Africa suggest that there is good potential for holding production steady. But aside from the collapse in demand that accompanied the breakup of the Soviet bloc, world oil consumption has grown at more than 2 percent annually since 1989.
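
A back-of-the-envelope calculation shows what a growth rate of just over 2 percent means when compounded over those 14 years; the sketch below simply compounds the rate, and is arithmetic rather than a forecast.

```python
# Simple compounding: world oil demand growing at roughly 2 percent per year,
# with non-OPEC supply held roughly flat, over the 14 years since 1989.
# The 2 percent rate comes from the text; the rest is arithmetic.

growth_rate = 0.02
years = 14  # 1989 to 2003

multiplier = (1 + growth_rate) ** years
print(f"demand multiplier after {years} years: {multiplier:.2f}")  # about 1.32

# Roughly a one-third increase in demand, met almost entirely by OPEC if
# non-OPEC production merely holds steady.
```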

Regrettably, the recommendations I offered for reducing oil demand in 1989 look a lot like the policy agenda I would recommend today. Even in those days, imposing a gasoline tax and creating an alternative motor fuel to depress and ultimately replace oil consumption were not new ideas. Despite the senior-citizen status of those proposals, their implementation is more urgent today than ever before.

The Bumpy Road to Reduced Carbon Emissions

A dozen years ago, the debate over controlling emissions of greenhouse gases was just beginning. Several European countries were calling for either a freeze or a 20 percent cut in emissions by the developed world by 2000. In the United States, Congress asked its now-defunct Office of Technology Assessment (OTA) to evaluate the potential for reductions in this country. Its report, which we helped develop, outlined both the technologies and the mix of regulatory and market-based federal policies that would be necessary to significantly lower greenhouse gas emissions over the next few decades.

Today, even as global temperatures continue to increase, the debate at the federal level over greenhouse gases still goes on. This lag is perhaps not surprising, given that major pieces of environmental legislation have almost always required at least a decade to enact. However, much has changed in other areas. Scientific understanding of climate change has improved markedly, policy has advanced at the state level and internationally, and the portfolio of climate-friendly energy technologies that will be needed to maintain modern society has expanded. Unfortunately, one crucial aspect of public policy–federal spending on energy technology R&D–is actually worse off than it was during the early years of the debate.

Given the depth of cuts necessary, a wide variety of energy technology changes will likely be required. Yet the federal government has not acted accordingly.

In 1990, the scientists of the Intergovernmental Panel on Climate Change, which had been synthesizing and interpreting the issue for policymakers worldwide, concluded: “The observed increase [in temperatures] could be largely due to natural variability; alternatively this variability and other man-made factors could have offset a still larger man-made greenhouse warming.” By the panel’s third assessment, in 2001, the conclusion read: “There is new and stronger evidence that most of the warming observed over the last 50 years is attributable to human activities.” Scientists are now able to compare global temperature records during the past 1,000 years with much-improved model-based projections for the next 100 years. What emerges from such comparisons is that observed temperature changes due to human activities to date may be a pale shadow of what the future holds.

As the science progressed, so too did international negotiations. The 1992 United Nations Framework Convention on Climate Change called for “stabilization of greenhouse gas concentrations in the atmosphere at a level that would prevent dangerous anthropogenic interference with the climate system.” About 190 nations signed, including the United States. Five years later, the Kyoto Protocol established quantitative emission limits for the developed world, for the period from 2008 to 2012. This agreement will not go into effect until countries representing 55 percent of the developed world’s carbon dioxide emissions ratify it; as of March 2003, the total was 44 percent.

Although President Bill Clinton signed the Kyoto Protocol, the United States has not yet ratified it. Neither the Bush administration nor Congress supports its approach of setting targets only for the industrialized world. The U.S. emissions target–7 percent below 1990 levels by 2012, about halfway between what our earlier OTA report presented as moderate and tough scenarios–has also been deemed too expensive to achieve.

Alternatively, President Bush has proposed a much more modest goal outside of the treaty framework: a reduction in “emissions intensity” of the U.S. economy (that is, a reduction in greenhouse gas emissions per dollar of gross domestic product). This goal still allows total emissions to grow. In 1991, U.S. carbon emissions totaled 1.3 billion tons per year. Today, emissions are about 1.5 billion tons per year. And even if the voluntary goal is achieved, emissions will continue to rise to a projected 1.8 billion tons in a decade, at likely rates of economic growth.
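
The arithmetic behind that seeming paradox is simple: total emissions equal emissions intensity multiplied by GDP, so intensity can fall every year while total emissions rise whenever the economy grows faster than intensity declines. The growth and intensity rates in the sketch below are illustrative assumptions chosen only to roughly reproduce the 1.5-to-1.8-billion-ton trajectory; they are not official projections.

```python
# Illustrative only: an "emissions intensity" goal can coexist with rising
# absolute emissions, because emissions = intensity (tons per dollar) x GDP.
# The 3% GDP growth and 1.2% annual intensity decline are assumptions chosen
# to roughly match the 1.5 -> ~1.8 billion-ton trajectory in the text.

emissions = 1.5            # billion tons of carbon per year, today
gdp_growth = 0.03          # assumed annual GDP growth
intensity_decline = 0.012  # assumed annual decline in emissions per dollar
years = 10

for _ in range(years):
    emissions *= (1 + gdp_growth) * (1 - intensity_decline)

print(f"emissions after {years} years: {emissions:.2f} billion tons")  # ~1.79

# Intensity falls about 11 percent over the decade, yet total emissions still
# climb by almost 20 percent because GDP grows faster than intensity declines.
```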

Dissatisfied with this pace, many states have proceeded by themselves. Sixteen states have enacted “renewable portfolio standards,” or goals that require their utilities to provide a specified percentage of electricity from carbon-free renewable energy. The required percentage ranges from a few percent in several states to as much as 20 percent in California and 30 percent in Maine. California also has adopted requirements to lower carbon dioxide emissions from highway vehicles beginning with the 2009 model year.

In addition, many companies are undertaking reductions on their own, even without state and federal requirements. More than 220 companies have notified the Department of Energy that they are undertaking projects to lower emissions of greenhouse gases. Some companies, such as Dupont, Alcoa, BP, and Shell, have pledged emission reductions far greater than the Kyoto goal.

Although modest, some progress in slowing emissions has also been made at the federal level. The Energy Policy Act of 1992 included several significant provisions, including 13 new energy-efficiency standards for appliances and electric motors and financial incentives for utilities to generate electricity using renewable energy. Energy legislation continues to be heatedly debated in Congress. But without price signals and multiple reinforcing policies, significant emissions reductions will be unlikely.

Mixed technological results

If atmospheric concentrations of carbon dioxide are to be stabilized at even two or three times preindustrial levels, growth in world energy emissions must be drastically curtailed. U.S. emissions must eventually drop well below the level of even the OTA-proposed tough scenario. At issue is only how quickly emissions must drop, not that tomorrow’s energy technology must be far lower-emitting than today’s. For energy technology to meet the challenge, aggressive research must be under way now. The story here is mixed.

On the positive side, several technologies that were barely on the horizon a decade ago appear very promising today. For example, one technology under development aims to produce both electricity and hydrogen from coal, with the carbon dioxide captured before it is released to the atmosphere and injected into deep aquifers, the ocean, or oil wells. The success of this “carbon sequestration” may make it possible to continue to use the planet’s huge reserves of coal. Hydrogen offers promise as a future fuel, if it is produced without emissions of greenhouse gases. A major research initiative of the Bush administration is focusing on development of fuel cells to use hydrogen as the energy source for automobiles and buildings.

The rapidly advancing field of genomics offers promise to more efficiently use a variety of plants and microbes to produce hydrogen or such carbon-neutral fuels as ethanol, which can substitute for gasoline. Genomics may also help devise better ways to store carbon in agricultural soils, improving both the health and productivity of this vital resource.

But it remains apparent that no single technology will be the silver bullet for carbon emissions. Given the depth of cuts necessary, a wide variety of energy technology changes will likely be required. Yet the federal government has not acted accordingly.

In the early 1990s, R&D on energy technologies totaled about $2.1 billion per year, down from a peak of more than $6 billion per year after the 1973 OPEC oil embargo. By 1997, funding had dropped to about $1.3 billion. (All figures are in 1997 dollars.) In that year, a report by the President’s Committee of Advisors on Science and Technology recommended almost doubling funding to about $2.4 billion per year by 2003, principally in the areas of renewable energy and efficiency.

Federal spending for energy research has indeed increased since 1997, and there have been modest increases in renewables and efficiency. But total funding has yet to return to 1990 levels. Expenditures on energy R&D, both public and private, are less than 0.5 percent of the nation’s energy bill. For comparison, R&D investment equal to about 3.5 percent of sales is the norm for other U.S. industries. Surely the escalating threats of climate change and energy insecurity justify a correspondingly serious, immediate, and sustained investment.

There is an old adage: “If you do not change direction, you will end up where you are headed.” A wealth of data now shows that humanity is rapidly moving the planet’s temperature far outside of the range of the past millennium. No one should want the next generation to inherit that world.

Next Steps in Defense Restructuring

Twenty years ago, the United States was in the midst of a Cold War military buildup targeted against the Soviet Union and the other Warsaw Pact countries. Today there is no Soviet Union, Russia is no longer the enemy, and many of the former Warsaw Pact countries are members of the North Atlantic Treaty Organization. The nation now is engaged in a war against “terrorists and tyrants,” which began with the attacks of September 11, 2001, and was officially declared in the National Security Strategy document released on September 17, 2002.

This new security environment alone will require many changes, ranging from alterations in basic warfighting strategies to the development of new military equipment. The required changes have been compounded by the equally dramatic technological revolution–particularly in information-based systems–that has occurred during this same period.

During the 1980s, the nation’s security focus was on “deterrence and containment.” The United States had to have enough airplanes, ships, and tanks to stop a Warsaw Pact attack through the central plains of Germany, and it needed nuclear-armed missiles, bombers, and submarines to deter a nuclear attack on U.S. cities. This strategy worked.

But future adversaries, recognizing that the United States is the sole remaining superpower, will not attempt to match up plane for plane, ship for ship, or tank for tank. Rather, they will use “asymmetrical” approaches focused on areas in which the nation potentially is weak. These areas include attacks by suicide bombers; use of chemical, biological, radiological, and nuclear weapons; information warfare; attacks on U.S. infrastructure, such as power and water systems, financial systems, and food supplies; and long-range missiles aimed at the nation’s military forces or sites (or those of its allies). These approaches, especially when a number of them are used in combination, are not deterred or contained by conventional U.S. military might.

In response, the military needs to shift its resources, training, organizations, and equipment to nontraditional areas. These areas include special operations forces; urban warfare; defense against the battlefield use of weapons of mass destruction; cyberwarfare defense; ballistic missile defense; and “brilliant” systems for carrying out intelligence, surveillance, reconnaissance, communications, and control activities, as well as precision strikes on adversary targets. Needless to say, such shifts, particularly of resources and organizations, tend to be countercultural and fiercely resisted by the entrenched institutions in the military, the defense industry, and Congress. Thus, bringing about the needed changes will be neither rapid nor easy, as Defense Secretary Donald Rumsfeld is finding in his attempts to implement his proposed “transformation” of the nation’s military.

Among the cultural changes that will be required in carrying out this “revolution in military affairs” is that the armed services will have to work together. Future combat will not be carried out via separate air, land, and sea battles, but must be integrated (joint) operations. This requires joint doctrine, joint training, full equipment interoperability, and true dependence of the services on each other. Similarly, U.S. forces will have to work more effectively with allied forces, since it is virtually certain that all future conflicts involving the United States will be carried out in coalition with other nations–often for geopolitical reasons, if not military ones. During military operations in Kosovo, for example, U.S. aircraft were not equipped to communicate securely with allied planes. Pilots had to talk in the open, which made their planes more vulnerable. Working effectively with allies will require full interoperability of equipment and integrated tactics and training.

Another cultural change will involve accepting the likelihood that the nation will be fighting in an asymmetric environment–for example, against an enemy that uses biological and chemical weapons–and so it must develop equipment and train military personnel accordingly. Also, future enemies are likely to fully use cyberwarfare and deploy terrorists (simultaneously) to attack the nation’s supporting military infrastructures as well as its homeland. All of these factors greatly complicate military operations, and the nation has not adequately prepared for them.

Perhaps most difficult, advancing the revolution in military affairs will require reallocating resources and modernizing traditional military organizations. The commanders who have come of age with the “platform-centric” model in which ships, planes, and tanks are the critical elements; the industry that builds this equipment; and the members of Congress who support the continuation of business as usual–none of these elements are likely to enthusiastically support the necessary shift to the new and increasingly information-based weaponry needed to deter or defeat future adversaries.

Paying for changes

The good news is that sufficient resources to pay for such changes can be generated out of current defense budgets. There are four areas ripe for revision:

  • Base closures. With the recent downsizing of military forces, the nation has ended up with 25 percent excess capacity in the number of bases needed. (This figure is widely accepted, but no one has yet identified a base that a local member of Congress does not immediately declare to be critical to the nation’s security.) Closing unneeded bases would save roughly $6 billion per year. Moreover, empirical evidence shows that when such closures are well planned and the Department of Defense (DOD) provides transitional support, most communities end up far better off over the long run, in terms of employment and economic growth.
  • Logistics. DOD spends more than $80 billion a year in the area of logistics and does not do a world-class job. When an ordinary consumer sends a package by an express delivery company, he or she has nearly total confidence that it will arrive domestically within 24 hours or worldwide within 48 hours. Similar performance is achieved by the supply chains of Wal-Mart, Caterpillar, General Electric, and numerous other companies. DOD meets its logistics objectives by piling up lots of parts in warehouses and by using lots of people to manage them–not by using modern information technology and rapid transportation, as world-class firms do. In fact, there are more people working in the DOD logistics system than there are carrying weapons in the military.

The results of such inefficiency are to be expected. For example, during the 1991 Gulf War, it took field personnel 36 days, on average, to get a part ordered from a DOD warehouse. Today, the wait is down to 22 days. But that is the average, and it may actually take as long as two years for the needed part to arrive. So forces in the field routinely stock lots of parts and order needed parts three separate times, to make sure they get what they need. That is part of the reason why the DOD currently has an inventory of more than $60 billion in spare parts. It clearly makes sense for DOD to implement a world-class logistics information system, to replace a current set of more than 1,000 different and noninteroperable information systems. This step not only will save billions of dollars each year but also will greatly improve the readiness and responsiveness of defense systems.

  • Competing for work. Instead of having work performed on a monopoly basis by government workers, the government should open the door to private firms as well, and then have all interested parties bid for jobs. After all, turning a wrench in a maintenance facility or writing payroll checks are not inherently government jobs. Numerous studies show clearly that when such work is competitively awarded–regardless of whether the public or private sector wins the competition–job performance improves significantly, with a cost savings that averages about 30 percent. The Bush administration estimates that there are more than 800,000 jobs that are not inherently governmental that are currently being done on a monopoly basis by government workers. Subjecting them to public-private competition (or privatization or outsourcing) makes sense.
  • Buying smarter. Although pushing DOD to improve what it buys will have the greatest impact on its mission effectiveness, there also is great potential in improving how the department buys and from whom. In this area, significant steps have been taken during the past two decades to help transform DOD into more of a world-class buyer. Some of the changes include placing a priority on buying rugged commercial parts and subsystems, rather than special-purpose, high-cost items unique to the military; making unit production costs, support costs, and interoperability key military design requirements; implementing an integrated digital-based supply chain, including e-logistics, e-finance, and e-procurement; and making a concerted effort to increase the professionalism of the DOD acquisition workforce, which, depending on the definition used, stands at approximately 300,000 military and civilian personnel. Perhaps the most important change is using “spiral,” or evolutionary, development of products, with continuous involvement of the ultimate users of those products. This type of development necessitates corresponding changes in the requirements process, the budget process, the testing and evaluation process, and the support process, as well as maximization of rapid, competitive prototyping. The challenge for the coming years in these areas is for DOD to fully implement all of the acquisition reforms that have been introduced in recent years.

The private sector–the supply side of the equation–also has a role in advancing the military revolution. During the past 20 years, there have been dramatic changes in the defense industry in response to the changes in products and services demanded and to the need for greater efficiency. Specifically, there has been very significant consolidation during this period, from roughly 50 top contractors to only 5 or 6 large firms today. The government allowed such consolidation so long as sufficient competition in all critical areas was maintained. There also has been a shift toward more internationalization of defense firms, so long as the partner firms and countries agreed to meet U.S. controls on sensitive technologies. One area that only recently has been receiving needed attention is that of bringing commercial firms into the defense industry and/or encouraging traditional defense suppliers to integrate their civil and defense operations. The more DOD uses commercial buying practices, the more this trend will build up.

These various examples suggest that unless the government changes its way of doing business and adapts modern, commercial, cost-sensitive best practices, the revolution in military affairs will not be affordable. The United States will then be in a perverse situation in which the richest, most powerful nation in the world can be outmaneuvered by terrorists and tyrants. These hostile agents will be able to quickly acquire things such as state-of-the-art secure communications equipment, biological weapons, and information warfare tools, while the United States spends its money on maintaining the structure of its military forces (and the equipment available to those forces) in a fashion that is woefully out of date. Clearly, this cannot be allowed to happen.

How Smart Have Weapons Become?

By the early 1980s, the technology was in hand to define “smart weapons” that could fill at least three important military purposes. First, guided weapons could provide theater-range artillery fire accurately across an entire battle area, when linked with a theater-wide surveillance system to identify and locate targets. Second, advanced surveillance systems could be used to mount an effective theater-wide air defense using surface-to-air missiles. Third, electronics could be employed in handheld antitank weapons that would make it possible to activate and deactivate these weapons from a central location, making possible wide distribution, even to militias, without fear of their misuse. How have these possible applications played out?

Substantial elements of a real-time, theater-wide surveillance system capable of covering a region 1,000 kilometers (roughly 600 miles) across already existed in the mid-1980s. These elements would be supplemented by a robust theater-wide communications system. Surveillance would be carried out in part by forward observers on the ground and in part by small drone aircraft, equipped with global positioning system (GPS) navigation and with television cameras or other sensors that could obtain precise knowledge of the target position. Once a target was identified and located, attacking weapons were available that could be guided to the target by using a navigation grid common to the sensors and to the weapon.

There is every reason to take seriously the threat of theater-range missiles against U.S. forces.

As demonstrated in U.S. actions in Afghanistan in 2001 and in Iraq in 2003, this capability has been functionally achieved. The laser-guided bombs used during the Vietnam War have been augmented by the addition of highly accurate Joint Direct Attack Munitions (JDAMs). These devices are guided by GPS systems and do indeed “bomb by navigation” on coordinates provided by ground observers, aerial surveillance, or satellite observation. A typical JDAM is a 2,000-pound Mk 84 bomb fitted with a GPS guidance kit. A much larger GPS-guided device, the 21,000-pound Massive Ordnance Air Blast, or MOAB, was first tested in 2003.

Despite the advances demonstrated by the U.S. Navy and Air Force, the Army has not yet seen the merit of largely replacing tube-fired artillery shells with GPS-guided rockets. Developing such weapons would be possible within the constraints of the Intermediate-Range Nuclear Forces Treaty, which prohibits the United States and Russia from possessing ground-based ballistic or cruise missiles with a range between 500 and 5,500 kilometers. A ballistic missile with a range of 480 kilometers would give the Army the ability to mass accurate fire from secure areas onto targets across an entire theater. A conventional howitzer, by contrast, has a range of only about 40 kilometers and might miss its target by as much as 150 meters, whereas the probable error for GPS-guided rockets of any range is likely to be about 5 meters. The rockets also can be arranged for simultaneous arrival on target, with final approach from any desired angle.
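
The payoff of that accuracy difference is easy to quantify with a quick, admittedly simplified calculation that treats the quoted miss distances as comparable radial errors.

```python
# Illustrative arithmetic: compare the ~150-meter miss distance quoted for
# unguided tube artillery with the ~5-meter error expected of GPS-guided
# rockets. Treating both as simple radial errors is a simplification.

import math

unguided_error_m = 150.0
guided_error_m = 5.0

linear_ratio = unguided_error_m / guided_error_m
area_ratio = linear_ratio ** 2

print(f"linear improvement:         {linear_ratio:.0f}x")   # 30x
print(f"scatter-area reduction:     {area_ratio:.0f}x")      # 900x
print(f"area of 150 m error circle: {math.pi * unguided_error_m**2:,.0f} m^2")  # ~70,686
print(f"area of   5 m error circle: {math.pi * guided_error_m**2:,.0f} m^2")    # ~79

# A two-order-of-magnitude reduction in scatter area is the arithmetic case
# for massing fire with a handful of guided rockets rather than large numbers
# of unguided shells.
```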

The contribution of JDAMs has been realized in conjunction with an integrated targeting and communication system, including the possibility of changing the target coordinates in the individual weapons while the delivery aircraft is in flight. Similar in accuracy to laser-guided bombs, JDAMs offer the important capability of being able to work in cloud or smoke, and they can attack dozens of individual targets in a region tens of kilometers across from a single release of multiple bombs by a B-52 or other large aircraft.

Only extra care will prevent guided bombs from “accurately” destroying the wrong targets by mistake, as happened with the Chinese embassy in Belgrade. But it would be highly desirable in any case to add a feature to ensure that such weapons explode in the air rather than on the ground if their guidance system malfunctions or if surveillance shows a civilian bus approaching the target area.

The problem with missile defense

The Bush administration has placed great emphasis on National Missile Defense (NMD), focused on a possible North Korean attack on the United States using intercontinental ballistic missiles (ICBMs) bearing nuclear warheads. But as early as 1968, Hans Bethe and I warned that a missile defense that cannot deal with feasible countermeasures is worse than no defense at all. That, unfortunately, characterizes the midcourse interceptor system under development by the Pentagon. My colleagues and I have shown, for example, how balloons released by an ICBM could serve as credible decoys for a tumbling warhead, itself encased in a similar balloon, thus preventing intercept of a nuclear payload.

On the other hand, boost-phase intercept (BPI)–striking the missile before its rocket engine has driven it to full ICBM speed–has a real capability against the Taepo Dong 2 ICBM that North Korea has been expected to test since 1998. BPI would work against North Korea because the territory is small and almost surrounded by international waters. But progress has been slow in developing BPI, in large part because the administration has emphasized the ineffective midcourse system and to some extent because BPI would be more difficult to use against a missile launched from the much larger territory of Iran and, until the recent war, Iraq. Yet solving the most urgent problem first–North Korea–has some merit.

The administration no longer distinguishes the demands of NMD from those of theater missile defense, and this further confuses the issue of missile defense development. There is every reason to take seriously the threat of theater-range missiles against U.S. forces. And the same GPS guidance that has made such an enormous change in the effectiveness of U.S. high-explosive weapons is available and has been tested by others as terminal guidance for their theater-range ballistic missiles. But unlike terrorists, hostile regimes, unless their survival is at stake, will not launch ICBMs containing biological or nuclear payloads against the United States. Also, in many cases, a preemptive U.S. strike could destroy such capabilities.

A conventional Scud missile with a range of 300 kilometers, or an extended-range Scud that can travel up to 900 kilometers, now becomes a real threat, because its explosive payload of one-half ton to one ton can be delivered with 10-meter accuracy instead of the multikilometer error typical of previous attacks. What has not been emphasized is the highly feasible task of developing a missile defense system to protect against such attacks with a “keep-out” zone of half a kilometer. Instead, the attraction of the technological challenge drives industry to propose, and government to fund, solutions in which interceptors need to reach out tens or hundreds of kilometers to defend as much of a theater as possible. Until such capabilities are achieved, U.S. forces, as well as those of allies and friends, are unnecessarily vulnerable to highly accurate bombardment.

Fortunately, the Patriot PAC-3 hit-to-kill interceptor is expected to perform much better than did the original Patriot during the 1991 Gulf War, when it made few if any intercepts. President Bush in 1991 claimed that the Patriot was essentially 100 percent effective. But in January 2001, at the end of his tenure as secretary of defense, William Cohen flatly stated, “It didn’t work.”

It also seems unlikely that an NMD system will be useful in preventing an enemy from using ICBMs with nuclear or biological weapons to attack the U.S. mainland. After all, as the 1998 report of the commission headed by current Secretary of Defense Donald Rumsfeld concluded, an opponent could deploy a short-range, sea-based missile threat, either ballistic or cruise, against U.S. coastal cities. These capabilities are much easier to achieve than is an ICBM and would be more reliable and more accurate.

Electronic controls

Advances in electronics have made it possible to produce effective tripod-mounted or shoulder-fired antitank weapons, as well as similar antiaircraft missiles. But their development is proving to be a double-edged sword. Since September 11, 2001, many observers have been particularly worried about the whereabouts of Stinger missiles provided by the United States to anti-Soviet forces in Afghanistan, as well as similar Soviet and Russian antiaircraft weapons available worldwide. Civil aircraft taking off from airfields anywhere are at risk from these infrared homing weapons, which could readily bring down an airliner, killing all aboard. It is time to incorporate robust use-control features into these kinds of weapons. These features would make it possible to deactivate a weapon that is no longer in authorized hands.

In an era of rapid technological change and potentially reduced cost, the nation is paying dearly in treasure and capability for delays in decisions and in terminating programs, and for not taking advantage of existing technology to achieve limited goals. The JDAM bomb system, which was developed expeditiously and offers remarkable military capacity, is a shining example of how to do it right.

Conservator Society Still a Dream

Society is all too committed to the notion of “progress” as measured through economic growth and population expansion. The notion of working toward a “sustainable future” is not given much serious thought. Energy policy, for example, concentrates on expanding supply, with relatively little R&D being devoted to improving the efficiency of energy use or developing low-carbon fuels. Yet without a change in course, human activities are destined to further degrade the global environment.

That was my message in 1988, when I argued that it was imperative to create what I called a “conservator society.” After reviewing humanity’s “progress” during the intervening years, however, I have concluded, sadly, that I would change my argument very little. To say that more sustained effort will be needed to achieve the conservator society is obviously an understatement.

Has humanity made any progress during the past 15 years, or have we been retrograde? Consider the following:

The end of the Cold War was the most memorable happening and, at first, appeared to provide a rare opportunity for a “peace dividend” that could be applied to humanitarian investments in economic renewal, health, and the environment. But such was not to be. Today, the United States is deeply drawn into conflicts around the world, its budget deficit is screaming upward, and attention to Third-World economic development, environmental protection, and population stabilization has been disastrously smothered.

Since 1988, the world’s human population has increased by 1.2 billion. By far the majority (over 90 percent) of global population growth is occurring in the developing world–about 75 million more people per year–placing extraordinary strains on global systems to provide for it. Mercifully, population growth rates have fallen in some parts of the world, such as South America. But the rates remain disastrously high in other regions, such as the Middle East. For example, if Saudi Arabia’s 3 percent annual growth rate continues, its population will double in 23 years. Today, nearly half of its citizens are under 15 years of age.
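The doubling-time figure is a matter of simple compound-growth arithmetic. At a constant 3 percent annual rate,

\[ T_{\text{double}} = \frac{\ln 2}{\ln(1.03)} \approx 23.4 \text{ years}, \]

which is the familiar rule of 70 (70 divided by the percentage growth rate, or roughly 23 years at 3 percent).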

Led by conservative elements of organized religions that do not seem to understand arithmetic, pronatalist public policies are forcibly muzzling efforts in sex education and birth control–a situation occurring in the United States as well as in many other nations. The whole notion of the demographic transition that must be navigated as we move to population equilibrium seems to be unfamiliar (or lost) even to thoughtful business leaders, such as Peter Peterson, who proclaims that the United States needs more people in order to sustain its economic growth. The goal of population equilibrium seems to be receding.

Since 1988, knowledge about the science of global climate change and the human contributions to it has steadily improved, and there is now virtually complete consensus about the phenomenon, even though many technical uncertainties remain. But this scientific progress has not triggered significant action to slow or reverse the impacts. Rather than moving to lessen and delay global climate change, we in the United States have tended to ignore the evidence politically, largely on the argument that definitive action would hurt economic growth. The Clinton administration, in response to the Kyoto Protocol, proposed reducing greenhouse gas emissions by 2010 to a level about 10 percent below 1990 emissions. That would be only a modest step, but it would be a start on a long journey. The Bush administration, however, nixed the Kyoto Protocol and offered instead a voluntary program that would, in effect, merely maintain the growth trend of the past several decades. Assuming historic average economic growth, the Bush plan over its 10-year lifetime would result in a net increase in CO2 emissions of about 14 percent.

In the energy sector, emphasis remains on subsidizing oil and gas. Federal support for research has continued its long decline. Despite efforts, most notably in the Clinton administration, to work with the auto industry on developing more efficient cars, there has been a decline in fleet fuel efficiency as automakers aggressively market heavy, high-powered (and high-profit) machines. Meanwhile, U.S. production of oil continues to decline.

Signs of promise

Not all of the news is bad, however. There has been some movement toward the conservator society.

On the positive side of the energy ledger, for example, fuel-efficient gasoline-electric hybrid cars are in production and are becoming increasingly popular; combustion turbines that produce electricity far more efficiently than those of the past are being introduced; and fuel cell technology has advanced surprisingly fast. If research support is maintained vigorously, we could see major improvements in the next several decades. There is similar good news in the improved energy performance of lighting, buildings, and appliances, thanks to advances in materials, computer controls, and construction methods.

Perhaps the best news comes from the growing recognition that improved efficiency results not only in reduced demands on resources and lower environmental externality costs, but also in generally reduced costs to consumers. As a consequence, resource and energy efficiency are now widely viewed as an attractive option from virtually every perspective–except for automobiles, where smaller, lighter cars are criticized by the industry as being more dangerous than heavy sport-utility vehicles.

Another positive trend is that in at least a few instances, the net flow of raw materials is diminishing, with attendant decreases in the footprint of human activities on the biosphere. For example, new green designs of some products require at least a minimum content of post-manufacturing recycled materials. And though we mostly continue to think and act in terms of open rather than closed systems, some engineers and a few economists are now committed to taking into account externality costs and other phenomena that provide a truer picture of environmental costs and benefits. These factors are not now included in the national accounts that are used to measure economic progress.

Of course, the amount of time that has passed since I first offered my observations is but a moment in human history, so perhaps we should not expect great change in things such as global biodiversity, global climate, energy resources, and human population. These phenomena are all characterized by relatively slow change. But human numbers and the scale of our economic and environmental activities have advanced with lightning speed, so fast that we probably have no more than 100 years left to stabilize population, atmospheric CO2 concentrations, biodiversity, and energy systems, to cite but a few items. Otherwise, humankind will be forced to settle for a substantially compromised future.

The conservator society can be achieved, but only by sustained effort. The efforts must be global, but the United States should, by all rights, lead. The first and foremost requirement is to understand that our traditional exponential model of progress is, at best, anachronistic and desperately needs to be succeeded by an equilibrium model. The second requirement is to devise a new economic perspective that incorporates environmental costs and benefits into national accounts, so that these factors are considered in making market decisions.

Let’s face it: Humans are not only the most powerful members of Earth’s community; we arguably are also the most intelligent and creative. Thus, it is in our grasp to creatively and productively build an ever more attractive future for ourselves, our descendants, and for the rest of the biosphere. But right now, we can be more accurately described as the most invasive and exotic species ever imagined. Surely, we can–and must–do better.

Globalization: Causes and Effects

Globalization traces its roots to at least the late 1980s. At that time, new countries were entering into manufacturing, which was in some sense the weakest link in the U.S. chain of science, development, manufacturing, and sale of goods and services. In the case of Japan, lower wages initially made it possible to exploit this relative U.S. weakness. But Japan rapidly developed a number of other advantages based on improved manufacturing methods. Falling costs of sea transport, coupled with a general lowering of tariff barriers, then made it possible for the Japanese to address a global market, including the U.S. market.

This process has since been repeated over and over again. Singapore took over the production of disc drives, while semiconductor manufacturing moved to various Far Eastern countries. Flat panel displays, a derivative of the same semiconductor processes, never even reached production status in the United States, despite U.S. technical contributions, but started in Japan and then moved farther west.

Scientific leadership

Such developments help illustrate the fact that scientific leadership does not automatically translate into product or industrial leadership and the resultant economic dividends–a fact that has become increasingly obvious over the intervening years. In addition, with the rapid globalization of science itself (more than 40 percent of scientific Ph.D. students trained in the United States are now foreign nationals, roughly half of whom return to their countries of origin), the once undisputed U.S. scientific lead, whether relevant to product lead or not, is diminishing.

The competition of foreign students for positions in U.S. graduate schools has also contributed to making scientific training relatively unattractive to U.S. students, because the rapidly increasing supply of students has diminished the relative rewards of this career path. For the best and brightest from low-income countries, a position as a research assistant in the United States is attractive, whereas the best and brightest U.S. students might now see better options in other fields. Science and engineering careers, to the extent that they are opening up to foreign competition (whether imported or available through better communication), also seem to be becoming relatively less attractive to U.S. students (see “Attracting the Best and the Brightest,” Issues, Winter 2002-03).

With respect to the role of universities in the innovation process, the speculative boom of the 1990s (which, among other things, made it possible to convert scientific findings into cash rather quickly) was largely unexpected. The boom brought universities and their faculties into much closer contact with private markets as they tried to capture as much of the economic dividend from their discoveries as possible. For a while, the path between discoveries in basic science and new flows of hard cash was considerably shortened. But during the next few decades, this path likely will revert toward its traditional length, reestablishing, in a healthy way, the more independent relationship between the basic research done at universities and the entities that translate ideas into products and services.

In the intervening years, another new force also greatly facilitated globalization: the rapid growth of the Internet and cheap wide-bandwidth international communication. Today, complex design activities can take place in locations quite removed from manufacturing, other business functions, and the consumer. Indeed, there is now ample opportunity for real-time communication between business functions that are quite independent of their specific locations. For example, software development, with all its changes and complications, can to a considerable extent be done overseas for a U.S. customer. Foreign call centers can respond instantly to questions from thousands of miles away. The result is that low-wage workers in the Far East and in some other countries are coming into ever more direct competition with a much wider spectrum of U.S. labor: unskilled in the case of call centers; more highly skilled in the case of programmers.

System driven by profits

These trends are built into a global free-enterprise system consisting of profit-making companies. A company in California that makes printers cannot afford U.S. production workers if its competitors are assembling their printers in countries where labor costs are very low. Neither can it field its service calls in Florida if a company in India can perform the same task at a fraction of the cost. The need for profitability automatically drives these jobs abroad.

Shortcomings in primary and secondary education are sometimes cited as a factor in the loss of technological jobs in the United States, and perhaps they are. However, there seems to be little basis for the idea that there is a shortage of U.S. citizens who are interested in science, and that this shortage can be remedied by better instruction from kindergarten through 12th grade. There are more U.S. students entering college with the intent of majoring in science or engineering than the nation could ever use. However, many of them switch out, perhaps opting for more attractive careers.

In any case, better education by itself is unlikely to affect the prospects for less skilled U.S. workers by making them generally more productive. Even a well-trained U.S. worker cannot be expected to perform so well as to overcome a 5-to-1 wage ratio when workers abroad are also increasingly well educated. The solution to this challenge to the higher standard of living enjoyed by the U.S. workforce must include the development of new types of products and processes through which the nation might be able to maintain a distinctive advantage. Perhaps in the very long run many more countries will be able to match U.S. levels of productivity and income.

We can also ask whether today’s very pronounced globalization is good or bad for the United States as a whole, rather than discussing only its effects on certain labor markets. After all, there is a positive effect from being able to obtain goods and services more cheaply. On this point, economics does not give any simple answer. There is a widely cited argument that if one country becomes more productive in industry A, then there will be a readjustment because of comparative advantage, and a country that lost its former advantage in industry A will become more productive in industry B.

But this notion is correct only in a limited sense. Theory does assert that allowing such an adjustment to take place is better for the whole country than fighting against it through tariffs or protectionism, but does not say that the country will be better off afterward than before. A careful analysis of this complex question can be found in Global Trade and Conflicting National Interests by Ralph E. Gomory and William J. Baumol (MIT Press, 2000).

Globalization clearly is an ever-increasing force. Its consequences for the United States and other countries are not fully understood. It is driven by the profitability it affords companies, and as such it is insensitive to its effects on individual countries. Profit flows into certain pockets, while wages flow into others. The effect on a country produced by the wages is usually much greater than that produced by the profits. Understanding how all this will play out deserves much more analysis than it has received so far.

Science in the New Russia

In the former Soviet Union, science and technology served as major forces moderating national policies. The advent of nuclear weapons forced the country to give up its earlier Leninist thesis that wars were inevitable events that produce socialist regimes. The development of personal computers and information technology made the tasks of Stalinist-style censors unachievable. Environmental protests over industrial pollution served as models for independent political action on other issues. New biomedical developments forced scientists and philosophers to pay more attention to ethics, a grossly neglected field in Soviet thought. And the damage caused by technocratic planning of industrial expansion led government leaders to pay more attention to social, economic, and cost-benefit modes of analysis. In all of these ways, the advancement of knowledge, particularly in science and technology, helped to mellow the nation’s behavior.

Now that the Soviet Union is gone, what roles do science and technology play in the relationship between the United States and the new Russia? An examination of the events of the past dozen years shows that science and technology are still important in Russia, although they now play more subtle roles than earlier. If during the Soviet period science helped change the nation’s political system, then models of Western scientific organization are now changing the ways Russian science itself is managed and financed.

Collapse of science

When the Soviet Union collapsed in 1991, science almost collapsed as well. The main source of research funds disappeared. Science was heavily concentrated in one republic–Russia–and its government thought that changing the political, social, and economic systems was more important than helping science, which was regarded as a luxury. Russia’s budget for science dropped roughly 10-fold between 1991 and 1999, and thousands of the best scientists emigrated abroad. The crisis was so deep that some people spoke of the “death of Russian science.”

Western governments, private foundations, and professional scientific organizations came to the aid of Russian science in what the board chairman of a U.S. foundation called “one of the largest programs, if not the largest, of international scientific assistance the world has ever seen.” Government donors have included the United States, the European Union, and Japan. In the United States alone, at least 40 government agencies are now involved in science and technology activities in Russia. Private foundations that have provided support include the John D. and Catherine T. MacArthur Foundation, the Wellcome Trust, the Carnegie Corporation, and the U.S. Civilian Research and Development Foundation. One man alone, U.S. financier and philanthropist George Soros, donated more than $130 million through his International Science Foundation between 1993 and 1996. Professional organizations offering aid have included the American Physical Society and the American Mathematical Society, aided by the Sloan Foundation, as well as similar European and Japanese organizations.

Surveys show that the most successful Russian research institutions now derive 25 percent or more of their budgets from foreign sources. In 1999, almost 17 percent of Russia’s total gross expenditures on R&D came from foreign sources–a greater share of outside support for science than that of any other country in the world.

Today, it is clear that Russian science is not dead. And although the scientific enterprise is now less than half the size it was in Soviet times, it will continue to be a vital part of the Russian economy and of Russian culture. The country’s economy has begun to recover in the past few years, and starting in 1999 the Russian government has stabilized and even slightly increased its science budget. Although it is impossible to predict the future, the worst period for Russian science is probably over. However, the country’s science is emerging from its crisis under heavy pressure, much of it from abroad, to change the way it is managed and financed. In order to understand this new Western influence, it is necessary to review briefly the organizational characteristics of Soviet science, many of which are now under question.

Soviet science and technology were highly centralized and ruled from above. In fundamental science, the Academy of Sciences, a network of several hundred institutes, acted as the major organizational force. The academy’s budget came from the central government and was distributed from the top down, in block grants. There were no fellowships and grants for which individual researchers could apply, and there was no system of peer review.

This mode of managing and financing research gave institute directors enormous power, for they controlled the budget. Senior researchers with influence were much more successful in garnering research funds than were junior scientists. Furthermore, research and teaching were largely separated, with the academy responsible for research and the universities assigned a primarily pedagogical role. The result was an exceptionally elitist organization of science, a system in which less prestigious researchers, such as the young or university teachers, had great difficulty in fulfilling their potential. Nonetheless, this system worked fairly well in cases where the elite scientists in charge were talented and productive, as was often the case in such fields as theoretical physics and mathematics. But it worked poorly when second-rate scientists ruled, as was the case in many other fields, such as biology during much of the Soviet period and the social sciences and humanities during all of that period.

Changes raise questions

When the Soviet Union collapsed and foreign foundations entered, a primary question was how to choose who should receive both the dwindling old money from the Russian government and the growing new money from foreign sources. The old system was clearly authoritarian and inadequate. The Russian government and many scientists looked abroad to democratic countries for models, especially the United States.

Within a few years, the Russian government created an analog to the National Science Foundation (the Russian Foundation for Basic Research), as well as an analog to the National Endowment for the Humanities (the Russian Foundation for the Humanities). Rules involving principal investigators, peer review, and accountability–all new ideas–were introduced, although they remain only partially developed. The foreign foundations also emphasized the development of young scientists, university research, and geographical distribution of grants, also novel ideas. The Russian government sometimes followed suit, especially in a new emphasis on the young.

These changes have resulted in a period of great controversy in Russian science. What is the most effective balance between the old system of block grants and the new system of peer-reviewed individual grants? Under the reformed system of financing research, who should own intellectual property rights? (Everything belonged to the government in Soviet times.) Should universities become major research centers, as in the United States, or should the Academy of Sciences remain as the center of fundamental research? Given that the Soviet Union was rather strong in science, is there a danger that too dramatic a shift to a new system will damage a traditional strength? How can the government prevent Russian researchers from emigrating abroad, while maintaining the freedom of movement that is essential to a democracy?

At the moment, it appears unlikely that Russia will completely discard the old system. There almost certainly will be changes, but just how much the country should accept from Western methods of managing and financing scientific research remains a hot topic of internal debate.

Security versus Openness: The Case of Universities

The decade of the 1980s was marked by declines in the United States’ manufacturing skills and the apparent invincibility of Japanese industry. In this climate, many people in the academic community were concerned that the U.S. government might impose restrictions on the open publication of academic research results or on the openness of U.S. universities to international students and scholars. The Massachusetts Institute of Technology (MIT), for example, had a variety of industrial liaison activities under way, and we argued that such programs were in the national interest. Today, the concerns of the 1980s that were based on the economic ascendancy of Japan seem alarmist, at best.

However, the unprecedented tragedy of September 11, 2001, and the government’s expanding war on terrorism present the academic community with similar worries, albeit of different origin. The fact that several of the terrorists had studied aviation in this country, the apparent amnesia of the U.S. Immigration and Naturalization Service with respect to foreign nationals admitted to the United States, and concerns in Congress and the administration about the transfer of knowledge that might be of use to terrorists have combined to raise again the possibility that the government may place restrictions on colleges and universities. Such restrictions might come in two forms. First, depending on their nationality or ethnic background, international students admitted to U.S. colleges and universities may not be able to enroll in those schools. Second, there may be restraints placed on the openness of research in certain fields and on the publication of research results.

Seeking balance

Clearly, the United States and its institutions of higher education must seek the appropriate balance between two imperatives: the prevention of future terrorist attacks and the openness of study and research that has served the nation so well during the past 60 years. Defining the elements of an appropriate balance will require care, and the course of discussions should involve representatives of higher education. It is essential that two perspectives concerning academic research and higher education be considered in those discussions.

First, there is the fact that the excellence and creativity sought in students and faculty do not come solely with U.S. passports. The MIT experience provides compelling evidence that the men and women who have come to the United States from abroad to learn, to contribute to the development of new knowledge and skills, and not infrequently to join the faculties of this and other U.S. universities have enhanced the stature of research universities and have made important contributions to society. The international graduates of U.S. universities who take positions in the nation’s technically based industries often fill positions for which there are too few qualified U.S. citizens. The United States clearly would be measurably poorer without international graduates.

It is important to preserve the separation of the roles of universities and of government with respect to the matriculation of international students. Universities make decisions affecting international applicants on the basis of their evident qualifications. The State Department then exercises its judgment by taking action on an applicant’s request for a student visa. Presumably this judgment takes into account the national origin and the academic intentions of the student and the possibility that his or her planned educational activities in the United States could contribute to the dissemination of knowledge about hazards such as potential weapons of mass destruction. Although no one involved in this process can regard either the issue or the judgment as unimportant, there is concern that in the human urge to be comprehensive and give national security the benefit of any doubt, the constraints imposed will be so broad as to limit too greatly the flow of outstanding international students and scholars to the nation.

The second perspective that must be considered is the concern that severe restrictions on the open publication of research results in certain fields will impede the progress of research itself. This issue arose at MIT nearly 25 years ago, when representatives of the government’s intelligence and national security communities came very close to classifying research here on the topics of two-key cryptography and computer network security.

As the development of the Linux computer operating system so elegantly demonstrates, research in the field of computer science is a transnational effort–an effort that thrives on openness. Investigators publish online and present results at open conferences to encourage the scrutiny of their work by others. Collective progress is made incrementally in a distributed manner. On the frontiers of modern molecular biology, including genomics and proteomics, it must surely be true as well that openness about work in progress and free access to results accelerates research progress everywhere and enhances the development of findings that are of benefit to society.

Troubling labels

Of particular concern with respect to controls on research results and an individual’s access to research or to areas of study is the government’s drive to create “Sensitive Areas of Study.” This designation is unlike the several familiar categories of classification, such as Confidential or Secret. Those categories are well understood from many years of use, are limited to carefully defined situations, and are applied by specifically authorized organizations or persons. The poorly defined designation of “sensitive but not classified” carries the risk of being applied much more broadly than may be warranted.

The propositions that universities and colleges are open institutions, that study and research are accessible to all those who are qualified, and that the results of investigations are published promptly or otherwise shared with the larger community are deeply held by most academic institutions. This is not because it has always been that way, but because openness is essential to learning, to intellectual discourse, and to greater understanding. Many research universities, including MIT, have policies that specifically prohibit the creation of restricted areas of study or research. We already have had to turn back several federal grants or contracts that labeled the proposed research as sensitive and would have required that access to work in process and results be limited to U.S. citizens.

This issue emerged nearly 20 years ago and was settled in 1985 by a directive of President Reagan, which stated that federally funded research should either be classified in the familiar sense or should not be restricted, with the determination to be made by the funding agency. Such a policy is needed now, both for funded research and for areas of study.

In this time of instability and uncertainty, finding the appropriate balance between national security and the intellectual openness that is so important to the academic enterprise is a necessary and difficult task. Surely the judgments involved should be guided by an ongoing dialogue in which government officials and representatives of universities and colleges strive together for answers that will best serve the nation.

Superfund Matures Gracefully

Superfund, one of the main programs used by the Environmental Protection Agency (EPA) to clean up serious, often abandoned, hazardous waste sites, has been improved considerably in recent years. Notably, progress has been made in two important areas: the development of risk assessments that are scientifically valid yet flexible, and the development and implementation of better treatment technologies.

The 1986 Superfund Amendments and Reauthorization Act (SARA) provided a broad refocus to the program. The act included an explicit preference for the selection of remediation technologies that “permanently and significantly reduce the volume, toxicity, or mobility of hazardous substances.” SARA also required the revision of the National Contingency Plan (NCP) that sets out EPA’s rules and guidance for site characterization, risk assessment, and remedy selection.

The NCP specifies the levels of risk to human health that are allowable at Superfund sites. However, “potentially responsible parties”–companies or other entities that may be forced to help pay for the cleanup–have often challenged the risk assessment methods used as scientifically flawed, resulting in remedies that are unnecessary and too costly. Since SARA was enacted, fundamental changes have evolved in the policies and science that EPA embraces in evaluating health risks at Superfund sites, and these changes have in turn affected which remedies are most often selected. Among the changes are three that collectively can have a profound impact on the selected remedy and attendant costs: EPA’s development of land use guidance, its development of guidance on “principal threats,” and the NCP requirement for the evaluation of “short-term effectiveness.”

Before EPA’s issuance in 1995 of land use guidance for evaluating the potential future public health risks at Superfund sites, its risk assessments usually would assume a future residential use scenario at a site, however unrealistic that assumption might be. This scenario would often result in the need for costly soil and waste removal remedies necessary to protect against hypothetical risks, such as those to children playing in contaminated soil or drinking contaminated ground water, even at sites where future residential use was highly improbable. The revised land use guidance provided a basis for selecting more realistic future use scenarios, with projected exposure patterns that may allow for less costly remedies.

Potentially responsible parties also complained that there was little room to tailor remedies to the magnitude of cancer risk at a site, and that the same costly remedies would be chosen for sites where the cancer risks may differ by several orders of magnitude. However, EPA’s guidance on principal threats essentially established a risk-based hierarchy for remedy selection. For example, if cancer risks at a site exceed 1 in 1,000, then treatment or waste removal or both might be required. Sites that posed a lower lifetime cancer risk could be managed in other ways, such as by prohibiting the installation of drinking water wells, which likely would be far less expensive than intrusive remedies.

Revisions to the NCP in 1990 not only codified provisions required by the 1986 Superfund amendments, but also refined EPA’s evolving remedy-selection criteria. For example, these revisions require an explicit consideration of the short-term effectiveness of a remedy, including the health and safety risks to the public and to workers associated with remedy implementation. EPA had learned by bitter experience that to ignore implementation risks, such as those associated with vapor and dust emissions during the excavation of wastes, could lead to the selection of remedies that proved costly and created unacceptable risks.

Although these changes in risk assessment procedures have brought greater rationality to the evaluation of Superfund sites, EPA still usually insists on the use of hypothetical exposure factors (for example, the length of time that someone may come in contact with the site) that may overstate risks. The agency has been slow in embracing other methodologies, such as probabilistic exposure analysis, that might offer more accurate assessments. Thus, some remedies are still fashioned on risk analyses that overstate risk.
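To make the contrast concrete, here is a minimal numerical sketch, in Python, of the difference between a point-estimate exposure calculation and a probabilistic (Monte Carlo) one. It is an illustration only, not EPA's actual procedure: the contaminant concentration, slope factor, intake parameters, and all distributions are hypothetical values assumed for the example.

# Illustrative sketch only: contrasts a deterministic point-estimate exposure
# calculation with a simple Monte Carlo (probabilistic) version. The intake
# equation is the generic soil-ingestion form; every parameter value and
# distribution below is a hypothetical assumption, not an EPA default.
import numpy as np

rng = np.random.default_rng(seed=0)

SOIL_CONC = 50.0      # contaminant concentration in soil, mg/kg (assumed)
SLOPE_FACTOR = 1e-3   # cancer slope factor, per mg/kg-day (assumed)

def chronic_daily_intake(conc, soil_mg_day, ef_days_yr, ed_years, bw_kg, at_days):
    """Chronic daily intake in mg/kg-day from incidental soil ingestion."""
    return (conc * soil_mg_day * 1e-6 * ef_days_yr * ed_years) / (bw_kg * at_days)

# Deterministic estimate: a conservative point value for every exposure factor.
point_cdi = chronic_daily_intake(SOIL_CONC, soil_mg_day=200, ef_days_yr=350,
                                 ed_years=30, bw_kg=70, at_days=70 * 365)
print("Point-estimate lifetime cancer risk:", point_cdi * SLOPE_FACTOR)

# Probabilistic estimate: draw each exposure factor from an assumed
# distribution and examine the resulting distribution of risks.
n = 100_000
cdi = chronic_daily_intake(
    SOIL_CONC,
    soil_mg_day=rng.lognormal(mean=np.log(100), sigma=0.5, size=n),
    ef_days_yr=rng.triangular(100, 250, 350, size=n),
    ed_years=rng.triangular(5, 12, 30, size=n),
    bw_kg=rng.normal(70, 10, size=n),
    at_days=70 * 365,
)
risk = cdi * SLOPE_FACTOR
print("Median risk:", np.median(risk))
print("95th-percentile risk:", np.percentile(risk, 95))

The point estimate produces a single conservative number, whereas the probabilistic run yields a distribution of plausible risks from which a decisionmaker can read the median and an upper percentile; that is the sense in which such methods might offer more accurate assessments.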

Technological evolution

Cleanup efforts in Superfund’s early years were dominated by containment and excavation-and-disposal remedies. But over the years, cooperative work by government, industry, and academia has led to the development and implementation of improved treatment technologies.

The period from the mid-1980s to the early 1990s was marked by a dramatic increase in the use of source control treatment, reflecting the preference expressed in SARA for “permanent solutions and alternative treatment technologies or resource recovery technologies to the maximum extent practicable.” Two types of source control technologies that have been widely used are incineration and soil vapor extraction. Although the use of incineration decreased during the 1990s because of cost and other factors, soil vapor extraction remains a proven technology at Superfund sites.

Just as early source control remedies relied on containment or excavation and disposal offsite, the presumptive remedy for groundwater contamination has historically been “pump and treat.” It became widely recognized in the early 1990s that conventional pump-and-treat technologies had significant limitations, including relatively high costs. What emerged to fill the gap was an approach called “monitored natural attenuation” (MNA), which relies on a variety of natural processes, such as biodegradation, dispersion, dilution, absorption, and volatilization. As the name suggests, monitoring the effectiveness of these processes is a key element of the approach. And although cleanup times still may be on the order of years, there is evidence that MNA can achieve results comparable to those of conventional pump-and-treat systems, in comparable periods and at significantly lower costs. EPA has taken an active role in promoting this approach, and its use has increased dramatically in recent years.

As suggested by the MNA example, what may prove an even more formidable challenge than selecting specific remedies is the post-remedy implementation phase–that is, the monitoring and evaluation that will be required during coming decades to ensure that the remedy chosen is continuing to protect human health and the environment. Far too few resources have been devoted to this task, which will require not only monitoring and maintaining the physical integrity of the technology used and ensuring the continued viability of institutional controls, but also evaluating and responding to the developing science regarding chemical detection and toxicity.

Coming challenges

In recent years, the rate at which waste sites are being added to the National Priorities List (NPL) has been decreasing dramatically as compared with earlier years. In fiscal years 1983 to 1991, EPA placed an average of 135 sites on the NPL annually. The rate dropped to an average of 27 sites per year between 1992 and 2001. Although many factors have contributed to this trend, three stand out:

  • There was a finite group of truly troublesome sites before Superfund’s passage, and after a few years most of those were identified.
  • The program’s enforcement authority has had a profound impact on how wastes are managed, significantly reducing, although not eliminating, the types of waste management practices that result in the creation of Superfund sites.
  • A range of alternative cleanup programs, such as voluntary cleanup and brownfields programs, has evolved at both the federal and state levels. No longer is Superfund the only path for cleaning up sites.

But such programmatic changes are about more than just site numbers. In 1988, most NPL sites were in the investigation stage, and the program was widely criticized as being too much about studies and not enough about cleanup. Superfund is now a program predominantly focused on the design and construction of cleanup remedies.

This shift reflects the natural progress of sites through the Superfund pipeline, the changes in NPL listing activity, and a deliberate emphasis on achieving “construction completion,” which is the primary measure of achievement for the program as established under the Government Performance and Results Act. It is a truism in regulatory matters that what gets done is what gets measured, and Superfund is no exception.

In the late 1990s, many observers believed that the demands on Superfund were declining and that it would be completed sometime in the middle of the first decade of the new century. But this is not proving to be true. Although expenditures have not been changing dramatically over time, the resource demands on the program are greater today than ever before.

Few people would have predicted, for example, that among the biggest technical and resource challenges facing Superfund at this date would be the cleanup of hard-rock mining sites and of large volumes of sediments from contaminated waterways and ports. These sites tend to be very costly to clean up, with the driver behind these great costs weighted more toward the protection of natural resources than of human health. In mapping the future course of the program, Congress and EPA must address the question of whether Superfund is the most appropriate program for cleaning up these types of sites.

There are other uncertainties, as well. The substantial role that Superfund has played in emergency response in the aftermath of 9/11, the response to the anthrax attacks of October 2001, and the program’s role in the recovery of debris from the crash of the space shuttle Columbia were all totally unforeseeable. Although many valuable lessons have been learned over the past 20 years of the program, there remain substantial opportunities for improvement as well as considerable uncertainty about the kinds of environmental problems Superfund will tackle in the coming decade.

AIDS: The Battle Rages On

The late 1980s were not good times for New York’s Harlem or the other disadvantaged urban communities in the United States. Two linked epidemics, one posed by the human immunodeficiency virus/acquired immunodeficiency syndrome (HIV/AIDS) and the other by crack/cocaine, had shredded the already tattered social structure of these communities and threatened to destroy a generation of children.

I date the beginning of the AIDS epidemic to a telephone conversation in the early 1980s. The director of medicine at the hospital in Harlem where I was director of pediatrics asked me: “Did you see the article in last week’s New England Journal of Medicine about a mysterious disease causing repeated infections and rare malignancies in homosexuals? I think we have some drug-abusing patients with the same constellation of symptoms.” At the time I thought, “Well, whatever it is, I don’t have to worry, because it’s not in children.” So much for my perspicacity, for within months we began to find infants and children with unexplained growth failure and repeated serious infections, sometimes fatal, that usually were very rare. In 1982, my hospital admitted one child with what was eventually labeled AIDS. By 1987, that number had increased to 44, and by 1989 the number of children hospitalized with HIV/AIDS infections, often for extended periods, had reached 134.

During that period, the crack/cocaine problem was, in many ways, even more devastating and difficult for the health care system and, more importantly, for Harlem families and children.

On the front lines

Several years ago, I retired, retreating from the front lines to the sidelines. When I left Harlem, the crack/cocaine epidemic had largely subsided and the HIV epidemic had changed, in large measure as a result of scientific research.

When the first HIV/AIDS-infected infants and children appeared, no one knew why these infants and their mothers had such deranged immune systems. So while our brethren at the National Institutes of Health and the Pasteur Institute labored to find the cause of this new disease, we on the front lines could do only what we already knew how to do: treat the infections secondary to their immunodeficiency. The cost of providing these children and their parents with even this modest medical care placed an almost impossible burden on urban city hospitals.

The identification of a virus as the cause and the development of an antibody test to detect infection made the establishment of clinical research units imperative. Many of the adults infected with HIV/AIDS were middle-class homosexual men who had the organizational skills and influence required to demand and finally get federal funding for AIDS research. In contrast, most HIV-infected children were, and still are, from minority families that often are disorganized and have little political power or influence. Although the battle for federal funding for AIDS research in adults was difficult, it was even more challenging to convince funding agencies to pay attention to the problems of these disenfranchised families and children.

Finally, collaborative units were established to conduct pediatric AIDS clinical research. However, children infected with HIV/AIDS were not found in the usual academic research centers, but rather in the nation’s scruffy, always embattled city and county hospitals. Thus, pediatric units in these hospitals of last resort scrambled for funds from the National Institutes of Health to establish the clinical research units that provided and continue to provide much of the clinical data for pediatric AIDS research.

In the early days of our work, when we could not prevent underlying viral infection, we wondered if we could prevent the often-lethal secondary infections. Two early trials provided the first successes in the treatment of children with AIDS.

The logic of the first trial was rather simple: Because the virus attacks the immune system and gamma globulin is an important component of the immune system, could we prevent secondary infections by giving infected children immune gamma globulin? Controlled clinical trial data from these collaborating research centers showed that monthly intravenous gamma globulin could prevent serious secondary bacterial infections in some HIV-infected children. We had taken a first step in prevention.

The second trial, based on experience in treating immunocompromised cancer patients, asked whether primary prophylaxis might prevent Pneumocystis carinii pneumonia, a leading cause of early death in HIV-infected infants. Analysis of data from a multicenter longitudinal study of infected infants in New York City showed that prophylaxis with a rather common sulfa drug did prevent this complication of HIV infection.

But we had no treatment for the virus itself until a 1987 clinical trial of the drug AZT showed success in decreasing mortality and the incidence of opportunistic infections in HIV-infected adults. Perhaps because of the lag in the development of clinical pediatric HIV research units, not until 1991 did pediatricians have evidence of the benefit of AZT therapy for HIV-infected children. Since then, numerous multicenter controlled drug studies of new classes of retroviral drugs have resulted in increased longevity and improved quality of life for HIV-infected adults and children alike.

Significant breakthrough

Finally, in 1994, a multicenter clinical trial showed that AZT given to infected women during pregnancy and labor and to the infant at birth reduced the transmission of the virus from mother to infant by more than 60 percent. This finding undoubtedly represents one of the more important scientific advances in the battle against this virus. The U.S. Centers for Disease Control and Prevention recently estimated that the rate of HIV infection in infants in the United States has declined by 80 percent during the past 10 years.

The earlier identification of HIV-infected women, the use of AZT during pregnancy to prevent newborn infection, the use of prophylaxis to decrease the rate of bacterial and Pneumocystis carinii infections, and the development of more effective antiretroviral medications–all of these scientific advances have had a substantial effect on the lives of HIV-infected families and children. For example, in New York City the median age of HIV-infected children increased from 3 years in the 1989-1991 period to 6 years in the 1995-1998 period.

Thus, the country is now left with a cohort of HIV-infected children and adolescents with chronic, serious, but often manageable disease. Unfortunately, many of these children are orphans because their parents died of AIDS. Many have such serious mental and developmental problems that their ability ever to live independently is questionable. The challenge of caring for these child survivors of the AIDS epidemic may prove more difficult and complicated than the scientific research that has improved and prolonged their lives.

Clearly, we have made substantial, if limited, scientific progress in the care and treatment of HIV disease. But only those in the more affluent areas of this world can benefit from this progress. Meanwhile, the virus rages virtually unchecked in the developing world. The question now is whether the United States and other developed countries can export this scientific progress to those countries. If we cannot or do not take this next step, then the current debate about weapons of mass destruction will pale in comparison to the catastrophe of the worldwide HIV epidemic.

In Memoriam

In reviewing the Issues archives to find articles to revisit in this anniversary edition, we came across several authors whose insights we would dearly like to read again but who have died.

David E. Rogers was a member of the Issues editorial advisory board and wrote articles about reforming medical education and about AIDS. He had been the president of the Robert Wood Johnson Foundation for many years and was cochair of the president’s AIDS Commission when he died in 1994 at the age of 68.

Carl Sagan was 62 when he died in 1996. One of the nation’s most visible scientists because of his role as host of the TV series Cosmos, a visionary champion of space exploration, an active participant in arms control debates, and a winner of the National Academy of Sciences’ Public Welfare Medal, Sagan had written about space exploration and military technology for Issues.

Jonathan Mann was 51 and doing an exemplary job as head of the Joint United Nations Program on HIV/AIDS when he was killed in a plane crash in 1998. He was one of the first to recognize that AIDS would become an enormous global problem, and his energy and dedication are needed now more than ever.

Congressman George Brown was 79 when he died in 1999, but there was no hint that he was running out of provocative things to say and do. Brown not only wrote feature articles for Issues; he also wrote book reviews. On Capitol Hill, where anything longer than a one-pager challenges a member’s attention span, Brown read books and took them seriously enough to want to write about them. For this anniversary, we would have asked Brown to revisit his article about enhancing the scientific capacity of developing countries.

Marc Reisner was 51 when he died of cancer in 2000. His article on water wars in the West, which came out of his groundbreaking book Cadillac Desert, won Issues its first major national award. The problems that he identified in the United States have yet to be resolved, and we could certainly use his expertise and wisdom in dealing with global water issues that are growing steadily more problematic.

Lisa J. Raines was only 42 when she boarded American Airlines Flight 77 on September 11, 2001. She first wrote about biotechnology regulation when she was at the U.S. Congress’s Office of Technology Assessment and wrote again after she moved to the Industrial Biotechnology Association. She had become senior vice president for government relations at Genzyme Corporation.

Cecil H. Green

One other individual deserves special mention. Cecil Green died in April at the age of 102. Even though he never wrote for Issues, Green played a critical role. Together with his wife Ida, Cecil was one of the great philanthropists of the 20th century, and most of his generosity was devoted to science, medicine, and education–including the University of Texas at Dallas and the National Academies. The Greens contributed to 50 academic, medical, and civic buildings; 20 instructional and research facilities; and 28 endowed chairs in 15 institutions.

In 1961, the Greens joined with Erik Jonsson and Eugene McDermott (who with Green had founded Texas Instruments Inc.) and their wives to create the Graduate Research Center of the Southwest. In 1969, the Center became the University of Texas at Dallas. In 1990, in what would be one of his last major gifts, Green made a substantial contribution to the creation of the university’s Cecil and Ida Green Center for the Study of Science and Society. The center’s mission is to promote a better scientific understanding of the world’s most critical problems and to analyze the wisdom and practicality of proposed solutions. One of its first activities was to become a cosponsor with the National Academies of Issues. This partnership provided a firm fiscal base for Issues, and without it we would not be celebrating this anniversary.

Looking Back, Looking Forward

“There is a troubling disparity between the scientific sophistication of our culture and its social and political backwardness, a disparity that hovers over every aspect of our civilization,” wrote Daniel Yankelovich 20 years ago to begin the first article in the first Issues in Science and Technology. This is exactly the problem that National Academy of Sciences President Frank Press had in mind when he wrote in that same issue that this new magazine would be “dedicated to the broadening of enlightened opinion, reasoned discussion, and informed debate of national and international issues in which science and technology play a critical role.”

We are delighted to have Yankelovich back to kick off this special 20th-anniversary issue. He and all the other authors in this issue are revisiting topics that they wrote about previously in Issues. (A list of these articles can be found on the following page, and the articles themselves are available at www.issues.org.) Our goal is to provide readers with a quick overview of the wide range of topics that have been tackled in our pages and a sense of how critical concerns in science, technology, and health policy have evolved. But if you’re looking for a simple coda that will sum up the experience of the past two decades, you’ve come to the wrong place.

This edition, like all editions of Issues, has no party line. When one asks a group of very knowledgeable and politically savvy individuals from all sectors of society to express their personal views on a wide variety of contentious and important subjects, uniformity is the last thing one should expect. Reading these articles could leave one surprised, delighted, or dismayed. Every issue is different.

In some cases, the original problem continues unabated and the recommendations for action unchanged. In other cases, what might first appear to be the same problem is in fact quite different because of scientific developments or political shifts, so the recommendations for action are also different. Some authors have seen their recommendations become policy. In some cases, the result has been what they hoped; in others, it’s back to the drawing board. Some problems are worse, others better.

In all cases, we have learned something. Although it often seems to people in the science, technology, and health communities that their expertise is given little weight in policy debates, that is not the case. Policymakers are aware that special expertise is necessary to understand many of the choices they must make, and the scientists, engineers, and physicians who have become involved in public policy understand that their contribution is only one of many that shape wise policy.

Politics and science each have long histories. The integration of science, technology, and health expertise into public policy is a recent and rapidly evolving phenomenon. We hope that this quick review of policy developments and Yankelovich’s perceptive overview will provide useful insights not only into the individual topics but also into the way in which expert knowledge can most effectively be used to inform public policy. To an increasingly large extent, humanity’s future course will be determined by how well these realms work together.

The Unfinished Revolution in Military Affairs

In the early 1990s, the Department of Defense’s (DOD’s) Office of Net Assessment concluded that the world was probably entering a period of military revolution, or “revolution in military affairs.” DOD’s leadership soon accepted that a military revolution was under way and that the U.S. military would need to transform itself into a different kind of fighting force in order to meet new kinds of challenges, as well as to exploit the potential of rapidly advancing information-related technologies that seemed to be driving dramatic change in so many other areas of human endeavor. A decade and a series of stunning U.S. military operations later, it would be easy to conclude that the revolution has arrived. A closer inspection of the evidence, however, indicates that such a conclusion is probably premature.

The Pentagon has tried to exploit the revolution in several ways. One is through precision strike, whose potential was revealed in the first Gulf War. Another involves forces operating as part of distributed networks that minimize their vulnerability, while still enabling their combat power to be concentrated when needed. These networks, it is anticipated, will include highly integrated reconnaissance, surveillance, and other elements that are capable of identifying a wide range of targets over a broad area and of allowing strikes on those targets in a greatly compressed period of time.

Of course, as strategists point out, “the enemy also gets a vote.” New U.S. capabilities must be viewed within the context of the challenges likely to be posed by future adversaries. After all, the goal in exploiting a military revolution is not to become more effective at the kinds of warfare that are passing into history but to dominate the military competitions that will define the emerging conflict environment.

U.S. military operations in Afghanistan and Iraq tell us a great deal about how the military is doing in realizing its goal of exploiting the military revolution. In some respects, the results are impressive. For example, the military has greatly increased its ability to wage precision warfare since the 1991 Gulf War. Then, only 7 percent of the bombs dropped in the air campaign against Iraq were precision-guided. Few aircraft were equipped to carry such weapons, and the capabilities of the precision-guided munitions (PGMs) were not advanced. Ten years later, nearly all U.S. combat aircraft are capable of carrying PGMs. Moreover, a veritable family of PGMs has been fielded, enabling U.S. air forces to strike under all weather conditions and with increased effectiveness against deeply buried targets. In the 1999 campaign against Yugoslavia, about 30 percent of the weapons dropped were precision-guided; in Afghanistan, roughly 60 percent. In Iraq, the figure reached 70 percent, or an order of magnitude increase over the first Gulf War.

The U.S. military’s improved ability to compress the engagement cycle–the time between when a target is identified and when it is attacked–and to strike deeply buried targets such as command bunkers is extremely important in the new age of precision warfare. This is because U.S. adversaries seek to use target camouflage, cover, concealment, deception, mobility, and hardening to reduce the effectiveness of precision strikes.

In 1991, the military’s air tasking order, which designated targets for attack, required several days to develop and, because of the military services’ lack of interoperable communications systems, had to be flown to Navy carriers. By the time of Operation Iraqi Freedom in 2003, the engagement cycle had been compressed to hours, as in the case of the initial attacks on Saddam Hussein’s command bunker in Baghdad, or even minutes, as when intelligence believed Saddam had been seen at a restaurant in the city.

A key enabler in both the targeting and the strike process is the development of robotic aircraft, or unmanned aerial vehicles (UAVs), such as the Predator and Global Hawk. These aircraft were used in Afghanistan and Iraq to scout for enemy targets and to relay information to strike elements. The Predator was also armed with air-to-surface missiles, enabling it to attack targets almost immediately once clearance was given. The UAVs’ ability to remain aloft for long periods and to provide persistent surveillance made it increasingly difficult for the enemy to make any moves of significance without being detected.

Moreover, Special Operations Forces (SOF), which had been an afterthought in the first Gulf War, became a central element of the subsequent Afghanistan and Iraq campaigns. These troops, inserted in small numbers in hostile territory, proved to be invaluable human sensors, identifying enemy locations and movements, relaying information back to headquarters, and directing strikes against enemy forces and facilities. The SOF also engaged in direct action on an unprecedented scale, striking an array of high-value targets and seizing key parts of the Iraqi economic infrastructure, including oil facilities and dams.

These improvements in an already dominant U.S. military capability yielded lopsided victories. A strong argument can be made that the widespread use of precision weapons in itself has transformed warfare. Still, military revolutions do not involve making improvements in the ability to prevail within an existing warfare regime; rather, they define a new regime. Although different in some significant respects, the Afghan War and the second Gulf War were more reflective of the warfare regime that has been dominant since the early days of World War II, when mechanization, aviation, and the use of radio and radar transformed warfare.

For example, the Afghan War saw SOF directing U.S. precision air strikes against Taliban and al Qaeda forces to devastating effect. This combination of a small SOF force on the ground, their communications links with manned and unmanned aircraft, and the use of precision weapons to minimize collateral damage was indeed impressive, even when the feeble capabilities of U.S. enemies are taken into consideration. Yet, a similar and perhaps even more impressive feat of arms came in 1972, when small numbers of U.S. advisors to the South Vietnamese military employed air power to stop the Easter Offensive by the North Vietnamese Army, a formidable foe.

Similarly, the remarkable U.S.-led campaign in the second Gulf War was essentially waged against an Iraqi force whose composition would have been familiar to the German Army that introduced blitzkrieg to the world. In fact, the Iraqi military took a step back from blitzkrieg, employing old tanks without air support. The Iraqi capabilities that concerned the coalition forces–missiles and weapons of mass destruction–were employed, respectively, in small numbers or not at all. In sum, the Iraqi military may not have been a match for the Wehrmacht, circa 1940, let alone the U.S. military juggernaut of today.

A measure of just how far the U.S. military has to go in terms of transforming itself to meet emerging threats can be seen in the findings of recent independent blue-ribbon panels on defense, as well as the Pentagon’s own strategy review. These reviews raised the concern that the proliferation of ballistic and cruise missile technology would enable even small states to destroy the forward air bases and the major ports used to resupply U.S. troops. Such an anti-access threat would be even more acute if the enemy had weapons of mass destruction. It is a problem that exists in nascent form today in North Korea. Revealingly, the Bush administration is not discussing a military solution against this new kind of threat.

The Pentagon’s 2001 strategy review called on the military to address what is known as the area-denial challenge: the problem of seizing control of littoral waters in the face of land-based military forces such as missiles and aircraft and coastal forces such as advanced antiship mines, submarines, and small combatant vessels (perhaps masquerading as commercial vessels) equipped with lethal high-speed antiship cruise missiles. In a major U.S. joint field exercise held in the summer of 2002, more than a dozen ships in a Navy battle group were damaged or destroyed by a minor adversary equipped with area-denial capabilities. The problem of securing narrow waters is potentially most acute in the Persian Gulf, through which passes much of the world’s oil supply.

In 2000, the Hart-Rudman Commission, echoing the concerns of the National Defense Panel, warned of the threat of catastrophic terrorism to the U.S. homeland. Today, the “democratization of destruction” enables even small groups to bring about enormous destruction and loss of life, as the United States and the world discovered to their horror on September 11, 2001. Yet, the United States is still in the early stages of determining which mix of military capabilities can preempt terrorist strikes, defend effectively against those under way, and limit the damage from those that occur despite efforts to prevent them.

There are still other challenges reflecting an era of revolutionary change in the conduct of warfare. Access to space is becoming ubiquitous. How will the U.S. military deny an enemy this access in the event of crisis or conflict? The United States has the world’s most advanced information infrastructure and, by some accounts, one of the most vulnerable. How will it be defended?

The Pentagon is making progress, albeit unevenly, in addressing these challenges. For example, the Army is trying to transform itself from a heavy force dominated by mechanized units into a lighter, yet still highly lethal, force by exploiting information technologies to field a distributed networked force whose success relies more heavily on information, speed of action, and mobility. Such troops could operate independently of major forward bases, frustrating enemy anti-access strategies. Similarly, the Navy is seeking to deploy a networked battle fleet that will include clusters of small, littoral combat ships, unmanned underwater vessels, and sensor arrays as key elements in defeating the area-denial threat. On the other hand, in areas such as long-range precision-strike air forces, curiously little effort is being made, despite the growing risk to forward air bases.

In the final analysis, the recent conflicts in Afghanistan and Iraq offer some tantalizing hints about where the military is headed as it tries to transform itself. Yet, both wars were waged against unimposing adversaries that fought in ways that would have been quite familiar to Cold War-era militaries. Remarkable as the recent developments in U.S. military capabilities have been, they have yet to be tested against the very different kinds of threats that are emerging. Despite its recent successes, the Pentagon’s motto should be: “You ain’t seen nothing yet.”

Still Underserved

Much has happened during the past decade that has affected the quality of education received by underrepresented minority groups (African Americans, Alaska Natives, American Indians, Mexican Americans, and Puerto Ricans). Educational progress has been made on several fronts for these groups, even while significant increases in their numbers and diversity have occurred. Challenges remain in several areas, however. In addition, efforts are underway to undermine the progress that has taken place.

Between 1990 and 2000, the percentage of underrepresented minority groups in the US population increased from 21.9% (54.5 million) to 26.3% (72.5 million), with the Hispanic population increasing by more than 50%. By 2015, one-third of Americans will be members of underrepresented minority groups. Clearly, the nation’s continued well-being will depend to a significant degree on the quality and level of education received by this growing segment of the population. Access to education for minorities has improved at all levels during the past decade, but the improvement has been uneven and at no level has it been adequate.

Preschool. Gains in participation by underrepresented minority groups in early childhood programs are particularly encouraging. For example, African American children aged 3 to 5 continued to participate in center-based early childhood care and education programs at a higher rate than children from other racial/ethnic groups, with more than 60% enrolled in such programs by 2001. Also, children from underrepresented minority groups continued to represent the majority of children (65% in 2002) served by the federally funded Head Start program.

Despite such significant participation by minority children in early childhood education programs, gaps persist in other areas. For example, in 2001 white non-Hispanic children aged 3 to 5 remained more likely (64%) to be read to every day by a family member or another adult than were African American children (48%) or Hispanic children (42%).

K–12. Progress in elementary and high school is encouraging in certain respects. For example, high school completion rates improved for 18- to 24-year-old underrepresented minorities between 1990 and 2000, increasing to 84% for African Americans and to 64% for Hispanics. Nevertheless, these rates remained below the 88% completion rate for whites in this age group. Another encouraging development was a decline in school dropout rates among 16- to 24-year-old underrepresented minorities between 1990 and 1999 (to 12.6% for African Americans and 28.6% for Hispanics). Although encouraging, these rates remain well above the 7.3% dropout rate for white students in 1999.

Underrepresented minorities continue to score significantly lower than whites and Asians on the National Assessment of Educational Progress and other standardized tests throughout their schooling. On the Scholastic Assessment Test (SAT), which students take when applying for college, the gap has actually widened for some groups. For example, the difference between the combined math and verbal scores of white and African American students has increased from 193 points (1,049 vs. 856 out of a total of 1,600 points) in 1996 to 201 points (1,060 vs. 859) in 2001.

Part of this gap can be explained by differences in enrollment patterns in challenging courses in high school. In 1998, for example, underrepresented minority students were half as likely as whites and one-third as likely as Asian students to have taken calculus. Among high-school graduates that year, underrepresented minority students were less likely than white or Asian students to have taken physics (19% versus 31% of white and 46% of Asian high school graduates). Another contributing factor to the gap is a disparity in teacher quality. Students in high-poverty and high-minority schools are more likely to take classes with teachers who do not have majors or minors in the subjects they teach. Moreover, teachers of underrepresented minority students are less likely to be certified in the subjects they teach.

The lack of teacher diversity also may be a factor contributing to the achievement gap. In the 1999–2000 school year, only 17% of the K–12 teaching workforce was nonwhite, whereas 38% of the public K–12 enrollment was nonwhite. It is important for students to see members of their own racial/ethnic group in positions of authority and influence, especially during their developmental years.

These findings strongly reinforce the need for greater encouragement of underrepresented minority students to enroll in rigorous academic programs; achieve high academic standards; and sharpen their communications, computer, and leadership skills. Such preparation is necessary if they are to succeed in college and beyond.

Undergraduate education. Between 1991 and 2000, the percentage of baccalaureate degrees in science and engineering earned by underrepresented minority groups increased from 10.3% (36,682) to 15.6% (65,183) of all such baccalaureate degrees awarded to US citizens. Although impressive, this percentage is well below the 26.3% of the US population that these groups represent.

Science and engineering baccalaureate degrees represented 32% of all baccalaureate degrees earned by underrepresented groups in 2000. However, 57% of these degrees were in the social sciences, whereas it is in fields such as mathematics, the physical sciences, and engineering that these groups are most underrepresented. Opportunities for underrepresented minorities to pursue doctoral degrees in these fields far exceed the available pool of minorities with the required undergraduate degrees.

Unfortunately, many underrepresented minority students continue to arrive on college campuses in need of remedial assistance to prepare them for the gatekeeper courses required of science and engineering majors. Many of these students become discouraged and do not see science and engineering majors or careers as realistic choices.

Graduate degrees. The percentage of doctoral degrees in science and engineering earned by underrepresented minority groups increased during the 1990s to 5.9%. This improvement represented a 50% increase (from 1,013 to 1,542) in the number of science and engineering doctorates earned by these groups. Despite this improvement, this percentage is significantly below the representation of these groups in the US population.

The science and engineering doctoral degrees earned by underrepresented minority students represented almost half (47%) of all of the doctoral degrees they earned in both 1991 and 2000. As was the case with baccalaureate degrees earned by underrepresented minorities, the majority of the science and engineering doctoral degrees were in the social sciences (51% in 1991 and 49% in 2000).

Of major concern to some advocates of equitable opportunities for underrepresented groups is the disproportionate number of doctoral degrees in science and engineering awarded to non-US citizens by US universities relative to those awarded to members of underrepresented minority groups. In 2001, non-US citizens were awarded 5,028 doctoral degrees by US institutions in the physical sciences and engineering, whereas underrepresented minorities received only 401 doctorates in these disciplines. Many underrepresented minorities see this disparate production as a lack of commitment to their education and a devaluing of their potential to contribute to the advancement of US society.

Enhancing equity

Action is needed on several fronts to remove the major barriers to equity in science and engineering for underrepresented minority students. We can begin by leveling the playing field and by not attributing achievement gaps to intellectual inferiority but rather to the real causes: the often deliberate denial of those things needed for educational success. Steps must be taken to:

We also must provide parents, particularly those in low-income communities, ready access to user-friendly information on courses in which their children should enroll and on steps they can take to improve their children’s academic performance. They need information on and assistance with college admissions and financial aid procedures.

Regrettably, even if all of these areas receive the attention they deserve, other threats have surfaced: the increasing resegregation of our public schools and multiple attempts to roll back affirmative action in higher education. School boards, community leaders, and state and local policymakers must address K–12 resegregation, because it is clear that schools with a large number of minority and poor students receive less of everything necessary for academic success. University boards, business leaders, and policymakers at the state and national levels need to speak out more forcefully against attacks on affirmative action. Without access to the quality of education available at our major research universities, underrepresented minority students will continue to receive a second-class education relative to their more affluent peers.

Left unattended, the resulting resegregation of US education will ensure continued domination of this country’s academic, business, and military leadership by nonminorities. It will reinforce myths about racial superiority. It will further erode the hopes and aspirations of a rapidly growing and younger portion of our population, a population upon which America’s future well-being depends.

The United States must fully embrace the notion that every child has a right to a quality education. It must take the steps needed to make this right a reality. Fully developing the potential of almost a third of our population, closing the achievement gaps among racial and ethnic groups, and validating the worth of every American must be understood as critical to our national interest. Although our flagship institutions may continue to welcome students from across the globe, the United States must fully educate its own people and depend more on its own citizens for its continued global leadership and national security.

Toward Improved Quality of Life

Thomas Jefferson was unequivocal: “Without health, there is no happiness. An attention to health, then, should take the place of every other object.” During the past two decades, the association of health and quality of life has been underscored by developments that not only have meant better health for individuals but also have positioned the United States on the doorstep of transformational change in notions of good health and how it is determined. It has become clear that health is not just a matter of biology and medical care, but of behavior, of environment, and of social conditions. Along with these insights have come important new opportunities and priorities.

The most basic measure of progress–vital statistics records of death rates for the nation’s prominent health threats–provides a first-order indication of the trends (see figure 1). Between 1980 and 2000, age-adjusted death rates for heart disease declined by 37 percent, cancer by 4 percent, stroke by 40 percent, and injuries by 26 percent. This is good news, and it is substantially attributable to progress in prevention.
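
For readers less familiar with the term, an “age-adjusted” rate weights each age group’s death rate by a fixed standard population, so that trends reflect changes in the rates themselves rather than the population simply growing older. The following sketch is only an illustration of that arithmetic; the age bands, rates, and standard-population weights are hypothetical and are not the vital statistics cited here.

    # Minimal sketch of direct age standardization (illustrative numbers only).
    # An age-adjusted rate is a weighted average of age-specific rates,
    # with weights taken from a fixed "standard" population.

    def age_adjusted_rate(age_specific_rates, standard_weights):
        """Both lists are aligned by age group; the weights sum to 1."""
        return sum(r * w for r, w in zip(age_specific_rates, standard_weights))

    # Hypothetical deaths per 100,000 in three broad age bands (young, middle, old)
    rates_1980 = [50.0, 300.0, 2500.0]
    rates_2000 = [40.0, 220.0, 1900.0]

    # Fixed standard-population weights (hypothetical), shared by both years
    weights = [0.40, 0.45, 0.15]

    for year, rates in (("1980", rates_1980), ("2000", rates_2000)):
        print(year, round(age_adjusted_rate(rates, weights), 1))
    # Because both years use the same weights, the comparison is not distorted
    # by shifts in the age mix of the population.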

In that same period, however, some conditions have become more prevalent, with the age-adjusted death rate from diabetes increasing by 39 percent, influenza and pneumonia by 8 percent, chronic lung disease by 49 percent, kidney disease by 21 percent, and septicemia by 88 percent. The death rate from Alzheimer’s disease rose from 1.1 deaths per 100,000 people in 1980 to 18 deaths per 100,000 people in 2000. HIV/AIDS, not even known as a condition until June 1981, leapt onto the list of the 10 leading causes of death, ranking eighth at its highest rate in 1995, and then, in the face of aggressive countermeasures, dropping by 70 percent in the past six years and falling off the list. Overall, life expectancy increased by 4 years for men and 2.5 years for women, reaching 74.1 years for men and 79.5 years for women in 2000.

As life expectancy increases and population ages, issues of quality of life have become ever more compelling. There is good news on this front as well. In 2000, the proportion of people over age 65 grew to 12.6 percent, up from 11.3 percent in 1980, and is expected to reach 20 percent by 2030. Yet, as the population ages, disability among older people is declining. Mortality dropped by about 1 percent per year among older people during the past two decades, and disability rates declined even more, by about 2 percent per year. The proportion of older people living in institutions declined to 4.2 percent in 1999 from 6.8 percent in 1982.


In part, these improvements have come about because of greater appreciation of controllable risks and more efforts to reduce those risks. Research, some of which began a half century ago, has gradually deepened our understanding of the etiologic factors underlying the most common sources of disease and disability. Studies of the relative contributions of various factors to the leading health threats suggest that the actual leading causes of death in 1990 were not heart disease and cancer, but tobacco and the combination of sedentary lifestyles and unhealthy diets. More recent assessments indicate that tobacco has now been overtaken by diet and inactivity as the leading threat to health.

Insights of the past two decades have provided a much better appreciation of the fact that conditions such as heart disease, cancer, stroke, diabetes, and even injuries are essentially the pathophysiologic endpoints of a complex interplay of an individual’s experiences in five domains of influence: genetic predisposition, social circumstance, physical environment, behavior, and medical care. The finding that 40 percent of premature mortality is attributable to behavioral factors even provides the beginning of a quantitative sense of the relative contributions of these domains (see figure 2).

But the important issue is not how much risk is contributed by each factor, because the factors do not act independently. The key dynamics determining health occur at the intersections of the various domains of influence: how they interface to shape a person’s fate. This is the particularly exciting dimension of the years ahead: learning how these factors interact with each person’s predispositions to shape his or her own health futures, and how the factors might be controlled to improve that person’s prospects.

Much has been learned about these predispositions, as progress in genetics sets the stage for broader progress in prevention. With the sequencing of the 3 billion base pairs in the human genome now complete, work to elucidate the structure and function of the genome’s 30,000 to 40,000 genes is proceeding apace. As this work unfolds, it will be possible to place genetic instructions into the experimental contexts within which they are expressed and to target interventions for groups and individuals with greater specificity. Clearer insight into the interrelationships among these factors will bring our challenges and opportunities into sharper focus. Environmental exposures and behavioral patterns may determine whether a gene is expressed. Social circumstances affect the nature and consequences of a person’s behavioral choices. Genetic predispositions affect the health care that people need, and social circumstances affect the health care they receive.

Even before we learn more about the ways things work, we have many available opportunities for further gains from prevention. Despite current knowledge about the importance of behavioral contributions to illness and injury, some 127 million adults in the United States are obese or overweight, 47 million still smoke, 17 million are diabetic, 14 million abuse alcohol, and 16 million use addictive drugs. Approximately 900,000 people are still infected with HIV, and last year some 900,000 teens became pregnant. The burden of conditions preventable by behavior change alone is substantial.

There also are many opportunities for medical care to do better in preventing disease. Nearly one in five children remains inadequately immunized against the major childhood vaccine-preventable diseases. An estimated four out of five people with high blood pressure lack adequate treatment, as do as many as a quarter of those with elevated cholesterol levels. Approximately 30 percent of women do not receive mammograms, and 20 percent do not receive cervical screening. Fewer than 40 percent of adults who are 50 and older have been screened for colorectal cancer in the past two years. Fewer than 20 percent of smokers receive cessation counseling or treatment. With health care commanding 15 percent of the nation’s gross domestic product, the medical care system misses very few opportunities to treat disease after it occurs, yet daily misses countless opportunities for prevention.

National health agenda

When it comes to prevention, medical care should have lots of allies lining up to play a role. People who exercise have lower rates of premature death, chronic disease, stress-related disorders, and instability during old age. Dietary interventions can reduce obesity, as well as heart attacks, stroke, diabetes, and certain cancers. Yet fewer than a quarter of employer-sponsored health plans offer any kind of smoking-cessation service, and fewer than a third offer counseling for nutrition or physical activity.

To accelerate progress through stronger focus, Healthy People 2010, a national health agenda prepared by scientists within and outside the federal government, presented a set of leading health indicators representing the conditions with the greatest potential to shape the nation’s health over the decade ahead.


For each health indicator, the challenge is considerable, but so is the possibility of progress, given today’s knowledge base. If the possible is captured, then the nation should anticipate a healthy 21st century, in which:

  • The right start is given to every child, with nurturing relationships that can help protect each one from harm.
  • The opportunity for lifelong vitality is available to every individual and derives from the power of health-promoting behaviors.
  • The environment is safe, nurturing, supportive of healthy lifestyles, and free of unusual hazards.
  • No addicted person goes untreated.
  • Medical care is of high quality, and no person goes without the prevention and treatment services that have been proved effective for his or her needs.
  • We live in a caring society, one in which those who are isolated or estranged will not suffer illness or injury as a result of society’s neglect.
  • We have the comfort and choices necessary for a humane conclusion to our lives.

The United States is at the threshold of a period of unique opportunity, a time in which, as the saying goes, people will be able to focus not just on adding years to their lives, but on adding life to their years. Given the knowledge and resources already in hand, the elements of this vision are all possible–if all segments of society can marshal the will. This is the mandate for the nation’s prevention agenda.

Viral Traffic on the Move

We now know a great deal about the factors that allow novel infections to originate and spread. Major outbreaks during the past decade, including those of hantavirus pulmonary syndrome, Ebola, hemolytic uremic syndrome, West Nile, and (currently) severe acute respiratory syndrome (SARS), all followed the same pattern.

In developing the conceptual framework of emerging infections, including the idea of viral or microbial traffic, I hoped to help us better prepare for unexpected threats: infections whose very existence, as in the case of HIV/AIDS, may be unknown before they erupt in the human population. The emphasis has been on identifying key commonalities among the various diseases, on understanding their ecological roots (many are infections of other species that are given opportunities to be introduced into the human population), and on developing generic public health capabilities, such as enhanced surveillance and response, to deal with the common factors. Although the initial focus was on viruses, it was clear that the same principles applied to infections caused by other types of microbes as well.

Since the late 1980s, there has been a welcome flurry of activity. In 1989, John LaMontagne and I organized the National Institutes of Health (NIH) conference on emerging viruses. In the 1990s, the Institute of Medicine of the National Academies convened a Committee on Emerging Microbial Threats to Health, which issued its report in 1992. The Centers for Disease Control and Prevention (CDC) developed a strategic plan for emerging infectious diseases; launched a new journal, Emerging Infectious Diseases; and began holding regular meetings. The Program for Monitoring Emerging Diseases (ProMED), which was founded in 1993, was aimed at developing effective plans for global infectious disease surveillance and making highly fragmented surveillance activities more seamless and coordinated. One important spinoff was ProMED-mail, an open email listserv started in 1994 for reporting outbreaks and discussing emerging diseases. ProMED-mail (now run by the International Society for Infectious Diseases) currently serves about 30,000 subscribers worldwide. Also in the 1990s, the World Health Organization (WHO) began to develop new programs to enhance global surveillance.

All of these developments have led to better detection of outbreaks and to a clearer recognition of the remaining problems. But much work still needs to be done.

Public health surveillance remains the key to recognition and rapid response. An effective early warning system requires three elements on the front lines: clinical recognition, epidemiologic investigation, and laboratory diagnosis. A current challenge is to better integrate these components, because the system remains fragmented and responsibilities are often broadly diffused. Moreover, each of the components could be greatly improved.

Early warning begins with clinical recognition: There has to be a way to identify patients with disease syndromes. This usually involves the proverbial astute clinician who notices something unusual and reports it. One way to improve the odds of detection is to ask clinicians to look for and report certain types of syndromes, such as flu-like illnesses. As the SARS outbreak demonstrates, however, communication is key. The alert must be reported swiftly and effectively, and it must set in motion a timely response. In the case of SARS, it appears that some doctors in south China and Hong Kong knew they were witnessing unusual outbreaks but did not warn others. If they had, SARS might have been contained early on. Although public communication has generally improved, it remains among the weakest links in an outbreak. Another need is to develop standards to ensure that data systems can share information.
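
To make syndrome-based reporting slightly more concrete, the sketch below flags any day on which reported counts of a flu-like syndrome rise well above a recent baseline. The daily counts, the seven-day baseline window, and the two-standard-deviation threshold are hypothetical choices made only for illustration; they do not describe any agency’s actual surveillance system.

    # Toy aberration-detection sketch for syndromic surveillance (illustrative).
    # Flag any day whose reported count exceeds the recent baseline mean
    # by more than two standard deviations.
    from statistics import mean, pstdev

    def flag_unusual(counts, window=7, threshold_sd=2.0):
        """Return indices of days that look unusually high versus the
        preceding `window` days; `counts` is a list of daily case reports."""
        alerts = []
        for i in range(window, len(counts)):
            baseline = counts[i - window:i]
            mu, sigma = mean(baseline), pstdev(baseline)
            if counts[i] > mu + threshold_sd * max(sigma, 1.0):  # floor avoids zero-variance alerts
                alerts.append(i)
        return alerts

    # Hypothetical daily reports of flu-like illness from participating clinics
    daily_reports = [12, 9, 11, 10, 13, 12, 11, 10, 12, 30, 34, 29]
    print(flag_unusual(daily_reports))  # prints [9, 10]: the sudden jump stands out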

Next comes epidemiologic investigation, the medical detective work that helps identify the source of infection and the mode of transmission. Most of this work involves tracking down cases and interviewing patients. Even one case of a highly unusual disease, such as inhalation anthrax or Ebola, should be sufficient to trigger an aggressive investigation. But adequate resources often are not available.

Laboratory diagnosis is the third component of the triad. Because many diseases of concern are zoonotic (transmissible from animals to humans), diagnosis can draw on the resources and expertise of the veterinary community. Developing technologies for molecular diagnosis will increasingly make it possible to identify even previously unknown pathogens. Nevertheless, laboratory capacity still needs much improvement.

Bioterrorism and public health

One of the biggest changes since 1990 is the degree to which bioterrorism has become a public health priority. Although there had long been concern about vulnerability to biowarfare and bioterrorism, the anthrax episode in the fall of 2001 made it clear that the concern is no longer theoretical. Until very recently, the important role of public health at the frontlines of bioterrorism preparedness was unrecognized. This was so even though the CDC’s Epidemic Intelligence Service, home of the famed “Disease Detectives” and one of the most important training programs for epidemiologic surveillance and investigation, had been started by Alexander Langmuir in 1950 specifically to develop defenses against bioterrorist attack. Although concern about emerging infections has helped stimulate funding for the chronically underappreciated public health system, the threat of bioterrorism motivated the first real infusion of new money into public health in decades.

Many of the capabilities needed to defend against bioterrorism are the same as those needed to combat natural emerging infections. In both instances, the problem is an unexpected outbreak of infectious disease, of which the first indication is likely to be sick people in emergency rooms or clinics. Indeed, as with the anthrax attacks, the public health and medical responses may be under way before the true nature of the outbreak is recognized. Public health and the interface with the health care system are therefore key elements in any effective response to bioterrorism.

Whether the biggest threat is natural or engineered, much remains to be done. Efforts to strengthen surveillance and response worldwide and to improve communication must be accelerated and sustained. Further, we have only scratched the surface in terms of understanding the ecology of infectious diseases and developing strategies for regulating microbial traffic. We need tools for better predictive epidemiologic modeling when a new infection first appears and for better analysis of the factors that transfer pathogens across species. One encouraging development is the program in the ecology of infectious diseases that was started a few years ago by the National Science Foundation in cooperation with NIH. The creation of microbial impact assessments, which I proposed in my 1990 article, is now even more feasible because of new technologies such as polymerase chain reaction.
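
To give a sense of what even the simplest predictive model looks like when a new infection appears, here is a minimal sketch of the classic SIR (susceptible-infectious-recovered) model. The transmission rate, recovery rate, and population size are hypothetical placeholders, not estimates for SARS or any other pathogen.

    # Minimal SIR epidemic sketch (illustrative parameters only).
    # S, I, R track susceptible, infectious, and recovered individuals;
    # beta is the transmission rate and gamma the recovery rate per day.

    def simulate_sir(population, initial_infected, beta, gamma, days):
        s, i, r = population - initial_infected, float(initial_infected), 0.0
        history = []
        for _ in range(days):
            new_infections = beta * s * i / population
            new_recoveries = gamma * i
            s -= new_infections
            i += new_infections - new_recoveries
            r += new_recoveries
            history.append((s, i, r))
        return history

    # Hypothetical scenario: 1,000,000 people, 10 initial cases,
    # basic reproduction number R0 = beta / gamma = 2.5
    trajectory = simulate_sir(1_000_000, 10, beta=0.5, gamma=0.2, days=120)
    peak_day, peak = max(enumerate(t[1] for t in trajectory), key=lambda x: x[1])
    print(f"peak of roughly {peak:,.0f} infectious people around day {peak_day}")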

SARS is a good yardstick of our progress during the past 13 years. The syndrome was unusual because novel infections that spread from person to person are relatively rare. Once cases were finally reported, the public health response was vigorous. WHO warned health care providers, researchers rapidly identified a candidate virus, and prototype diagnostic tests quickly became available. The vast reach of the Internet was instrumental in sharing information and coordinating activities worldwide. Despite these advances, SARS had already spread to many countries. In fact, had the disease been as transmissible as influenza, it would have invaded virtually every country in the world by the time the public health response had begun. So what SARS tells us is that although we have come a long way since 1990, we still have a long way to go.

U.S. Computer Insecurity Redux

The United States continues to face serious challenges in protecting computer systems and communications from unauthorized use and manipulation. In terms of computer security, the situation is worse than ever, because of the nation’s dramatically increased dependence on computers, the widespread growth of the Internet, the steady creation of pervasively popular applications, and increased dependence on the integrity of others.

There is a seemingly never-ending stream of old and new security flaws, as well as markedly increased security threats and risks, such as viruses, Trojan horses, penetrations, insider misuse, identity theft, and fraud. Today’s systems, applications, and networking tend to largely ignore security concerns–including such issues as integrity, confidentiality, availability, authentication, authorization, accountability, and the spread of malicious code and e-mail spam–and would-be attackers and misusers have significantly wider knowledge and experience. Moreover, there is a general naiveté whereby many people seem to believe that technology is the answer to all security questions, irrespective of what the questions are.

In addition to security concerns, there are serious problems relating to system dependability in the face of a wide range of adversities. Such adversities include not only misuse but also hardware malfunctions, software flaws, power disruptions, environmental hazards, so-called “acts of God,” and human errors. The nation seems to have evolved into having a rather blind faith in technologies that often are misunderstood or misapplied, and into placing trust in systems and the people involved with them, even though they have not proven entirely worthy of that trust.

Solutions–but few takers

The irony is that many solutions to these problems are already available or can be developed in fairly short order, but they are not receiving the attention they deserve.

Indeed, one of the most striking factors relating to computer security and system dependability involves the widening gap between what has been done in the research community and what is practiced in the commercial proprietary software marketplace. Over the past four decades, there have been some major research advances (as well as some major paradigm shifts) toward achieving dependably trustworthy systems. These advances include novel distributed system architectures; software engineering techniques for development and analysis; stronger security and reliability; and the use of cryptography for authentication, integrity, and secrecy. But relatively few of those advances have found their way into the mass-market commercial mainstream, which has been primarily concerned with remarkable advances in hardware speed and capacity and with new whiz-bang software features. The resulting lowest-common-denominator software is often risky to use in critical applications, whether critical in terms of lives, missions, or financial operations.
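
As one small example of the kind of research-derived mechanism that is readily available yet underused, the sketch below uses a keyed hash (HMAC, from the Python standard library) to detect whether a message has been altered. The shared key, the messages, and the verification flow are illustrative; a real deployment would also require sound key management, which this fragment does not address.

    # Minimal sketch of message integrity and authentication with HMAC-SHA256.
    # Both parties share a secret key; a tampered message fails verification.
    import hashlib
    import hmac

    SHARED_KEY = b"example-secret-key"  # illustrative only; real keys need proper management

    def sign(message: bytes) -> str:
        """Compute a keyed digest over the message."""
        return hmac.new(SHARED_KEY, message, hashlib.sha256).hexdigest()

    def verify(message: bytes, tag: str) -> bool:
        """Constant-time comparison guards against timing attacks."""
        return hmac.compare_digest(sign(message), tag)

    original = b"transfer 100 units to account 42"
    tag = sign(original)

    print(verify(original, tag))                              # True: message unchanged
    print(verify(b"transfer 9999 units to account 42", tag))  # False: alteration detected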

Another notable factor is that many of the system development problems that were recognized 40 years ago are still current. For example, software developments are typically late, over budget, and unsatisfactory in their compliance with requirements, which themselves are often ill-stated and incomplete. Certain historically ubiquitous types of software flaws remain commonplace, such as those that permit the execution of arbitrarily nasty code on the computers of unsuspecting victims or that cause systems to crash. Many of these problems can be easily avoided by the consistent use of hardware protections and carefully designed system interfaces, programming languages that are inherently less error-prone, and good software engineering practices. Having more precise requirements in the first place also would help. However, most of the nation’s academic institutions now employ curricula that pay insufficient attention to disciplined system development and to system architectures that encompass the necessary attributes of dependability.

One of the hopes for the future involves what I call “open-box software”: a category that includes what other people have called “open-source software” or “free software,” where “free” is interpreted not in terms of cost, but rather in terms of use and reuse. This concept stands in contradistinction to conventional closed-box software, which typically is proprietary. In essence, open-box software implies that it is possible to examine the source code, to modify it as deemed necessary, to use the code, and to incorporate it into larger systems. In some cases, certain constraints may be constructively imposed.

Open-box software is not a panacea when it comes to developing and operating dependable systems. (For example, the need to ensure disciplined system development is equally relevant to open-box systems.) Experience shows, however, that a community of developers of open-box software can greatly enhance interoperability and long-term evolvability, as well as security and reliability. If nothing else, the potential of open-box software is putting competitive pressure on the proprietary software marketplace to do much better. But the potential is much greater than that. For example, the Defense Advanced Research Projects Agency’s CHATS program (the acronym stands for Composable High Assurance Trusted Systems, although “Trustworthy” would be far preferable to “Trusted”) and other efforts, such as SELinux, have provided significantly greater security, reliability, and dependability in several popular open-box systems.

Multidimensional effort needed

Looking ahead, one of the major challenges will be developing systems that are self-diagnosing, self-repairing, self-reconfiguring, and generally much more self-maintaining. Critical applications, in particular, will require system technologies far in excess of what is generally available in the commercial marketplace today. Once again, although the research community has a plethora of approaches to that end, the commercial marketplace has not been adequately responsive, with the notable exception of IBM, which seems to be devoting considerable effort to autonomic computing.

A particular class of systems in which these problems come to the fore is represented by voting machines that are completely electronic, such as those that use “touch-screen” systems. Ideally, these machines should satisfy stringent requirements for reliability, accuracy, system integrity, tamper resistance, privacy, and resistance to denial-of-service attacks, to list but a few. (They also should have all the other more general traits of security noted above.) In practice, current Federal Election Commission standards and requirements are fundamentally incomplete. Furthermore, the most widely used all-electronic systems provide essentially no genuine assurances that votes are correctly recorded and counted; instead, they provide various opportunities for undetected accidents and insider fraud. They also masquerade behind the cloak of closed-source proprietary code. On the other hand, even extensive certification (against incomplete standards) and code review cannot defend against undetected accidents and fraud in these systems. This is an intolerable situation.

Digging a way out of today’s security and dependability morass will require a truly multidimensional effort. One important step will be to improve undergraduate and graduate education. There also needs to be much greater technological awareness of security and privacy issues on the part of researchers, development managers, system designers and programmers, system administrators, government funding agencies, procurement officers, system evaluators, corporate leaders, legislators, law enforcement communities, and even users.

There are no easy fixes, and responsibility is widely distributed. Much greater vision is necessary to recognize and accommodate long-term needs, rather than to merely respond to short-term problems. The risks of not finding solutions to these growing problems also must be clearly recognized. But the importance of protecting the nation’s critical computer and communications infrastructures is so great that these issues must be addressed with a much greater sense of urgency.

Nuclear Proliferation Risks, New and Old

During the past decade, the United States and Russia have joined in a number of efforts to reduce the danger posed by the enormous quantity of weapons-usable material withdrawn from nuclear weapons. Other countries and various private groups have assisted in this task. But many impediments have prevented effective results, and most of the dangers still remain. Even more troubling, this threat is only one of several risks imposed on humanity by the existence of nuclear weapons.

These risks fall into three classes: the risk that some fraction, be it large or small, of the inventories of nuclear weapons held by eight countries will be detonated either by accident or deliberately; the risk that nuclear weapons technology will diffuse to additional nations; and the risk that nuclear weapons will reach the hands of terrorist individuals or groups.

The United States has undertaken diverse programs to reduce these risks. But efforts have been slow and irregular, and the priorities in addressing these problems have been distorted by politics.

Indeed, success in containing these risks would fly in the face of historical precedent. All new technologies have become dual-use, in that they have been used both to improve the human condition and as tools in military conflict. Moreover, all new technologies have, in time, spread around the globe. But this precedent must be broken with respect to the release of nuclear technology.

Risk is the product of the likelihood of an adverse event and the consequences of that event. Since the end of the Cold War, the likelihood that one or another country would deliberately use nuclear weapons has indeed lessened, although the consequences of such use would be enormous. Therefore, this risk has by no means disappeared. In particular, nuclear weapons might be used in a regional conflict, such as between India and Pakistan.
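
Stated symbolically (an illustrative formalization of the preceding sentence, not the author’s own notation):

    \text{Risk} = P(\text{event}) \times C(\text{event})

Even when post-Cold War developments shrink the probability term, the catastrophic size of the consequence term keeps the product far from zero, which is the sense in which the risk has by no means disappeared.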

The risk of proliferation of nuclear weapons among countries has been limited in the past by the Nuclear Non-Proliferation Treaty (NPT), signed in 1968. The treaty recognizes five countries as “Nuclear Weapons States,” and three other countries not party to the treaty are de facto possessors of nuclear weapons. All other nations of the world have joined the treaty as “Non-Nuclear Weapons States,” but one country (North Korea) has withdrawn. Some countries–presumed to include Iran and, until the ouster of Saddam Hussein, Iraq–maintain ambitions to gain nuclear weapons. A much larger number of countries have pursued nuclear weapons programs in the past but have been persuaded to abandon them.

The NPT is a complex bargain that discriminates between have and have-not countries. The have-not nations have agreed not to receive nuclear weapons, their components, or relevant information, whereas the Nuclear Weapons States have agreed not to furnish these items. In order to decrease the discriminatory nature of the agreement, the nations possessing nuclear weapons are obligated to assist other nations in the peaceful applications of nuclear energy. And, most important of all, the Nuclear Weapons States have agreed to reduce the role of nuclear weapons in international relations and to work in good faith toward their elimination. It is in respect to this latter obligation that the United States has been most deficient. In fact, the current Bush administration’s recent Nuclear Posture Review projects an indefinite need for many thousands of nuclear weapons, and even searches for new missions for them.

The risk posed by the possible acquisition of nuclear weapons by terrorists is growing rapidly. Deterrence prevented direct military conflict between the United States and the former Soviet Union for many years, and deterrence retains its leverage even over the so-called “states of concern,” such as North Korea. But deterrence will not restrain terrorists driven by fanatical beliefs. Therefore, the prevention of nuclear catastrophe caused by terrorists has to rely either on interdicting the explosive materials that are essential to making nuclear weapons (highly enriched uranium and plutonium, in particular) or on preventing the hostile delivery of such weapons.

Once a terrorist group acquires nuclear weapons, preventing their detonation on U.S. soil would be extremely difficult. Such weapons might be delivered by aircraft or cruise missile, or they might be detonated on board ships near U.S. harbors. As demonstrated by the leakage of illegal drugs into the United States, closing U.S. boundaries to the entry of nuclear weapons is essentially impossible.

Against this multitude of delivery methods, the enormous effort that the United States spends on ballistic missile defense is an inexcusable distortion of priorities. Indeed, it is extremely unlikely that terrorists would ever gain access to ballistic missiles. The first line of defense against nuclear terrorism must be safeguarding the vast worldwide stockpiles of nuclear weapons-usable materials. Those inventories are sufficient to produce more than 100,000 nuclear weapons. According to public sources, small shipments containing a total of roughly 40 kilograms of smuggled nuclear explosive material have been seized worldwide between 1992 and 2002, generally originating in Russia.

Although public agencies and private groups in the United States have been working with Russia to improve “materials protection control and accounting” of its dangerous materials, actual achievements have been moderate. Several thousand Russian participants have received training in control and accounting techniques, and the nation has developed a number of federal regulations covering such activities. Still, controls over only about one-sixth of Russian nuclear explosive materials have been upgraded to standards comparable to those in the United States. And work to interdict nuclear smuggling at key Russian border crossings is only about 15 percent complete.

A deadly combination of U.S. and Russian travel and access restrictions is preventing truly effective collaboration and causing major delays. Russian security services continue to be preoccupied with preserving the secrecy of the nation’s nuclear weapons facilities, whereas the U.S. Department of Energy has been insisting on direct access by its personnel in order to ensure that U.S. funds are being properly spent in reducing risks. There also is the matter of whether Russia can achieve sustainability for its program to safeguard nuclear materials. Foreign assistance is necessary to initiate action, but sustainability has to be the responsibility of the Russians themselves.

The risks posed by nuclear weapons are perhaps the most threatening results of the interaction of science and technology with human endeavors. Can humanity successfully prevent these new technological developments from concentrating enormous destructive power in the hands of an ever-smaller number of individuals? The answer to this question remains open. The United States has the most to lose if nuclear weapons fall into the hands of terrorists, but thus far the nation has failed to take constructive leadership in attacking that risk.

Cooperation with China

In the wake of China’s crackdown on pro-democracy demonstrators in Tiananmen Square on June 4, 1989, the country’s progress on important fronts seemed to be in jeopardy. Many U.S. observers worried that China’s nascent economic reform, reliance on its scientific community, and movement toward greater intellectual openness and international cooperation had come to a halt. The 1990s, however, saw dramatic Chinese progress in science, technology, education, and economic reform. Some positive political developments occurred as well, but severe restrictions on human rights and the free exchange of ideas and information endure, in part as a result of the Tiananmen demonstrations.

The scientific realm continues to be hampered by ambiguous and opaque regulations concerning the sharing of information. As a result, Chinese researchers, particularly in the social sciences, shy away from certain research topics or international collaborations, and intellectual exchange suffers. The initial reluctance of physicians and officials to share what they knew about severe acute respiratory syndrome clearly illustrates the liabilities of China’s restrictive system.

The growing power of both China and the United States has raised the stakes of cooperation and poses new challenges in managing the relationship. In the aftermath of the Soviet Union’s collapse, the United States enjoys greater freedom to pursue its international objectives, which will not always coincide with China’s interests. China’s growing economic prowess, coupled with its still disappointing performance in human rights and political openness, makes some observers wary of U.S. cooperation. They fear that the result will only be to strengthen a formidable economic competitor and political adversary. I believe that the benefits of cooperation outweigh the risks. At the same time, though, I see a need to increase our efforts to build greater trust and communication as a foundation for future cooperation.

Scientific cooperation is no longer simply an element of our policy to create a fabric of relations with China. Its importance has grown in several ways. First, China’s capabilities have reached a level where the scientific payoffs of cooperation can benefit not only China but also the rest of the world. Second, cooperation is key to building China’s capacity for technological innovation. Although this will strengthen an emerging competitor, it must be understood that if a country of China’s magnitude fails to become a thriving player in the global, knowledge-based economy–and soon–the economic, political, and human consequences for the entire world could be disastrous. Finally, thanks to their prestige, China’s scientists and engineers are powerful agents of change; international cooperation strengthens their ability to encourage intellectual freedom and the exchange of knowledge.

In strengthening our scientific ties with China, it is important to realize that more is at stake than scientific knowledge. Cooperation can have a broad impact on our mutual understanding. In its quest for integration into the global economic system and the global scientific enterprise, China is open to acquiring a deeper understanding of the United States and other systems. There may also be lessons for us to learn from China’s vigorous experiments to improve its capacity for technological innovation. Cooperation in science increases our knowledge of each other’s systems; conversely, a better appreciation of our respective values can help us identify and remove obstacles to productive cooperation.

Many topics lend themselves to fruitful exchanges, including the treatment of intellectual property, approaches to human subjects and genetic research, attracting precollege students into scientific careers, and popularizing science. The joint exploration of subjects such as research financing, access to and dissemination of scientific information, and the interaction of the scientific community with policymakers can lead to broader questions of political processes and cultural norms. An example of what might be done on a broader scale is sustained policy dialogues. Since 1999, the U.S. National Science Foundation (NSF) and its Chinese equivalent have sponsored discussions between Chinese and U.S. scientists and policymakers as a complement to the agencies’ support of research collaborations. The time is also right to encourage joint in-depth comparative policy studies with China’s emerging community of policy researchers.

Although cooperation is most easily negotiated and managed bilaterally, a bilateral partnership is also more vulnerable to misunderstanding and mistrust. Given the unprecedented global power of the United States and the growing strength of China, it is more important than ever to ensure the stability of our scientific partnership. We should therefore complement our bilateral arrangements with an equivalent portfolio of multilateral partnerships. The goal of encouraging Chinese political and cultural change through scientific cooperation is more likely to be reached by helping China become more comfortable with multinational norms and standards than by applying pressure unilaterally.

China is already a constructive member of international organizations ranging from the International Council for Science to UNESCO. It also participates in ad hoc structures that engage multilaterally in research (the international rice genome project, for example) and scientific advice (the Inter-Academy Panel), and helps support various NATO-like advanced study institutes with NSF and other partners in the Asia-Pacific region. More such ad hoc partnerships, large or small, would be beneficial.

Another payoff of closer scientific ties is that they will allow both countries to capitalize on the potential of U.S.-trained Chinese scientists and engineers. The flow of Chinese students and scholars to the United States during the past 20 years has benefited both countries. The United States has gained a critical influx of talent and, to the extent that the researchers return home, China has received an injection of scientists and engineers who are not only trained at the frontiers of knowledge but familiar with the world’s most productive system of research and technological innovation. Both countries have a stake in the continuation of this process.

China is meeting its rapidly growing need for scientific and technical workers in part by aggressively expanding its educational system. Incentive programs and the growth of technology-based joint ventures have attracted home about a third of those trained overseas, but it is unlikely that these efforts will be sufficient. It will take time for China to become more attractive to its foreign-trained scientists and engineers who have tasted the professional and personal rewards of a competitive and open society.

In the meantime, however, foreign-trained workers may be able to contribute to China’s scientific enterprise part time or intermittently as transnational researchers. Such arrangements can benefit all parties: The individual contributes to China’s development while continuing to enjoy the advantages of remaining within our system; China has access to researchers whose value is higher because they are still connected to the U.S. enterprise; and the United States retains U.S.-trained talent, at least for part of the time.

We should encourage this emerging pattern of trans-Pacific mobility. U.S.-trained Chinese scientists and engineers function effectively in both cultures. Hence they are in a unique position to build mutual understanding of our respective systems; raise the level of trust that underlies cooperation; enhance cross-fertilization between our two large scientific communities; and, most critically, accelerate change across a broad swath of Chinese politics and culture.

Of course, capitalizing on this opportunity is not without obstacles. China must continue its support for the international mobility of its scientists and engineers. More important, it needs to further depoliticize its research and education environment to suit those who have lived in an open society. On the U.S. side, national and homeland security considerations have made transborder mobility more complex. As security procedures are applied, we must ensure that they take into consideration all aspects of our national interest, including the considerable benefits of scientific cooperation with China.

Restructuring the U.S. Health Care System

The past two decades have seen major economic changes in the health care system in the United States, but no solution has been found for the basic problem of cost control. Per-capita medical expenditures increased at an inflation-corrected rate of about 5 to 7 percent per year during most of this period, with health care costs consuming an ever-growing fraction of the gross national product. The rate of increase slowed a little for several years during the 1990s, with the spread of managed care programs. But the rate is now increasing more rapidly than ever, and control of medical costs has reemerged as a major national imperative. Failure to solve this problem has resulted in most of the other critical defects in the health care system.
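To put those growth rates in perspective, a simple compounding check (the 6 percent figure is merely the midpoint of the range cited above) shows how quickly such increases accumulate:

\[ t_{\text{double}} = \frac{\ln 2}{\ln(1+r)} \approx \frac{0.693}{\ln 1.06} \approx 12 \text{ years}, \]

so real per-capita spending roughly doubles every 10 to 14 years at growth rates of 5 to 7 percent, which is why health care claims an ever-larger share of national output.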

Half of all medical expenditures occur in the private sector, where employment-based health insurance provides at least partial coverage for most (but by no means all) people under age 65. Until the mid-1980s, most private insurance was of the indemnity type, in which the insurer simply paid the customary bills of hospitals and physicians. This coverage was offered by employers as a tax-free fringe benefit to employees (who might be required to contribute 10 to 20 percent of the cost as a copayment), and was tax-deductible for employers as a business cost. But the economic burden and unpredictability of ever-increasing premiums caused employers ultimately to abandon indemnity insurance for most of their workers. Companies increasingly turned to managed care plans, which contracted with employers to provide a given package of health care benefits at a negotiated and prearranged premium in a price-competitive market.

Managed care has failed in its promise to prevent sustained escalation in costs.

When the Clinton administration took office in 1993, one of its first initiatives was an ambitious proposal to introduce federally regulated competition among managed care plans. The objective was to control premium prices while ensuring that the public had universal coverage, received quality care, and could choose freely among care providers. It was hoped that all kinds of managed care plans, including the older not-for-profit plans as well as the more recent plans offered by investor-owned companies, would be attracted to the market and would want to compete for patients on a playing field kept level by government regulations.

But this initiative was sidetracked before even coming to a congressional vote. There was strong opposition from the private insurance industry, which saw huge profit-making opportunities in an unregulated managed care market but not under the Clinton plan. Moreover, the proposed plan’s complexity and heavy dependence on government regulation frightened many people–including the leaders of the American Medical Association–into believing it was “socialized medicine.”

The failure of this initiative delivered private health insurance into the hands of a new and aggressive industry that made enormous profits by keeping the lid on premiums while greatly reducing its expenditures on medical services–and keeping the difference as net income. This industry referred to its expenditures on care as “medical losses,” a term that speaks volumes about the basic conflict between the health interests of patients and the financial interests of the investor-owned companies. But, in fact, there was an enormous amount of fat in the services that had been provided through traditional insurance, so these new managed care insurance businesses could easily spin gold for their investors, executives, and owners by eliminating many costs. They did this in many different ways, including denial of payment for hospitalizations and physicians’ services deemed not medically essential by the insurer. The plans also forced price discounts from hospitals and physicians and made contracts that discouraged primary care physicians from spending much time with patients, ordering expensive tests, or referring patients to specialists. These tactics were temporarily successful in controlling expenditures in the private sector. Fueled by the great profits they made, managed care companies expanded rapidly. The industry then consolidated into a relatively few giant corporations that enjoyed great favor on Wall Street, and quickly came to exercise substantial influence over the political economy of U.S. health care.

The other half of medical expenditures is publicly funded, and this sector was not even temporarily successful in restraining costs. The government’s initial step was to adopt a method of reimbursing hospitals based on diagnostic related groupings (DRGs). Rather than paying fees for each hospital day and for individual procedures, the government would pay a set amount for treating a patient with a given diagnosis. Hospitals were thus given powerful incentives to shorten stays and to cut corners in the use of resources for inpatient care. At the same time, they encouraged physicians to conduct many diagnostic and therapeutic procedures in ambulatory facilities that were exempt from DRG-based restrictions on reimbursement.

Meanwhile, the temporary success of private managed care insurance in holding down premiums–along with its much-touted (but never proven) claims of higher quality of care–suggested to many politicians that government could solve its health care cost problems by turning over much of the public system to private enterprise. Therefore, states began to contract out to private managed care plans a major part of the services provided under Medicaid to low-income people. The federal government, for political reasons, could not so cavalierly outsource care provided to the elderly under Medicare, but did begin to encourage those over 65 to join government-subsidized private plans in lieu of receiving Medicare benefits. For a time, up to 15 percent of Medicare beneficiaries chose to do so, mainly because the plans promised coverage for outpatient prescription drugs, which Medicare did not provide.

What about attempts to contain the rapidly rising physicians’ bills for the great majority of Medicare beneficiaries who chose to remain in the traditional fee-for-service system? The government first considered paying doctors through a DRG-style system similar to that used for hospitals, but this idea was never implemented; and in 1990, a standardized fee schedule replaced the old “usual and customary” fees. Physicians found a way to maintain their incomes, however, by disaggregating (and thereby multiplying) billable services and by increasing the number of visits; and Medicare’s payments for medical services continued to rise.

Cost-control efforts by for-profit managed care plans and by government have diminished the professional role of physicians as defenders of their patients’ interests. Physicians have become more entrepreneurial and have entered into many different kinds of business arrangements with hospitals and outpatient facilities, in an effort not only to sustain their income but also to preserve their autonomy as professionals. Doctor-owned imaging centers, kidney dialysis units, and ambulatory surgery centers have proliferated. Physicians have acquired financial interests in the medical goods and services they use and prescribe. They have installed expensive new equipment in their offices that generates more billing and more income. And, in a recent trend, groups of physicians have been investing in hospitals that specialize in cardiac, orthopedic, or other kinds of specialty care, thus competing with community-based general hospitals for the most profitable patients. Of course, all of these self-serving reactions to the cost-controlling efforts of insurers are justified by physicians as a way to protect the quality of medical care. Nevertheless, they increase the costs of health care, and they raise serious questions about financial influences on professional decisions.

In the private sector, managed care has failed in its promise to prevent sustained escalation in costs. Once all the excess was squeezed out, further cuts could only be achieved by cutting essentials. Meanwhile, new and more expensive technology continues to come on line, inexorably pushing up medical expenditures. Employers are once again facing a disastrous inflation in costs that they clearly cannot and will not accept, and they are cutting back on covered benefits and shifting more costs to employees. Moreover, there has been a major public backlash against the restrictions imposed by managed care, forcing many state governments to pass laws that prevent private insurers from limiting the health care choices of patients and the medical decisions of physicians. The courts also have begun to side with complaints that managed care plans are usurping the prerogatives of physicians and harming patients.

In the public sector, a large fraction of those Medicare beneficiaries who chose to shift to managed care are now back with their standard coverage, either because they were dissatisfied and chose to leave their plans or because plans have terminated their government contracts for lack of profit. The unchecked rise in expenditures on the Medicaid and Medicare programs is causing government to cut back on benefits to patients and on payments to physicians and hospitals. Increased unemployment has reduced the numbers of those covered by job-related insurance and thus has expanded the ranks of the uninsured, which now total more than 41 million people. Reduced payments have caused many physicians to refuse to accept Medicaid patients. Some doctors are even reconsidering whether to continue accepting new elderly patients who do not have private Medigap insurance to supplement their Medicare coverage.

Major changes needed

What will the future bring? The present state of affairs cannot continue much longer. The health care system is imploding, and proposals for its rescue will be an important part of the national political debate in the upcoming election year. Most voters want a system that is affordable and yet provides good-quality care for everyone. Some people believe that modest, piecemeal improvements in the existing health care structure can do the job, but that seems unlikely. Major widespread changes will be needed.

Those people who think of health care as primarily an economic commodity, and of the health care system as simply another industry, are inclined to believe in market-based solutions. They suggest that more business competition in the insuring and delivering of medical care, and more consumer involvement in sharing costs and making health care choices, will rein in expenditures and improve the quality of care. However, they also believe that additional government expenditures will be required to cover the poor.

Those people who do not think that market forces can or should control the health care system usually advocate a different kind of reform. They favor a consolidated and universal not-for-profit insurance system. Some believe in funding this system entirely through taxes and others through a combination of taxes and employer and individual contributions. But the essential feature of this idea is that almost all payments should go directly to health care providers rather than to the middlemen and satellite businesses that now live off the health care dollar.

A consolidated insurance system of this kind–sometimes called a single-payer system–could eliminate many of the problems in today’s hodgepodge of a system. However, sustained cost control and the realignment of incentives for physicians with the best interests of their patients will require still further reform in the organization of medical care. Fee-for-service private practice, as well as regulation of physician practices by managed care businesses, will need to be largely replaced by a system in which multispecialty not-for-profit groups of salaried physicians accept risk-free prepayment from the central insurer for the delivery of a defined benefit package of comprehensive care.

Such reform, seemingly utopian now, may eventually gain wide support as the failure of market-based health care services to meet the public’s need becomes increasingly evident, and as the ethical values of the medical profession continue to erode in the rising tide of commercialism.

Maglev Ready for Prime Time

“Putting Maglev on Track” (Issues, Spring 1990) observed that growing airline traffic and associated delays were already significant and predicted that they would worsen. The article argued that a 300-mile-per-hour (mph) magnetic levitation (maglev) system integrated into airport and airline operations could be a part of the solution. Maglev was not ready for prime time in 1990, but it is now.

As frequent travelers know, air traffic delays have gotten worse, because the airport capacity problem has not been solved. As noted in the Federal Aviation Administration’s (FAA’s) 2001 Airport Capacity Enhancement Plan: “In recent years growth in air passenger traffic has outpaced growth in aviation system capacity. As a result, the effects of adverse weather or other disruptions to flight schedules are more substantial than in years past. From 1995 to 2000, operations increased by 11 percent, enplanements by 18 percent, and delays by 90 percent.” With the heightened security that followed the September 11, 2001, terrorist attacks, ground delays have exacerbated the problem. The obvious way to reduce delays is to expand airport capacity, but expansion has encountered determined public opposition and daunting costs. The time is right to take a fresh look at maglev.

If fully exploited, maglev will provide speed, frequency, and reliability unlike any extant transportation mode.

High-speed trains that travel faster than 150 mph have demonstrated their appeal in Europe and Asia. Although Amtrak has had some success with trains that go as fast as 125 mph on the Washington, D.C., to New York line, the United States has yet to build a true high-speed rail line. But interest is growing among transportation planners. Roughly half the states are currently developing plans for regional high-speed rail corridors. Pending congressional legislation would authorize $10 billion in bonds over 10 years to finance high-speed rail projects in partnerships with the states. However, given severe funding limitations, most of these projects are likely to pursue only incremental improvements to existing rail lines. Experience in Europe and Japan suggests that higher speeds are needed to lure passengers from planes and to attract new travelers.

Even though–or perhaps because–the Europeans and Japanese already have high-speed rail lines, they have been aggressively developing maglev systems. The Japanese built a new 12-mile maglev test track just west of Tokyo and achieved a maximum speed of 350 mph. They plan to extend the test track and make it part of a commercial line between Tokyo and Osaka. The German government approved the Transrapid System of maglev technology for development in the early 1990s and has been actively marketing the system for export. It recently announced funding of $2 billion to build a 50-mile route between Dusseldorf and Dortmund and a 20-mile connector linking Munich to its airport. Meanwhile, the Swiss have been developing a new approach for their Swiss Metro System, involving high-speed maglev vehicles moving in partially evacuated tunnels. China is building a maglev system to connect Shanghai with Pudong International Airport. This system should be in demonstration operation in 2003 and in revenue operation early in 2004.

The United States has also exhibited interest, but its progress has been slower. In 1990, the United States launched a multiagency National Maglev Initiative that began with a feasibility analysis and was eventually to evolve into a development program. Although the initial analysis was promising, the effort was terminated in 1993 before any significant hardware development began. After a five-year hiatus, Congress passed the Transportation Equity Act for the 21st Century, which included a program to demonstrate a 40-mile maglev line that could later be lengthened. Selection of a test site will be announced soon.

Maglev makes the most economic sense where there is already strong demand and where the cost of meeting this demand through expansion of existing infrastructure is expensive. Airports offer an appealing target. Current capital improvement projects at 20 major airports have a combined cost of $85 billion, enough to build 2,460 miles of maglev guideway at $35 million per double-track mile. This would be sufficient to connect maglev lines to airports in several parts of the country.
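As a rough check on the arithmetic behind that comparison (both figures are the ones cited above):

\[ \frac{\$85 \text{ billion}}{\$35 \text{ million per double-track mile}} \approx 2{,}400 \text{ miles of guideway}, \]

which is in line with the roughly 2,460 miles cited, the small difference presumably reflecting rounding in the underlying cost estimates.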

Maglev must also be compared with conventional high-speed rail. Maglev and high-speed rail costs are roughly equivalent for elevated guideways, the type of system most likely to be built. The added technology cost of maglev systems tends to be balanced by the fact that maglev vehicles weigh about one-half to one-third as much per seat as high-speed passenger trains, resulting in competitive construction costs. And because there is no physical contact between the train and the guideway in a maglev system, operation and maintenance costs are estimated to be between 20 and 50 percent less than what is required for high-speed rail systems. Maglev also has other advantages over rail systems: It takes up less space and has greater grade-climbing and turning capabilities, which permit greater flexibility in route selection; its superior speed and acceleration make it possible for fewer trains to serve the same number of people; and the greater speed will undoubtedly attract more passengers.

Lessons learned

After looking at the progress of the technology, the history of U.S. government involvement in transportation infrastructure, and the experience of other countries that have begun maglev development, we arrived at the following key conclusions:

Performance: Speed counts. Ridership on ground transportation systems increases markedly with speeds that enable trips to be completed in times that are competitive with airline travel. Amtrak’s incremental improvements don’t cut it.

Economics: Maglev is cost-competitive with high-speed rail, yet provides greater speed, more flexibility, and the capability to integrate with airline/airport operations. The physical connection to the airport is a necessary first step, but the benefits of maglev will not be realized until the next step is taken: integrating maglev with airline operations.

Government role: If maglev is to be a part of the solution to airport congestion, the advocate agency should be the FAA or the Federal Highway Administration, since maglev would primarily be accommodating air and highway travelers.

Public-private partnership: Private industry has long been a willing partner in development and deployment, but the federal government needs to demonstrate a long-term commitment if the private sector is expected to participate. In 1997, the Maglev Study Advisory Committee was congressionally mandated to evaluate near-term applications of maglev technology in the United States. The committee made the following recommendations for government action: a federal commitment to develop maglev networks for passenger and freight transportation, with the government as infrastructure provider and the private sector as operator; federal support for two or three demonstration projects; and federal or state funding for guideways and private financing for the vehicles, stations, and operating and maintenance costs.

Benefits of early deployment: The United States needs to have one or two operating systems to convince the nation that the technology is practical and to identify areas for improvement, such as new electronic components and magnetic materials, new aircraftlike lightweight vehicle body designs, new manufacturing and installation methods, and innovative civil construction techniques and materials.

Research: The nation needs long-term federal support for transportation system planning and R&D activities. In addition, since it is impractical to conduct R&D activities on a commercial line, it will be necessary to design a national test facility where innovations that affect system cost and performance can be fully evaluated under carefully controlled and safe conditions. This is no different from the research approach that other transportation modes have developed.

Fresh thinking: Maglev may best be thought of as an entirely new mode of transportation. It is neither a low-flying airplane nor a very fast locomotive-drawn train. It has many attributes that, if fully exploited, will provide speed, frequency, and reliability unlike any extant mode. It will add mobility even in adverse weather conditions and without the adverse effects of added noise and air pollution and increased dependence on foreign oil. If integrated with airline operations, it will augment rather than compete with the airlines for intercity travelers and will decrease the need for further highway expansion. It can be incorporated into local transit systems to improve intracity mobility and access to airports.

The future of high-speed ground transportation in the United States can be a bright one. If implemented appropriately, maglev presents the opportunity to break the frustrating cycle in which modest infrastructure improvements produce only a minimal ridership increase that results in disappointing financial performance and a call for additional incremental funding. Successful implementation of just one U.S. maglev project should open the door to an alternative to the cycle of frustration. Government should be an active partner in this process.

The Continued Danger of Overfishing

New studies continue to chronicle how overfishing and poor management have severely hurt the U.S. commercial fishing industry. Thus, it makes sense to examine the effectiveness of the Sustainable Fisheries Act of 1996, which overhauled federal legislation guiding fisheries management. At the time, I predicted that, if properly implemented, the act would do much to bolster recovery and sustainable management of the nation’s fisheries. Today, I see some encouraging signs but still overall a mixed picture.

The 1996 legislation amended the Fisheries Conservation and Management Act of two decades earlier. The original law had claimed waters within 200 miles of the coast of the United States and its possessions (equivalent to some two-thirds of the U.S. continental landmass) as an “exclusive economic zone.” In so doing, it set the stage for eliminating the foreign fishing that had devastated commercially important fish and other marine life populations. Although it set up a complicated management scheme involving regional councils, the original legislation failed to direct fishery managers to prohibit overfishing or to rebuild depleted fish populations. Nor did it do anything to protect habitat for fishery resources or to reduce bycatch of nontarget species. Under purely U.S. control, many fish and shellfish populations sank to record low levels.

The only sensible course is to move forward: to eliminate overfishing, reduce bycatch, and protect and improve habitat.

The 1996 act addressed many of those management problems, especially the ones connected with overfishing and rebuilding. In the previous reauthorization of the earlier act, for example, the goal of “optimum yield” had been defined as “the maximum sustainable yield from the fishery, as modified by any relevant social, economic, or ecological factor.” A tendency of fishery managers to act on short-term economic considerations had often led to modifications upward, resulting in catch goals that exceeded sustainable levels and hence in overfishing, depletion, and the loss of economic viability in numerous fisheries.

The Sustainable Fisheries Act changed the word “modified” to “reduced.” In other words, fishery managers may no longer allow catches exceeding sustainable yields. Other new language defined a mandatory recovery process and created a list of overfished species. When a fish stock was listed as overfished, managers were given a time limit to enact a recovery plan. Because undersized fish and nontarget species caught incidentally and discarded dead account for about a quarter of the total catch, the law enabled fishery managers to require bycatch-reduction devices.

Although I had high hopes for the act when it was passed, its actual implementation, which began only in 1998, has been less than uniform. Fishery groups have sued to slow or block recovery plans, because the first step in those plans is usually to restrict fishing. Meanwhile, conservation groups have sued to spur implementation.

In that contentious climate, progress has been somewhat halting. On the one hand, overfishing continues for some species, and many fish populations remain depleted. One of the most commercially important fish–Atlantic cod–has yet to show strong increases despite tighter fishing restrictions.

On the other hand, in cases in which recovery plans have actually been produced, fish populations have done well. For example, New England has some of the most depleted stocks in U.S. waters. But remedies that in some cases began even before the law was reformed–closures of important breeding areas, regulation of net size, and reductions in fishing pressure–have resulted in encouraging upswings in the numbers of some overfished species. Not least among the rebounding species are scallops, yellowtail flounder, and haddock. Goals have been met for rebuilding sea scallops on Georges Bank and waters off the mid-Atlantic states. There has even been a sudden increase in juvenile abundance of notoriously overfished Atlantic swordfish. That is because federal managers, responding to consumer pressure and to lawsuits from conservation groups, closed swordfish nursery areas where bycatch of undersized fish had been high and cut swordfishing quotas. Some other overfished species, among them Atlantic summer flounder, certain mackerel off the Southeast, red snapper in the Gulf of Mexico, and tanner and snow crabs off Alaska, are rebounding nicely.

The trend in recovery efforts is generally upward. The number of fish populations with sustainable catch rates and healthy numbers has been increasing, and the number that are overfished has been declining. And rebuilding programs are now finally in place or being developed for nearly all overfished species.

Maintaining healthy fish populations is not just good for the ocean, of course, but also for commerce: Fish are worth money. Ocean fishing contributes $50 billion to the U.S. gross domestic product annually, according to the National Oceanic and Atmospheric Administration. But because fish are worth money only after they are caught, not everyone is pleased with aggressive efforts to ensure that there will be more fish tomorrow. Some people want more fish today. Restrictions designed to rebuild depleted stocks are costing them money in the short term.

For that reason, various amendments have been introduced in Congress that would weaken the gains of the Sustainable Fisheries Act and jeopardize fisheries. In particular, industry interests have sought to lengthen recovery times. Currently, the law requires plans for rebuilding most fish populations within a decade, with exceptions for slow-growing species. (Many fish could recover twice as fast if fishing were severely limited, but a decade was deemed a reasonable amount of time: It is practical biologically, meaningful within the working lifetime of individual fishers, and yet rapid enough to allow trends to be perceived and adjustments made if necessary.) Longer rebuilding schedules make it harder to assess whether a fish population is growing or shrinking in response to management efforts. The danger is that overfishing will continue in the short term, leading to tighter restrictions and greater hardship later on.

Recovered fish populations would contribute substantially to the U.S. economy and to the welfare of fishing communities. In just five years since the Sustainable Fisheries Act went into effect, the outlook for U.S. fisheries has improved noticeably, for the first time in decades. The only sensible course is to move forward: to eliminate overfishing, reduce bycatch, and protect and improve habitat. It would be foolish to move backward and allow hard-won gains to unravel just when they are gaining traction. Yet the debate continues.

Alternative routes

The Sustainable Fisheries Act is not the only defense against overfishing. There are two promising alternatives: marine protected areas and consumer seafood-awareness campaigns. Although traditional fishery management regulations have led to the closure of some areas to certain types of fishing gear, conservation groups in the past five years have pushed for a complete prohibition on fishing in certain spawning or nursery areas. They argue that fishing methods such as dragging bottom-trawl nets are inherently destructive to seafloor habitats and that vulnerable structures such as coral reefs need to be left alone to regenerate healthy marine communities.

On one tenet of that approach, the science is clear: Fish do grow larger and more abundant in areas where there is no fishing, and larger fish produce disproportionately more offspring than smaller fish. A single 10-year-old red snapper, for example, lays as many eggs as 212 two-year-old red snappers.

But on another score–the idea that fishing improves outside protected areas as a result of “spillover”–the evidence is less conclusive. Studies in different countries have produced contradictory results. Only a fraction of one percent of U.S. waters have been designated no-take reserves, and not enough time has passed to show whether or how much people fishing outside reserve boundaries will benefit. New studies specifically designed to answer that question are now being conducted.

Recreational fishing groups have generally fought attempts to put areas off limits. Their opposition has resulted in the introduction of a bill in Congress called the Freedom to Fish Act, which has ardent supporters. Recently, though, conservation and recreational fishing groups have begun a new dialogue to explain their respective positions on the science and the sensitivities of closing marine areas to fishing. I predict that the outcome will be a “zoning plan” that specifies what kinds of fishing should be allowed where, guaranteeing access to certain areas in exchange for putting other areas off limits.

The other major conservation alternative is to promote best fishing practices by harnessing consumer purchasing power. One such market approach is ecolabeling, as in “dolphin-safe” tuna. The Marine Stewardship Council–founded originally as a partnership between the corporate giant Unilever and the World Wide Fund for Nature–is leading a global effort to encourage fishing establishments to apply for certification. Certified products receive a logo telling consumers that the product is from a sustainable fishery.

Another market approach is a campaign to raise public awareness through wallet cards, books, and Web sites that help consumers choose well-managed, sustainably caught seafood. That effort has been carried out mainly by conservation groups, often in partnership with aquariums and other institutions, and has been aided by prominent chefs. Some specific goals of these campaigns have been a swordfish recovery plan, effective protection of endangered sturgeon, and better policing against illegal catches of Chilean seabass.

Although results are mixed, a new awareness about seafood has developed among consumers. Boycotts of Atlantic swordfish, Beluga caviar, and Chilean seabass have spread, and some seafood sellers are beginning to market toward this more sensitized consumer niche. I predict that over the next few years, consumer education will become the largest area of growth and change in the toolbox of ocean conservation strategy.

Bolstering Support for Academic R&D

Funding for academic research from all sources grew quite satisfactorily in the 1980s, at about 5.6 percent per year in constant dollars. Yet when I examined the picture in 1991, the future looked dim. The United States was just emerging from a recession, federal deficits were projected as far into the future as we could see, and the country was struggling to regain international competitiveness in a number of industries. The incoming president of the American Association for the Advancement of Science had issued a gloomy report stating that the academic research community suffered from low morale as a result of inadequate support and dim prospects.

Although I did not agree with all of the report’s arguments, I did believe that the academic research community had to vastly improve its appeal to its traditional funding sources if it hoped to thrive. It would have to “persuade our political and industrial supporters that academic research contributes to practical applications and to the education of students in sufficient measure to warrant the level of support we seek–particularly now, when adjusting to finite resources is fast becoming society’s watchword.” The community needed to improve its advocacy in the federal, state, and industrial arenas, and it had to bolster the confidence of sponsors by using resources more effectively and efficiently.

The scene has indeed changed during the past decade. Advocacy at the federal level has improved. More than three dozen research universities now have Washington offices, charged with establishing relations with members of Congress and their staffs and with agency and program heads. The aim of those Washington advocates is to promote favorable budgets for academic research generally and then to steer money to their institutions. Their job is little different from that of their counterparts in other interest groups. Beyond that, other activities that were once uncommon in academe have come into play. Professional societies now regularly communicate their views to Congress on a variety of matters. These societies also band together on particular issues, giving their voices even greater strength. Perhaps more important, some societies have learned how to practice constituency politics, alerting members to contact their congressional representatives on important matters. The scientific community has learned that politicians listen most attentively to people who can vote for them.

At the state level, support for academic research is widely associated with regional economic development. A strong research university embedded in a supporting infrastructure–one that includes incubators, tech parks, sources of angel and venture capital, mentoring and networking structures, and tech-based industries–can be an important source of economic development. In past decades, nearly all research universities viewed large, established, technically based companies as the principal source of industrial support for academic research and, often, the principal beneficiary. Today, spinoff of entrepreneurial ventures from academic research (the strategy behind Silicon Valley and Route 128) has become widespread. Because most such startups are too small to be a primary source of support, states step in. Despite those developments, however, the larger established firms continue to be the principal source of direct industrial funding, and that source has been growing.

On the matter of improving the use of resources, I made two main recommendations. One was that research institutions should do a better job of utilizing capital assets such as buildings and equipment. Little has been accomplished on that front. The other recommendation was that each campus should be selective in the fields of research it pursued and should consider the type of local industry, nearby research institutions, and other synergistic resources when making its choices. Here there has been progress. Instead of every research university trying to be all-encompassing, many campuses have enlisted faculty, students, alumni, and administrators to decide on areas of emphasis. As they have grown in number, research universities have recognized the need to become more distinctive.

In a subsequent article, “The Business of Higher Education” (The Bridge, Spring 1994), I dealt more extensively and critically with the leadership and business side of academic institutions: budgeting and accounting systems, committee functions, management skills, organizational effectiveness, and so on. For example, I noted that the classic accounting system used by universities–“fund” accounting–has been an egregious saboteur of good management. Among other defects, it undermines the ability to align programs with strategic directions and choices. It also fails to distinguish adequately between operating and capital expenditures and makes it difficult to allocate decisions to the most appropriate spot in the organization.

Fortunately, in the mid-1990s private colleges and universities adopted a new financial reporting model promulgated by the Financial Accounting Standards Board. The new model requires a balance sheet, an operating statement, and a statement of cash flows, which results in a system more congenial to the strategic management of resources. Although the full benefit of these changes remains to be achieved, they are at least a step in the right direction.

But before we congratulate ourselves on solving the funding problems I discussed a decade ago, let us take a look at what has actually happened. The 1990s departed radically from what had been expected at their outset. Federal deficits vanished, the economy grew handsomely, and the stock market boomed. One might therefore think that support for academic research should have grown faster than in the 1980s. But the reverse is true. From all sources, support for academic R&D grew 77 percent (in constant dollars) during the 1980s, but only 49 percent in the 1990s. Federal support grew 55 percent in the 1980s, 47 percent in the 1990s. Even the biomedical area, which captured at least half of all increases (from all sources) in the two decades, grew less rapidly in the 1990s (68 percent) than in the 1980s (89 percent).
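The annual and decade figures are linked by simple compounding: assuming a steady real growth rate $r$, total growth over a decade is

\[ (1+r)^{10} - 1, \]

so the 77 percent figure for the 1980s corresponds to roughly 5.9 percent per year, and the 49 percent figure for the 1990s to roughly 4.1 percent per year.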

Funding did, however, become slightly less volatile during the past decade. Although annual variations during both decades were quite marked, the wide swings of the 1980s, when increases ranged from 0.4 to 9.5 percent, narrowed to a range of 2.4 to 7.7 percent in the 1990s. That modulation was true of both total support and federal support. A similar trend held for total biomedical support, where annual increases ranged from 1.9 percent to 10.2 percent in the 1980s and from 2 percent to 9.5 percent in the 1990s.

If I had written the article today instead of 12 years ago, I do not believe my advice would have been much different. Although our advocacy has improved in all quarters, it obviously needs to become better still if we hope to achieve real gains in support for research. And although academe has taken steps toward the more effective use of resources, too many bad habits prevail.

Math Education at Risk

Two decades ago, the United States awoke to headlines declaring that it was “A Nation at Risk.” In dramatic language, the National Commission on Excellence in Education warned of a “rising tide of mediocrity” that, had it been “imposed by a foreign power,” might well have been interpreted as “an act of war.” Shortly thereafter, dismal results from a major international assessment of mathematics education confirmed the commission’s judgment. Analysts at that time described U.S. mathematics education as the product of an “underachieving curriculum.”

Alarmed by these unfavorable assessments, mathematicians and mathematics educators launched an energetic and coordinated campaign to lift mathematics education out of underachievement. Their strategy: national standards for school mathematics–an unprecedented venture for the United States–coordinated with textbooks, tests, and teacher training. Science shortly followed suit in this campaign for standards, as did other subjects.

With one exception, none of the nation’s grand objectives for mathematics and science education has been met or even approached.

By 1990, the president and the state governors formally adopted six national goals for education, including this one: “By the year 2000, United States students will be the first in the world in mathematics and science achievement.” Subsequently, states established standards in core academic subjects and introduced tests aligned with these standards to measure the performance of students, teachers, and schools.

Yet today, the nation remains very much at risk in this area. Although newsmaking perils appear more immediate (viruses, terrorists, deficits, unemployment), underachievement in education remains the most certain long-term national threat. Despite brave rhetoric and countless projects, we have not vanquished educational mediocrity, especially not in mathematics and science. Judging by recent policy proposals, we have not even grasped the true character of the problem.

Solid effort, poor results

The nation may deserve an A for effort, or at least a B+. All states but one have established content standards in mathematics, and most have done so in science. The number of states requiring more than two years of high-school mathematics and science has doubled. Many more high-school students, including students in all racial and ethnic groups, now take advanced mathematics and science courses. International comparisons show that U.S. students receive at least as much instruction in mathematics and science as students in other nations, and spend about as much time on homework.

Notwithstanding these notable efforts, data from national and international assessments show that, with one exception, none of the nation’s grand objectives for mathematics and science education has been met or even approached.

  • Student performance has stagnated. The average mathematics performance of 17-year-olds on the National Assessment of Educational Progress (NAEP) is essentially the same now as it was in 1973. During the 1970s, performance declined slightly, then rose during the 1980s, but has remained essentially constant since then. Science performance on the NAEP during the past three decades has generally mirrored that of mathematics: decline followed by recovery and then stagnation.
  • Mathematics performance remains substandard. In 2000, only one in six 12th-grade students achieved the NAEP “proficient” level, and only 1 in 50 performed at the “advanced” level. That same year, 34 percent of all students enrolled in postsecondary mathematics departments were in remedial courses, up from 28 percent in 1980.
  • The gap between low- and high-performing students is immense. In mathematics, the difference between the highest and lowest NAEP quartiles for 17-year-olds is approximately the same as the difference between the average scores for 17- and 9-year-olds–roughly equivalent to eight years of schooling.
  • Racial and ethnic gaps are persistent and large. In 2000, one in three Asian/Pacific Islanders in the 12th grade and one in five white 12th graders scored at the NAEP’s proficient level, but less than 1 in 25 Hispanic and black 12th graders scored at that level. Modest gains during the 1970s and 1980s narrowed longstanding gaps among racial and ethnic groups, but there is no evidence of any further narrowing since 1990. In fact, there is some evidence that the gap between whites and blacks in mathematics has widened.
  • Students in poverty perform poorly. Twelfth-grade students who are eligible for the national school lunch program perform on the NAEP at about the same level as 8th-grade students who are not in the school lunch program. Throughout school, low-income students are twice as likely as their higher-income peers to score below the “basic” level of achievement in mathematics.
  • U.S. students remain uncompetitive internationally. Repeated assessments reveal little improvement in the U.S. ranking among nations and a widening of the cross-national achievement gap as students progress through school. Even the most advanced U.S. students perform poorly compared with similarly advanced students in other countries. Confirming evidence could be seen (at least when the economy was flourishing) in urgent business support for the H-1B visa program, which allows U.S. companies to hire skilled foreign workers when no U.S. citizens have proper qualifications.

One important exception to this recital of failure is gender equity. After decades of underrepresentation, girls are now as likely as boys to take advanced mathematics classes and more likely to take biology and chemistry. They remain, however, less likely to take physics. More important, the differences in performance between boys and girls on most high-school mathematics and science examinations are no longer statistically significant.

College attendance has increased dramatically during the past 20 years, even among low-performing students. At the same time, failure rates on high-school exit tests that are aligned with new state standards have shocked parents and led to political revolts. More telling, the gap between high- and low-performing students within each grade remains particularly wide, posing a major challenge for new mandatory programs designed to hold all students accountable to the same set of high standards.

Dearth of remedies

Many diagnoses but few remedies have emerged. International comparisons suggest that mathematics and science curricula in the United States are excessively repetitive and slight important topics. Instruction in U.S. classrooms focuses on developing routine skills (often to prepare students for high-stakes tests) and offers few opportunities for students to engage in high-level mathematical thinking.

In the mid-1990s, a vociferous national argument erupted over how to respond to this new round of dismal tidings (dubbed the “Math Wars” by the media). Advocates of traditional curricula and pedagogy were pitted against people who argued that old methods had failed and that new approaches were needed for the computer age. This debate in mathematics education paralleled contemporaneous cultural divides over reading, core curricula, and traditional values.

The lack of demonstrable progress in improving educational performance in mathematics and other subjects has led some people to view the problem as inherently unsolvable within a system of public education. This view is often supported by statistics that appear to show little correlation between expenditures and achievement in education from kindergarten through 12th grade. Down this road lies the political quagmire of vouchers and school choice.

Other observers see the lack of progress more as an indicator of flawed strategies–of widespread underestimation of the depth of understanding and intensity of effort required to teach mathematics effectively. A lack of respect for the complexity of the problem encourages quick fixes (smaller classes, higher standards, more tests, higher teacher salaries) that do not yield greater disciplinary understanding or pedagogical skill.

A decade after the first President Bush said that the United States would be “first in the world,” Congress enacted the signature legislation of another President Bush, sadly entitled “No Child Left Behind.” Faced with overwhelming evidence of failure to meet the 1990 goal, this unprecedented legislation imposes the authority and financial muscle of the federal government across the entire landscape of K-12 education. The law mandates annual testing of students in the 3rd through 8th grades and in 11th grade, with reporting disaggregated by ethnic categories. Schools that do not demonstrate annual improvements in each category at each grade are subject to various sanctions, and students in these “failing” schools will be allowed to move to other schools.

Advocates of federally mandated tests argue that making progress requires measuring progress. Critics see classrooms turning into test-prep centers, where depth and cohesion are abandoned for isolated skills found on standardized tests. Totally absent from the current debate is the 1990s ideal of being “first in the world.” Chastened by experience, the nation’s new educational aspiration appears much more modest: Just avoid putting children at risk.

New Life for Nuclear Power

Most of what I wrote in “Engineering in an Age of Anxiety” and “Energy Policy in an Age of Uncertainty” I still believe: Inherently safe nuclear energy technologies will continue to evolve; total U.S. energy output will rise more slowly than it has hitherto; and incrementalism will, at least in the short run, dominate our energy supply. However, my perspective has changed in some ways as the result of an emerging development in electricity generation: the remarkable extension of the lifetimes of many generating facilities, particularly nuclear reactors. If this trend continues, it could significantly alter the long-term prospect for nuclear energy.

This trend toward nuclear reactor “immortality” has become apparent during the past 20 years: the projected lifetime of a reactor is far longer than we had estimated when we licensed these reactors for 30 to 40 years. Some 14 U.S. reactors have been relicensed, 16 others have applied for relicensing, and 18 more applications are expected by 2004. According to former Nuclear Regulatory Commission Chairman Richard Meserve, essentially all 103 U.S. power reactors will be relicensed for at least another 20 years.

Making a significant contribution to CO2 control would require a roughly 10-fold increase in the world’s nuclear capacity.

If nuclear reactors receive normal maintenance, they will “never” wear out, and this will profoundly affect the economic performance of the reactors. Time annihilates capital costs. The economic Achilles’ heel of nuclear energy has been its high capital cost. In this respect, nuclear energy resembles renewable energy sources such as wind turbines, hydroelectric facilities, and photovoltaic cells, which have high capital costs but low operating expenses. If a reactor lasts beyond its amortization time, the burden of debt falls drastically. Indeed, according to one estimate, fully amortized nuclear reactors with total electricity production costs (operation and maintenance, fuel, and capital costs) below 2 cents per kilowatt hour are possible.
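
To make the arithmetic concrete, here is a minimal levelized-cost sketch. The plant cost, fixed charge rate, capacity factor, and operating costs below are assumed round numbers for illustration, not figures from the estimate cited above; they simply show how retiring the capital charge pushes production costs toward the 2-cents-per-kilowatt-hour range.

```python
# A minimal sketch of levelized production cost, in cents per kilowatt-hour,
# before and after the capital cost is fully amortized. All inputs are
# assumed illustrative values, not the author's figures.

def production_cost_cents_per_kwh(capital_cost_per_kw, fixed_charge_rate,
                                  capacity_factor, om_and_fuel_cents):
    """Cost of one kWh from one kW of capacity, given an annual capital charge."""
    kwh_per_year = 8760 * capacity_factor                      # hours per year x availability
    capital_cents = capital_cost_per_kw * fixed_charge_rate * 100 / kwh_per_year
    return capital_cents + om_and_fuel_cents

# Assumptions: $2,000/kW plant cost, 10% annual fixed charge while the debt is
# being serviced, 90% capacity factor, 1.5 cents/kWh for O&M and fuel.
before = production_cost_cents_per_kwh(2000, 0.10, 0.90, 1.5)  # ~4.0 cents/kWh
after = production_cost_cents_per_kwh(2000, 0.00, 0.90, 1.5)   # ~1.5 cents/kWh

print(f"Before amortization: {before:.1f} cents/kWh")
print(f"After amortization:  {after:.1f} cents/kWh")
```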

Electricity that inexpensive would make it economically feasible to power operations such as seawater desalination, fulfilling a dream that was common in the early days of nuclear power. President Eisenhower proposed building nuclear-powered industrial complexes in the West Bank as a solution to the Middle East’s water problem, and Sen. Howard Baker sponsored a “sense of the U.S. Senate” resolution calling for a study of such complexes as part of a settlement of the Israeli-Palestinian conflict.

If power reactors are virtually immortal, we have in principle achieved nuclear electricity “too cheap to meter.” But there is a major catch. The very inexpensive electricity does not kick in until the reactor is fully amortized, which means that the generation that pays for the reactor is giving a gift of cheap electricity to the next generation. Because such altruism is not likely to drive investment, the task becomes to develop accounting or funding methods that will make it possible to build the generation capacity that will eventually be a virtually permanent part of society’s infrastructure.

If the only benefit of these reactors is to produce less expensive electricity and the market is the only force driving investment, then we will not see a massive investment in nuclear power. But if immortal reactors by their very nature serve purposes that fall outside of the market economy, their original capital cost can be handled in the way that society pays for infrastructure.

Such a purpose has emerged in recent years: the need to limit CO2 emissions to protect against climate change. To a remarkable degree, the incentive to go nuclear has shifted from meeting future energy demand to controlling CO2. At an extremely low price, the uses of electricity could expand to include activities such as electrolysis to produce hydrogen. If the purpose of building reactors is CO2 control rather than producing electricity, then the issue of going nuclear is no longer a matter of simple economics. Just as the Tennessee Valley Authority’s (TVA’s) system of dams is justified by the public good of flood control, the system of reactors would be justified by the public good of CO2 control. And just as TVA is underwritten by the government, the future expansion of nuclear energy could, at the very least, be financed by federally guaranteed loans. Larry Foulke, president of the American Nuclear Society, has proposed the creation of an Energy Independence Security Agency, which would underwrite the construction of nuclear reactors whose primary purpose is to control CO2.

Making a significant contribution to CO2 control would require a roughly 10-fold increase in the world’s nuclear capacity. Providing fissile material to fuel these thousands of reactors for an indefinite period would require either the use of breeder reactors, a technology that is already available, or the extraction of uranium from seawater, a technology yet to be developed.

Is the vision of a worldwide system of as many as 4,000 reactors, roughly 10 times the size of today’s fleet, to be taken seriously? In 1944, Enrico Fermi himself warned that the future of nuclear energy depended on the public’s acceptance of an energy source encumbered by radioactivity and closely linked to the production of nuclear weapons. Aware of these concerns, the early advocates of nuclear power formulated the Acheson-Lilienthal plan, which called for rigorous control of all nuclear activities by an international authority, a role played today by the International Atomic Energy Agency (IAEA). But is this enough to make the public willing to accept 4,000 large reactors? Princeton University’s Harold Feiveson has already said that he would rather forgo nuclear energy than accept the risk of nuclear weapons proliferation in a 4,000-reactor world.

I cannot concede that our ingenuity is unequal to living in a 4,000-reactor world. With thoughtful planning, we could manage the risks. I imagine having about 500 nuclear parks, each of which would have up to 10 reactors plus reprocessing facilities. The parks would be regulated and guarded by a much-strengthened IAEA.

What about the possibility of another Chernobyl? Certainly today’s reactors are safer than yesterday’s, but the possibility of an accident is real. Last year, alarming corrosion was found at Ohio’s Davis-Besse plant, apparently the result of a breakdown in the management and operating practices at the plant. Chernobyl and Davis-Besse illustrate the point of Fermi’s warning: Although nuclear energy has been a successful technology that now provides 20 percent of U.S. electricity, it is a demanding technology.

In addition to the risk of accidents, we face a growing possibility that nuclear material could fall into the hands of rogue states or terrorist groups and be used to create nuclear weapons. I disagree with Feiveson’s conclusion that this risk is too great to bear. I believe that we can provide adequate security for 500 nuclear parks.

Is all this the fantasy of an aging nuclear pioneer? Possibly so. In any case, I won’t be around to see how the 21st century deals with CO2 and nuclear energy. Nevertheless, this much seems clear: If we are to establish a proliferation-proof fleet of 500 nuclear parks, we will have to expand on the Acheson-Lilienthal plan in ways that will–as George Shultz observed in 1989–require all nations to relinquish some national sovereignty.

Whither the U.S. Climate Program?

Approximately 50 years ago, the first contemporary stirring within the scientific community about climate change began when Roger Revelle and Hans Suess wrote that “human beings are now carrying out a large-scale geophysical experiment.” Since that time, the scientific community has made remarkable progress in defining the effect that increased concentrations of greenhouse gases could have on the global climate and in estimating the nature and scale of the consequences. The political discussion about how to respond to this threat has been less successful.

Although a small vocal group of scientists continues to raise important questions about whether the data and the theory validate the projected trend in the climate, these views have been outweighed by the overwhelming consensus of scientists that the case for the projected climate change is solid. The 2001 assessment by the Intergovernmental Panel on Climate Change of the World Meteorological Organization projects that by the year 2100, there will be a global temperature increase of 1.4 to 5.8 degrees centigrade, a global sea level rise of 9 to 88 centimeters, and a significant increase in the number of intense precipitation events. The wide range of these estimates reflects differences in assumptions about population projections, technological developments, and economic trends that are used in constructing the scenarios.

By some time in the latter half of this century we will need to have in place a transformed energy system that has been largely “decarbonized.”

As the consensus on the likelihood of climate change became more robust, the world’s political leaders began to take notice. At a 1992 meeting in Rio de Janeiro, the world’s nations agreed to the United Nations Framework Convention on Climate Change (FCCC), which called for the “stabilization of the concentration of greenhouse gases at a level that would prevent dangerous climatic consequences.” Unfortunately, they were not able to specify what level of concentration would be acceptable or what constituted “dangerous” climate change. Instead, they established a Conference of Parties to work out the details and develop a plan of action to control climate change.

When the Conference of Parties convened in 1997 to produce the plan of action known as the Kyoto Protocol, the scientific and political differences among nations came into sharp focus. The U.S. Senate by unanimous vote declined to approve the plan because it excused many less developed countries such as India and Brazil from the requirement to curb greenhouse gas emissions and because the senators were afraid that meeting its requirement of reducing greenhouse gas emissions to 7 percent below the 1990 level by 2012 would have disastrous consequences for the U.S. economy.

A final political blow occurred in 2001, when President Bush announced that the United States would not ratify the Kyoto Protocol and would instead meet its climate objectives by voluntary means. He called on the National Academies to review the science and recommend courses of action. The Academies confirmed that the threat of global warming was real but acknowledged that there were considerable uncertainties. The administration established a revised climate program overseen by a cabinet-level committee and assigned the responsibility for the climate science and technology programs to the Department of Commerce and the Department of Energy, respectively. It changed the U.S. goal from reducing the absolute amount of greenhouse gas emissions to reducing their intensity per unit of gross domestic product, which means that total emissions could increase in a growing economy.

The automobile has the potential to soften its impact on climate.

In spite of the political controversies surrounding the climate change issue, the U.S. research program is now barreling ahead. One goal is to define more precisely what concentration of greenhouse gases in the atmosphere would result in what the FCCC described as dangerous. Work will continue on trying to further refine the expected global temperature increase, sea level rise, and precipitation change. A central question now is what will happen at the local and regional level. The U.S. Global Change Research Program has already completed a preliminary assessment of the consequences of climate change for 16 regions of the country and four economic sectors. Further research is needed to confirm and refine these assessments.

Technology to the rescue

The climate-related field of research that needs the most attention now is energy technology. Most energy research has been done with the goal of reducing cost or improving efficiency. Only in recent years has the goal been to reduce carbon emissions and the atmospheric concentration of greenhouse gases. Much more needs to be done with that focus. By some time in the latter half of this century we will need to have in place a transformed energy system that has been largely “decarbonized.” British Prime Minister Tony Blair has declared that the United Kingdom will reduce its emissions of greenhouse gases by 60 percent by the year 2050. President Bush has not been so bold.

A variety of technologies deserve attention. The growing use of renewable energy systems that entail little or no use of carbon will play an important role. Photovoltaic cells are proving their value in remote niche applications, wind farms on land and in coastal waters are advancing rapidly in Europe and the United States, and biofuels are showing increasing promise. The next generation of nuclear power plants promises to be much safer and to produce much less hazardous waste, and nuclear fusion may eventually become practical. Another approach to reducing atmospheric carbon concentrations that is showing promise is carbon sequestration. Carbon can be sequestered in the terrestrial biosphere in trees, plants, and soils, and action to do so is being contemplated. The technology to capture carbon emissions at the point of combustion and to store that carbon in geological formations under land and sea is becoming a reality. We are also moving forward with efforts to reduce emissions from vehicles. With the development of hybrid vehicles combining electric motors with gasoline engines, the automobile has the potential to soften its impact on climate in the near term. With the development of fuel cells that power vehicles with hydrogen, we can envision a future in which transportation is no longer a major source of greenhouse gas emissions.

Combining these technological advances with a continued assault by science on the outstanding problems could create the conditions necessary to meet the terms of the FCCC. However, caveats are in order. Climate issues are inherently international and require the participation of developing as well as developed countries. Although the scientific research can be conducted primarily by the developed countries, the implementation of new technologies must take place everywhere. Because many countries lack the resources to invest in new energy and transportation technology, this is an area where the United States could be a leader in providing financial assistance and demonstrating its willingness to work constructively with other nations for the benefit of all.

Biodiversity in the Information Age

My 1985 Issues article was among the first to document and assess the problem of biodiversity in the context of public policy. It was intended to bring the extinction crisis to the attention of environmental policymakers, whose focus theretofore had been almost entirely on pollution and other problems of the physical environment. Several factors contributed to this disproportion: Physical events are simpler than biological ones, they are easier to measure, and they are more transparently relevant to human health. No senator’s spouse, it had been said, ever died of a species extinction.

The mid-1980s saw a steep increase in awareness concerning the living environment. In 1986, the National Academy of Sciences and the Smithsonian Institution cosponsored a major conference on biodiversity, assembling for the first time the scores of specialists representing the wide range of disciplines, from systematics and ecology to agriculture and forestry, that needed to merge their expertise in basic and applied research to address the critical questions. The papers were published in the book BioDiversity, which became an international scientific bestseller. The term biodiversity soon became a household word. By 1992, when I published The Diversity of Life, the scientific and popular literature on the subject had grown enormously. The Society for Conservation Biology emerged as one of the fastest-growing of all scientific societies. Membership in organizations dedicated to preserving biodiversity grew manyfold. Now there are a dozen new journals and shelves of technical and popular books on the topic.

Developing countries are in desperate need of advanced scientific institutions that can engage the energies of their brightest young people and encourage political leaders to create national science programs.

The past decade has witnessed the emergence of a much clearer picture of the magnitude of the biodiversity problem. Put simply, the biosphere has proved to be more diverse than was earlier supposed, especially in the case of small invertebrates and microorganisms. An entire domain of life, the Archaea, has been distinguished from the bacteria, and a huge, still mostly unknown and energetically independent biome–the subterranean lithoautotrophic microbial ecosystems–has been found to extend three kilometers or more below the surface of Earth.

In the midst of this exuberance of life forms, however, the rate of species extinction is rising, chiefly through habitat destruction. Most serious of all is the conversion of tropical rainforests, where most species of animals and plants live. The rate has been estimated, by two independent methods, to fall between 100 and 10,000 times the prehuman background rate, with 1,000 times being the most widely accepted figure. The price ultimately to be paid for this cataclysm is beyond measure in foregone scientific knowledge; new pharmaceutical and other products; ecosystems services such as water purification and soil renewal; and, not least, aesthetic and spiritual benefits.

Concerned citizens and scientists have begun to take action. A wide range of solutions is being proposed to stanch the hemorrhaging of biodiversity at the regional as well as the global level. Since 1985, the effort has become more precisely charted, economically efficient, and politically sensitive.

The increasing attention given to the biodiversity crisis highlights the inadequacy of biodiversity research itself. As I stressed in 1985, Earth remains in this respect a relatively unexplored planet. The total number of described and formally named species of organisms (plant, animal, and microbial) has grown, but not by much, and today is generally believed to lie somewhere between 1.5 million and 1.8 million. The full number, including species yet to be discovered, has been estimated in various accounts that differ according to assumptions and methods from an improbably low 3.5 million to an improbably high 100 million. By far the greatest fraction of the unknown species will be insects and microorganisms.

Since the current hierarchical, binomial classification was introduced by Carolus Linnaeus 250 years ago, 10 percent, at a guess, of the species of organisms have been described. Many systematists believe that most and perhaps nearly all of the remaining 90 percent can be discovered, diagnosed, and named in as little as one-tenth that time–about 25 years. That potential is the result of two developments needed to accelerate biodiversity studies. The first is information technology: It is now possible to obtain high-resolution digitized images of specimens, including the smallest of invertebrates, that are better than can be perceived through conventional dissecting microscopes. Type specimens, sequestered in museums scattered around the world and thus unavailable except by mail or visits to the repositories, can now be photographed and made instantly available everywhere as “e-types” on the Internet. Recently, the New York Botanical Garden made available the images of almost all its types of 90,000 species. In a parallel effort, Harvard’s Museum of Comparative Zoology has laid plans to publish e-types of its many thousands of insect species. As the total world collection of primary type specimens is brought online, covering most or all of perhaps one million species that can be imaged in sufficient detail to be illustrated in this manner, the rate of taxonomic reviews of named species and the discovery of new ones can be accelerated 10 times or more over that of predigital taxonomy.
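
As a back-of-the-envelope check on what that timetable implies, the sketch below compares the historical description rate with the rate needed to finish in about 25 years; the percentages and time spans are those cited above, and the calculation itself is only illustrative.

```python
# Rough check of the acceleration implied by the 25-year goal, using the
# figures cited in the text (illustrative only).
historical_rate = 0.10 / 250   # ~10% of species described over ~250 years
required_rate = 0.90 / 25      # remaining ~90% described over ~25 years
print(f"Implied speed-up: roughly {required_rate / historical_rate:.0f}x")  # ~90x
```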

The second revolution about to catapult biodiversity studies forward is genomics. With base pair sequencing automated and growing ever faster and less expensive, it will soon be possible to describe bacterial and archaean species by partial DNA sequences and to subsequently identify them by genetic bar-coding. As genomic research proceeds as a broader scientific enterprise, microorganism taxonomy will follow close behind.

The new biodiversity studies will lead logically to an electronic encyclopedia of life designed to organize and make immediately available everything known about each of the millions of species. Its composition will be led for a time by the industrialized countries. However, the bulk of the work must eventually be done in the developing countries. The latter contain most of the world’s species, and they are destined to benefit soonest from the research. Developing countries are in desperate need of advanced scientific institutions that can engage the energies of their brightest young people and encourage political leaders to create national science programs. The technology needed is relatively inexpensive, and its transfer can be accomplished quickly. The discoveries generated can be applied directly to meet the concerns of greatest importance to the geographic region in which the research is conducted, being equally relevant to agriculture, medicine, and economic growth.

Information Technology and the University

A decade ago, many people had yet to accept that the inexorable progress of information technology (IT) would result in fundamental change in universities. Experience is shrinking that group. The basic premises that underlie the need for change are the same today as they were then, but are even more compelling:

The modern research university provides a range of functions that are incredibly important to our society, all of which are highly information-intensive.

IT will continue to become faster and cheaper at an exponential pace for the foreseeable future, enabling alternatives to the ways that universities have traditionally fulfilled their various functions, and possibly even to the university as provider of those functions.

It would be naïve to assume that the university, unlike other enterprises, will escape transformation of its roles and character once these alternatives become available.

Precisely because of the importance of the functions provided by the research university, it behooves us to explore deeply and critically what sorts of changes might occur so that, if they do occur, we are better prepared for them.

The capacity to reproduce with high fidelity all aspects of human interactions at a distance could well eliminate the classroom and perhaps even the campus.

It’s hard for those of us who have spent much of our lives as academics to look inward at the university, with its traditions and obvious social value, and accept the possibility that it might change in dramatic ways. But although its roots are millennia old, the university has changed before. In the 17th and 18th centuries, scholasticism slowly gave way to the scientific method as the way of knowing truth. In the early 19th century, universities embraced the notion of secular, liberal education and began to include scholarship and advanced degrees as integral parts of their mission. After World War II, they accepted an implied responsibility for national security, economic prosperity, and public health in return for federally funded research. Although the effects of these changes have been assimilated and now seem natural, at the time they involved profound reassessment of the mission and structure of the university as an institution.

Today, the university has entered yet another period of change driven by powerful social, economic, and technological forces. To better understand the implications for the research university, in February 2000 the National Academies convened a steering committee that, through a series of meetings and a workshop, produced the report Preparing for the Revolution (National Academies Press, 2002). Subsequently, the Academies have created a roundtable process to encourage a dialogue among university leaders and other stakeholders, and in April 2003 held the first such dialogue with university presidents and chancellors.

The first finding of the Academies’ steering committee was that the extraordinary pace of the IT evolution is not only likely to continue but could well accelerate. One of the hardest things for most people to understand is the compound effect of this exponential rate of improvement. For the past four decades, the speed and storage capacity of computers have doubled every 18 to 24 months; cost, size, and power consumption have fallen at about the same rate. As a result, today’s typical desktop computer has more computing power and storage than all the computers in the world combined in 1970. In thinking about changes in the university, one must think about the technology that will be available in 10 or 20 years: technology that will be thousands of times more powerful as well as thousands of times cheaper.
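
A minimal sketch of that compounding, using the 18- to 24-month doubling times quoted above (the time horizons chosen are illustrative):

```python
# Compound effect of exponential improvement: assumed doubling times of 18 and
# 24 months, projected over one to four decades. Figures are illustrative.
for doubling_months in (18, 24):
    for years in (10, 20, 30, 40):
        factor = 2 ** (years * 12 / doubling_months)
        print(f"Doubling every {doubling_months} months for {years} years: "
              f"~{factor:,.0f}x")
```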

The second finding of the committee, in the words of North Carolina State University Chancellor Mary Anne Fox, was that the impact of IT on the university is likely to be “profound, rapid, and discontinuous,” affecting all of its activities (teaching, research, and service), its organization (academic structure, faculty culture, financing, and management), and the broader higher education enterprise as it evolves toward a global knowledge and learning industry. If change is gradual, there will be time to adapt gracefully, but that is not the history of disruptive technologies. As Clayton Christensen explains in The Innovator’s Dilemma, new technologies are at first inadequate to displace existing technology in existing applications, but they later displace it explosively as they enable new ways of satisfying the underlying need.

Although it may be difficult to imagine today’s digital technology replacing human teachers, as the power of this technology continues to evolve 100- to 1000-fold each decade, the capacity to reproduce with high fidelity all aspects of human interactions at a distance could well eliminate the classroom and perhaps even the campus as the location of learning. Access to the accumulated knowledge of our civilization through digital libraries and networks, not to mention massive repositories of scientific data from remote instruments such as astronomical observatories or high-energy physics accelerators, is changing the nature of scholarship and collaboration in very fundamental ways. Each new generation of supercomputers extends our capacity to simulate physical reality at a higher level of accuracy, from global climate change to biological functions at the molecular level.

The third finding of the committee suggests that although IT will present many complex challenges and opportunities to universities, procrastination and inaction are the most dangerous courses to follow during a time of rapid technological change. After all, attempting to cling to the status quo is a decision in itself, perhaps of momentous consequence. To be sure, there are certain ancient values and traditions of the university, such as academic freedom, a rational spirit of inquiry, and liberal learning that should be maintained and protected. But just as it has in earlier times, the university will have to transform itself once again to serve a radically changing world if it is to sustain these important values and roles.

After the publication of Preparing for the Revolution, the Academies formed a standing roundtable to facilitate discussion among stakeholders. Earlier this spring, the roundtable had the opportunity to discuss these findings in a workshop with two dozen presidents and chancellors of major research universities. The conversation began with several presidents reviewing contemporary issues such as how universities can finance the acquisition and maintenance of digital technology and how they can manage the use of this technology to protect security, privacy, and integrity–issues that presidents all too often delegate to others. However, as the workshop progressed further to consider the rapid evolution of digital technology, the presidents began to realize just how unpredictable the future of their institutions had become. As University of California-Berkeley Chancellor Robert Berdahl observed, presidents have very little experience with providing strategic visions and leadership for futures driven by such disruptive technologies.

Addressing this concern, Louis Gerstner, retired CEO of IBM, shared with the presidents some of his own observations concerning leadership during a period of rapid change. The IBM experience demonstrated the dangers of resting on past successes. Instead, leaders need to view IT as a powerful tool capable of driving a process of strategic change, but only with the full attention and engagement of the chief executive.

These early efforts of the National Academies suggest that during the coming decade, the university as a physical place, a community of scholars, and a center of culture will remain much as it is today. IT will be used to augment and enrich the traditional activities of the university without transforming them. To be sure, the current arrangements of higher education may shift. For example, the new knowledge media will enable us to build and sustain new types of learning communities, free from the constraints of space and time, which may create powerful new market forces. But university leadership should not simply react to threats but instead act positively and strategically to exploit the opportunities presented by IT. As Gerstner suggested, this technology will provide great opportunities to improve the quality of our activities. It will allow colleges and universities to serve society in new ways, perhaps more closely aligned with their fundamental academic mission and values.

Looking forward two or more decades, the future of the university becomes far less certain. Although the digital age will provide a wealth of opportunities for the future, one must take great care not simply to extrapolate the past but to examine the full range of possibilities for the future. There is clearly a need to explore new forms of learning and learning institutions that are capable of sensing and understanding the change and of engaging in the strategic processes necessary to adapt or control it. In this regard, IT should be viewed as a tool of immense power to use in enhancing the fundamental roles and missions of the university as it enters the digital age.