Collaborative Advantage

Almost daily, news reports feature multinational companies—many based in the United States—that are establishing technology development facilities in China, India, and other emerging economies. General Electric, General Motors, IBM, Intel, Microsoft, Motorola—the list grows steadily longer. And these new facilities no longer focus on low-level technologies to meet Third World conditions. They are doing the cutting-edge research once done only in the United States, Japan, and Europe. Moreover, the multinationals are being joined by new firms, such as Huawei, Lenovo, and Wipro, from the emerging economies. This current globalization of technology development is, we believe, qualitatively different from globalization of the past. But the implications of the differences have not sunk in with key U.S. decisionmakers in government and industry.

It is not that the new globalization has gone unnoticed. Many observers are concerned that the United States is beginning to fall into a vicious cycle of disinvestment in and weakening of its innovation systems. As U.S. firms move their engineering and R&D activities offshore, they may be disinvesting not just in their own facilities but also in colleges and regions of the country that now form critical innovation clusters. These forces may combine to dissolve the bonds that form the basis of U.S. innovation leadership.

THE UNITED STATES NEEDS TO AGGRESSIVELY LOOK FOR PARTNERSHIP OPPORTUNITIES—MUTUAL-GAIN SITUATIONS—AROUND THE GLOBE.

A variety of policies have been proposed to protect and restore the preeminent position of U.S. technology. Some of these proposals are most concerned with building up U.S. science and technology (S&T) human resources by strengthening the nation’s education system from kindergarten through high school; encouraging more U.S. students to study engineering and science, specifically inducing more women and minorities to pursue science and technology careers; and easing visa restrictions that form barriers to talented foreigners who want to enter U.S. universities and industries. Other proposals include measures to outbid other countries as they offer benefits to attract R&D activities. Still others call for funneling public funds into the development of technology. Some observers, for example, believe that the technological strength of U.S. firms would be improved by the government’s greatly increasing its support of basic research.

Our studies of engineering development centers in the home countries of multinationals and in emerging economies lead us to be concerned that many U.S. policymakers and corporate strategists, like the proverbial generals preparing to fight the previous war, are failing to recognize what is distinctive about today’s emerging global economy. Indeed, in some cases they are pinning their hopes on strategies that were not notably successful in past battles. Although our research suggests several trends that may be problematic for the United States, we also see strong possibilities that the nation can benefit by developing “mutual gain” policies for technology development. Doing so requires a fundamental change in global strategy. The United States should move away from an almost certainly futile attempt to maintain dominance and toward an approach in which leadership comes from developing and brokering mutual gains among equal partners. Such “collaborative advantage,” as we call it, comes not from self-sufficiency or maintaining a monopoly on advanced technology, but from being a valued collaborator at various levels in the international system of technology development.

First, however, it is necessary to understand the trends that could lead to a vicious cycle of disinvestment in U.S. S&T capabilities and, most important, how these trends differ from previous challenges to the U.S. system.

Fighting the last war

Half a century ago, the United States was shocked by the ability of the Soviet Union to break the U.S. nuclear monopoly and then to beat the United States in the race to launch a space satellite. Americans were deluged with reports that Soviet children were receiving a far better education in S&T than were U.S. children and that the USSR graduated several times as many engineers each year as did the United States. Worse, the USSR appeared to be targeting its technological resources toward global domination. Twenty years later, Americans were further shaken by the rapid advance of Japanese (and then Korean) firms in industries ranging from steelmaking and auto production to semiconductors. It was widely pointed out that Japan graduated far more engineers per capita than did the United States. As the Japanese seemed on a relentless march to dominance in industry after industry, pundits in the United States commented that whereas the brightest young U.S. students studied law or finance, the brightest Japanese studied engineering. Books were written about Japanese government policies that targeted certain industries, enabling them to gain comparative advantage in key technologies. Some observers advocated the establishment of a U.S. Ministry of International Trade and Industry on the model of Japan’s. As the United States lost its technological edge, many feared that it would also lose its ability to maintain its global power and high standard of living.

The military threat from the Soviet Union was real, but it diminished as a result of weaknesses in the Communist economic and technological systems. The economic threat from East Asia also quickly diminished. To be sure, the United States lost hundreds of thousands of jobs beginning in the 1980s as multinationals moved production to low-cost sites offshore and as new multinationals from Japan and Korea took growing shares of global markets. But even though that shift was painful for certain U.S. companies and for workers who lost their jobs, the U.S. economy as a whole grew along with the growth in world trade, and much of the new U.S. workforce moved into higher value–added activities.

The United States was not saved from either of these threats because it improved its educational system to surpass those of other countries or because it managed to produce more engineers than other countries. The United States had other strengths. It attracted large numbers of talented foreigners to its universities and businesses. It provided the world’s most fertile environment for fostering new business ventures. Its institutions were flexible, enabling human and other resources to be constantly redeployed to more efficient uses. At the end of the past century, the United States was spending far more on R&D than Japan and nearly twice as much as Germany, France, and the United Kingdom combined.

The globalization challenging U.S. firms in the 1970s and 1980s was different from the globalization in the more immediate postwar era. In the 1950s and 1960s, U.S. firms had taken simple, often obsolete technology offshore to make further profits in markets that were less demanding than those at home. That era of globalization was dominated by U.S. (and some European) firms. Wages could be far higher in the United States than elsewhere because the U.S. workforce, backed by more capital and superior technology, was far more productive. Firms did not need to worry much about foreign competition. Moreover, trade restrictions protected the privileged situation enjoyed by U.S. companies and workers.

Beginning in the late 1960s, however, it was becoming clear that the world was moving to a second generation of postwar globalization. One of the most notable facets of this new wave was the emergence of large numbers of non-Western firms to positions of global strength in automobiles, consumer electronics, machine tools, steelmaking, and other industries. U.S. firms often were blindsided by the emergence of these new competitors, and many domestic firms at first refused to take them seriously. It was thought that the Japanese could make only lower grades of steel, unsophisticated cars, or cheap transistor radios, but that U.S. firms would hold on to the higher value–added, top ends of these markets. In part because of this arrogance, U.S. firms sought “windfall” income by actually selling technology to firms that would soon be their competitors. Meanwhile, capital and technology were becoming more mobile, and Japan and a few other countries became major sources of innovation and global finance. The momentum of the East Asian firms was further increased as these firms enjoyed the advantage of home and nearby markets that were growing faster than those in the United States and Europe.

When the U.S. technology system found itself challenged by the Japanese and others, many firms sought to reassert their dominance by lobbying for the protection of their home markets and by using their overwhelming strengths in basic technology and their access to capital to maintain competitiveness. Still, many leading U.S. firms, such as RCA, Zenith, and most of the integrated steel producers, failed. But others, such as GE and Motorola, thrived in the new environment. Those that succeeded were relatively quick to give up industries where there was little chance to compete against their new rivals, quick to find new opportunities outside the United States, and often quick to find new partners.

The globalization of today represents another quantum leap. We believe it is different enough to characterize it as “third-generation globalization.” It stems from the emergence of a new trade environment in the 1990s that has vastly reduced barriers to the flow of goods, services, technology, and capital. The move to a new environment was accelerated by the development and diffusion of new communications, information, and work-sharing technologies over the past decade.

Strategies that may have served U.S. firms in the second-generation globalization will not work in the third-generation world. The new emerging economies are an order of magnitude larger than those that emerged a generation ago, and they are today’s growth markets. Nor does the United States, despite its undeniable strengths, enjoy global dominance across the range of cutting-edge technologies. Moreover, U.S. multinationals are weakening their national identities, becoming citizens of the countries in which they do business and providing no favors to their country of origin. This means that the goal advocated by some U.S. policymakers of having the United States regain its position of leadership in all key technologies is simply not feasible, nor is it clear how the United States would retain that advantage when its firms are only loosely tied to the country.

We believe that there are opportunities as well as challenges in the third-generation world. Our research, however, does suggest some other reasons to be concerned about certain developments that are now taking place.

Current trends could lead to an unnecessary weakening of one of the foundations of U.S. economic strength: the country’s national and regional innovation systems. Four factors have surfaced in our research that, in combination, may undermine the innovation capacity of U.S.-based firms and technology-savvy regions of the country.

The bandwagon syndrome. As U.S. multinationals join the bandwagon of offshore technology development, they often seem to go beyond what makes economic sense. Top managers at many firms are coming to believe that they have to move offshore in order to look as though they are aggressively cutting costs—even if the offshoring does not actually result in demonstrated savings. None of the companies that we studied conducted systematic cost/benefit analyses before moving technology development activities offshore.

The snowball effect. The more that U.S. multinationals move activities offshore, the more sense it makes to offshore more activities. When asked what activities will always have to be done in the United States, the engineering managers we interviewed could not give consistent and convincing answers. One R&D manager said he found it difficult to engage in long-term planning because he was no longer sure what capabilities remained at his company after recent waves of technology outsourcing.

The loss of positive externalities. Some multinationals are finding that if their technology is developed offshore, then it makes more sense to invest in offshore universities than in domestic universities. Support for summer internships, cooperative programs, and other efforts at U.S. universities becomes less attractive. As one study participant noted, “Why contribute to colleges from which we no longer recruit?”

The rapid rise of competing innovation systems. Regional competence centers or innovation clusters in the United States grew haphazardly in response to local market stimuli. China, India, and other countries are much more explicitly strategic in creating competence and innovation centers. Although markets have worked well for the U.S. centers, it is essential that these centers have a better sense of where their overseas rivals are moving, what comparative advantages provide viable bases for local development, and how to strengthen them.

As these developments have unfolded, many U.S. firms or their domestic sites are now running the risk of losing their capabilities to innovate. At best, they may be able to hold on to only a diminishing advantage in brand-name value and recognition.

Another factor that is proving important is the declining ability of the United States to attract the world’s best S&T talent. As an open society and the world’s leading innovator, the United States was long able to depend heavily on the inflow of human capital. Although the market impact of high-skill immigration has been widely debated, it is clear that this inflow eased the pressure to increase the domestic S&T workforce through either educational or market inducements.

The United States was highly dependent on foreign-born scientists and engineers in 1990, and its growing need for S&T human resources in the 1990s was met largely through immigration. An issue widely discussed and analyzed in depth by the National Science Foundation (NSF), among others, is that the inflow of immigrant S&T personnel began to slow in the late 1990s. Coupled with the longer downward trend of U.S. students entering S&T fields and careers, this raises concerns about whether the United States will have adequate personnel to maintain its technological leadership.

The changes in migration patterns go beyond just the availability of a science and engineering workforce. Immigrants have been an important source of technology entrepreneurship, particularly in information technology. Less noted is the potentially large loss of technology entrepreneurship and innovation as fewer emerging-economy S&T workers who might start businesses arrive, and as growing numbers of successful U.S.-based entrepreneurs return to their home countries to take advantage of opportunities there.

It seems clear from our interviews, however, that efforts to solve the perceived U.S. technology problem by emphasizing policies to induce more U.S. students to major in engineering are no more likely to succeed than were similar efforts made in response to the Japanese challenge. None of the engineering managers we interviewed mentioned a shortage of new graduates in engineering as a problem. Indeed, some managers said they would not recommend that their own children go into engineering, since they did not see it as a career with a bright future. Several said they were not allowed to increase “head count” in the United States at all; if they wanted to add engineers, then they had to do it offshore. Increasing the number of engineers coming into the system might do no more than raise the unemployment rates of engineers. In fact, if increasing the short-term supply of scientists and engineers leads to increased unemployment and stagnant wages, it will further signal to students that this is not a good career choice.

To be sure, there are good reasons to increase the representation of women and minorities in U.S. S&T education programs. It also is desirable to increase the technical sophistication of U.S. students more broadly, and to make it attractive for those who are so inclined to go into the S&T professions. But “throwing more scientists and engineers at the problem” should not be sought as a strategy to regain a U.S. monopoly over most cutting-edge technologies. It would be a mistake to try to replicate the technological advantages enjoyed by other countries in these areas. The United States cannot match the Chinese or Indians in numbers of new engineering graduates.

Rather, the United States needs to develop new strengths for the new generation of globalization. With U.S. and other multinational firms globalizing their innovation work, emerging economies developing their education systems and culling the most talented young people from their huge populations, and communication technologies enabling the free and fast flow of information, it is hard to imagine the United States being able to regain its former position as global technology hegemon.

What the United States needs now is to find its place in a rapidly developing global innovation system. In many cases, strong companies are succeeding through the integration of technologies developed around the world, with firms such as GE, Boeing, and Motorola managing project teams working together from sites in the United States, India, China, and other countries. It is unclear, however, to what extent the United States would benefit from subsidizing the technology development efforts of companies headquartered in the United States. For example, it is Toyota, not GM, that is building new auto plants in the United States; it is China, not the United States, that owns, builds, and now designs what were IBM-branded personal computers; and it is countries ranging from Finland to Taiwan that are doing leading-edge electronics development. The one area overwhelmingly dominated by the United States, packaged software development, employs less than one half of 1% of the workforce and is unlikely to have a large direct impact on the economy, although use of the software may contribute significantly to productivity increases in other industries.

As a country, the United States is strong in motivating university researchers to start new enterprises, from biotechnology to other areas across the technology spectrum. The United States is not as strong when it comes to projects where brute-force applications of large numbers of low-wage engineers are required. Nor is the United States as strong in developing technologies for markets very different from its own. Competitive strategies from the past will not change this situation. No amount of science and engineering expansion will restore U.S. technology autarky. Instead, a new approach—collaborative technology advantage—is needed to develop a vibrant S&T economy in the United States.

Policies for strength

We believe that the government, universities, and other major players in the U.S. innovation system need to work toward three fundamental goals:

REGIONS HOSTING OR DEVELOPING TECHNOLOGY COMPETENCY CENTERS NEED TO LOOK CLOSELY AT THE INTERNATIONAL COMPETITION. THEY NEED TO IDENTIFY NICHES THAT EXIST OR CAN BE DEVELOPED IN THE CONTEXT OF A GLOBAL INNOVATION SYSTEM.

First, the United States should develop national strategies that are less focused on competitive, or even comparative, advantage in the traditional meaning of these terms, and are more focused on collaborative advantage. It is tempting to think of technology in neomercantilist terms. National security, both military and economic, can depend on a country’s ability to be the first to come out with new technologies. In the 1980s, it was widely believed that Japan and other East Asian economies were using industrial policies to create comparative advantage in high-tech industries in the belief that these industries provided unusually high levels of spillover benefits. U.S. policymakers were advised to counter these moves by investing heavily in high technology, restricting imports of high technology, and promoting joint technology development programs by U.S. firms.

To be sure, it makes sense for U.S. policy to ensure that technology development activities are not lured away by foreign government policies where the foreign sites do not have legitimate comparative advantages. It also makes sense to ensure that the United States retains strength in technologies that truly are strategic. An important, but difficult, task is finding ways to develop policies that strengthen U.S. S&T capabilities when market pressures are leading firms to disinvest in their U.S. capacity, including their university collaborations.

To start, the nation needs to counter the bandwagon and snowball effects that are driving the outsourcing of technology in potentially harmful ways. To do this, it will be necessary to develop new tools to assess the costs and benefits of the outsourcing of technology development, particularly tools that more comprehensively account for the costs. There also is a need to develop a better understanding of what technology development activities are most efficiently colocated, so that the United States does not end up destroying its own areas of comparative advantage. NSF and other funding agencies could sponsor such studies.
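As one purely illustrative sketch of what such an assessment tool might capture (the cost categories and figures below are hypothetical assumptions, not drawn from our interviews), a fuller accounting would compare onshore and offshore development not on wage rates alone but on total delivered cost, including coordination, rework, travel, and knowledge-transfer overhead:

```python
# Hypothetical sketch of a fuller cost accounting for offshoring a development project.
# All categories and numbers are illustrative assumptions, not study data.

def total_cost(engineer_hours, hourly_rate, overhead_rates):
    """Loaded cost = direct labor plus overhead expressed as fractions of direct labor."""
    direct = engineer_hours * hourly_rate
    overhead = direct * sum(overhead_rates.values())
    return direct, overhead, direct + overhead

# Onshore case: higher wages, relatively low coordination overhead.
onshore = total_cost(
    engineer_hours=20_000,
    hourly_rate=90.0,
    overhead_rates={"coordination": 0.05, "rework": 0.05, "travel": 0.01},
)

# Offshore case: lower wages, plus often-omitted costs of distance and knowledge transfer.
offshore = total_cost(
    engineer_hours=20_000,
    hourly_rate=35.0,
    overhead_rates={
        "coordination": 0.25,        # time-zone and hand-off friction
        "rework": 0.20,              # defects caught late, specification churn
        "travel": 0.08,              # liaison visits
        "knowledge_transfer": 0.15,  # training, documentation, ramp-up
    },
)

for label, (direct, overhead, total) in [("onshore", onshore), ("offshore", offshore)]:
    print(f"{label}: direct ${direct:,.0f}, overhead ${overhead:,.0f}, total ${total:,.0f}")
```

In this hypothetical case, the headline wage gap narrows considerably once distance-related overhead is counted; whether it closes entirely will depend on the project, which is precisely why firms need to run the numbers rather than follow the bandwagon.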

Foreign-Born S&E Workers in the United States, 2000. (Source: U.S. Census Bureau.)

But then the United States needs to aggressively look for partnership opportunities—mutual-gain situations—around the globe. National government funding agencies, such as NSF, and regional governments can support projects that work toward these aims. Designers of tax policies at all levels also can redirect policies in these directions. Some of these mutual-gain situations will involve the creation of technologies that unequivocally address global needs to minimize environmental damage or reduce demands on diminishing resources.

Regions hosting or developing technology competency centers need to look closely at the international competition. They need to identify niches that exist or can be developed in the context of a global innovation system. Existing artificial barriers for certain industries and technologies will continue to fall at a rapid pace as the world continues its path to globalization. Alliances may be possible between U.S. centers of technology competence and those in other countries.

We believe that one area in which the United States enjoys comparative advantage is its patent system. To a large degree, the U.S. patent office serves as the patent office for the world. Foreign firms want access to the U.S. market, so they must disclose their technology by filing for patents in the United States. It is essential that the United States preserve (and perhaps extend) this advantage.

As a second goal, the United States needs to help create a world based on the free flow of S&T brainpower rather than a futile attempt to monopolize the global S&T workforce. The United States can further develop its advantage as an immigrant-friendly society and become the key node of new networks of brain circulation. Importantly, the United States needs to redesign its immigration policies with the long view in mind. New U.S. policies should focus on the broad goal of maximizing the innovation and productivity benefits of the global movement of S&T workers and students, rather than the shortsighted aim of importing low-cost S&T workers as a substitute for developing the U.S. domestic workforce. This implies that an alternative to the current types of visas that cover foreign-born students and S&T workers—such as the H-1B visa—needs to be developed. Promoting the global circulation of students and workers, while not undermining the incentives for U.S. students and workers, will create human capital flows that support collaborative advantage. The goal should be to make it easier for talented foreign S&T people to come, study, work, and start businesses in the United States, and also make it easier for foreign members of U.S. engineering teams to come to the United States to confer with their teammates. Visas should not be used to have permanent workers train their replacements or to distort market mechanisms that provide incentives for long-term S&T workforce development.

Immigration policies that support global circulation would allow easy short-term entry of three to eight months for collaboration with U.S.-based scientists and engineers. Facilitating cross-border projects actually helps retain that work here; our research finds that when projects stumble because of collaboration difficulties, the impulse is to move the entire project offshore. When U.S. S&T workers have more opportunities to work with foreign S&T workers, they broaden their perspective and better understand global technology requirements. A new type of short-term, easy-to-obtain visa for this purpose would strengthen the U.S. collaborative advantage while not undermining the incentives for U.S. students to pursue S&T careers and continuing to attract immigrants who want to become part of the permanent U.S. workforce.

Finally, in working toward the first two goals, the United States needs to develop an S&T education system that teaches collaborative competencies rather than just technical knowledge and skills. U.S. universities must restructure their S&T curricula to better meet the needs of the new global innovation system. This may include providing more course work on systems integration, entrepreneurship, managing global technology teams, and understanding how cross-cultural differences influence technology development. Our findings suggest that it is not the technical education but the cross-boundary skills that are most needed (working across disciplinary, organizational, cultural, and time/distance boundaries). Universities must build a less parochial, more international focus into their curricula. Both the implicit and explicit pedagogical frameworks should support an international perspective on S&T—for example, looking at foreign approaches to science and engineering—and should promote the collaborative advantage perspective that recognizes the new global S&T order. Specific things that could be done include developing exchange programs and providing more course work on cross-cultural management, and encouraging firms to become involved in this effort through cooperative ventures, internships, and other programs.

Our research suggests that the new engineering requirements, like the old, should build on a strong foundation of science and mathematics. But now they go much further. Communication across disciplinary, organizational, and cultural boundaries is the hallmark of the new global engineer. Integrative technologies require collaboration among scientific disciplines, between science and engineering, and across the natural and social sciences. They also require collaboration across organizations as innovation flows between small and large firms and between vendors and original equipment manufacturers. And obviously they require collaboration across cultures as global collaboration becomes the norm. These requirements mandate a new approach not only to education but also to selecting future engineers: colleges need to recognize that the talent required for the new global engineer falls outside their traditional student profiles. Managers increasingly report that although they want technically competent engineers, the qualities they value most are these boundary-crossing attributes.

Education policy must reflect the new engineering paradigm. It must structure science and engineering education in ways that encourage students to pursue the new approaches to engineering and science. Indeed, we believe that the new approaches will make careers in science and engineering more exciting and attractive to U.S. students. Information technology, for example, is famous for innovation that comes from people educated in a wide range of fields working across disciplines. The education system needs to better understand the new engineering requirements rather than attempt to shore up approaches from a previous era. This is a challenge that goes beyond providing more and better science and math education. It does, of course, require strengthening basic education for the weakest students and schools, but it also requires combining the best of education pedagogy with an understanding of the requirements of the “new” scientist and engineer.

Leadership in developing a global science, technology, and management curriculum may also attract more international S&T students to U.S. universities. Other desirable changes may include collaborative agreements with universities in emerging economies that enable U.S. students to be sent there for part of their education, thus helping to promote the overall move to brain circulation. Government support might be needed to make such programs economically viable for U.S. universities—for example, by making up some of the tuition differences between U.S. and foreign universities.

We believe that progress toward these goals will lead to a future where U.S. residents can more fully benefit from the creativity of S&T people from other countries, where the United States is still a leader in global innovation, and where a stronger U.S. system is revitalized by accelerated flows of ideas from around the world.

Rethinking, Then Rebuilding New Orleans

New Orleans will certainly be rebuilt. But looking at the recent flooding as a problem that can be fixed by simply strengthening levees will squander the enormous economic investment required and, worse, put people back in harm’s way. Rather, planners should look to science to guide the rebuilding, and scientists now advise that the most sensible strategy is to work with the forces of nature rather than trying to overpower them. This approach will mean letting the Mississippi River shift most of its flow to a route that the river really wants to take; protecting the highest parts of the city from flooding and hurricane-generated storm surges while retreating from the lowest parts; and building a new port city on higher ground that the Mississippi is already forming through natural processes. The long-term benefits—economically and in terms of human lives—may well be considerable.

To understand the risks that New Orleans faces, three sources need to be considered. They are the Atlantic Ocean, where hurricanes form that eventually batter coastal areas with high winds, heavy rains, and storm surge; the Gulf of Mexico, which provides the water vapor that periodically turns to devastatingly heavy rain over the Mississippi basin; and the Mississippi River, which carries a massive quantity of water from the center of the continent and can be a source of destruction when the water overflows its banks. It also is necessary to understand the geologic region in which the city is located: the Mississippi Delta.

The Mississippi Delta is the roughly triangular plain whose apex is the head of the Atchafalaya River and whose broad curved base is the Gulf coastline. The Atchafalaya is the upstream-most distributary of the Mississippi that discharges to the Gulf of Mexico. The straight-line distance from the apex to the Atchafalaya Bay is about 112 miles, whereas the straight-line distance from the apex to the mouth of the Mississippi is twice as long, about 225 miles. (These distances will prove important.) The Delta includes the large cities of Baton Rouge and New Orleans on the Mississippi River, and smaller communities, such as Morgan City, on the Atchafalaya. (Although residents along the Mississippi River at many places considerably to the north of New Orleans commonly refer to their floodplain lands as “the Delta,” the smaller rivers and streams here empty directly into the Mississippi River, not the Gulf of Mexico, and hence geologists more properly call this region the alluvial plain of the Mississippi River.)

The Mississippi River builds, then abandons, portions (called “lobes”) of the Delta in an orderly cycle: six lobes in the past 8,000 years (Fig. 1). A lobe is built through the process of sediment deposition where the river meets the sea. During seasonal floods, the river spreads over the active lobe, depositing sediment and building the land higher than sea level. But this process cannot continue indefinitely. As the lobe extends further into the sea, the river channel also lengthens. A longer path to the sea means a more gradual slope and a reduced capacity to carry water and sediment. Eventually, the river finds a new, shorter path to the sea, usually down a branch off the old channel. The final switching of most of the water and sediment from the old to the new channel may be triggered by a major flood that scours and widens the new channel. Once the switch occurs, the new lobe gains ground while the old lobe gradually recedes because the sediment supply is insufficient to counteract sea level rise, subsidence of the land, and wave-generated coastal erosion.
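A rough back-of-the-envelope comparison shows why the shorter route wins. Assuming, for illustration only, a similar elevation drop from the delta apex to sea level along either route (the drop used below is an arbitrary round figure), the straight-line distances given above imply roughly twice the average gradient down the Atchafalaya:

```python
# Illustrative gradient comparison using the straight-line distances cited above.
# The elevation drop is an assumed round figure for demonstration only; the ratio
# depends only on the relative path lengths, not on the value of the drop.

drop_ft = 50.0                 # assumed fall from the delta apex to sea level, in feet
atchafalaya_mi = 112.0         # apex to Atchafalaya Bay
mississippi_mi = 225.0         # apex to the mouth of the Mississippi

atchafalaya_slope = drop_ft / atchafalaya_mi   # feet of fall per mile
mississippi_slope = drop_ft / mississippi_mi

print(f"Atchafalaya route: {atchafalaya_slope:.2f} ft/mile")
print(f"Mississippi route: {mississippi_slope:.2f} ft/mile")
print(f"Ratio: {atchafalaya_slope / mississippi_slope:.1f}x steeper down the Atchafalaya")
```

The steeper gradient is what lets the shorter route carry water and sediment more efficiently, and it is why the Atchafalaya is the Mississippi’s preferred new outlet to the Gulf.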

Figure 1. Mississippi Delta switching. The successive river channels and delta lobes of the past 5,000 years are numbered from oldest (1) to youngest (7). (Meade, 1995, U.S. Geological Survey Circular 1133, fig. 4C; also see Törnqvist et al., 1996, for an updated chronology.)

Geologist Harold Fisk predicted in 1951 that sometime in the 1970s, the Mississippi River would switch much of its water and sediment from its present course past New Orleans to its major branch, the Atchafalaya River. In order to maintain New Orleans as a deepwater port, the U.S. Army Corps of Engineers in the late 1950s constructed the Old River Control Structure, a dam with gates that essentially meters about 30% of the Mississippi River water down the Atchafalaya and keeps the remainder flowing in the old channel downstream toward New Orleans. Trying to meter the flow carries its own risks. During the 1973 flood on the Mississippi, the torrent of water scoured the channel and damaged the foundation of the Old River Control Structure. If the structure had failed, then the flood of 1973 would have been the event that switched the Mississippi into its new outlet—the Atchafalaya River—to the Gulf. The Corps repaired the structure and built a new Auxiliary Structure, completed in 1985, to take some of the pressure off the Old River Control Structure. The Mississippi kept rolling along.

Still, the fact remains that the “new” Atchafalaya lobe is actively building, despite receiving only one-third of the Mississippi water and sediment, while the old lobe south of New Orleans is regressing, leaving less and less of a coastal buffer between the city and hurricane surges from the Gulf. This situation has major implications.

Nature’s protection

At one time, the major supplier of sediment to the Mississippi Delta was the Missouri River, the longest tributary of the Mississippi River. However, the big reservoirs constructed in the 1950s on the upper Missouri River now trap much of the sediment, with the result that the lower Mississippi now carries about 50% less sediment (Fig. 2). It is ironic that the reservoirs on the Missouri, whose purposes include flood storage to protect downstream areas, entrap the sediments needed to maintain the Delta above sea level and flood level. Much less fine sediment (silt and clay) flows downstream to build up the Delta during seasonal floods, and much of this sediment is confined between human-made levees all the way to the Gulf, where it spills into deep water. Coarser sediment (sand) trapped in upstream reservoirs or dropped into deep water likewise cannot carry out its usual ecological role of contributing to the maintenance of the islands and beaches along the Gulf, and beaches can gradually erode away because the supply of sand no longer equals the loss to along-shore currents and to deeper water.

If Hurricane Katrina, which in 2005 pounded New Orleans and the Delta with surge and heavy rainfall, had followed the same path over the Gulf 50 years ago, the damage would have been less, because more barrier islands and coastal marshes were available then to buffer the city. Early settlers on the barrier islands offshore of the Delta built their homes well back from the beach, and they allowed driftwood to accumulate where it would be covered by sand and beach grasses, forming protective dunes. The beach grasses were essential because they helped stabilize the shores against wind and waves and continued to grow up through additional layers of sand. In contrast to a cement wall, the grasses would recolonize and repair a breach in the dune. (A similar lesson can be taken from the tsunami-damaged areas of the Indian Ocean. Damage was less severe where mangrove forests buffered the shorelines than where the land had been cleared and developed to the shoreline.) Vegetation offers resistance to the flow of water, so the more vegetation a surge encounters before it reaches a city, the greater the damping effect on surge height. The greatest resistance is offered by tall trees intergrown with shrubs; next are shorter trees intergrown with shrubs; then shrubs; followed by supple seedlings or grasses; and finally, mud, sand, gravel, or rock with no vegetation.

One of the major factors determining vegetation type and stature is land elevation. In general, marsh grasses occur at lower elevations because they tolerate frequent flooding. Trees occur at higher elevations (usually 8 to 10 feet above sea level) because they are less tolerant of flooding. Before European settlement, trees occurred on the natural levees created by the Mississippi River and its distributaries, and on beach ridges called “cheniers” (from the French word for oak) formed on the mudflats along the Gulf coast. The cheniers were usually about 10 feet high and 100 or so feet wide, but extended for miles, paralleling the coast. Two management implications can be derived from this relationship between elevation and vegetation: Existing vegetation provides valuable wind, wave, and surge protection and should be maintained; and the lines of woody vegetation might be restored by allowing the Mississippi and its distributaries to build or rebuild natural levees during overbank flows and by using dredge spoil to maintain the chenier ridges and then planting or allowing plant recolonization to occur.

Figure 2. The sediment loads carried by the Mississippi River to the Gulf of Mexico have decreased by half since 1700, so less sediment is available to build up the Delta and counteract subsidence and sea level rise. The greatest decrease occurred after 1950, when large reservoirs were constructed that trapped most of the sediment entering them. Part of the water and sediment from the Mississippi River below Vicksburg is now diverted through the Corps of Engineers’ Old River Outflow Channel and the Atchafalaya River. Without the controlling works, the Mississippi would have shifted most of its water and sediment from its present course to the Atchafalaya, as part of the natural delta switching process. The widths of the rivers in the diagram are proportional to the estimated (1700) or measured (1980–1990) suspended sediment loads, in millions of metric tons per year. (Meade, 1995, U.S. Geological Survey Circular 1133, fig. 6A.)

Of course, the vegetation has its limits: Hurricanes uproot trees and the surge of salt or brackish water can kill salt-intolerant vegetation. Barrier islands, dunes, and shorelines can all be leveled or completely washed away by waves and currents, leaving no place for vegetation to grow. The canals cut into the Delta for navigation and to float oil-drilling platforms out to the Gulf disrupted the native vegetation by enabling salt or brackish water to penetrate deep into freshwater marshes. The initial cuts have widened as vegetation dies back and shorelines erode without the plant roots to hold the soil and plant leaves to dampen wind- or boat-generated waves. The ecological and geological sciences can help determine to what extent the natural system can be put back together, perhaps by selective filling of some of the canals and by controlled flooding and sediment deposition on portions of the Delta through gates inserted in the levees.

The Mississippi River typically floods once a year, when snowmelt and runoff from spring rains are delivered to the mainstem river by the major tributaries. Before extensive human alterations of the watersheds and the rivers, these moderate seasonal floods had many beneficial effects, including providing access to floodplain resources for fishes that spawned and reared their young on the floodplains and supporting migratory waterfowl that fed in flooded forests and marshes. The deposition of nutrient-rich sediments on the floodplain encouraged the growth of valuable bottomland hardwood trees, and the floodwaters dispersed their seeds.

Human developments in the tributary watersheds and regulation of the rivers have altered the natural flood patterns. In the Upper Mississippi Basin, which includes much of the nation’s corn belt, 80 to 90% of the wetlands were drained for agriculture; undersoil drain tubes were installed and streams were channelized, to move water off the fields as quickly as possible so that farmers could plant as early as possible. Impervious surfaces in cities and suburbs likewise speed water into storm drains that empty into channelized streams. The end result is unnaturally rapid delivery of water into the Upper Mississippi and more frequent small and moderate floods than in the past. In the arid western lands drained by the Missouri River, the problem is shortage of water; it is this phenomenon that led to the construction of the huge reservoirs to store floodwaters and use them for irrigating crops in the Dakotas, while also lowering flood crest levels in the downstream states of Nebraska, Iowa, and Missouri.

In all of the tributaries of the Mississippi, the floodplains have been leveed to various degrees, so there is less capacity to store or convey floods (as well as less fish and wildlife habitat), and the same volume of water in the rivers now causes higher floods than in the past. On tributaries with flood storage reservoirs, the heights of the moderate floods that occur can be controlled. On other tributaries, flood heights could be reduced by restoring some of the wetlands in the watersheds; constructing “green roofs” that incorporate vegetation to trap rainfall; adopting permeable paving; building stormwater detention basins in urban and suburban areas; and reconnecting some floodplains with their rivers. Between Clinton, Iowa, and the mouth of the Ohio River, 50 to 80% of the floodplain has been leveed and drained, primarily for dry-land agriculture. On the lower Mississippi River, from the Ohio River downstream and including the Delta, more than 90% has been leveed and drained. Ironically, levees in some critical areas back up water on other levees. In such areas, building levees higher is fruitless—it simply sets off a “levee race” that leaves no one better off. In the Delta, the additional weight of higher, thicker levees themselves can cause further compaction and subsidence of the underlying sediments.

The occasional great floods on the Mississippi are on a different scale than the more regular moderate floods. It takes exceptional amounts of rain and snowmelt occurring simultaneously in several or all of the major tributary basins of the Mississippi (the Missouri, upper Mississippi, Ohio, Arkansas, and Red Rivers) to produce an extreme flood, such as the one that occurred in 1927. That flood broke levees from Illinois south to the Gulf of Mexico, flooding an area equal in size to Massachusetts, Connecticut, New Hampshire, and Vermont combined, and forcing nearly a million people from their homes. With so much rain and snowmelt, wetlands, urban detention ponds, and even the flood control reservoirs are likely to fill up before the rains stop.

Protection shortfalls

In order to protect New Orleans from such great floods, the Corps of Engineers plans to divert some floodwater upstream of the city. Floodwater would be diverted through both the Old River Control Structure and the Morganza floodway to the Atchafalaya River, and through the Bonnet Carré Spillway (30 miles upstream of New Orleans) into Lake Pontchartrain, which opens to the Gulf. All of these structures and operating plans are designed to safely convey a flood 11% greater in volume than the 1927 flood around and past New Orleans.

But what is the risk that an even greater flood might occur? How does one assess the risk of flooding and determine whether it makes more sense to move away than to rebuild?

The Corps of Engineers estimates flood frequencies based on existing river-gauging networks and specifies levee designs (placement, height, thickness) accordingly. The resulting estimates and flood protection designs are therefore based on hydrologic records that cover only one to two centuries, at most. Yet public officials may ask for levees and flood walls that provide protection against “100-year” or even “1,000-year” floods—time spans that are well beyond most existing records. Because floods, unlike trains, do not arrive on a schedule, these terms are better understood as estimates of probabilities. A 100-year levee is designed to protect against a flood that would occur, if averaged over a sufficiently long period (say 1,000 years), once in 100 years. This means that in any given year, the risk that this levee will fail is estimated, not guaranteed, to be 1% (1 in 100). If 99 years have passed since the last 100-year flood, the risk of flooding in year 100 is still 1%. In contrast to other natural hazards, such as earthquakes, the probability of occurrence does not increase with time since the last event. (Earthquakes that release strain that builds gradually along fault lines do have an increased probability of occurrence as time passes and strain increases.)
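To see what a 1% annual risk means over the life of a levee, a simple sketch (assuming, as standard flood-frequency analysis does, that each year’s flood risk is independent of the last) shows that the chance of experiencing at least one “100-year” flood over several decades is far from remote:

```python
# Probability of at least one "100-year" (1% annual chance) flood over a span of years,
# assuming independent years, as in standard flood-frequency analysis.

annual_risk = 0.01

for years in (10, 30, 50, 100):
    p_at_least_one = 1.0 - (1.0 - annual_risk) ** years
    print(f"{years:3d} years: {p_at_least_one:.0%} chance of at least one 100-year flood")
```

Over the course of a 30-year mortgage, in other words, the odds of at least one such flood are roughly one in four, which is why “100-year protection” is a weaker standard than the name suggests.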

In essence, engineers assume that the climate in the future will be the same as in the recently observed past. This may be the only approach possible until scientists learn more about global and regional climate mechanisms and can make better predictions about precipitation, runoff, and river flows. However, the period of record can be greatly extended, thereby making estimates of the frequency of major floods much more accurate than by extrapolating from 200-year records of daily river levels. Sediment cores from the Mississippi River and the Gulf of Mexico record several episodes of “megafloods” along the Mississippi River during the past 10,000 years. These megafloods were equivalent to what today are regarded as 500-year or greater floods, but they recurred much more frequently during the flood-prone episodes recorded in the sediment cores. The two most recent episodes occurred at approximately 1000 BC and from about 1250 to 1450 AD. There is independent archeological evidence that floods during these episodes caused disruptions in the cultures of people living along the Mississippi, according to archeologist T. R. Kidder.

These flood episodes occurred much more recently than the recession of the last ice sheet, and therefore they were not caused by melting of the ice or by catastrophic failures of the glacial moraines that acted as natural dams for meltwater. They were most likely caused by periods of heavy rainfall over all or portions of the Mississippi Basin. Until more is known about climate mechanisms, it is prudent to assume that such megafloods will happen again. Thus, this possibility must be taken into account in designing flood protection for New Orleans, especially if public officials are serious about their expressed desire to protect the city against 1,000-year floods.

Building a “new” New Orleans

If New Orleans is to be protected against both hurricane-generated storm surges from the sea and flooding from the Mississippi River, are there alternative cost-effective approaches other than just building levees higher, diverting floods around New Orleans, and continuing the struggle to keep the Mississippi River from taking its preferred course to the sea? Yes, as people in other parts of the world have demonstrated.

The Romans used the natural and free supply of sediment from rivers to build up tidal lands in England (and probably also in the southern Netherlands) that they wished to use for agriculture. People living along the lower Humber River in England developed this to a high art in the 18th century, in a practice called “warping.” They had the same problems with subsidence as Louisiana, but they encouraged the sediment-laden Humber River to flood their levee districts (called “polders”) when the river was observed to be most turbid and therefore carrying its maximum sediment load. The river at this maximum stage was referred to as a “fat river” and inches of soil could be added in just one flood event. People of this time also recognized the benefits of marsh cordgrass, Spartina, in slowing the water flow, thereby encouraging sedimentation, and subsequently in anchoring the new deposits against resuspension by wind-generated waves or currents.

Could the same approach be taken in the Delta, in the new Atchafalaya lobe? Advocates for rebuilding New Orleans in its current location point to the 1,000-year-plus levees and storm surge gates that the Dutch have built. But the Netherlands is one of the most densely populated countries in Europe, with 1,000 people per square mile, so the enormous cost of building such levees is commensurate with the value of the dense infrastructure and large population being protected. The same is not true in Louisiana, where there are approximately 100 people per square mile, concentrated in relatively small parcels of the Delta. This low population density provides the luxury of using Delta lands as a buffer for the relatively small areas that must be protected.

However, the Dutch should be imitated in several regards. First, planners addressing the future of New Orleans should take a lesson from the long-term deliberate planning and project construction undertaken by the Dutch after their disastrous flood of 1953. These efforts have provided new lands and increased flood protection along their coasts and restored floodplains along the major rivers. Some of these projects are just now being realized, so the planning horizon was at least 50 years.

Figure 3. The old parts of New Orleans, including the French Quarter, were built on the natural levees created by the Mississippi River (red areas along the river in the figure), well above sea level. In contrast, much of the newer city lies below sea level (dark areas). Flooding of the city occurred when the storm surge from Hurricane Katrina entered Lakes Pontchartrain and Borgne and backed up the Gulf Intracoastal Waterway (GIWW), the Mississippi River Gulf Outlet (MRGO), and several industrial and drainage canals. The walls of the canals were either overtopped or failed in several places, allowing water to flood into the city. (Courtesy of Center for the Study of Public Aspects of Hurricanes, as modified by Hayes, 2005.)

Planners focusing on New Orleans also would be wise to emulate Dutch efforts to understand and work with nature. Specifically, they should seek and adopt ways to speed the natural growth and increase the elevation of the new Atchafalaya lobe and to redirect sediment onto the Delta south of New Orleans to provide protection from storm waves and surges. A key question for the Federal Emergency Management Agency (FEMA), the FEMA equivalents at the state level, planners and zoning officials, banks and insurance companies, and the Corps of Engineers is whether it is more sustainable to rebuild the entire city and a higher levee system in the original locations or to build a “new” New Orleans somewhere else, perhaps on the Atchafalaya lobe.

Under this natural option, “old” New Orleans would remain a national historic and cultural treasure, and continue to be a tourist destination and convention city. Its highest grounds would continue to be protected by a series of strengthened levees and other flood-control measures. City planners and the government agencies (including FEMA) that provide funding for rebuilding must ensure that not all of the high ground is simply usurped for developments with the highest revenue return, such as convention centers, hotels, and casinos. The high ground also should include housing for the service workers and their families, so they are not consigned again to the lowest-lying, flood-prone areas. The flood-prone areas below sea level should be converted to parks and planted with flood-tolerant vegetation. If necessary, these areas would be allowed to flood temporarily during storms.

Work already is under way that might aid such rebuilding efforts and help protect the city during hurricanes. The Corps of Engineers, in its West Bay sediment diversion project, plans to redirect the Mississippi River sediment, which currently is lost to the deep waters of the Gulf, to the south of the city and use it to create, nourish, and maintain approximately 9,800 acres of marsh that will buffer storm waves and surges.

At the same time, the Corps, in consultation with state officials, should guide and accelerate sediment deposition in the new Atchafalaya lobe, under a 50- to 100-year plan to provide a permanent foundation for a new commercial and port city. If old New Orleans did not need to be maintained as a deepwater port, then more of the water and sediment in the Mississippi could be allowed to flow down the Atchafalaya, further accelerating the land-building. The new city could be developed in stages, much as the Dutch have gradually increased their polders. The port would have access to the Mississippi River via an existing lock (constructed in 1963) that connects the Atchafalaya and the Mississippi, just downstream of the Old River Control Structure.

Under this plan, the Mississippi River would no longer be forced down a channel it “wants” to abandon. The shorter, steeper path to the sea via the Atchafalaya might require less dredging than the Mississippi route, because the current would tend to keep the channel scoured. Because the Mississippi route is now artificially long and much less steep, accumulating sediments must be constantly dredged, at substantial cost. Traditional river engineering techniques that maintain the capacity of the Atchafalaya to bypass floodwater that would otherwise inundate New Orleans also might be needed to maintain depths required for navigation. These techniques include bank stabilization with revetments and wing dikes that keep the main flow in the center of the channel, where it will scour sediment.

The new city would have a life expectancy of about 1,000 years—at which time it would itself be a historic old city—before the Mississippi once again switched course. The two-city option might prove less expensive than rebuilding the lowest parts of the old city, because the latter approach probably would require building flood gates in Lake Pontchartrain and new levees that are high enough and strong enough to withstand 500- or 1,000-year floods. In both scenarios, flood protection will need to be enhanced through a continual program of wetland restoration.

In evaluating these options, the Corps of Engineers should place greater emphasis on the 9,000 years of geological and archaeological data related to the recurrence of large floods along the Mississippi River. Shortly before the recent hurricanes hit the region, the Corps had completed a revised flood frequency analysis for the upper Mississippi, based solely on river gauge data from the past 100 to 200 years. Unless the Corps considers the prehistoric data, it probably will continue to underestimate the magnitude and frequency of large floods. If the Corps does take these data into account in determining how high levees need to be and what additional flood control works will be needed to prevent flooding in New Orleans and elsewhere, then the actual costs of the “traditional” approach are likely to be much higher than currently estimated. The higher costs will make the “working with nature” option even more attractive and economically feasible.

The Corps also should include in its assessments the gradual loss of storage capacity (due to sedimentation) in existing flood control reservoirs in the upstream Mississippi Basin, as well as the costs and benefits associated with proposed sediment bypass projects in these reservoirs. For example, the Corps undertook preliminary studies of a sediment-bypass project in the Lewis and Clark Reservoir on the upper Missouri River in South Dakota and Nebraska because the reservoir is predicted to completely fill with sediment by 2175, and most of its storage capacity will be lost well before then. By starting to bypass sediments within the next few years, the remaining water storage capacity could be prolonged, perhaps indefinitely. But studies showed that the costs exceeded the expected benefits. In these studies, however, the only benefits considered were the maintenance of water storage capacity and its beneficial uses, not the benefits of restoring the natural sediment supply to places as far downstream as the Delta. It is possible that the additional sediment would significantly accelerate foundation-building for “new” New Orleans and the rebuilding of protective wetlands for the old city. Over the long term, the diminishing capacities of such upstream storage reservoirs also will add to the attractiveness of more natural options, including bypassing sediments now being trapped in upstream reservoirs, utilizing the sediments downstream on floodplains and the Delta, and restoring flood conveyance capacity on floodplains that are now disconnected from their rivers by levees.

Action to capitalize on the natural option should begin immediately. The attention of the public and policymakers will be focused on New Orleans and the other Gulf cities for a few more months. The window of opportunity to plan a safer, more sustainable New Orleans, as well as better flood management policy for the Mississippi and its tributaries, is briefly open. Without action, a new New Orleans— a combination of an old city that retains many of its historic charms and a new city better suited to serve as a major international port—will go unrealized. And the people who would return to a New Orleans rebuilt as before, but with higher levees and certain other conventional flood control works, will remain unduly subject to the wrath of hurricanes and devastating floods. No one in the Big Easy should rest easy with this future.

Cyberinfrastructure and the Future of Collaborative Work

One of the most stunning aspects of the information technology (IT) revolution has been the speed at which specialized, high-performance tools and capabilities originally developed for specific research communities evolve into products, services, and infrastructure used more broadly by scientists and engineers, and even by the mass public. The Internet itself is the best example of this phenomenon. We have come to expect that the performance of one era’s “high-end” tool or application will be matched or exceeded by the next era’s desktop software or machine. Although the funding of IT-intensive research communities is justified by the results they produce in their own domains, the added value that they provide by proving concepts, developing features, and “breaking in” the technology to the point where it can be adapted to serve larger markets can in some cases have even greater social and economic benefit.

Cyberinfrastructure-enabled research is one area in which today’s cutting-edge researchers are building the foundation for a quantum leap in IT capability. Taking advantage of very-high-bandwidth Internet connections, researchers are able to connect remotely to supercomputers, electron microscopes, particle accelerators, and other expensive equipment so that they can acquire data and work with distant colleagues without traveling. There are several drivers of this trend.

First, in many fields the basic tools needed to do cutting-edge research are now so expensive that they cannot be bought by every lab and campus, or even by every country. In the case of high-energy physics, CERN’s Large Hadron Collider (LHC) in Europe promises capabilities unmatched by any other shared facility on the planet. The Superconducting Supercollider, which the United States considered building in the 1990s, would have been a rival, but Congress decided against funding it. As a result, U.S. high-energy physicists who want to participate in the search for elementary particles or forces must conduct their experiments in Europe. Even in the life sciences, where the equipment is usually not as expensive, devices such as ultra-high-voltage electron microscopes are beyond the budgets of most universities, and the government funding agencies cannot afford to underwrite the cost for everyone. For example, there has not been a new high-voltage electron microscope fielded for use in biological or biomedical research in the United States for more than 30 years. For this work U.S. researchers depend on the generosity of the Japanese and Korean governments.

A second driver is that a growing number of science and engineering fields are becoming data- and computation-intensive. This has certainly happened in the life sciences, with the mapping of the human genome and the emergence of new data-intensive fields such as genomics, and is also occurring in the earth sciences and in other fields. Even though the cost of computing power continues to decline rapidly, researchers are demanding access to more computing capacity and capability with supercomputers, linked “clusters” of commodity computers, and advanced high-bandwidth networks connecting these resources. This new model of a shared distributed system of advanced instruments and IT components involves deployment of elements that are much too expensive for most universities.

The third driver is the relentless progress of IT itself in making it easier and more affordable to share research data, tools, and computing power. The three elements of emerging distributed IT systems—data storage, networking, and computational capacity/capability—are all advancing at exponential rates, with the price of a given unit of each element continuing to drop rapidly. Networking advances more rapidly than storage, which in turn is on a steeper curve than computing power, but the general implications are clear. The falling prices make it more affordable to link scientific communities with advanced instruments and high-performance computing and to connect distributed databases and other resources in end-to-end environments that can be accessed in real time through a simple and user-friendly interface.

The challenge is to figure out how to use this advancing capability most productively and to create the institutional and policy framework that facilitates research collaboration through this cyberinfrastructure. The solution will differ somewhat from field to field, but it is helpful to look closely at one area that is leading in realizing the potential of the technology.

The Biomedical Informatics Research Network (BIRN) is a National Institutes of Health (NIH) initiative that fosters distributed collaborations in biomedical science by using IT innovations. Currently BIRN involves a consortium of 23 universities and 31 research groups that participate in infrastructure development or in one or more of three test-bed projects centered around structural and/or functional brain imaging of human neurological disorders including Alzheimer’s disease, depression, schizophrenia, multiple sclerosis, attention deficit disorder, brain cancer, and Parkinson’s disease. The BIRN Coordinating Center, which is responsible for developing, implementing, and supporting the IT infrastructure necessary to achieve distributed collaborations and data sharing among the BIRN participants, is located at the University of California at San Diego (UCSD).

ALTHOUGH ALL DISCIPLINES CAN LEARN FROM BIRN’S EXPERIENCE, THE SOCIOLOGICAL TRANSITION FROM PHYSICAL TO VIRTUAL RESEARCH COMMUNITY IS A PROCESS THAT EACH FIELD AND DISCIPLINE NEEDS TO GO THROUGH ON ITS OWN.

BIRN is using these initial test-bed studies to drive the construction and daily use of a data-sharing environment that presents biological data held at geographically separate sites as a single, unified database. To this end, the BIRN program is rapidly producing tools and technologies to enable the aggregation of data from virtually any laboratory’s research program to the BIRN data system. Lessons learned and best practices are continuously collected and made available to help new collaborative efforts make efficient use of this infrastructure at an increasingly rapid pace.

Another activity at UCSD that complements and supports BIRN and other cyberinfrastructure efforts in the life sciences is the Telescience Project, which emerged from the early efforts of researchers at the National Center for Microscopy and Imaging Research (NCMIR) to remotely control bio-imaging instruments. In 1992, NCMIR researchers demonstrated the first system to control an electron microscope over the Internet. Researchers at a conference in Chicago were able to interactively acquire and view images via remote control of one of the intermediate-voltage electron microscopes at NCMIR and to simultaneously refine these data using a remotely located Cray supercomputer.

In the mid-1990s, Web-based telemicroscopy was made available to NCMIR’s user community in the United States and abroad, which was then able to effectively use the remote interface to acquire data. It became clear, however, that to make the most of this capability it would also be necessary to link this data acquisition with enhanced data computation and storage resources. The Telescience Project was developed to address this issue. It provides a grid-based architecture that combines telemicroscopy with tools for parallel distributed computation, distributed data management and archiving, and interactive integrated visualization, creating an end-to-end solution for high-throughput microscopy. This integrated system is increasing the throughput of data acquisition and processing and ultimately improving the accuracy of the final data products. The Telescience Project merges technologies for remote control, grid computing, and federated digital libraries of multi-scale data important to understanding cell structure and function in health and disease.

The Telescience Project serves as a “skunk works” for BIRN, allowing new concepts and technologies to be developed and tested before insertion into the BIRN production environment. In turn, building an end-to-end system is a great way to find out what works and what does not. If researchers are unhappy with the performance, they will not use it. On the other hand, if a system is easy to use, it opens a wealth of possibilities, as we have found in building and fielding the BIRN. Science is a dynamic, social process. The cyberinfrastructure-enabled environment facilitates new forms of collaborative research, whose dynamism further drives the research, which in turn further drives the development of new IT tools and capabilities.

For example, collaborating scientists in Kentucky and Buenos Aires could schedule an experiment on the $50 million Korean high-energy electron microscope. The scientists could jointly drive the operation of the microscope. To ensure that they collect the most useful data, they can generate preliminary three-dimensional (3D) results that are immediately streamed for analysis to computers that are dynamically selected from a pool of globally distributed computational resources. The resultant output can then be visualized in 3D and used by the scientists to determine how to guide the work session at the microscope. Throughout this process, raw data, data intermediates, and meta-data can be automatically added to integrated databases. In order to complete this type of data-driven remote session in a reasonable time period, the networking paths among resources must be intelligently coordinated along with the data input and output of the variety of software applications being used. As material is added to the integrated databases, the same researchers or an entirely different team of scientists can be mining the data for other uses. Through the innovations of the Telescience Project and BIRN, these tightly integrated sessions are not only possible, they are increasingly routine, secure, and accessible via a web portal with a single sign-on. More important, through the convergence of usability, computational horsepower, and richly integrated workflows, the end-to-end throughput for generating scientific results is increasing.
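To make that choreography concrete, consider a minimal sketch, in Python, of the general pattern described above: acquire data from a remote instrument, farm the compute-intensive analysis out to a pool of processors, and archive raw data, intermediates, and metadata together. Every function and resource name here is hypothetical; this is not the Telescience or BIRN interface, only an illustration of the orchestration pattern.

# Illustrative sketch only (not the Telescience/BIRN API): orchestrating a
# remote, data-driven microscopy session. Acquisition is simulated, and a
# local process pool stands in for a distributed compute resource; results
# plus metadata land in an in-memory "database."
from concurrent.futures import ProcessPoolExecutor
from datetime import datetime, timezone

def acquire_tilt_series(instrument: str, n_tilts: int) -> list[list[float]]:
    """Simulate acquiring a series of 2D projections from a remote microscope."""
    return [[float(t * i) for i in range(8)] for t in range(n_tilts)]

def reconstruct(projection: list[float]) -> float:
    """Stand-in for a compute-intensive 3D reconstruction step."""
    return sum(x * x for x in projection) ** 0.5

def run_session(instrument: str, n_tilts: int, workers: int) -> dict:
    raw = acquire_tilt_series(instrument, n_tilts)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        intermediates = list(pool.map(reconstruct, raw))   # distributed analysis
    return {
        "instrument": instrument,
        "acquired_at": datetime.now(timezone.utc).isoformat(),
        "raw": raw,                      # raw data
        "intermediates": intermediates,  # streamed analysis results
        "meta": {"n_tilts": n_tilts, "workers": workers},
    }

if __name__ == "__main__":
    database = []   # stand-in for the integrated data grid
    database.append(run_session("hypothetical-HVEM", n_tilts=4, workers=2))
    print(f"{len(database)} session(s) archived;",
          f"first intermediates: {database[0]['intermediates']}")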

Using the BIRN infrastructure, scientists are developing new insights and resources that will improve clinicians’ ability to identify and diagnose problems such as Alzheimer’s disease. BIRN researchers at the Center for Imaging Science at Johns Hopkins University, collaborating with other BIRN researchers, developed a pipeline to enable seamless processing of shape information from high-resolution structural magnetic resonance scans. In the initial study, hippocampal data from 45 subjects (21 control subjects, 18 Alzheimer’s subjects, 6 subjects exhibiting a rare form of dementia, called semantic dementia) were analyzed by comparing these 45 human hippocampi bilaterally to one another (4,050 comparisons for both left and right hippocampi). This large-scale computation required over 30,000 processor hours on the NSF-supported TeraGrid and produced over four terabytes of data that were stored for subsequent analysis on the NIH-supported BIRN data grid. Using the results of the shape analyses, BIRN researchers were able to successfully classify the different subject groups through the use of noninvasive imaging methodologies, potentially providing clinicians with new tools to assist them in their daily work.
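As a rough illustration of the scale of such an analysis, the sketch below enumerates the 2 x 45 x 45 = 4,050 comparisons described above and farms a placeholder comparison function out to a process pool; at the reported 30,000-plus processor hours, each real comparison would have consumed roughly 7.4 processor hours. The code is purely illustrative and is not the BIRN or Johns Hopkins pipeline.

# Illustrative sketch only: enumerating and distributing a pairwise
# shape-comparison workload of the size described above. The "compare"
# function is a placeholder, not the real shape-analysis pipeline.
from itertools import product
from concurrent.futures import ProcessPoolExecutor

SUBJECTS = [f"subject_{i:02d}" for i in range(45)]   # 21 + 18 + 6 = 45
SIDES = ("left", "right")

def compare(task: tuple[str, str, str]) -> float:
    side, a, b = task
    return float(abs(hash((side, a, b))) % 1000) / 1000.0   # placeholder metric

if __name__ == "__main__":
    tasks = [(side, a, b) for side in SIDES
             for a, b in product(SUBJECTS, SUBJECTS)]        # 2 * 45 * 45 = 4,050
    with ProcessPoolExecutor() as pool:
        scores = list(pool.map(compare, tasks, chunksize=256))
    print(len(tasks), "comparisons;",
          f"~{30000 / len(tasks):.1f} processor hours each at the scale reported")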

THE DATA INTEGRATION AND NETWORKING CHALLENGES FEED ON EACH OTHER, SINCE GREATER ACCESS TO STORAGE LEADS TO GREATER DEMANDS FOR NETWORKS, AND VICE VERSA.

Work is now under way to extend the Telescience Project and BIRN to create an integrated environment for building digital “visible cells,” a multiscale set of interconnected 3D images that accurately represent the subcellular, cellular, and tissue structures for various cellular subsystems in the body. With the increasing availability of biological structural data that spans multiple scales and imaging modalities, the goal is to create realistic digital models of cellular subsystems that can be used to create simulations that will rapidly advance our understanding of the impact of structure on function in the living organism. Deciphering the structure-function relationship is one of the grand challenges in structural biology. The Telescience Project and BIRN are knitting together an IT fabric and coordinating interdisciplinary teamwork to build a shared infrastructure that integrates the tools, resources, and expertise necessary to accelerate progress in this fundamental domain of biology.

The benefits of this effort are not limited to cutting-edge research. Telemicroscopy has been successfully used in the classroom to expose students to interdisciplinary research, and the visible cell project will produce materials that are useful in K-12 education.

Cyberinfrastructure-supported environments such as BIRN are changing science. They can significantly improve the cost-effectiveness of research so that society will get more basic research bang for its buck. This can help relieve some of the pressure on government funding agencies, such as NIH, which must emphasize translational-research accomplishments most often based on advances resulting from investments in basic science. Enhanced research productivity resulting from shared infrastructure can reduce the costs attributable to geographic duplication of facilities, thus helping to balance the investments supporting applied research and basic, curiosity-driven discovery, both of which are critical to addressing major societal needs.

A second possible implication is that cyberinfrastructure-supported environments will enable interdisciplinary teams to be more effective in attacking the big long-term or “stretch” goals in research. This could change the sociology of the research enterprise from a focus on individual goals, which are generally of a short-term character, and provide incentives leading to a greater emphasis on effective collaboration in interdisciplinary, interinstitutional settings. This will call for universities to change the way in which they evaluate and reward faculty members and will hopefully create new motivation for scientists to work together in ways that will speed progress.

PERHAPS THE LONG-TERM SOLUTION IS TO TRAIN A CORE OF SCIENTISTS AND ENGINEERS WITHIN EACH FIELD WHO HAVE SPECIAL EXPERTISE IN IT AS WELL AS THEIR PRIMARY SPECIALTY.

Cyberinfrastructure will also allow researchers at smaller research institutions in the United States and around the world to participate in cutting-edge research in fields where the infrastructure is sufficiently advanced. It will create the research equivalent of the global business networks that Thomas Friedman describes in his recent book The World Is Flat.

Although all disciplines can learn from BIRN’s experience, the sociological transition from physical to virtual research community is a process that each field and discipline needs to go through on its own. At BIRN, we may develop portals and other specific tools with possible application to cyberinfrastructure efforts in other fields, but the hard work of designing, building, using, and continuously improving end-to-end production environments does not seem to be amenable to easy transfer from field to field because it must be adapted for the equipment used, the nature of the data collected, and the types of collaboration that are appropriate. But that also can change.

NSF Director Arden Bement has talked about his vision of the day when cyberinfrastructure joins “the ranks of the electrical grid, the interstate highway system and other traditional infrastructures. A sufficiently advanced cyberinfrastructure will simply work, and users won’t care how.” Bement’s remark points to what we believe will be the most important long-term effect of cyberinfrastructure. Just as the Internet has become a tool that is used by everyone, the research cyberinfrastructure we are creating today will evolve into the basic platform for all manner of collaborative knowledge work in the future, affecting business and commerce, entertainment, education, health care, and just about every other human activity.

If the cyberinfrastructure is to deliver on its enormous promise, progress is essential in two areas. First, more intensive work is needed on frameworks for data integration. As data storage becomes less expensive, we are faced with a rapidly growing mountain of data. In order to be useful, the data need to be accessible and integrated into other data sets. We need more effective ways to bring data together on the fly in ways that can be visualized and understood by a researcher. Google is a useful analogy. What we need is really a “Google on steroids.” The beginning of what is needed can be seen in the efforts of researchers to integrate the enormous amount of data accumulating about the diversity of human genotypes.

Although fundamentally a software problem, data integration is also an organizational challenge because there are different approaches being developed within the scientific community. One can choose a more prescriptive approach by requiring the definition of specific meta-data entities that must be used by all sources, as is done by the Cancer Bioinformatics Grid operated by the National Cancer Institute, or one can develop more flexible methods to bring together diverse data sources that may not be based on similar standards, as is being done in BIRN. Although the two approaches seem mutually exclusive, they are actually complementary and provide a fertile area for collaboration between these projects.
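A minimal sketch, with invented field names, sources, and mappings, shows how the two styles differ in practice: the prescriptive style insists that every source publish an agreed set of metadata entities, whereas the mediated style lets each source keep its own schema and translates records into a shared view on the fly. Neither fragment reflects the actual caBIG or BIRN software.

# Illustrative sketch only; the fields, sources, and mappings are invented.
REQUIRED_FIELDS = {"subject_id", "modality", "acquired"}     # prescriptive style

def validate_prescriptive(record: dict) -> bool:
    """Every source must publish exactly the agreed metadata entities."""
    return REQUIRED_FIELDS.issubset(record)

# Mediated style: each source keeps its own schema; a mapping layer translates
# records into a shared view on the fly.
SOURCE_MAPPINGS = {
    "lab_a": {"subj": "subject_id", "scan_type": "modality", "date": "acquired"},
    "lab_b": {"patient": "subject_id", "imaging": "modality", "when": "acquired"},
}

def mediate(source: str, record: dict) -> dict:
    mapping = SOURCE_MAPPINGS[source]
    return {common: record[local] for local, common in mapping.items()}

if __name__ == "__main__":
    a = {"subj": "s01", "scan_type": "MRI", "date": "2005-11-02"}
    b = {"patient": "p07", "imaging": "fMRI", "when": "2005-11-03"}
    unified = [mediate("lab_a", a), mediate("lab_b", b)]
    print(all(validate_prescriptive(r) for r in unified), unified)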

A second challenge involves networking architectures. Technological advances are creating the potential for networks to be managed much more efficiently. For example, OptIPuter, a project of the Cal-IT2 Institute at UCSD and the Electronic Visualization Laboratory at the University of Illinois at Chicago, is developing an innovative approach to using fiber-optic data connections. Many messages travel on a given fiber simultaneously, and the standard protocol is to convert large blocks of data into small packets that travel together with other messages from other users. In the OptIPuter approach, traffic is managed so that one project can own a specific wavelength (color) on an optical fiber path (like a dedicated line) for an instant. This impulse-based management is better adapted to the nature of the data flows in scientific research and will also be better adapted to emerging network use patterns outside of research. But some of the entrenched, and occasionally struggling, commercial interests that control the telecommunications infrastructure have a large financial stake in staying with management systems that favor a constant, high level of traffic over the sequential, impulse-based approach. Taking the necessary regulatory steps to remove these sorts of barriers and provide incentives for adoption of revolutionary new technologies is a significant policy challenge.
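Some back-of-the-envelope arithmetic suggests why dedicated lightpaths appeal to data-intensive science. The figures below are illustrative assumptions, not OptIPuter measurements: moving a four-terabyte imaging study over a dedicated 10-gigabit-per-second wavelength takes under an hour, while the same transfer over a one-gigabit-per-second share of a conventional routed link takes roughly nine.

# Back-of-the-envelope arithmetic only; the link speeds and the shared-capacity
# fraction are assumptions chosen for illustration.
def transfer_hours(terabytes: float, gigabits_per_second: float) -> float:
    bits = terabytes * 1e12 * 8
    return bits / (gigabits_per_second * 1e9) / 3600

DATASET_TB = 4.0   # e.g., one large imaging study
print(f"dedicated 10 Gb/s lightpath: {transfer_hours(DATASET_TB, 10.0):.1f} h")
print(f"sharing ~1 Gb/s of a routed link: {transfer_hours(DATASET_TB, 1.0):.1f} h")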

In addition, the data integration and networking challenges feed on each other, since greater access to storage leads to greater demand for networks, and vice versa. This points to another area for necessary policy focus in the coming years: the need to build a broad base of support for investments in these areas.

Certain pieces and functions of cyberinfrastructure have developed dedicated policy communities and constituencies that focus attention on their current and future needs. High-performance computing, with its support base and practitioners in the agencies, national labs, academia, and industry, is a good example. Groups such as the Council on Competitiveness effectively stimulate discussion of issues and concerns related to computing needs. Other organized constituencies exist for research networking, where coalitions such as National Lambda Rail and Internet2 have emerged to provide effective focus and leadership, and digital archiving, where libraries and librarians effectively promote their needs.

Yet outside the affected domains themselves, and their traditional sources of support, there has been no broad constituency or leadership institution concerned with developing the integrative tools and systems needed to build end-to-end systems such as BIRN. It would be particularly useful if there were more effective support for interdisciplinary programs that engage appropriate parts of the computer science community in cooperative work with domain scientists addressing these problems. This is how the key tools of middleware—software that enables novice and expert alike to access resources and participate—are being developed now, but we are really only at the beginning of this process. New software architectures are needed to integrate huge distributed systems, but the computer science community and other stakeholders will not come to the table without the right incentives. Perhaps the long-term solution is to train a core of scientists and engineers within each field who have special expertise in IT as well as their primary specialty, much as molecular biology has become embedded in nearly every bioscience subdiscipline over the past 20 years.

Several years ago Daniel Atkins at the University of Michigan chaired a blue-ribbon panel for NSF’s Computer and Information Science and Engineering Directorate that recommended a significant new interagency effort in cyberinfrastructure. Now it appears that NSF is starting to take on the necessary leadership role called for in the Atkins report. That is an encouraging step, but what is really needed now is a broader interagency initiative that takes advantage of the research and institutions supported by NIH, the National Oceanic and Atmospheric Administration, the Department of Energy, and others to accelerate the process of building new virtual research communities. Relatively modest investments by the Department of Defense, NSF, and other agencies led to today’s commodity Internet and the enormous new industries and markets that the Internet has made possible. With the right approach to cyberinfrastructure, the United States can again leverage investments in new research resources to build the foundation for the next IT revolution.

At Last

The last time that the look of Issues was updated was the fall of 1988, when I joined the magazine. The 1988 design put Issues at the forefront of the movement toward desktop publishing. Of course, the software was relatively primitive at the time, and we opted for a design that was much stronger on consistency and ease of use than on flexibility. The software, the desktop computer, and the printing industry have all made impressive technological progress since then, and we have finally marshaled the time and resources to take advantage of the new capabilities. The result is a new look that includes color, variety, and art.

The financial support of our sponsors was critical to the decision to redesign. National Academy of Sciences president Bruce Alberts began urging us to add more visual appeal to the magazine almost from the day he arrived in 1993, and he encouraged NAS to steadily increase its financial support over the years. For a period in the 1990s, Issues lost the financial support of the National Academy of Engineering and the Institute of Medicine, but Wm. A. Wulf renewed NAE’s support when he became president, and now Harvey Fineberg has renewed IOM’s support.

The University of Texas at Dallas has been supporting the magazine since 1992, and the university’s new president David E. Daniel, an NAE member, has endorsed UTD’s continued sponsorship with a generous financial commitment. With this firm foundation in place, we decided that it was finally safe to make the leap to full-color production and a new design.

Credit for the design belongs to Pamela Reznick, who also produced the previous designs and who has been responsible for producing Issues’ covers. Pam selected new typefaces (Minion and Myriad) and created a flexible template that will make it possible to give each article a distinctive layout. Jennifer Lapp of Pica & Points will continue to be responsible for implementing the design in future issues.

The most striking addition to the design is the use of art, and for that we owe a debt to J.D. Talasek, director of exhibits and cultural programs at the National Academies. J.D. is responsible for the numerous inspired art exhibits that are mounted in the Academies’ offices in Washington. These exhibits feature work that explores the relationships among the arts and science, engineering, and medicine. J.D. will be choosing samples from these exhibits as well as other suitable art to reproduce in Issues. The work featured in each issue might bear some thematic relationship to the articles, but it should in no way be seen as illustrating the text. The art is all produced independently and stands on its own.

The use of art in a policy magazine might seem odd or merely decorative, but we consider it a substantive addition. The art will not recommend regulatory changes or funding increases, but by conveying the beauty, power, horror, anxiety, wonder, or confusion engendered by science, engineering, and medicine, it can help us to see contemporary issues in fresh ways. We encourage you to look at the art carefully and to read the artists’ descriptions of their work. You will see that there is much more there than meets the casual eye.

Envisioning a Transformed University

Rapidly evolving information technology (IT) has played an important role in expanding our capacity to generate, distribute, and apply knowledge, which in turn has produced unpredictable and frequently disruptive change in existing social institutions. The implications for discovery-based learning institutions such as the research university are particularly profound. The relationship between societal change and the institutional and pedagogical footing of research universities is clear. The knowledge economy is demanding new types of learners and creators. Globalization requires thoughtful, interdependent, and globally identified citizens. New technologies are changing modes of learning, collaboration, and expression. And widespread social and political unrest compels educational institutions to think more concertedly about their role in promoting individual and civic development. Institutional and pedagogical innovations are needed to confront these dynamics and ensure that the canonical activities of universities—research, teaching, and engagement—remain rich, relevant, and accessible.

Aware of these developments, in February 2000 the National Academies convened the Panel on the Impact of Information Technology on the Future of the Research University, which in November 2002 published the report Preparing for the Revolution: Information Technology and the Future of the Research University. As a follow-up to the report, in Fall 2002 the Academies launched the Forum on Information Technology and Research Universities, which conducted a series of discussions among university leaders at various locations across the country. The diversity of opinions and viewpoints that emerged during these meetings would be impossible to summarize in a report. Besides, no consensus is possible on such a complex and uncertain subject. Instead, in the spirit of continuing and broadening the discussion, we have asked several people who participated in the discussions to present their personal perspectives on particular aspects of the subject. These articles are meant to inform readers and stimulate further exploration. In no sense are they meant to convey the collective opinion of the organizing panel or the participants, but we do hope that they reflect the richness and seriousness of the discussions. Much more information is available at www7.nationalacademies.org/itru/index.html.

The pace of change

In thinking about changes to the university, one must think about the technology that will be available in 10 or 20 years, technology that will be thousands of times more powerful as well as thousands of times cheaper. This technological progress will affect all of the university’s activities (teaching, research, service), its organization (academic structure, faculty culture, financing, and management), and the broader higher education enterprise as it evolves toward a global knowledge and learning industry.

Although it may be difficult to imagine today’s digital technology replacing human teachers, as the power of this technology continues to evolve, the capacity to reproduce all aspects of human interactions at a distance could well eliminate the classroom and perhaps even the campus as the location of learning. Access to the accumulated knowledge of our civilization through digital libraries and networks, not to mention massive repositories of scientific data from remote instruments such as astronomical observatories or high energy physics accelerators, is changing the nature of scholarship and collaboration in very fundamental ways.

The Net generation of students has incorporated IT completely into its vision of education and has begun to use it to take control of the learning environment. From instant messaging to e-mail to blogs, students are in continual communication with one another, forming learning communities that are always interacting, even in classes (as any faculty member who has been “Googled” can attest). Adept at multitasking and context switching, they approach learning in a highly nonlinear manner, which is a poor fit with the sequential structure of the university curriculum. They are challenging the faculty to shift their instructional efforts from the development and presentation of content, which is becoming readily accessible through open-content efforts such as MIT’s Open CourseWare initiative, to interactive activities that will transform lecturers into mentors and consultants to student learning.

Increasingly, we realize that learning occurs not simply through study and contemplation but through the active discovery and application of knowledge. From John Dewey to Jean Piaget to Seymour Papert, we have ample evidence that most students learn best through inquiry-based or “constructionist” learning. As the ancient Chinese proverb suggests, “I hear and I forget; I see and I remember; I do and I understand.” To which we might add, “I teach and I master.”

But here lies a great challenge. Creativity and innovation are essential not only to problem solving but more generally to achieving economic prosperity and sustaining national security in a global, knowledge-driven economy. Although universities are experienced in teaching the skills of analysis, they have far less experience with stimulating and nurturing creativity. In fact, the current disciplinary culture of U.S. campuses sometimes discriminates against those who are truly creative, those who do not fit well into the stereotypes of students and faculty.

The university may need to reorganize itself quite differently, stressing forms of pedagogy and extracurricular experiences to nurture and teach the art and skill of creativity and innovation. This would probably imply a shift away from highly specialized disciplines and degree programs to programs placing more emphasis on integrating knowledge. To this end, perhaps it is time to integrate the educational mission of the university with the research and service activities of the faculty by ripping instruction out of the classroom—or at least the lecture hall—and placing it instead in the discovery environment of the laboratory or studio or the experiential environment of professional practice.

An equally profound transformation is going to occur in university laboratories. The ease of communication, the growth of shared electronic databases, and the ability to use sophisticated equipment such as supercomputers, telescopes, and particle accelerators through Internet connections are dramatically altering the way researchers work and creating an essential cyberinfrastructure. Clearly, cyberinfrastructure is not only reshaping the way people work but actually creating new systems for science and engineering research, training, and application. Once the microprocessor was embedded in instrumentation, Moore’s Law of rapidly increasing capacity took over scientific investigation. The availability of powerful new tools such as computer simulation, massive data repositories, ubiquitous sensor arrays, and high-bandwidth communication is allowing scientists and engineers to shift much of their intellectual activity from the routine collection and analysis of data to the creative work of posing new questions to explore. IT has created, in effect, a new modality of scientific investigation through simulation of natural phenomena, which is serving as the bridge between experimental observation and theoretical interpretation. Globalization is a particularly important consequence of the new forms of scientific collaboration enabled by cyberinfrastructure, which is allowing scientific collaboration and investigation to become increasingly decoupled from traditional organizations such as research universities and corporate R&D laboratories as new communities for scholarly collaboration evolve.

Institutional upheaval

While promising significant new opportunities for scientific and engineering research and education, the digital revolution will also pose considerable challenges and drive profound transformations in existing organizations such as universities, national and corporate research laboratories, and funding agencies. Here it is important to recognize that the implementation of such new technologies involves social and organizational issues as much as it does technology itself. Achieving the benefits of IT investments will require the co-evolution of technology, human behavior, and organizations.

Although the domain-specific scholarly communities, operating through the traditional bottom-up process of investigator-proposed projects, should play the lead role in responding to the opportunities and challenges of new IT-enabled research and education, there is also a clear need to involve and stimulate those organizations that span disciplinary lines and integrate scholarship and learning. Perhaps the most important such organization is the research university, which, despite the potential of new organizational structures, will continue to be the primary institution for educating, developing, and financing the U.S. scientific and engineering enterprise. Furthermore, because the contemporary research university spans not only the full range of academic disciplines but also the multiple missions of education, scholarship, and service to society, it can—indeed, it must—serve as the primary source of the threads that stitch together the various domain-focused efforts.

Many in the research university community expect to see a convergence and standardization of the cyberinfrastructure necessary for state-of-the-art research and learning over the next several years, built upon open source technologies, standards, and protocols, and they believe that the research universities themselves will play a leadership role in creating these technologies, much as they have in the past. For the IT-driven transformation of U.S. science and engineering to be successful, it must extend beyond the support of investigators and projects in domain-specific science and engineering research to include parallel efforts in stimulating institutional capacity.

The primary issue in managing the IT environment involves the balance between the centralized control and standardization necessary to achieve adequate connectivity and security, and the inevitable chaos that characterizes the university IT environment because of highly diverse needs and funding sources—particularly in the research arena. A balance must be achieved between infinite customizability and institution-wide standards that protect the organization. University leaders must be willing to tolerate freedom— even anarchy—in domains such as research, while demanding tight control and accountability in areas such as telecommunications and financial operations.

Although some institutions are still striving for centralized control, most have recognized that heterogeneity is a fact of life that needs to be tolerated and supported. It is important to move beyond the contrasts between academic and administrative IT and instead recognize the great diversity of needs among different missions such as instruction, research, and administration as well as among early adopters, mainstream users, and have-nots. The faculty seeks both a reliable platform (a utility) and the capacity to support specific needs; researchers would frequently prefer no administrative involvement, because their grants are paying for their IT support. The students seek the same robust connectivity and service orientation that they have experienced in the commodity world, and they will increasingly bring the marketplace onto the campus. In some ways, executive leadership is less a decision issue than a customer relationship management issue.

In a sense the library has become the poster child for the impact of IT on higher education. Beyond the use of digital technology for organizing, cataloguing, and distributing library holdings, the increasing availability of digitally created materials and the massive digitization of existing holdings are driving sweeping change in the library strategies of universities. Although most universities continue to build libraries, many are no longer planning them as repositories (since books are increasingly placed in off-campus retrievable high-density storage facilities) but rather as a knowledge commons where users access digital knowledge on remote servers. The most common characteristic of these new libraries is a coffee shop. They are being designed as a community center where students come to study and learn together, but where books are largely absent. The library is becoming a people place, providing the tools to support learning and scholarship and the environment for social interaction.

IMAGINE THAT THE EXTRAORDINARY ADVANCES IN COGNITIVE SCIENCE, NEUROSCIENCE, AND LEARNING THEORIES ACTUALLY BEGAN TO BE APPLIED IN EDUCATIONAL PRACTICE, YIELDING SIGNIFICANTLY IMPROVED OUTCOMES AT LOWER COST.

What is the university library in the digital age? Is it built around stacks or Starbucks? Is it a repository of knowledge or a “student union” for learning? In fact, perhaps this discussion is not really about libraries at all, but rather the types of physical spaces universities require for learning communities. Just as today every library has a Starbucks, perhaps with massive digitization and distribution of library holdings, soon every Starbucks will have a library—indeed, access to the holdings of the world’s libraries through wireless connectivity.

Libraries must also consider their critical role in the preservation of digital knowledge, which is now increasing at a rate an order of magnitude greater than that of written materials. Without a more concerted effort to standardize the curation, archiving, and preservation of digital materials, we may be creating a hole in our intellectual history. Traditionally this has been a major role of the research university through its libraries. The stewardship of knowledge will remain a university responsibility in the future, but it will have to be done in a more collaborative way in the digital age.

In a sense, the library may be the most important observation post for studying how students really learn. If the core competency of the university is the capacity to build collaborative spaces, both real and intellectual, then the changing nature of the library may be a touchstone for the changing nature of the university itself.

Few, if any, institutions have the capacity to go it alone in technology development and implementation, particularly in the face of monopoly pressures from the commercial sector. This growing need to build alliances is particularly apparent in the middleware and networking area. A new set of open educational resources (open-source tools, open content, and open standards) is being created by consortia such as the Open Knowledge Initiative, Sakai, and the Open CourseWare project and being made available to educators everywhere. Networking initiatives, grid computing, and other elements of cyberinfrastructure are gaining momentum through alliances such as Internet2 and the National Lambda Rail.

Just as in the IT industry itself, universities are increasingly cooperating in areas such as cyberinfrastructure and instructional computing in ways that allow them to compete more effectively for faculty, students, and resources. The growing consensus on the nature of the IT infrastructure of research universities over the next several years—based on open-source standards and outsourcing stable infrastructure—will demand cooperative efforts.

Lack of vision

Although most university leaders agree that IT will have the most profound effect over a decadal scale, they still devote most of their attention to managing the next few years. The major research universities have long histories of adapting readily to change and sustaining leadership in areas such as technology. The richest universities may well be able to ignore these technology trends, pull up the lifeboats, and feel secure with business as usual. But such complacency appears ill-advised when one considers how much the corporate world is changing in response to IT developments.

There is remarkably little conversation about the major changes occurring in scholarship and learning, driven in part by technology. Although there is recognition that new IT-based communities are evolving for faculty (e.g., cyberinfrastructure-based, global research communities) and students (e.g., social learning communities based on instant messaging), there is little discussion about how universities could take advantage of these communities in their educational and research missions.

There is also little evidence that these leaders understand just how rapidly this technology is driving major structural changes in other sectors such as business and government. Today, an industry chief information officer is expected to reduce IT costs for a given level of productivity by a factor of 10 every few years. Although university leaders are aware of the productivity gains enabled by a strategic use of technology in industry, they find it difficult to imagine the structural changes in the university capable of delivering such improvement.

University leaders must be willing to consider scenarios that push them out of their comfort zone:

  • Suppose the digital generation were to take control of its learning environments, demanding not only highly interactive, collaborative learning experiences but also the sophistication and emotional engagement of gaming technology and the convenience of other IT-based services.
  • Imagine that the extraordinary advances in cognitive science, neuroscience, and learning theories actually began to be applied in educational practice, yielding significantly improved outcomes at lower cost. What would happen if some adventuresome lower-tier universities were able to offer demonstrably better educations? What would that do to the colleges and universities competing with them? Would the top tier emulate them?

If students vote with their feet (and fingers) and their dollars, what changes would they demand? If courses based on game technology, excellent graphics, and pleasant surroundings compete with current offerings such as 8 AM lectures in uncomfortable auditoriums, what changes would result?

  • What happens if the Google digitization project creates in every Starbucks access to all the world’s libraries?
  • What are the deeper implications of new collaborations enabled by cyberinfrastructure that allow scholars to do their work largely independently of the university?
  • Could these emerging scientific communities compete with and break apart the feudal hierarchy that has traditionally controlled scientific training (particularly doctoral and postdoctoral work), empowering young scholars and enabling greater access to scientific resources and opportunities for collaboration and engagement?

What will be the impact of cyberinfrastructure on publication, collaboration, competition, travel, and the ability of participants to assume multiple roles (master, learner, observer) in various scholarly communities? Will the relative importance of creativity and analysis shift when cyberinfrastructure expands access to powerful new tools of investigation such as computer simulation and massively pervasive sensor arrays?

Change already in motion

The report characterizing the first phase of the National Academies study of the impact of information technology on the university was entitled Preparing for the Revolution. But what revolution? To a casual observer, the university today looks very much like it has for decades: still organized into academic and professional disciplines; still basing its educational programs on the traditional undergraduate, graduate, and professional discipline curricula; still financed, managed, and led as it has been for many years.

COULD THESE EMERGING SCIENTIFIC COMMUNITIES COMPETE WITH AND BREAK APART THE FEUDAL HIERARCHY THAT HAS TRADITIONALLY CONTROLLED SCIENTIFIC TRAINING?

Yet if one looks more closely at the core activities of students and faculty, the changes over the past decade have been profound indeed. The scholarly activities of the faculty have become heavily dependent on the developing cyberinfrastructure, whether in the sciences, humanities, arts, or professions. Although faculty members still seek face-to-face discussions with colleagues, these have become the catalyst for far more frequent interactions over the Internet. Most faculty members rarely visit the library anymore, preferring to use far more powerful, accessible, and efficient digital resources. Many have ceased publishing in traditional journals in favor of the increasingly ubiquitous preprint route. Even grantsmanship has been digitized with the automation of most steps in the process from proposal submission and review to grant management and reporting. And as we have noted earlier, student life and learning are also changing rapidly, as students of the Net generation arrive on campus with the skills to apply this technology to forming social groups, role playing (gaming), accessing services, and learning—despite the insistence of their professors that they jump through the hoops of the traditional classroom culture.

In one sense it is amazing that the university has been able to adapt to these extraordinary transformations of its most fundamental activities with its organization and structure largely intact. Here one might be inclined to observe that technological change tends to occur much more rapidly than social change, suggesting that a social institution such as the university that has lasted a millennium is unlikely to increase its pace of change to match technology’s progress. But other social institutions such as corporations have learned the hard way that failure to keep pace can lead to extinction. On the other hand, it could be that the revolution in higher education is well under way, at least with the early adopters, and simply not sensed or recognized yet by the body of the institutions within which the changes are occurring.

Universities are extraordinarily adaptable organizations, tolerating enormous redundancy and diversity. It could be that the information technology revolution is actually an evolution, a change in sea level that universities can float through rather than a tsunami that will swamp them. Evolutionary change usually occurs first at the edge of an organization or an ecosystem rather than in the center, where it is likely to be extinguished. In this sense the cyberinfrastructure now transforming scholarship and the communications technology enabling new forms of student learning have not yet propagated into the core of the university. Of course, from this perspective, recent efforts such as the Google library project take on far more significance, since the morphing of the university library from stacks to Starbucks strikes at the intellectual soul of the university.

It is certainly the case that futurists have a habit of overestimating the impact of new technologies in the near term and underestimating their effect over the long term. There is a natural tendency to assume that the present course will continue, just at an accelerated pace, and thus to fail to anticipate the disruptive technologies and killer apps that turn predictions topsy-turvy. Yet we also have seen how rapidly IT has advanced and know with some precision how quickly it will continue to improve. When one takes into account the rate of development in biotechnology and nanotechnology, almost any imaginable scenario becomes possible.

The discussions that took place under the Academies IT Forum reinforced the good-news, bad-news character of digital technology. The good news is that it works, and eventually it is just as disruptive as predicted. The bad news is the same: this stuff works, and it is just as disruptive as predicted.

Precedents

During the 19th century, in a single generation following the Civil War, essentially everything that could change about higher education in the United States did in fact change: small colleges, based on the English boarding school model of educating only the elite, were joined by the public universities, with the mission of educating the working class. Federal initiatives such as the Land Grant Acts added research and service to the mission of the universities. The academy became empowered with new perquisites such as academic freedom, tenure, and faculty governance. Enrollments increased more than 10-fold. The university at the turn of the century bore little resemblance to the colonial colleges of a generation earlier.

Many in the university community believe that a similar period of dramatic change has already begun in higher education. In fact, some are even willing to put on the table the most disturbing question of all: Will the university, at least as we know it today, even exist a generation from now? Perhaps the focus of our study should not be “the impact of technology on the future of the research university” but “the impact of technology on scholarship and learning, wherever they may be conducted.”

Certainly the monastic character of the ivory tower is lost forever. Although there are many important features of the campus environment that suggest that most universities will continue to exist as identifiable places, at least for the near term, as digital technology makes it increasingly possible to emulate human interaction, perhaps we should not bind teaching and scholarship too tightly to buildings and grounds. Certainly, both learning and scholarship will continue to depend heavily on the existence of communities, since they are, after all, highly social enterprises. Yet as these communities are increasingly global in extent and detached from the constraints of space and time, we should not assume that today’s version of the scholarly community will dictate the future of our universities. Even in the near term, we should be aware that these disruptive technologies, which initially appear to be rather primitive, are stimulating the appearance of entirely new approaches to learning and research that could not only sweep aside the traditional campus-based, classroom-focused approaches to higher education but also seriously challenge the conventional academic disciplines and curricula. For the longer term, no one can predict the impact of exponential growth in technological capacity on social institutions such as universities, corporations, and governments.

To be sure, there will be continuing need and value for the broader social purpose of the university as a place where the young and the experienced can acquire not only knowledge and skills, but also the values and discipline of an educated mind, which are so essential to a democracy; where our cultural and intellectual heritage is defended and propagated, even while our norms and beliefs are challenged; where leaders of our governments, commerce, and professions are nurtured; and where new knowledge is created through research and scholarship and applied through social engagement to serve society. But just as it has in earlier times, the university will have to transform itself once again to serve a radically changing world if it is to sustain these important values and roles.

Tech talk

At a meeting on Bali a decade ago, just as the Internet bubble was inflating, University of California, Berkeley, Nobel laureate Charlie Townes held an audience of Asian entrepreneurs spellbound as he reminisced about inventing the laser. Townes described numerous contributions from the open, wide-ranging discussions he had with colleagues “across and down the hall” from his Bell Labs’ office and at Columbia University, discussions actively nurtured in those institutional environments of fierce intellectual exploration and passionate debate. Although Townes was credited with inventing the laser, his humble point was that the invention, like all innovation, was largely a social enterprise, built on the work of those who had come before, harnessing the insights of contemporaries.

Although the story is not included in their thoughtful new addition to the literature on innovation, Massachusetts Institute of Technology professors Richard Lester and Michael Piore have taken Townes’s lesson to heart. Indeed, Innovation—The Missing Dimension is a 200-page brief for bringing creative, interpretive conversation back into the highly analytic and engineering-driven product-design process and for protecting the quasi-public spaces such as universities where, they claim, such conversations best flourish. Therein lie the book’s many strengths and some predictable shortcomings.

Using case studies that range from cell phones and medical devices to the laundering of blue jeans, the authors set up two archetypal product-design methodologies: the analytical and the interpretive. Analysis, which they identify as today’s dominant industrial management and engineering practice, is the goal-directed process that treats an innovative concept as a problem to be solved as efficiently as possible by reducing it to a set of engineering requirements. The neglected approach is interpretation, which they liken to orchestrated, structured conversations among all the critical actors, including designers, product and process engineers, marketers, customers, and others who have a stake in an innovative concept.

In the process of interpretation, the authors argue, what “emerges from these conversations is a language community within which new products are conceived and discussed.” By essentially creating a new common language through conversation, the stakeholders can fully explore ideas that the analytic approach might truncate, giving rise to new interpretations of the product to be designed. Unexpected innovations can result.

Like the particle and wave properties of Townes’s laser light, analysis and interpretation are simultaneously necessary and apparently opposed. Analysis is project-driven, aimed at closure, and intent on reducing concept to practice. Interpretation is open-ended, thrives on ambiguity, and aims at entirely redefining the product concept. According to Lester and Piore, successful innovation is the result of striking a balance between these apparently contradictory activities.

These are heuristic archetypes, of course, so neither accurately captures the messier “mixed-mode” design process at most firms—large or small, established or startup. In startups doing cutting-edge innovation, for example, where the risks of market rejection are high, there is often continuous “interpretive” feedback that leads to product changes even as engineering specs are defined and production moves forward. It is likely that established larger companies strike a different balance and freeze product design earlier. Such real-world differences and their consequences might be telling for managers struggling with innovation, but the authors choose to gloss over them.

Lester and Piore generalize that firms are using the straightforward analytic approach as a low-risk strategy for dealing with the intense competitive pressures wrought by globalized markets and the impedance-free movement of investment capital. Ambiguity gives way to certainty, internal conversations are foreshortened, troubling stakeholders are excluded, and new products reach the market more quickly. But, the authors imply, retreating to the low-risk strategy will sacrifice opportunities for more novel innovation that an extended interpretive conversation might produce.

To the extent that firms have indeed embraced the analytic approach to the exclusion of interpretation, the authors make a timely point: U.S. firms in particular must generate novel innovation to compete effectively in increasingly price-driven, commoditizing markets subject to global competition. This is true not only for those firms that compete on the basis of their new products or technologies, but also for companies such as Dell, Wal-Mart, and eBay that succeed because of innovative business models.

Innovation is not a how-to book, and the authors consequently make no real attempt to systematize their insights into practical advice for managers—something that would have required more hard-headed engineering and less interpretation. Instead, as they make clear in the book’s last two chapters, their intent is not so much to instruct as to throw light on a little-recognized national trend.

They argue that the nation’s “spaces for interpretation” have narrowed precipitously over the past two decades. As evidence, they point variously to the sad demise of Bell Labs and the concomitant competitive cutback of wide-ranging corporate research (as opposed to development), and especially to the closer coupling of university research to commercial industry that resulted from tech-bubble greed and fiscal problems.

The result, they contend, is to call into question the nation’s ability to continue to generate the kind of innovation that sustained the United States in the 1990s. But by shifting their level of analysis from the firm to the nation, the authors take on a burden they cannot meet: the need to provide detailed evidence. The contention is intriguing and might even be true. It deserves a more thorough treatment.

A fuller treatment would have identified promising trends to the contrary. In my part of the world, where venture finance seeks the next wave of innovation, tremendous “interpretation” is again taking place. The conversational ferment is in some of the usual places— especially around the leveraging of a near-ubiquitous digital infrastructure that is an order of magnitude lower in cost than what existed a mere 10 years ago—and in a number of new spots as well. For example, it is increasingly at the intersection of different engineering disciplines and with the interaction of engineering and information technology with biology, energy, nanoscale materials and devices, and environmental sciences.

To be sure, there are major threats to the ability of the United States to generate and then benefit from the next waves of commercial innovation. Foremost among these might well be the current administration’s dangerous preoccupation with security, which can forestall the inward migration of crucial intellectual capital, truncate debate and intellectual exploration, and narrow R&D funding from long-term broad-based objectives to short-term military needs.

Lester and Piore provide one lens for magnifying such problems. To the extent that national innovation policy is currently dominated by one side’s peculiar faith-based “analytic,” the authors’ call for broadening the conversation and constructing a new common language community is to be applauded. Credit Lester and Piore with creating a useful syntax for discussion. By exploring the limits of conventional analysis and the opportunities presented by interpretation, they help to set up the debate. Carrying it forward must be a social enterprise.

The Challenge of Protecting Critical Infrastructure

In protecting critical infrastructure, the responsibility for setting goals rests primarily with the government, but the implementation of steps to reduce the vulnerability of privately owned and corporate assets depends primarily on private-sector knowledge and action. Although private firms uniquely understand their operations and the hazards they entail, it is clear that they currently do not have adequate commercial incentive to fund vulnerability reduction. For many, the cost of reducing vulnerabilities outweighs the benefit of reduced risk from terrorist attacks as well as from natural and other disasters.

The National Strategy for Homeland Security, released on July 16, 2002, reflects conventional notions of market failure that are rapidly becoming obsolete: “The government should only address those activities that the market does not adequately provide—for example, national defense or border security. For other aspects of homeland security, sufficient incentives exist in the private market to supply protection. In these cases we should rely on the private sector.” The Interim National Infrastructure Protection Plan (NIPP), released by the Department of Homeland Security (DHS) in February 2005, takes a similar position.

Although some 85% of the critical infrastructure in the United States is privately owned, the reality is that market forces alone are, as a rule, insufficient to induce needed investments in protection. Companies have been slow to recognize that the border is now interior. National defense means not only sending destroyers but also protecting transformers. In addition, risks to critical infrastructure industries are becoming more and more interdependent as the economic, technological, and social processes of globalization intensify. Just as a previous generation of policymakers adapted to the emergence of environmental externalities, policymakers today must adapt to a world in which “security externalities” are suddenly ubiquitous.

The case of CSX Railroad and the District of Columbia illustrates the tensions that have emerged over the competing needs for corporate efficiency and reduced public vulnerability to terrorist acts. Less than a month after a January 2005 train crash in South Carolina resulted in the release of deadly chlorine gas that killed 9 people and hospitalized 58 others, the District’s City Council passed an act banning the transportation of hazardous materials within a 2.2-mile radius of the U.S. Capitol without a permit. The act cited the failure of the federal government “to prevent the terrorist threat.” Subsequently, CSX petitioned the U.S. Surface Transportation Board (STB) to invalidate the legislation, claiming that it would “add hundreds of miles and days of transit time to hazardous materials shipments” and adversely affect rail service around the country. The STB ruled in CSX’s favor in March 2005, putting an end to the District’s efforts.

Shortly after the decision, Richard Falkenrath, President Bush’s former deputy homeland security advisor, highlighted in congressional testimony the severity of the threat that the act was intended to address: “Of all the various remaining civilian vulnerabilities in America today, one stands alone as uniquely deadly, pervasive, and susceptible to terrorist attack: toxic-inhalation hazard (TIH) of industrial chemicals, such as chlorine, ammonia, phosgene, methylbromide, hydrochloric and various other acids.”

If industry itself is not motivated to invest in protection against attack and the federal government does not take the initiative, who will take responsibility for protecting chemical plants, rail lines, and other critical infrastructure? Who will make it harder for terrorists to magnify the damage of an attack by first attacking the infrastructure on which effective response depends? Who will ensure that these and other elements of the infrastructure are not used as weapons to kill or maim thousands of people in our cities? Is there, then, an adequate combination of private organizational strategies and public policies that will ensure reliable and resilient service provision in the long term? As the CSX case illustrates, consensus on how best to protect critical infrastructure has not emerged, despite the urgency created by terrorist threats, as well as the ongoing challenges of dealing with natural disasters. (Editor’s note: This article was completed before Hurricane Katrina struck the Gulf Coast.)

An infrastructure is “critical” when the services it provides are vital to national security. The list of infrastructures officially considered critical is growing. In addition to the chemical sector, it includes transportation, the defense industrial base, information and telecommunications, banking and finance, agriculture, food, water, public health, government services, emergency services, and postal and shipping.

The threat of catastrophic terrorism has created a new relationship between national security and routine business decisions in private firms providing infrastructure services. Managers of these firms, like business managers elsewhere, are highly motivated to seek efficiency increases. The never-ending race for economies of scale and scope and for just-in-time processes that guarantee better results also leads to reduced redundancy, concentrated assets, and centralized control points. The new double-deck Airbus A380, the largest aircraft ever built, is designed to carry as many as 550 passengers in the quest for a decrease in the cost per passenger seat. More than half of the chickens destined for our supermarket shelves are processed by a handful of firms in Arkansas. Transformers used in primary power distribution have become so large that installations contain only one of them. The Internet is increasingly relied on for critical communications in the event of attack or disaster, notwithstanding its well-known vulnerabilities to a variety of disruptions.

Before 9/11 and particularly since, much work has been directed at identifying vulnerabilities at the scale of individual firms, of industries, and more recently, of geographical regions. Many sound engineering-based proposals exist for reducing vulnerability. Large apartment buildings and office towers can employ ventilation systems that detect and trap poisonous gases. Power distribution plants can better protect their largest transformers and store replacement units in safe places. Local governments can install LED traffic lights with trickle-charged batteries that will not fail during a blackout. Trains carrying toxic and explosive materials can be routed around cities. A 2002 National Academies study, Making the Nation Safer: The Role of Science and Technology in Countering Terrorism, contains other proposals to reduce vulnerabilities through the use of technology; subsequent work has added to the list. However, neither vulnerability assessments nor studies based on principles of engineering design address the competitive pressures and other incentives that have led private firms to build infrastructures in their current forms.

Comprehensive public policy for critical infrastructure protection must begin with an understanding that “protection” per se should not be the goal. As a means of reducing vulnerability to attack at the regional or countrywide scale, it is minimally effective. In an open society, higher fences and thicker walls do little to reduce aggregate vulnerabilities. In many instances, protection simply shifts the focus of terrorists to other, less heavily fortified targets.

Even if one accepts the word “protection,” what is being protected is not the infrastructure itself but the services it provides. With regard to terrorist threats, the policy goal should be to build capabilities for prevention of attacks that interrupt such services and for effective response and rapid recovery when such attacks do occur.

Sustainable policy must account both for the potential tradeoffs that exist at the firm level between efficiency and vulnerability and for the institutions and incentives potentially affecting that tradeoff. Ultimately, policy must

  • structure incentive systems for investment that enhance prevention of, response to, and recovery from the most likely and damaging attacks;
  • ensure adequately robust internal operations of private firms, including greater system reliability for their services;
  • limit imposed costs on firms to guarantee the competitiveness of our economy; and
  • do all of the above in a manner that can be sustained by a complacent public with a short memory that may tire of the high costs and consumer inconvenience that government policies aimed at making critical industries less vulnerable may entail.

Although efficiency and vulnerability are produced jointly, they are not assessed together. A market economy routinely accounts for improved efficiency, because shareholders are always looking for the best return on investment in the short term. However, vulnerability may be assessed only after it has been exposed by active study or system failure.

Organizations are most likely to account for vulnerabilities that are linked to their own core activities. They will ignore equally serious consequences of attacks on, or attacks that employ, an infrastructure service that is assumed to be reliably available. Airlines, for example, have an inherent incentive to become intimately familiar with the factors that determine the risk of a crash. Beginning in the 1970s, airline managers were also compelled to systematically address hijacking threats. However, before 9/11, none were truly prepared for the possibility that passenger jets would be used as weapons of large-scale destruction.

Accountability for and accounting of vulnerabilities distant from core business activities are relatively uncommon, particularly when the perceived probabilities of occurrence are very low. Although economic incentives drive the accounting of core-business vulnerabilities, legal, organizational, and political dynamics drive the response to vulnerabilities that lie outside the core business concerns of any single firm or industry.

IF INDUSTRY ITSELF IS NOT MOTIVATED TO INVEST IN PROTECTION AGAINST ATTACK AND THE FEDERAL GOVERNMENT DOES NOT TAKE THE INITIATIVE, WHO WILL TAKE RESPONSIBILITY FOR PROTECTING CHEMICAL PLANTS, RAIL LINES, AND OTHER CRITICAL INFRASTRUCTURE?

Traditional tools of risk assessment and risk management have become very sophisticated in recent years as a result of environmental, health, and safety regulations. Nonetheless, such tools remain largely inadequate in coping with high-impact, low-likelihood events. For many large technological network systems, the challenge of ensuring reliable operations has increased because operations both within and among firms have become increasingly interdependent. Elements of infrastructures in particular have become so interdependent that the destabilization of one is likely to have severe consequences for others.

As the scale and reach of these large technological systems have increased, the potential economic and social damage of failures has increased as well. The sources of such major disruptions lie in technical and managerial failures as well as natural disasters or terrorist attacks, as illustrated in the Northeast Blackout of August 2003. Economic and social activities are becoming more and more interdependent as well, so that the actions taken by one organization will affect others.

In this context, the incentives for any single organization to invest in prevention, response, and recovery are blunted. Without a global approach to understanding interdependencies and security externalities, determining the source of disruptions and quantifying the risk of such disruptions are difficult. Private decisionmakers will have neither adequate information nor adequate motivation to undertake investments that would be more than justified from the standpoint of the system as a whole.

As terrorist attacks have emerged as potential threats to infrastructure, private-sector executives and policymakers must grapple with far greater uncertainties than ever before. This is particularly challenging because terrorists can engage in “adaptive predation,” in which they purposefully adapt their strategies to take advantage of weaknesses in prevention efforts. In contrast, actions can be taken to reduce damage from future natural disasters with the knowledge that the probability associated with the hazard will not be affected by the adoption of these protective measures. The likelihood of an earthquake of a given intensity in Los Angeles will not change if property owners design more quake-resistant structures. The likelihood and consequences of a terrorist attack are determined by a mix of strategies and counterstrategies developed by a range of stakeholders and changing over time. This dynamic uncertainty makes the likelihood of future terrorist events extremely difficult (if not impossible) to estimate and increases the difficulty of measuring the economic efficiency of public policies and private strategies.

In that context, although private-sector actors can reduce system vulnerability by reducing dependence on vulnerable external services, decentralizing critical assets, decentralizing core operational functions, and adopting organizational practices to improve resiliency, these organizations may face sanctions from markets for taking such actions if they reduce efficiency, raise costs, or reduce profits. Indeed, markets rarely reward investments that reduce vulnerability to events so rare that there is no statistical basis for quantitative risk assessment.

Steering between market approaches that tend to remove system slack (increasing vulnerability to catastrophic system failure) and the redesign of private infrastructures to ensure more reliable functioning (though at higher cost to consumers), the federal government has generally opted for policies of partnerships between private-sector operators and government agencies. The Interim NIPP emphasizes the role of sector-specific agencies in coordinating private actors. The federal government also has overseen the development of a number of reliability regimes that involve combinations of government oversight and private-sector enforcement.

In both public/private and private/private partnerships, the tension between organizational autonomy and the interdependence of the constituent units of the large-scale system makes communication and coordination critical. When managers are not fully informed (or worse, are misinformed) regarding the actions and status of remote units, effective decisionmaking is not possible. Actions in one unit can have unintended and perhaps serious consequences elsewhere. Providing sufficient slack, encouraging constant and clear communications, and creating a consistent belief structure and safety-embracing culture help reduce this problem. Large-scale systems need to be flexible in adapting to rapidly changing situations.

Extraordinary levels of coordination of many organizations, public and private, will be required to secure any improved level of prevention, response, and recovery. Continuous but expensive organizational learning will be essential to producing an auto-adaptive response capability that will enable infrastructure service providers to deal more effectively with adaptive predators and dynamic uncertainty.

Yet such imperatives are unlikely to be tolerated by most private-sector organizations operating under normal business conditions, where bottom lines matter, where threats are difficult to discern, and where attacks are extremely infrequent. Sustaining watchfulness and the ability to deal with low-probability, high-impact events is the single most difficult policy issue facing critical infrastructure providers and homeland security agencies today. Critical infrastructure protection needs to be understood as not only deploying a tougher exoskeleton, but also developing organizational antibodies of reliability that enable society and its constituent parts to be more resilient and robust in the face of new, dynamic, and uncertain threats.

Even though technological and managerial procedures may be in place to limit the occurrence of a devastating event and reduce its effects through resilient infrastructure, the possibility of suffering a large loss must still be seriously considered. Should that happen, the question of who should pay for the economic consequences is likely to take center stage. In the 2002 National Strategy, the White House identified recovery as a fundamental element of homeland security. In most developed countries, insurance is one of the principal mechanisms used by individuals and organizations for managing risk. Indeed, insurance is a key mechanism not only for aiding in recovery after an attack but also for inducing investments to make an attack less likely.

A well-functioning insurance market plays a critical role in ensuring social and economic continuity when large-scale disaster occurs. Private insurers paid about 90% of the $23 billion in insured losses that resulted from the four hurricanes that hit Florida in 2004. Two-thirds of the $33 billion in insured losses from the 9/11 attacks were paid by reinsurance companies (mostly European) that operate worldwide at a larger scale. Because of the huge payouts, however, these companies either substantially increased their prices or stopped offering terrorism coverage altogether.

The collapse of the market for terrorism insurance after 9/11 motivated the passage of the Terrorism Risk Insurance Act of 2002 (TRIA). TRIA established a three-year risk-sharing partnership between the insurance industry and the federal government for covering commercial enterprises against terrorism for losses of up to $100 billion. Under TRIA, insurers are required to offer terrorism coverage to all of their commercial policyholders, who can in turn accept or decline it. About 50% of firms nationwide purchased terrorism insurance in 2004. In the case of an attack by foreign interests, the federal government would pay 90% of the insured losses above an insurer’s deductible, in effect providing free upfront reinsurance to insurers. The government can recoup part of its payment ex post through surcharges on all commercial policyholders, whether or not they bought terrorism insurance. The statute creating TRIA expires on December 31, 2005, and it is not clear whether Congress will renew it in its current form, modify it, or let it lapse.
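To make the loss-sharing arithmetic concrete, here is a minimal sketch of how a 90% federal share above an insurer’s deductible would be computed. The deductible and loss amounts below are hypothetical illustrations rather than figures from TRIA or from this article, and the sketch ignores program details such as the $100 billion cap and post-event recoupment.

# Hypothetical illustration of TRIA-style loss sharing; the dollar amounts
# are invented for illustration and are not the statute's actual parameters.

FEDERAL_SHARE = 0.90  # federal government pays 90% of insured losses above the deductible

def tria_split(insured_loss: float, insurer_deductible: float) -> dict:
    """Return the federal and insurer portions of an insured loss."""
    excess = max(insured_loss - insurer_deductible, 0.0)
    federal_payment = FEDERAL_SHARE * excess
    insurer_payment = insured_loss - federal_payment
    return {"federal": federal_payment, "insurer": insurer_payment}

# Example with assumed numbers: a $500 million insured loss against a $100 million deductible.
shares = tria_split(insured_loss=500e6, insurer_deductible=100e6)
print(f"Federal share: ${shares['federal']:,.0f}")  # $360,000,000
print(f"Insurer share: ${shares['insurer']:,.0f}")  # $140,000,000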

IN ADDITION TO ITS PRIME ROLE IN RECOVERY, INSURANCE CAN BE A POWERFUL TOOL IN INDUCING CRITICAL INFRASTRUCTURE INVESTMENTS THAT ENHANCE PREVENTION AND RESPONSE.

The TRIA and Beyond report, a detailed 10-month study by the Wharton School’s Center for Risk Management in collaboration with a broad spectrum of public and private organizations, was released in August 2005. It concludes that TRIA “has provided an important and necessary temporary solution to the problem of how terrorism insurance can be provided to commercial firms,” but does not constitute “an equitable and efficient long-term program and should be modified.”

Routine government involvement in most catastrophic risk coverage programs (floods, hurricanes, earthquakes, and terrorism) is an implicit recognition of the necessity for the public sector to protect the insurance infrastructure. Yet the respective roles and responsibilities of public and private actors in providing adequate protection to victims of terrorist attacks remain unclear. The creation by Congress or the White House of a national commission on terrorism risk coverage before permanent legislation is enacted, as the Wharton team urges, would certainly help create a more efficient and equitable long-term solution, one that includes the safety net necessary to ensure that insurance can play its traditional role in the recovery from major disasters or attacks involving critical infrastructure.

In addition to its prime role in recovery, insurance can be a powerful tool in inducing critical infrastructure investments that enhance prevention and response. As a third party between government and private firms, the insurance industry would play a key role in this domain. A firm or an individual investing in security and mitigation measures should be eligible to receive lower insurance rates. On the surface, the analogy to homeowner’s insurance and hurricane and flood insurance, where this policy is prevalent, seems compelling. But a closer look reveals that the potential role of insurance in inducing private investments in prevention and response is more red herring than silver bullet.

First, it is possible that having insurance will induce a manager to engage in riskier behavior than would otherwise have been the case—the moral hazard problem in insurance. Second, the link between the price of insurance and security/mitigation measures in the context of terrorism threats is tenuous at best. The evidence to date is that almost none of the insurers providing terrorism coverage in the United States has linked the price of coverage in any way to the security measures in place. Why is this the case? The significant decrease in the price of terrorism insurance four years after 9/11, combined with the relatively high price of reinsurance (when available), does not provide a large window for price reduction. Perhaps more important, the dynamic uncertainty due to adaptive predation by terrorists makes it extremely difficult to measure, and therefore to price, the efficacy of any security measure. Without a price, there cannot be a market.

Enhancing capabilities in the United States for prevention, recovery, and response relating to attacks on critical infrastructure will not be easy. In the long term, responding to this challenge will not only require changes in the technologies and structures adopted by threatened firms. It will also require improving the effectiveness of private strategies and public policies, reflecting an emerging balance of public and private roles and responsibilities.

Institutional capabilities to identify, negotiate, and implement such policies are at least partly in place. The union of 22 federal bodies under the umbrella of the new DHS, along with the current reform of the intelligence services, is the most significant federal reorganization of the past half-century, although DHS has yet to give priority to addressing the vulnerability of critical infrastructures. The transformations induced by the corporate trust crisis and the new Sarbanes-Oxley era are also changing how most firms operate, but it is unclear that the changes taking place will add anything to counterterrorism capability in private industry. Finally, for the general public, an important issue will be willingness to sacrifice for security, whether through higher prices and taxes or through loss of freedom and privacy.

Beyond infrastructure vulnerability assessments, continuity of operations planning, and deliberate investment in a small set of obviously cost-effective technologies, the following structural, organizational, and financial strategies should be considered to improve the capacity of the critical infrastructure service providers and public authorities to perform their functions:

  • strike a balance between strategies that emphasize anticipation (reducing the likelihood of attack) and those that emphasize resilience (reducing the damage resulting from attack);
  • recognize and support high-reliability organizations and reliability professionals;
  • enhance the capabilities of auto-adaptive response systems of various types;
  • reward information-sharing about technological and organizational changes and encourage organizations to emphasize safety;
  • promote dialogue among citizens and stakeholders to define priorities and explore options for action; and
  • develop incentive programs to induce private investment in security by relying on market forces to the extent possible.

Strategies to protect critical infrastructure are not viable unless they are politically and economically sustainable. Sustainability may be enhanced by a deliberate policy of seeking win-win options that promise public and private benefits beyond vulnerability reduction. Public relations, reputation, and the possibility of tort liability may motivate some firms to invest even without additional government pressure. Understanding the motives, constraints, and capabilities of potential attackers may inform decisions regarding investments in prevention, response, and recovery.

The challenge of critical infrastructure protection is a multifaceted one requiring a variety of responses. Market mechanisms and engineering design both have roles, but neither is sufficient. As national security increasingly finds its way into the boardrooms of U.S. corporations, rigid and limited public/private partnerships must give way to flexible, more deeply rooted collaborations between public and private actors in which trust is developed and information is shared. By directly addressing at an operational level the potential tradeoffs between private efficiency and public vulnerability, such collaborations will lead to better, if not definitive, solutions.

Archives – Fall 2005

Bruce Alberts

Bruce Alberts recently finished a distinguished 12-year run as president of the National Academy of Sciences. Among his many accomplishments, Bruce played a key role in a successful development program that enhanced NAS’s financial strength and independence, the overhaul of the National Research Council’s procedures for selecting committee members and conducting studies that significantly increased openness and public participation, and the creation of the InterAcademy Council, an organization that will enable many of the world’s academies of sciences to work together to help address critical global concerns.

But no NAS activity was closer to Bruce’s heart than his tireless efforts to improve the quality of science and math education, particularly in grades K-12. He led the NAS effort to forge national standards for science education and helped found the Center for Education at the Academies and the independent Strategic Education Research Partnership to ensure that reform efforts continue.

Above all else, Bruce preached the virtue of hands-on exploratory science learning. His goal was to replace a system that buried passive students in facts with a process in which active students experienced the adventure and excitement of scientific discovery.

Patagonia dreaming

There once was a world so lush with life that colonies of animals stretched as far as the eye could see. Of course this is not news. We’ve had glimpses of this world before, in descriptions of the vast bison herds of the pre-settler Great Plains and in accounts of stocks of cod so plentiful off New England waters that early explorers had the sense they could walk across the water on their backs. These are images of a world we consider long gone. Yet a fragment of that world exists even today, in the remote reaches of Patagonia. In Act III in Patagonia, William Conway transports us there, breathing life into scientific descriptions of the wildlife of the Southern Cone and in so doing animating the images we have of how the world must once have looked.

But even this rich land is now in danger. Where once there may have been 50 million guanacos (the New World version of the camel), there are now only about half a million. Other species of the steppes have declined, some even more precipitously. Coastal and marine life has declined as well. Act III in Patagonia chronicles this decline and examines the daunting conservation challenges ahead.

The book is organized into three sections: part historical accounting, part travelogue, and part natural history essay and conservation epic. Conway, senior conservationist and former president of the Wildlife Conservation Society, begins by describing the pristine Southern Cone of millennia gone by. Herds of guanaco were so numerous that the wildebeest herds of Africa pale in comparison, and they shared the arid landscape with horses that had evolved on the continent and with vast numbers of flightless birds such as the rhea. The 2,000-mile Argentine coastline of high cliffs and shining pebble beaches supported huge colonies of sea lions, fur and elephant seals, terns, cormorants, penguins, and other colonial species. But Act I in Patagonia, even though it spanned thousands of years, is described relatively briefly, for little is known about either the ecology of the region or the indigenous peoples who shared this magnificent landscape with the animals. What is known is that the marvelous bounty of Patagonia was soon discovered by the Europeans (Act II), who in a matter of decades began to undo what thousands of years of evolution had produced.

Although the interior of the Southern Cone is sparsely populated by settlers and the coast remains relatively pristine, humans have made their mark in dramatic and unsettling ways. The story of the guanaco, which Conway considers an indicator species—the canary in a coal mine whose fate suggests the trends in the entire steppe ecosystem— is a case in point. The 95% decrease in its population correlates with a rise in the numbers of sheep, now estimated at 43 million. The voracious grazing of sheep has caused the degradation of 60% of the rangelands. Guanaco are also being hunted to reduce their perceived competition with the growing population of livestock.

The unregulated growth of sheep farming in Patagonia has had consequences for other species as well. The populations of sheep predators such as foxes and pumas have boomed, which in turn has led to dramatic reductions in the populations of native species such as Darwin’s rhea and the giant Mara mouse. The former has declined by roughly 85% in only 10 years; the latter has declined even more dramatically, though actual population numbers are unknown. To add insult to injury and underscore the sometimes hopelessly careless nature of humans, the endangered Mara mouse is also hunted as dog food. “Much wildlife is seen as competitive, as a pest, a danger or game,” Conway writes, “[and] ranchers seem to dwell in a permanent state of ecological ignorance and denial.”

IT SEEMS CLEAR THAT PROTECTED AREAS ALONE WILL NOT BE ABLE TO STEM THE TIDE OF DEGRADATION THAT IN PATAGONIA, AS ELSEWHERE, IS DRIVEN BY HUMAN GREED, COMPETITION, AND INEFFECTIVE GOVERNANCE.

Declines have been steep for avian populations as well. Flamingos living and breeding in the highly specialized habitat of the altiplano (a high-altitude area shared by Bolivia, Peru, Chile, and Argentina) suffer from egg poaching, and the burrowing parrot continues to be hunted not for food but because it is considered a crop pest. In fact, the Argentine government officially designated it a pest in 1984 and began a widespread poisoning program. Although this was soon discontinued amid protests from the budding conservation community, the pest stigma has stuck in the minds of many local people. Patagonia’s wonderful Magellan penguins are also in a precarious position, because they must travel farther and farther to find fish.

Unsustainable commercial fishing is also increasing, despite recently adopted regulations. The reduction in fish is affecting the numbers of marine mammals such as seals and sea lions, making it more difficult for these creatures to rebound after a long period of mass hunting.

Conservation challenges

The Act III in the book title refers to the new era of conservation efforts. The big challenge facing conservationists is that the people living in Patagonia are generally content with their current lifestyles and practices, often wary of the animals, and uneducated about the potential value of ecotourism. Conway writes: “Today, many of the strange assemblages of wild animals live in frontier-like associations with settlers, who both use them and are unsettled by them. Most of these people are economic immigrants with no history on the land or with its wild creatures.” Thus, hunting continues, and this, in conjunction with sheep farming, fishing, and oil and gas drilling, has created a situation that would be easy to call hopeless.

Yet Conway is brimming with optimism; hope is the leitmotif that runs through all of Act III. Still, I confess that I found it hard to muster a similar faith that this beautiful land would ever be restored to its former glory. Indeed, I found it hard not to believe that Patagonia would instead follow the usual trajectory toward increasingly unsustainable use of resources and inevitable conflict. Where Conway sees hope in the growing grassroots environmental movement in Argentina and the adoption of the Patagonian Coastal Zone Management Plan, I see the unresolved problem of spreading sheep farms destroying natural habitats, introduced species affecting vulnerable indigenous ones, and massive and largely unregulated fisheries that are both wasteful and destructive. Where Conway sees dedicated individuals (many of whom have been supported by Conway’s organization), I see the seemingly insurmountable problems of rampant government mismanagement and corruption. Conway looks at the dramatic declines in populations of various animals and sees hope in the fact that only one species, the Falklands fox, has actually become extinct. I suspect other readers would share my inability to find too much solace in that fact, given where most of the wild species of the region seem to be headed. But then, I have never visited Patagonia. By contrast, Conway knows the place almost better than anyone, so I suppose we must give him the benefit of the doubt and try to share his hope for Patagonia’s future.

The book is filled with detail about the life histories of Patagonia’s most interesting creatures and, to a lesser extent, some of the humans on both sides of the conservation battles. These chapters are colorful and engaging, drawing the reader into a world that most of us will never see firsthand. Some of Conway’s best writing is in his descriptions of field biologists observing their research animals and the animals caught in the act of being observed. In places his writing is witty and poetic, reminiscent of the writings of the late Stephen Jay Gould and of John McPhee. Personally, though, I would have liked to see more information on humans, because this is a book about conservation, and conservation is, after all, about changing human rather than animal behavior. It is hard to know what needs changing or how one might go about it (indeed, whether it is even feasible), without understanding what drives individual and collective human behavior. Giving his human characters more depth would have strengthened Conway’s case for optimism about future conservation efforts.

There is another reason I wish Conway had devoted more space to the life stories and personalities of conservationists. In the course of my own conservation work, I have come to believe in the paramount importance of identifying and supporting a champion to drive conservation efforts. The power of a dedicated individual in shifting public opinion and shaping public policy cannot be overstated. Yet this is sometimes overlooked (or deliberately ignored) by conservation organizations and funding agencies alike. Without a charismatic and committed hero to lead the way, the best priority-setting exercises, strategic planning efforts, and conservation management plans can come to naught. People like elephant seal expert Claudio Campagna and penguin expert Dee Boersma are the real heroes.

Conway also puts too much emphasis on the need for further research on the decline of wildlife in Patagonia. Certainly, good science is essential for sound policymaking. Yet it is already clear that unsustainable agricultural, hunting, and fisheries policies are what need to be addressed if the threats are to be diminished. Ecological and natural history studies must continue, but policy changes should not be held up in the interim. There are policy prescriptions in the book that make great sense and could do much to avert the continued decline in wildlife: explore and develop the market for guanaco wool to allow wildlands “ranching” as an alternative to sheep farming on the steppes; expand the nascent ecotourism industry to demonstrate the high value of species currently considered nothing but pests by some of the locals; provide greater funding for surveillance and enforcement of existing protected areas, including Chile’s flamingo reserve system and Argentina’s national parks; and put into operation the well-conceived (thanks in large part to Conway himself) Coastal Zone Management Plan. Almost all these initiatives call for the establishment of more protected areas, but creating and managing these takes enormous effort and significant funding. Conway believes that many of these initiatives can be self-financing, especially by developing the ecotourism industry to its full potential. But although Patagonia may seem a wondrous place worthy of protection, this potential may not be recognized in an area beset by financial uncertainty and growing human population, and in the end conservation priorities may not be national priorities at all.

Even though I consider myself a strong advocate of protected areas in conservation, in the last analysis it seems clear that protected areas alone will not be able to stem the tide of degradation that in Patagonia, as elsewhere, is driven by human greed, competition, and ineffective governance. Conway himself quotes H. L. Mencken’s observation that seems so pertinent here: “For every complex question there is a simple answer, and it is wrong.” Yet for all their complexity, the steps toward rectifying mismanagement and corruption are obvious. This is perhaps the most poignant part of this conservation tale—not the relentless slaughter of sea lions and fur seals, nor the commonplace though accidental drowning of helpless Chaco tortoises in canal dams and weirs, nor the vulnerability of burrowing parrots when they frantically circle a fallen comrade and make themselves easy targets for poachers. The really sad thing is that we all know what needs to be done to save Patagonia from the fate of most of the other now-ravaged great places on earth, and yet we are somehow powerless to make it happen. Though conservationists ought to be straightforward in calling for changes in the behavior of the human species, our own animal behavior is perhaps as much a mystery as that of the enigmatic wildlife of remotest Patagonia.

Even Universities Change

U.S. research universities are going to change, and education leaders would be wise to begin now to direct that change. This will not be easy, but they have the advantage of being able to learn from the experience of many U.S. corporations that have reinvented themselves to respond to the market changes caused by the rapid advances in information technology (IT). Because of their long history and tradition, universities are often viewed as unchanging, but in reality the U.S. university system seems to undergo radical transformations about twice a century. Notable examples include the creation of the state land-grant colleges in the middle of the 19th century and the transformation of public and private colleges into massive research universities after World War II. The IT revolution will force the universities to undertake the next radical (and overdue) reinvention.

The IT-driven changes that have occurred in industry are coming to universities as well, albeit a little later. They are likely to affect the style and environment of the research university, as well as collegiality, economics, and effectiveness. The IT revolution will affect all university activities, altering the cost and process associated with each. Remote interactions and the sharing of data are already commonplace. Many faculty members interact more frequently with their worldwide disciplinary communities than with faculty in different fields on their campus. Students share experiences (and test questions) through mobile communications. The recent research literature is more easily available electronically than in print, and a researcher at a less prestigious university can see preprints of important work at the same time as a peer at a famous university. In the future, raw research data will also be broadly accessible.

The transformation of industry

A quick survey of the many ways in which IT has changed industry practices illustrates how pervasive its influence has been and provides hints about the university practices that are likely to undergo similar change. In the past decade, technological developments have shifted the relative costs of the factors of production. In particular, huge drops in the costs of information storage, distribution, and analysis have changed what tasks are practical or easy to do. We have better tools to support collaboration. There are exciting improvements in our ability to model, simulate, understand, and then change the structure of organizations and the processes they use. Some old skills have been devalued and new opportunities have been created. Bookkeepers are no longer prized for their ability to add rapidly and accurately. Middle managers are not paid for remembering facts about recent business performance. Most large companies outsourced their payroll departments years ago. Many are now outsourcing other administrative parts of human resources departments, such as pension plan management. Large international specialized manufacturing firms assemble (and in many cases design) many of the products that carry the names of more famous companies. In the same way, university librarians are not valued primarily for their knowledge of what books are in the collection.

Many tasks that once required creative individual attention can now be standardized and then performed repeatedly and efficiently by computer. It is also possible to evaluate formalized activities objectively and compare them with alternative means of achieving the same goals. Analysis of a customer’s interests is now performed automatically with “recommender” systems on Web sites such as Amazon.com, and rating systems such as those found on eBay have given consumers a new way to evaluate products. Similar effects are seen in student-run campus course guides and the use of citation indices for grant and promotion decisions.

Costly custom services can become commodities that are marketed on the basis of price or fashion. Basic personal computers (PCs) are now a commodity, and some models are sold on the basis of the color, shape, or the metal of their case rather than on the electronics within. Although this trend might appear to be a recipe for a race to the lowest acceptable quality, it can also lead to competition on the basis of quality and a raising of expectations and standards: Today’s PCs are expected to work well right out of the box, and customers have become blasé about these engineering marvels. The move to standardized services, with well-defined interfaces and qualities of service, is part of a profound shift in business structure. Outsourcing is feasible when interfaces and expectations are very clear. Call centers can easily be relocated if the operators get all their information from a screen. Payroll checks can be issued by independent specialist companies. X-rays can be analyzed by radiologists 10,000 miles away. Standardized tests can be given anywhere, and are graded at many sites, with processes that enforce uniform scoring even of essay questions.

New activities and functions can result from combinations of services and applications. Advanced e-business applications integrate multiple activities and can support coordination across organizational and corporate boundaries. The management of a modern supply chain, as seen in almost any large manufacturing or retail company, involves many interactions between buyers and sellers; decisions about replenishment and delivery are often made automatically through shared sets of rules and authority. We have even seen the birth of a new business sector: the hundreds of thousands of part-time and full-time sellers on eBay. New tools and computing standards are making it easy to provide new services that use existing services such as databases, geographic images, and accounting data. The Internet2 research network has supported not only remote lectures and global data-sharing, but also artistic performances, with performers at great distances from each other.

Customer expectations are rising rapidly with experience. As people become accustomed to new services and levels of capability, they rapidly lose patience with older levels of technology and service. Personal experience with services such as home Web surfing and banking raises general expectations about the operations of services in other contexts. Google and Amazon run all night, as do ATMs and vending machines. IBM and many other companies have had no choice but to offer round-the-clock service. Students will ask why the university library should ever be closed. They will not want to wait for official office hours to ask a question of the professor, and they will want online review sessions to be held at 11 p.m. if desired.

Tempos have been accelerating, and time scales for decision and operation will become even shorter. Companies have learned to respond to disruptions in complex supply chains in hours or days, not months. PC companies frequently introduce new models every few months and keep very lean stocks of components that will predictably become obsolete in a matter of weeks. Call center operators get information about new models and problems just before they become available. At the university, the online course catalog does not have to be finalized months in advance. Extra sections of a course can be added when enrollment surges, and corresponding reading materials can now be printed overnight or provided online. Networking permits students to check professors’ assertions and references during lectures, sometimes to their consternation.

Campus-specific change

Although one can find relatively straightforward parallels between many industry and business activities, universities are engaged in many activities that are particular to education and research. There might not be industry changes to serve as literal models, but one can certainly speculate about how IT is likely to drive change in these areas.

JUST AS NEW BUSINESS MODELS SUCH AS EBAY AND IMPROVED MEANS FOR CONSUMERS TO SHARE THEIR EVALUATION OF PRODUCTS ARE PUTTING INCREASED PRESSURE ON BUSINESSES, ALTERNATIVE APPROACHES TO EDUCATION CAN IMPINGE ON THE CORE OF THE TRADITIONAL UNIVERSITY MISSION.

Business process analysis and service-oriented implementation make many of the relationships explicit and visible. Good automation depends on having an accurate description of the steps to be performed, as well as any constraints, decision rules, and relationships. (Scheduling a class requires assigning a professor and a classroom, putting a course description in the catalog, checking student prerequisites, and so forth.) The costs of each step and the opportunity costs (what else could have been done in that classroom, how many students clamored for this course or another) become clear. Allocation decisions then must be made consciously or according to a definite rule, rather than being based on tradition. The university then can make better use of scarce resources (such as highly desired professors or specialized laboratory equipment) and ensure that students can actually take the courses they want or need.

With improvements in computer simulation, many lab courses or experiments could conceivably be offered online. Similarly, demands on lab space and resources could be addressed through creative uses of IT such as distributed computing and remote access to expensive equipment. Game technologies create the potential to develop interactive virtual experiments and simulations that have real educational value. Medical schools are making pioneering use of simulated patients for training and for detailed monitoring of student learning, thus improving their ability to certify the students and reducing risks to patients. Performing surgery in a computer simulation with realistic feedback can be a valuable exercise before attempting a procedure on a living person.

The extent and capabilities of the campus network and services will increasingly determine how attractive a campus is to students and faculty members, independent of their fields of interest. The convenience of mobile access and the amount of off-campus bandwidth will be crucial. Students in the performing arts will be at least as interested as those in computer science in responsive networking and media. Just as computer gaming has been pushing the frontiers of personal computing, students and faculty are likely to be among the most demanding customers for networking capability.

IT is making it increasingly easy to work together across distance, thus increasing the value and functionality of “invisible colleges” based on mutual research interests. The shift puts further strain on campus-based departments and programs as well as investments in collegiality. Collaboratories are a successful example of the tight integration of distant research groups. The time could come when it is less important who is on campus and more important to know how well connected the faculty is to colleagues around the world.

Like businesses, universities have become aware of the importance of branding, but the criteria for measuring the quality of a university are in flux. Is the perceived value of the university created by the quality of students in residence, by their accomplishments as graduates, or by the fame of the faculty? In the future, a transcript might list the name of the instructor and personal comments rather than a school and grade. The size of a university’s library and the quality of its scientific instruments will matter little if massive collections are digitized and researchers have online access to remote instruments.

Recipe for a research university

Our current understanding of the functions of a research university is the result of a long history that has been shaped by economics, government policy, and a rich mix of social and cultural forces. One can argue about the wisdom of the current collection of missions and priorities (see “Research University Ingredients”), but the important question is what one would include in the recipe for a university that makes optimum use of the technological capabilities that are on the horizon to meet the needs of the society that will exist in coming decades. Industry’s experience in the past decade should make it clear that nothing is sacred. If IBM can move out of the PC business, what roles might no longer be appropriate for the research university?

IF IBM CAN MOVE OUT OF THE PC BUSINESS, WHAT ROLES MIGHT NO LONGER BE APPROPRIATE FOR THE RESEARCH UNIVERSITY?

U.S. education leaders must be willing to question the value of everything the university does. They can already see institutions such as professional schools, for-profit colleges, and universities in other countries that work with a very different list of ingredients. But the range of possibilities that they should consider is far more varied. How many of the current ingredients would be combined into a single organization if one started afresh? Which are unique to the research university or best performed by it? Which provide joint value through horizontal or vertical integration? Which provide an enjoyable lifestyle for employees? The e-business approach disentangles these many strands and their joint and separate values, and clarifies alternatives. It enables the university to decide rationally about the effects of spinning off activities that others can perform as well (such as bookstores and dormitories) and bringing others in-house (such as formerly independent research labs), just as firms decide to buy companies and to sell off some of their own divisions to maximize efficiency and strategic value.

Competitive forces, weakening legal constraints, and softening political support may allow providers with different cost structures and missions to offer some of these services more effectively and/or inexpensively. We have already seen such encroachments at a trivial level, such as school stores run by bookstore chains, dormitories run by hotel operators, and campus dining services provided by restaurant franchises. Some students are engaged in a form of education arbitrage, taking basic (highly profitable) courses at low-cost institutions and transferring credits to an elite university where they concentrate on taking advanced and unusual (money-losing) courses. Should universities contract out introductory courses to the for-profit University of Phoenix? Should they limit the number of credits that can be transferred? Can they find a more cost-effective way to offer mass courses as well as the highly specialized courses?

Just as new business models such as eBay and improved means for consumers to share their evaluation of products are putting increased pressure on businesses, alternative approaches to education can impinge on the core of the traditional university mission, and the increasing credibility of external rating agencies and certification bodies can affect a school’s reputation and the value of its diplomas. Business schools have become particularly sensitive to magazine ratings, and undergraduate programs grouse about the influence of the ratings from U.S. News & World Report. Colleges with little research activity but a sharp focus on the education experience can teach basic courses well and inexpensively. For-profit universities with practitioner instructors are delivering professional training. Non-university and off-campus research institutes can advance knowledge and even train researchers through intense focus and strong funding. Which functions other than professional discipleship, tenured faculty positions, and encouraging assortative mating are the sole preserve of the research university?

Universities are superb at resisting pressure and maintaining the status quo, but financial strain (from increased costs and decreasing government subsidies), shifting student interests and expectations, the changing expectations of faculty, and competition are forcing soul-searching and hard but salutary decisions. Even universities change. And if university leaders make wise decisions, it will be for the better.

Forum – Fall 2005

In defense of defense spending

CHARLES V. PEÑA’S “A REALITY CHECK on Military Spending” (Issues, Summer 2005) falls short of confronting the broad challenge of how the United States might best engage on the issues of global security. His approach seems almost to be a casual excuse for cutting the budget: settle for playing the role of a “balancer of last resort” and let others “take greater responsibility for their own regional security.”

His defense budget strategy would not create an alternative paradigm for security that the rest of the world could live with. His abrupt reduction of U.S. forces and overseas deployments would only result in eventual challenges to U.S. security, with no institutions or capabilities in place to counter them, ensuring regional and global security chaos.

Three pieces are missing from Peña’s vision. The first involves the need to put in place a regional and global security architecture that would ensure stability, peaceful transitions, and the ability to confront danger, allowing the United States to play a more restrained role. European militaries are working toward, but still fall short of, assuming a security role that could eliminate the need for U.S. forces.

Africa lacks the institutions and capabilities to ensure regional security and will need considerable outside help. There is no Middle Eastern, Northeast Asian, or Southeast Asian security arrangement like NATO, and there are only a few bilateral agreements in which the United States plays a role (such as with Australia, South Korea, and Thailand). Taiwan has no other security guarantor but the United States and will not accept a regional alternative.

Second, Peña comes up short in describing what the U.S. military’s role should be in dealing with the major global challenges that the United States and others face: terror, proliferation, and instability in failed states. U.S. forces are poorly trained for these missions, yet as Peña recognizes, they are missions for which forces are needed. Here he contradicts himself—these missions are global in scope, not purely regional, but he wants U.S. forces to withdraw from a global presence.

Third, Peña leaves undiscussed how the entire tool kit of statecraft and allied relationships might be used to deal with the security dilemmas the world faces today, dilemmas that underlie terror and proliferation: the global poverty gap; the need for stable, effective, and responsive governance in vast regions of the globe; the raging conflicts of ethnicity and belief that inflame current tensions; and achieving an affordable, secure energy supply.

Peña collapses one security tool—the military—but offers no security vision that addresses these dilemmas with an integrated set of other tools: foreign assistance, diplomacy, public diplomacy, and allied cooperation. Without such an integrated strategy, eliminating the U.S. military just pours fuel on the fire.

GORDON ADAMS

Director, Security Policy Studies

Elliott School of International Affairs

The George Washington University

Washington, DC


CHARLES V. PEÑA NOTES, correctly, that “ever-increasing defense spending is being justified as necessary to fight the war on terrorism.” Then, however, instead of trying to correct that erroneous justification, he falls into the common trap of wanting to design U.S. armed forces to meet only that most imminent of threats to U.S. and allied national security, without regard to other long-term risks to that security and the military requirements for sustaining it. His article consequently contains some serious errors in strategic reasoning as well as some technically incorrect statements about military systems currently in acquisition.

ADOPTING THE “BALANCER-OF-LAST-RESORT” STRATEGY THAT PEÑA RECOMMENDS WOULD BE TAKEN BY THE OUTSIDE WORLD AS A SIGNAL THAT THE UNITED STATES IS RETREATING INTO THE ISOLATIONISM OF AN EARLIER DAY.

To deal with the latter first:

Peña says that the F-15 Eagle is not challengeable by any potential adversary. However, the Russian Sukhoi Su-30 has similar performance, with the additional maneuverability advantage of vectored thrust, so that superiority in air combat against such aircraft will depend to a great extent on pilot proficiency, tactics, and the quality of the air-to-air weapons, in which the United States will not necessarily be superior. This was demonstrated in recent mock combat exercises, when an Indian Air Force contingent including Su-30s and other Russian and French aircraft “defeated” a force of F-15s. The Russian and French aircraft and their missiles are for sale to any willing buyer.

He says the F-22 Raptor was designed for air superiority against Soviet fighters, but that is only partly true. It is also designed to be better able than current fighters to penetrate Russian ground-based air defense systems that are also for sale on world markets.

He says, incorrectly, that helicopters can perform the same mission as the incoming V-22 Osprey. The Navy’s CH-53 Sea Stallion helicopter in its various versions, which can carry about the same payload as the V-22, has significantly less range and flies at a much slower speed than the V-22. It has less ability to penetrate to the depths that may be necessary in future regional conflicts along the Eurasian periphery, and greater vulnerability to shoulder-fired antiaircraft missiles. In addition, it is an old aircraft and a large maintenance burden to the naval forces that use it. Improving performance and reducing maintenance cost are major reasons why systems are replaced, in civilian as well as military life.

He says the Navy’s F/A-18 Super Hornet is an “unneeded” tactical fighter. It, too, however, has significantly more range and combat capability than the F/A-18C/D that it is replacing. This difference appears in many aspects of total system design and cost, including, for example, the need for less tanker capacity to help the aircraft penetrate to distances such as those from the Indian Ocean to Afghanistan, as was necessary during the campaign to eliminate al Qaeda’s established presence in that country. The bomber force alone couldn’t carry that whole task, because its sortie rate is much lower than that of the carrier attack force.

Finally, he says that the Virginia-class submarine is no longer needed because the Soviet submarine threat has gone away. This disregards the facts that China has a significant number of nuclear attack submarines and that quiet conventionally powered submarines are proliferating in waters that U.S. and allied shipping and naval forces will have to transit in any Eurasian regional conflict.

In the strategic area, Peña proposes that we cut our armed forces in half and drop back to a strategy of letting our allies or other regional powers handle conflicts as they arise, with our forces on call if help is needed. This seems to neglect the fact that our ground forces are already stretched thin by operations in Iraq and Afghanistan, to which we are committed for an indefinite period. Where would the additional forces come from if we were called on to help in another regional conflict? Such conflicts are possible in Korea or over Taiwan, or even with Iran or Pakistan, should internal events in those countries turn them into regional antagonists threatening U.S. allies or other countries (such as Taiwan) over which we have extended our protective umbrella.

Peña makes light of the contribution that our 31,000 troops in the Republic of Korea (ROK) make to that country’s defense. He notes that the 700,000-strong ROK army should be sufficient to defend against the million-man North Korean armed forces. This neglects the fact that the North Koreans have demonstrated powerful fighting capability once before in history, and the possibility that scarce resources in that economically deprived country may be diverted from supporting the civilian population to keeping its armed forces in top condition to support the bellicose North Korean foreign policy statements. It also neglects the fact that the U.S. Army units in Korea are deployed to defend Seoul, which is only an hour or so’s march by armored forces from the North Korean border. Withdrawal of our forces would thin the ROK defenses against a formidable potential foe and unacceptably expose the most critical point in an ally’s defenses and survival to imminent capture.

Peña quotes Harold Brown’s Council on Foreign Relations task force as saying that China is 20 years behind us militarily, and he seems to rely on the subsequent statement that we can retain that lead. But he appears to ignore the very important conditional clause in the same statement: “…if the United States continues to dedicate significant resources to improving its military forces,” and counsels stopping that continued improvement. He thus advocates giving China the needed breathing space to pull even with us, in the face of its threats to regional stability over Taiwan and its expressed desire to extend its maritime control 2,500 kilometers into the waters adjacent to its coasts—areas where we have many strategic interests, including those in Japan, Korea, South and Southeast Asia, and Australia/New Zealand.

Finally, Peña seems to neglect the fact that U.S. national security depends on our globally oriented economic wellbeing and strategic leadership among many allies and affiliated nations. Our national strategy is based on the premise that as the world becomes more democratic, the occurrence of destructive wars will decrease. Whether one favors the relatively passive approach of the Clinton strategy, which was based on the assumption that democracy would naturally spread in a world of free and open trade, or the more assertive attempts to spread democracy adopted by the Bush administration, the fact that we have the world’s dominant economy and military forces requires that, in our own interest, we must be a leader of what used to be called the “Free World.” Adopting the “balancer-of-last-resort” strategy that Peña recommends would be taken by the outside world as a signal that the United States is retreating into the isolationism of an earlier day. One cannot be a leader by saying “you go take care of it, and if you get into trouble and Congress approves I’ll come and help you, with forces I may or may not have.” We saw what happened when we tried to adopt such a strategy vis-à-vis the Balkans, and it wasn’t pretty.

As to the affordability of the armed forces under such a leadership strategy, there is little value in comparing our military expenditures with those of any other combination of nations. We can always find some such combination to add up to what we spend, and the more nations we include in the comparison the more profligate we can be made to appear. However, if in protecting our own security and that of our allies we need armed forces that have what the U.S. Air Force has called “global reach, global power,” then we shall have to pay what it costs, in terms of our own cost structures and force needs. Although, as Peña points out, the current defense budget, including military operations in Iraq and Afghanistan, absorbs 3.7% of gross domestic product (GDP), we were able to sustain defense budgets of about 5% of GDP for some years running during the so-called Reagan Buildup.

The current world strategic situation may not look as critical at first glance as the situation when the Soviet threat was foremost in our consciousness. However, a careful look will show that the current situation is more dangerous than it was then. There are more kinds of threats, stretching into the indefinite future, ranging from the Islamist extremists’ jihad against us and our allies to the possibility of major regional wars threatening our world interests. Are we to argue that we cannot afford now what we did at that earlier, simpler time?

SEYMOUR DEITCHMAN

Bethesda, Maryland


IT WOULD HAVE BEEN NICE if an article that uses the words “reality check” in its title were in fact based on reality. Unhappily, Charles V. Peña’s attack on virtually every aspect of U.S. defense policy is not only unreal, it borders on the surreal.

Peña starts by tying his critique of the size of the U.S. military and the resulting defense budget to the war on terror. This is only one of the military’s missions, even at present. As he well knows, the military must prepare for and be capable of prosecuting other conflicts, while also providing support to homeland security, humanitarian relief, and other missions. But even in the war on terrorism, Peña conveniently ignores the contribution that aircraft carriers, strategic bombers, tactical fighters, and armored forces are making to this struggle. Moreover, although conventional forces have demonstrated utility in the war on terror, the light counterterrorism capabilities Peña advocates would be relatively useless if a serious conflict were to occur.

Peña then resorts to the hoary device of comparing U.S. defense spending to that of other countries without providing any context for comparison. The United States spends more than other countries for defense. It also spends more, per capita, on automobiles, health care research, and lawn furniture than any other country. Last year, Americans spent as much on their pets as the entire economies of North Korea, Kenya, or Paraguay. If we are not going to tie our level of spending in other areas to that of foreign countries, why should national security be any different?

Dollar expenditure comparisons are meaningless, particularly when many of the countries he mentions have conscript armies; little in the way of health care requirements, pension plans, or environmental laws; and are willing to risk their soldiers, sailors, and airmen in inferior tanks, ships, and fighter planes. There are differences in cost structures that result in higher U.S. defense budgets. For example, the military has a brigade’s worth of lawyers in uniform, costing around $600 million annually. China’s defense budget, if normalized for U.S. practices and prices, would be at least four times the figure Peña cites. He also fails to account for the fact that the U.S. defense budget includes funds for power projection capabilities that many other countries do not need, because they are where the action is likely to take place while we are not. The realities of defense costs are much more complex than Peña acknowledges.

From here Peña’s argument becomes surreal. He proposes a 50 percent reduction in the size of the U.S. military, based largely on the arguments that major conventional wars will be rare in the future and that other countries should be required to defend themselves. Although both points are reasonable, they are also inadequate. America’s conventional wars have always been rare. But when they occur, we always strive to win them decisively. Peña’s proposal would drop the United States from the world’s second-largest and unquestionably best military to approximately eighth on the list. But of course that number is misleading because we could never focus all those forces against a single adversary. Thus, the effective size of the U.S. military would be somewhere between 15th and 20th in the list of military powers, behind such world giants as North Korea, Pakistan, Turkey, Iran, Egypt, Algeria, Vietnam, and Syria.

Peña proposes not only cutting the size of the U.S. military in half but simultaneously undermining the technological superiority that has enabled U.S. forces to win their wars for more than a century. Peña argues for eliminating the F-22, V-22, and Virginia-class attack submarine because these programs were started during the Cold War. He wrongly claims that the F-15 has no prospective adversary and that the V-22 is an unproven platform. Without the Virginia-class submarines, the United States will soon have not merely an aging undersea force but none at all. Peña wants the new, smaller U.S. military to fight its future wars with technology that today is 20 years old in the case of attack submarines, 30 years old for fighters, and 40 years old for helicopters.

Peña’s argument amounts to nothing more than slightly warmed-over isolationism. We are to be the “balancer of last resort.” Such a strategy made sense, perhaps, before the advent of a globalized economy, instantaneous communications, high-speed transportation, energy dependence, and the proliferation of weapons of mass destruction. But it is wrongheaded and even dangerous in the 21st century. A four-division Army and a one-division Marine Corps would not make us the balancer but the patsy. Remember the fate of the British Expeditionary Force in France in 1940? This was a force about the same size as that Peña proposes. Remember Dunkirk?

Peña does leave himself an escape clause, the old “intervene only when truly vital interests were at stake” line. However, Peña clearly would define vital interests so narrowly that there would never be a reason to fight. As the United States discovered in two world wars, the Cold War, Desert Storm, and the global war on terror, although the location of U.S. vital interests may change, they are always out there and they always need defending. Peña’s last-resort strategy is more of a forlorn hope.

Ultimately, Peña’s proposals would have the United States defend its territory, friends, and interests with a military smaller than most potential adversaries, one without any technological advantages to make up for the dearth of soldiers and lacking the overseas bases from which to operate. Perhaps we could rely on allies to help us out. Oops, I forgot. Peña’s proposals would leave us without allies too. It seems to me the one who needs to undergo a reality check here is Peña.

DANIEL GOURE

Vice President

The Lexington Institute

Arlington, Virginia


CHARLES V. PEÑA MAKES SOME excellent observations but is blind to the strategic interests of the United States and how best to secure them. He is correct that the United States should use military force only to defend vital national interests and should cut unnecessary defense programs. His argument that the U.S. military requires further transformation is also right. However, his reasoning for these policies and how to achieve them ignores the strategic realities of the 21st century and the precedents that have largely defined them.

He suggests that U.S. overseas bases are vestiges of Cold War-era containment strategy and that the United States can slash spending by significantly reducing its presence around the world. If it adopted a so-called “balancer-of-last-resort” strategy, the United States would no longer attempt to use a global presence to shape events but would reserve the capability to intervene in conflicts that affect its vital interests as they emerge. Peña also points to the disparity of U.S. defense expenditures compared to those of our allies, and again proposes removing U.S. troops and leaving allies to manage their own local security to save defense dollars.

If the interests of host nations defined U.S. presence, this assertion would be correct, but they do not. Instead, the United States maintains a robust overseas presence because it advances U.S. interests. The Cold War may be over, but it has not been replaced by a more stable world or a world in which U.S. strategic interests have disappeared. Instead, America’s strategic interests remain intact, but the means to secure them have rapidly evolved.

Instead of retreating, the United States must reorganize its overseas bases to be consistent with the modern world, technologically and strategically. Not doing so harms its vested interests by underestimating the stabilizing role of U.S. forces and the negative impact, economically and politically, of removing them. Besides, leaving Europe, the Pacific, or the Middle East would create a vacuum that could be filled by a hostile power.

Also, although the United States does spend more on defense than its allies, it has the most to lose from major changes in the global status quo. Inadequate spending by allies should not be answered by the United States spending less, but by those allies spending more.

One of Peña’s strongest arguments, the need for military transformation, is one of the reasons for recent increases in defense spending. Outfitting a military force for the future is not cheap. Additionally, his assertion that Iraq and the war on terror should not be the gauge for future force requirements is correct. However, his balancer-of-last-resort strategy would cut U.S. active forces from about 1.4 million to around 700,000. As noted above, the United States must maintain a global presence, and therefore such reductions would be unwise.

The military posture of the United States must be defined by strategic interests and not by funding levels. Although it may seem that it is time to bring the troops home, policymakers must consider that perhaps the lack of a major threat to U.S. security is a testament to the criticality of its ongoing overseas mission.

JACK SPENCER

Heritage Foundation

Washington, DC


Digital education

HENRY KELLY’S “GAMES, COOKIES, and the Future of Education” (Issues, Summer 2005) provides an excellent synthesis of challenges and opportunities posed by technology-based advances in personalized entertainment and services. An aspect of this situation deserves further discussion: Children who use new media extensively are coming to school with different and sophisticated learning strengths and styles.

Rapid advances in information technology have reshaped the learning styles of many students. For example, the Web, by its nature, rewards comparing multiple sources of information that are individually incomplete and collectively inconsistent. This induces learning based on seeking, sieving, and synthesizing, rather than on assimilating a single “validated” source of knowledge as from books, television, or a professor lecturing.

Also, digital media and interfaces encourage multitasking. Many teenagers now do their homework by simultaneously skimming the textbook, listening to an MP3 music player, receiving and sending email, using a Web browser, and conversing with classmates via instant messaging. Whether multitasking results in a superficial, easily distracted style of gaining information or a sophisticated form of synthesizing new insights depends on the ways in which it is used.

Another illustration is “Napsterism”: the recombining of others’ designs into individual, personally tailored configurations. Increasingly, students want educational products and services tailored to their individual needs rather than one-size-fits-all courses of fixed length, content, and pedagogy. Whether this individualization of educational products is effective or ineffective depends both on the insight with which learners assess their needs and desires and on the degree to which institutions provide quality customized services, rather than Frankenstein-like mixtures of learning modules.

During the next decade, three complementary interfaces to information technology will shape how people learn.

  • The familiar “world-to-the-desktop” interface, providing access to distant experts and archives, enabling collaborations, mentoring relationships, and virtual communities of practice. This interface is evolving through initiatives such as Internet2.
  • “Alice-in-Wonderland” multiuser virtual environment (MUVE) interfaces, in which participants’ avatars interact with computer-based agents and digital artifacts in virtual contexts. The initial stages of studies on shared virtual environments are characterized by advances in Internet games and work in virtual reality.
  • Interfaces for “ubiquitous computing,” in which mobile wireless devices infuse virtual resources as we move through the real world. The early stages of “augmented reality” interfaces are characterized by research on the role of “smart objects” and “intelligent contexts” in learning and doing.

The growing prevalence of interfaces with virtual environments and ubiquitous computing is beginning to foster neomillennial learning styles. These include (1) fluency in multiple media, valuing each for the types of communication, activities, experiences, and expressions it empowers; (2) learning based on collectively seeking, sieving, and synthesizing experiences; (3) active learning based on experience (real and simulated) that includes frequent opportunities for reflection by communities of practice; and (4) expression through nonlinear associational webs of representations rather than linear “stories” (such as authoring a simulation and a Web page to express understanding, rather than a paper).

All these shifts in learning styles have a variety of implications for instructional design, using media that engage students’ interests and build on strengths from their leisure activities outside of classrooms.

CHRIS DEDE

Wirth Professor of Learning Technologies

Harvard University Graduate School of Education

Cambridge, Massachusetts


HENRY KELLY’S ARTICLE PROVIDES readers with a timely and comprehensive look at what is needed to address glaring shortfalls in the U.S. education system. The article underscores the lack of investment in R&D on new educational techniques that would use the up-to-date technology currently available. By conveying how increased investment in educational R&D can improve teaching and learning, Kelly is making an excellent case for the adoption of the Digital Opportunity Investment Trust (DO IT) legislation.

BY CONVEYING HOW INCREASED INVESTMENT IN EDUCATIONAL R&D CAN IMPROVE TEACHING AND LEARNING, KELLY IS MAKING AN EXCELLENT CASE FOR THE ADOPTION OF THE DIGITAL OPPORTUNITY INVESTMENT TRUST (DO IT) LEGISLATION.

Although the article notes the low rankings of U.S. students as compared to international students in recent studies, not enough emphasis is placed on the fact that our students are performing alarmingly poorly in the fields of math and science. A study conducted in 2004 found that U.S. students ranked 24th in math literacy and 26th in problem-solving among 41 participating nations and concluded that U.S. students “did not measure up to the international average in mathematics literacy and problem-solving skills” (Program for International Student Assessment at www.pisa.oecd.org). Additionally, U.S. students are becoming less interested in math and science. There has been a steady decrease in bachelor’s degrees earned in mathematics and engineering at U.S. universities during the past decade.

While our students are not meeting global standards in mathematics and science and are losing interest in these subjects altogether, the United States has become increasingly reliant on foreign talent in these fields. In 2000, 38% of all U.S. science and engineering occupations at the doctoral level were filled by foreign-born scientists (up from 24% in 1990). Filling these critical occupations with foreign talent has become a more complex issue with the war on terror and the dramatic increase in global competition for the best and brightest in science and engineering. During the 1990s, the Organization for Economic Cooperation and Development saw a 23% increase in researchers, whereas the United States saw only an 11% increase.

There is a critical need to change these trends in math and science. We need to build up domestic talent and interest in these crucial areas and provide necessary incentives to attract foreign talent. Increased investment in R&D and educational technology, as outlined in Kelly’s article, can begin to address this need.

Kelly’s article highlights efforts by the National Science Foundation, Department of Education, Department of Defense, and Department of Homeland Security to improve training and educational technologies, but does not stress enough that DO IT is a comprehensive effort that will research and improve teaching and learning techniques that can permeate all U.S. educational institutions. It is important to stress that DO IT legislation would help to fill the current market failure that Kelly mentions (“Conventional markets have failed to stimulate the research and testing needed to exploit the opportunities in education”). DO IT will foster collaboration among educators, cognitive scientists, and computer scientists to research and develop the most effective methods of teaching and learning, using today’s technologies. DO IT will help to ensure that the U.S. education system does not continue to fall behind all other sectors and nations that have embraced the potential of technology.

We are facing a crisis in public education, particularly in math and science education; Kelly presents an excellent case for the need to increase educational R&D and succeeds in demonstrating how currently underutilized technologies can improve the learning process.

EAMON KELLY

Distinguished Professor in International Development

President Emeritus

Tulane University

New Orleans, Louisiana


Cyberinfrastructure for research

IN “FACING THE GLOBAL Competitiveness Challenge” (Issues, Summer 2005), Kent H. Hughes persuasively argues that future productivity and economic growth depend on a society’s success at innovation and outlines a series of proposed policy steps that could foster a higher-performance U.S. innovation engine.

One such action that could provide a turbocharged boost to U.S. innovation would be to aggressively implement the February 2003 recommendations of the National Science Foundation Advisory Panel on Cyberinfrastructure, commonly called the Atkins Report (referring to the panel’s chair, Daniel E. Atkins of the University of Michigan).

The study notes that a third way of doing science has emerged: computational science, complementing theoretical and experimental techniques. The report also finds that new information technologies make forms of collaboration in science and engineering possible today that were not possible even a decade ago. “Cyberinfrastructure” refers to a broad web of supercomputers, vast data servers, sensors and sensor nets, and simulation and visualization tools, all connected by high-speed networks to create computational science tools far more powerful than anything we have known in the past. Cyberinfrastructure also refers to the software that links these distributed resources, the security needed to protect them, and the people and institutions needed to maintain and exploit them.

The Atkins Report promises a “revolution in science and engineering” if we invest a billion new dollars a year in these tools and organize ourselves intelligently to use them.

A July 8, 2005, memo from John Marburger and Josh Bolten to federal R&D agencies outlining fiscal year 2007 budget priorities calls for an emphasis on “investments in high-end computing and cyber infrastructure R&D.”

In his recent book, The Past and Future of America’s Economy, Robert D. Atkinson supports investing $1 billion per year in an Advanced Cyberinfrastructure Program, which he asserts “would also lay the groundwork for the next-generation Internet to dramatically expand its possibilities.”

The second step for policymakers, to ensure that investment in scientific infrastructure will pay off, is to lay out a bold national vision for providing broadband for all Americans. Congress and the Federal Communications Commission are revisiting the rules that govern telecommunications and the Internet. They should revise those rules with an eye toward a far horizon, not just rearrange the deck chairs among existing competing providers.

A coalition of 10 higher education organizations has called for adopting “as a national goal a broadband Internet that is secure, affordable, and available to all, supporting two-way, gigabit-per-second speeds and beyond.” If we were to achieve national broadband connections to every U.S. home and business, supporting synchronous gigabit communications, the technologies developed in a scientific cyberinfrastructure program could propagate to enhanced innovation in e-business, distance education, telemedicine, telecommuting, and expanded forms of leisure and entertainment. The innovation sparked by the first wave of Internet technologies would be dwarfed by the second wave.

Hughes’ vision of a high-performance innovation economy can be spurred in part by a bold national program in advanced cyberinfrastructure for science and engineering, coupled with telecommunications policies designed to bring that power to every U.S. home and business.

GARY R. BACHULA

Vice President for External Relations

Internet2

Bachula formerly served as Deputy Under Secretary of Commerce for Technology.


Supercomputing revival

“BOLSTERING U.S. SUPERCOMPUTING,” by Susan L. Graham, Marc Snir, and Cynthia A. Patterson (Issues, Summer 2005), correctly notes that “Restrictions on supercomputer imports have not benefited the United States nor are they likely to do so.” In fact, import restrictions have impaired U.S. science and probably U.S. industry. For example, it was my privilege to head the Scientific Computing Division (SCD) of the National Center for Atmospheric Research (NCAR) from 1987 to 1998. Climate modeling is a major area of research in the atmospheric sciences and, as noted in the article, it is computationally intensive. Consequently, in the early 1990s the NCAR SCD routinely offered parallel processing on parallel vector processors (PVPs), such as the Cray Y-MP/8 and C90/16. Climate models, in particular, made efficient use of this capability.

In 1995, NCAR climate modelers were completing a new climate model that required computational capability that was an order of magnitude greater than that of its predecessors. The model could efficiently use at least 32 vector processors in parallel. This requirement was a major consideration in a supercomputer procurement that we conducted in 1995–1996. The NEC SX-4, with 32 processors, demonstrated decisively superior performance relative to other offers, so we selected it. A few months later, congressional efforts to overturn the selection, coupled with an antidumping investigation, resulted in severe import restrictions that prevented acquisition of the SX-4. Had we been successful in obtaining the SX-4, an enormous amount of science would have been done in the United States in the late 1990s that was either deferred or accomplished overseas. Further, if the import restrictions had not been imposed, almost certainly some U.S. organization(s), private or public, would have configured an ensemble of SXs in the United States that would have been competitive with the Japanese Earth Simulator. Imagine the possibilities lost!

U.S. industry was probably impaired by import restrictions as well. For example, from the mid-1990s, European and Japanese automobile manufacturers have had the benefit of using Japanese PVPs. Today, many of their products are competing handsomely in the international marketplace. During the same time frame, Airbus has substantially advanced its competitive position in the world market. It would be fascinating to know what role the use of Japanese PVPs may have played in both industries.

Graham et al. also correctly note that “Supercomputing is critical to advancing science.” The supercomputer is a tool of science much like telescopes, microscopes, and particle accelerators, all of which enable us to see and understand things that are otherwise not knowable. Clearly, any government that denies its scientists and engineers access to the best tools available places the nation’s future at risk, militarily and economically. Simply put: Those with the best tools win.

BILL BUZBEE

Fellow, Arctic Region Supercomputing Center

Westminster, Colorado


I COMMEND SUSAN L. GRAHAM, Marc Snir, and Cynthia A. Patterson for clearly laying out the current supercomputing situation in the United States. Their call for increased federal investment, as well as coordination, in this area is much needed. Advances in science and engineering require access to cutting-edge computing capabilities. These capabilities are needed by researchers analyzing increasingly voluminous data sets, as well as those involved in modeling and simulation, and the future promises many new applications that will require supercomputers. National supercomputing resources are currently provided by the National Science Foundation (NSF) and Department of Energy Office of Science (DOE-SC) supercomputing centers, although flat or declining budgets have limited their ability to satisfy the growing needs of the research community.

The National Center for Supercomputing Applications has been successfully providing “killer micros” for scientific computing for nearly a decade. However, a number of important scientific and engineering applications are not well suited to this architecture. Further, the architecture of killer micros is changing as chip designers face the problems associated with high chip frequencies (the traditional means of increasing computer power). Now is the time to reconsider the design of supercomputers for scientific and engineering applications, realizing that matching application to architecture will maximize scientific productivity and minimize cost. The high-energy physics community is already exploring custom supercomputer architectures for their applications.

In designing a new generation of supercomputers, we must not be misled by the “Top 500” list. U.S. computer vendors (IBM and SGI) hold the top three spots on the most recent list, but this ranking is not a reliable indicator of performance on many applications critical to advancing scientific discovery and the state of the art in engineering. Further, to achieve the stated performance levels for the top two entries (both IBM BlueGene/L computers), applications must run efficiently on 40,000 to 65,000 processors; today, few applications scale to more than a few thousand processors. The Japanese Earth Simulator, which follows the supercomputer design philosophy articulated by the late Seymour Cray, is still considered by many to be the world’s preeminent supercomputer for real-world applications. It is ranked as number four on the Top 500 list and achieves that performance level with just 5,120 processors.
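As a rough illustration of why such scaling is hard, consider Amdahl’s law applied to a hypothetical code whose work is 99.9% parallelizable (the example and its numbers are illustrative assumptions, not drawn from the article). The maximum speedup on N processors is

\[
S(N) = \frac{1}{(1 - p) + p/N},
\]

where p is the parallel fraction. With p = 0.999, the speedup is roughly 840 on 5,120 processors and only about 985 on 65,536 processors, against an absolute ceiling of 1,000. Only applications with almost no serial work and very low communication overhead can use tens of thousands of processors efficiently, which is why so few codes scale that far.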

Supercomputing is more than hardware. Realizing the benefits of supercomputers requires new computing systems software to enable thousands of processors to effectively work together and scientific applications that achieve high performance levels and scale to 10,000 or more processors. In 2000, the DOE-SC created the Scientific Discovery through Advanced Computing (SciDAC) Program, which was targeted at just this problem. Although funding for the SciDAC Program was relatively modest, its recent 5-year program review illustrates the remarkable advances that can be made by teaming disciplinary computational scientists, computer scientists, and applied mathematicians. NSF’s Information Technology Research program supported similar activities, although it is now being ramped down. Research, development, and deployment of supercomputing software must be supported if we are to realize the full potential of supercomputing.

THOM H. DUNNING JR.

National Center for Supercomputing Applications

University of Illinois at Urbana-Champaign


Nanotechnology for development

I ENJOYED READING the thoughtful piece on “Harnessing Nanotechnology to Improve Global Equity,” by Peter A. Singer, Fabio Salamanca-Buentello, and Abdallah S. Daar (Issues, Summer 2005). It is a larger commentary on how knowledge today is both a global currency and a global social responsibility. This is particularly the case with so-called transformative technologies, such as biotechnology, information and communications technology, and nanotechnology, where issues of who has access, who benefits, and who controls the knowledge are critical if nations are to be able to manage their own development and destiny. As Singer et al. argue, “The inequity between the industrialized and developing worlds is arguably the greatest ethical challenge facing us today.”

It should not be surprising to Issues readers to see the emerging nanotechnology capacity in innovative developing countries such as China, India, Korea, South Africa, Brazil, and Mexico. These nations (and others) have spent time and resources in developing strategic national approaches to their science and technology capacity for development, in most cases with the requisite political leadership.

What is perhaps more interesting in this paper is the linkage between investments in nanotechnology and the United Nations Millennium Development Goals. The authors are correct to point out that the rationale for supporting what some regard as high-end research needs to be anchored firmly in the social and economic needs of a nation, all the more so in the case of countries that have no margin for error with such investments.

The notion of global governance for nanotechnology raised by the authors is a critical one. This will be the issue of the next few decades as the research and technology outpace social capacity to absorb the impacts. We have seen elements of this argument before in the biotechnology debates, but the growing nanotechnology dialogue provides an opportunity for the developing world to have its say and stake its claims to this emerging knowledge arena.

But more than this will be required. The South must strengthen its capacity in education, research, governance, and training. It must be able to outline its needs clearly, with knowledge and innovation being a strong component of national planning strategies and with legislatures and finance and treasury ministries acknowledging the vital importance of science and technology for development. And it must engage its society in a meaningful debate about options. Only in this way will developing countries be able to effectively respond to the introduction and diffusion of new technologies such as nanotechnology. Singer et al. offer some useful insight on this score that can serve to prime the global debate that is now emerging.

PAUL DUFOUR

Senior Adviser for International Affairs

Office of the National Science Advisor

Toronto, Canada


PETER A. SINGER and his colleagues have made a number of important contributions to our understanding of how truly transformational advances in nanotechnology are likely to be, not only for science but for society, and not only for the developed world but for our global community.

The authors’ analysis and recommendations are to be commended. There is, however, one policy issue to which the authors give too little attention: South-South cooperation.

The rising scientific and technological competence of some developing countries, combined with the potential of nanotechnology to address critical social and economic needs (and thus to attract the interest of governments), offers an opportunity for developing countries to cooperate with one another on an unprecedented scale.

Such cooperation would likely have four interrelated benefits: It would help scientifically lagging countries in the developing world build their scientific capacities; it would narrow the scientific gap between the North and South and within the South; it would advance the frontiers of science in nanotechnology on a global scale; and, perhaps most important, it would increase the prospects that applications of nanotechnology will address concerns of critical interest to the developing world, enabling the technology to serve as a valuable tool for addressing issues of extreme poverty and environmental degradation. All of this would make nanotechnology a truly transformational technology, exerting impacts that would extend far beyond the scientific arena itself.

MOHAMED H. A. HASSAN

Executive Director

The Academy of Sciences for the Developing World

Trieste, Italy

President

African Academy of Science

Nairobi, Kenya


Nanotechnology politics

SENATOR GEORGE ALLEN is a significant supporter of the National Nanotechnology Initiative (NNI), sometimes referred to as the National Nanotechnology Program. He is an important asset to the NNI and the physical science and engineering establishment, as well as to agencies such as the National Science Foundation. However, his recent article on “The Economic Promise of Nanotechnology” (Issues, Summer 2005) is guilty of some exaggeration and hyperbole on at least three levels.

First, his rhetoric apes that of most proponents of the NNI. It includes remarks like “nanotechnology will transform almost every aspect of our lives,” etc. However, nanoscience has made inroads into a host of luxury products (such as cosmetics, clothing, paint, and sports equipment) but still has not produced a “home-run” application (in areas such as new diagnostics, drug delivery and treatment, electronics, etc.). It has improved a broad range of products, especially sensors, but it remains to be seen whether applied nanoscience becomes a general-purpose technology, as the steam engine and the electrical grid once were. Nanotechnology is evolutionary, not revolutionary, and hyperbole mostly produces false expectations. When budgets tighten and no revolutionary products have been created, support for the NNI may evaporate. In addition, there is some conflation of the nanoscience-based version of nanotechnology and the science-fictional variant, which is equally problematic because the fictional one is revolutionary while the scientific one is evolutionary. The preponderance of luxury products has also undercut the rhetorical tactic of tagging world social problems as the potential beneficiaries of nanoresearch. It is time to put the hurdy-gurdy away and use more appropriate rhetoric. Engendering support for nanoscience simply does not require exaggeration, and if it does, maybe we need to reexamine the initiative.

Second, Allen’s call for national competitiveness may be misguided. If nanotechnology is revolutionary, it will foment complete replacement of a product or process. There is some evidence to suggest that domestic research will locate some of the high-paying jobs in the United States, but that does not guarantee that the production jobs will be similarly situated. Indeed, experience with the micromechanical device industry has made this abundantly clear. Although we need to support education and basic research at home, as well as promoting intersections between academe and industry, it might behoove us to understand that the world of applied nanoscience may be the quintessential illustration of the flattened-world thesis described in Thomas Friedman’s recent book The World Is Flat. Allen is correct in his assessment of graduation rates of engineers here and abroad, but no one to date has developed a strategy to significantly increase the number of homegrown engineers. Visas are more difficult to get for foreign students, and opportunities at home make it less likely that they will want to stay here after graduation. A better solution might be to stick with globalization and internationalize efforts to develop a “nanotechnology economy” and back away from what is beginning to read too much like nationalistic rhetoric.

Third, the jury is still out on the Congressional Nanotechnology Caucus and its 30 members, only 7 of whom are on its Web site, one of whom is no longer in the Senate. It is unclear what “huge successes” it has achieved to date. Maybe we will just have to watch and wait.

DAVID M. BERUBE

Professor of Communication Studies

University of South Carolina

Columbia, South Carolina


SENATOR GEORGE ALLEN IS, as usual, right on the mark about the United States’ need for more home-grown engineers and scientists. I admire Allen for his willingness to take a leadership stance on this most important technology and to initiate serious funding support for our universities. Now we need bright students, bright professors, and, especially, bright technologists to transfer nanotechnology out of the lab and into the market. The country that does the best job of commercializing nanotechnology will reap the benefits of our investment, and it is not a foregone conclusion that we will be that country. We also need more entrepreneurs, need to take more calculated risks, and need to celebrate our successful businesses instead of punishing them with more poorly thought-out regulations like Sarbanes-Oxley.

JAMES R. VON EHR II

Chief Executive Officer

Zyvex Corporation

Richardson, Texas


Is very small beautiful?

IN “GETTING NANOTECHNOLOGY RIGHT the First Time” (Issues, Summer 2005), John Balbus, Richard Denison, Karen Florini, and Scott Walsh correctly point to the weaknesses in both methodology and regulation surrounding manufactured nanoparticles. This weakness is demonstrated in Europe as well as the United States. Although a number of studies have recognized the problem, there is no strategy for action and regulation. The onus is put onto business to behave responsibly. It may well be that few or some, rather than all, nanoparticles prove to display unacceptable toxicity, but in the meantime, where should the benefit of the doubt lie in their release and management while their characterization remains unclear? Because nanoparticles are already finding their way into cosmetics and medical dressings, this is not an idle question.

Further, the development of nanotechnology exists in a broader context than that of consumer risk. Where will the emphasis of nanotechnology exploitation be, and what direction should the public sector be providing?

Will nanotechnology be used for the development of renewable energy or to cheapen the extraction and use of fossil fuels? Will it be used for producing stain-resistant trousers or providing drinking water for the poor? The easy answer would be both. But where will the emphasis lie? And how is the prioritization made? The market will not necessarily prioritize socially or environmentally beneficial R&D.

These are not just theoretical questions. It can be argued that the technologies we are researching now will have a more significant impact on future society than the term of a president or prime minister. And although the decisions of the latter are subject to immense critical debate and scrutiny, the outcomes from the former occur by happenstance and accident. There is no bigger statement of society’s aspirations than the purposes behind where it puts its R&D money, and these aspirations, if not the specific funding decisions, should be open to debate.

Of course, these questions are no different for nanotechnology than for many other technological developments, but if the nanotechnology stuff isn’t just hype, then they are more important here than elsewhere.

The role of government should be to provide direction for public R&D expenditure and to provide a framework for the deployment of technology that is sympathetic to socially beneficial purposes.

DOUGLAS PARR

Chief Scientific Adviser

Greenpeace UK

London, England

http://www.greenpeace.org.uk


LIKE JOHN BALBUS, Richard Denison, Karen Florini, and Scott Walsh, we believe that the immense potential benefits of nanotechnology can only be realized if the development of the technology is balanced to avoid adverse public health, occupational, and ecological risks. Naturally occurring nanomaterials have been with us for centuries, but the revolutionary development of manufactured nanomaterials raises new questions that deserve answers.

At a National Research Council Workshop on Nanotechnology (March 24, 2005), representatives of the American Chemistry Council (ACC) and Environmental Defense (ED) made presentations with very similar points on the need for a much-expanded research effort on the environment, health, and safety (EHS) issues associated with nanotechnology. The ED and the ACC CHEMSTAR Nanotechnology Panel, in a joint statement of principles (70 FR 24574, Docket OPPT-2004-0122, 23 June 2005), stated, “A significant increase in government investment in research on the health and environmental implications of nanotechnology is essential.”

If we want to get nanotechnology right the first time, we must have a high-quality, comprehensive, and prioritized international research agenda that is adequately funded. The agenda should (1) focus on the risk assessment goal, which will require information on the continuum of exposure, dose, and response; (2) develop new interdisciplinary partnerships that bring visionary thinking to the field; (3) support better understanding of the fundamental properties of nanomaterials that are important in the exposure/dose/response paradigm; and (4) develop processes for establishing validated standard measurement protocols, so that individual or categories of materials can be studied.

Balbus et al. call for a significant increase in funding for a federal research program on the EHS implications of nanotechnology. We support that recommendation and urge its rapid implementation. Developing a truly strategic research strategy of appropriate quality and breadth requires the credibility and talent that organizations like the National Research Council can bring to bear. That research strategy needs to provide for expanded international involvement. A highly credible research strategy would provide evidence to funding organizations that the monies would be spent efficiently and effectively, and help demonstrate that EHS risk questions have answers.

It is refreshing to see that so many parties with different organizational objectives recognize common ground. Devoting energy to such coalitions to build a safe technology is so much more valuable to society than choosing sides for debates.

CAROL J. HENRY

Vice President, Science and Research

LARRY S. ANDREWS

Chair, ACC CHEMSTAR Nanotechnology Panel

American Chemistry Council

Arlington, Virginia

Integrated Pest Management: A National Goal?

The original intent of Integrated Pest Management (IPM) was the coordinated use of multiple tactics for managing all classes of pests in an ecologically and economically sound way. Pesticides were to be applied only as needed, and decisions to treat were to be based on regular monitoring of pest populations and natural enemies (or antagonists) of pests in the target system. The use of a wide range of compatible or nondisruptive practices, such as resistant crop varieties and selective pesticides that preserve antagonists of pests, would ultimately lead to reduced reliance on chemical pesticides.

In principle, IPM would appear to be a worthy national goal. But after 30 years of research, it is debatable whether IPM as originally envisioned has been implemented to any significant extent in U.S. agriculture. The predominant approach to pest management in many agricultural sectors continues to emphasize pesticides and is sometimes referred to as “integrated pesticide management.” In insect management, for example, crops are monitored, insecticides are applied when pests reach a predetermined threshold, different insecticides are juggled to manage pest resistance to the insecticides, and new insecticides are evaluated for input substitution. In recent years, “resistance management” has evolved into a respected discipline in its own right—an apparent attempt to portray an admission of failure as a sign of progress.
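
The monitoring-and-threshold logic described above can be made concrete. What follows is a minimal, hypothetical sketch, not drawn from any actual IPM program: a field is treated only when scouted pest densities exceed a preset action threshold.

```python
# Illustrative only: a threshold-based treatment decision of the kind described
# in the text. The scouting data and threshold value are hypothetical.

def should_treat(pest_counts, threshold):
    """Return True if the average scouted pest count exceeds the action threshold."""
    average = sum(pest_counts) / len(pest_counts)
    return average > threshold

weekly_samples = [3, 5, 2, 4, 6]   # hypothetical pests per plant at five sample sites
ACTION_THRESHOLD = 4               # hypothetical economic threshold

if should_treat(weekly_samples, ACTION_THRESHOLD):
    print("Threshold exceeded: an insecticide application would be triggered.")
else:
    print("Below threshold: keep monitoring; no application this week.")
```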

In California, there has been some progress in real IPM, but integrated pesticide management remains the dominant practice for many crops. In an analysis of pesticide use from 1993 to 2000, Lynn Epstein and Susan Bassein (University of California, Davis) concluded that there were no obvious trends in decreased use of most pesticides used to treat plant disease. For insecticides, they reported reductions in the use of organophosphates, but attributed them to the substitution of newer pesticides such as synthetic pyrethroids. The California Department of Pesticide Regulation’s (DPR’s) January 2005 report on agricultural pesticide use revealed that pesticide use in most categories actually increased in 2003 as compared to 2002. As a result, the DPR director has asked the department’s Pest Management Advisory Committee to develop a “blueprint for IPM progress.”

Federal policy

The first official government use of the term IPM occurred in 1972, when President Nixon directed federal agencies to advance the concept and its application. In 1979, President Carter established the interagency IPM Coordinating Committee to ensure the development and implementation of IPM. In 1993, the Clinton IPM Initiative was launched, with a goal of having 75% of U.S. crop acreage under IPM by 2000. To qualify as an IPM farmer, it was necessary to use three of four key tactics: prevention, avoidance, monitoring, and suppression. Three out of four might sound good, but that made it possible to exclude monitoring, which is an essential IPM component. In addition, there was no requirement for integration or for the use of compatible suppressive tactics. This was illusory IPM.

A 2001 General Accounting Office (GAO) report criticized federal efforts to implement IPM and reduce pesticide use. The GAO found that “IPM as implemented to this point has not yet yielded nationwide reductions in chemical pesticide use. In fact, total use of agricultural pesticides, measured in pounds of active ingredient, has actually increased since the beginning of USDA’s IPM initiative.” The report concluded that “federal efforts to support IPM adoption suffer from shortcomings in leadership, coordination, and management.”

In 2002, the U.S. Department of Agriculture (USDA) launched the National Road Map for Integrated Pest Management to identify strategic directions for IPM research, implementation, and measurement that would ensure that the economic, health, and environmental benefits of IPM adoption were realized. The need for the Road Map is a tacit admission that the Clinton IPM Initiative was a failure. The latest revision of the Road Map was issued in May 2004, with the stated goal to “increase nationwide communication and efficiency through information exchanges among federal and non-federal IPM practitioners and service providers.” However, the current working definition of IPM is sufficiently vague to perpetuate the status quo. Also, the central focus of the Road Map is on how to use pesticides to maximize economic returns while reducing risks to public health and the environment.

If the IPM approach does not really prevail in practice, why do such government initiatives remain so popular? Two reasons come to mind. First, for pest/crop consultants, pesticide companies, professional societies, government bureaucrats, and politicians, IPM sounds benign and enlightened. Of course, with more than 60 definitions of IPM now in circulation, it is relatively easy to find one that fits what one is already doing. Second, IPM is a fund-raising tool for land-grant scientists whose institutions are becoming more and more dependent on external sources of money to carry out their mission. It is not surprising that otherwise objective land-grant scientists can be reluctant to engage in a dispassionate, open, and honest debate on the status of IPM. There is simply too much research funding at stake.

Thus, IPM has become a “feel-good” term that offers a way for anyone to imply that he or she is addressing environmental and health issues associated with pesticides, when the reality could be quite different. The result is a “win-win situation” for almost everyone, including powerful interest groups, and it is reinforced by the general lack of a skeptical agribusiness press corps (the editor of California Farmer is a notable exception).

Congress should take note

The recent history of federal IPM initiatives has been one of redefining the mission rather than accomplishing it. A “time out” is in order, and the proponents of the National IPM Road Map should find the nearest rest stop. Redefining IPM to make it more achievable does not address the real problem. Congress should investigate why the federal government spends millions of dollars each year promoting a concept that, in the minds of many, is not practiced to any significant extent in many sectors of U.S. agriculture.

The first order of business is to promulgate one clear and workable definition of IPM. Then, several key questions need to be answered:

  • What is the status of IPM implementation? In which crops and states is it working or not working?
  • Should IPM be the national goal? If so, should there be a certification program for practitioners, perhaps patterned after that for organic farmers?
  • Has pesticide use decreased, stabilized, or increased in recent years? Are any declines in pesticide use the result of IPM?
  • Should pesticide-use reporting be mandatory in the states, as well as for all federal agencies that use pesticides?

It is critical to have a status report that is independent of those who have a vested interest in IPM so that we can begin to address the underlying reasons for its success or failure.

If IPM is to be the national goal, then we must ask: Is the land-grant system up to the challenge of delivering IPM at the farm level? Land-grant scientists typically are underfunded, continue to work along disciplinary lines, and are subject to an incentive system that rewards individual effort. Although IPM is a professional opportunity for these scientists, it is not necessarily a professional obligation. Departments or units organized around pests (for example, entomology, plant pathology, and weed science) tend to perpetuate the problem. None of this is conducive to solving an interdisciplinary problem such as IPM, and it does not bode well for the U.S. farmer. If the land-grant system cannot deliver, then the National IPM Road Map will lead to nowhere.

Managing the Digital Ecosystem

Benjamin Disraeli’s ironic comment, “I must follow the people. Am I not their leader?” aptly describes the feelings of many college and university administrators as they develop institutional plans for information technology (IT) that will support research, teaching, and learning in the coming decades. The context in which we are expected to lead our institutions in IT decisions has changed dramatically. We are experiencing unprecedented technological change emerging from a much greater diversity of sources than ever before. Students, faculty, and staff arrive at the beginning of each school year with new ideas (and associated hardware and software) for using information technologies that will enable them to accomplish their diverse goals: educational, professional, and personal. Technology firms, along with increasingly influential open-source software development efforts, present us with a staggering number of technologies that hold promise for enabling our fundamental missions of creating and transferring knowledge. What leadership strategies are appropriate in such a complex, dynamic, and unpredictable context?

We’ve gone through a qualitative rather than merely a quantitative transition in the nature of IT. We have recently become accustomed to thinking of IT as one of the many infrastructures we provide on campus and for the national academic community. But the complexity and dynamism of IT, especially in academe, now warrant thinking about it in different terms: as an IT ecosystem. Computing in higher education has evolved from islands of innovation, to activities that depend on campuswide and worldwide infrastructures, to an ecosystem with many niches of experimentation and resulting innovation. These niches are filled by faculty and students who do what they do with IT following whatever motivations they may have, undirected by a central authority. But, as in any ecosystem, they are connected to the whole and often depend on a set of core technical, physical, and social services to survive. Many innovations depend on services—networking, authentication and authorization mechanisms, directories, communication protocols, domain-naming services, global time services, software licensing, etc.—becoming widely available beyond the niche in which they develop. A simple example: almost any IT innovation that uses the network to share information depends on the domain name system (DNS) to find the devices with which it needs to communicate. Managing ecosystems calls for a very different set of strategies than those used to administer islands of innovation or the more static, top-down services that characterize many infrastructures.
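
The dependence on core services noted above is easy to illustrate. The short Python sketch below is our own example (the hostname is arbitrary); it shows how even a trivial networked program leans on the domain name system before it can communicate with anything:

```python
# Minimal illustration: resolving a name through DNS, a core service that nearly
# every networked innovation depends on. The hostname is an arbitrary example.
import socket

hostname = "example.com"
address = socket.gethostbyname(hostname)  # DNS lookup handled by shared infrastructure
print(f"{hostname} resolves to {address}")
```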

Academic computing ecosystems, especially (but not exclusively) at research universities, are tremendously complex, and understanding this complexity is crucial for selecting effective management strategies. These ecosystems evolve and change, often very quickly, as innovations developed by faculty and students are absorbed by the ecosystem. Unlike other campus infrastructural systems, IT systems have important feedback loops. For example, successful implementations of “course management systems” (Web-based applications for posting syllabi, readings, assignments, etc.) require continuous communication between those providing the central service and the faculty and students using it. The result is evolutionary modification, often at a rapid pace, of both the technological tool itself and the ways in which the faculty choose to use it. If this mutual modification through feedback is ineffective, the tool will quickly become irrelevant and die. Many on-campus innovations rapidly become “necessities” and redefine the nature of the core services needed to sustain them. Unlike natural ecosystems, the IT ecosystem cannot evolve without some external management. Nevertheless, top-down planning, uninformed by the diverse niches of use and innovation, will not work.

Leadership evolution

Twenty-five years ago computers were relatively large, relatively rare, and used for computation. The communities at the university interested in computing were small, isolated, and largely self-sufficient. Institutional leadership and planning entailed helping these communities get resources to acquire the next fastest mainframe or the newest micro- or minicomputer for word processing or data acquisition and data processing.

By 1990, personal computers (PCs) were ubiquitous. Each had computing power that exceeded most of the mainframes of the previous decade, and some of them were networked together and to the NSFnet. IT planning had become more complex, but the shape of leadership needed from the central administrations and the associated strategies were fairly clear: Find resources to build campus networks; invest in wide-area networking for research; fund opportunities for faculty to experiment with using PCs for research, teaching, and learning; and hire staff to look after the support needs of those using computing for a wide variety of purposes. In short, build and sustain an IT infrastructure for our individual institutions and for higher education as a whole.

Over the past 15 years, computers have become useful for communication, new forms of knowledge representation, knowledge management, visualizing complex data, customer relations management, music storage and transfer, video editing, gaming of all kinds, grid computing, and something else new seemingly every day. The user community now includes everyone on campus. IT tools have become essential utilities without which we cannot function. At the same time, they are a defining force in shaping the future of our core missions of knowledge creation and transfer. Furthermore, the cheap microprocessor and the Internet created a tipping point sometime during this period, when the hub of innovation moved from a small core of experts to a vast number of users.

By the turn of the 21st century, the flow of IT innovations was no longer largely from the university to the students. Except perhaps for supplying the bandwidth for Internet connections and educational discounts on expensive software, many colleges and universities have little to offer to their students who arrive with computers (often more than one), cell phones, personal digital assistants, iPods, digital cameras, Sony PlayStations, Xboxes, LCD TVs, Bluetooth-enabled devices, e-mail accounts, personal Web pages, blogs, and high expectations for the role IT will play in their education.

There is a similar story regarding faculty, not just in science and engineering but in all disciplines. They depend on a wide variety of software, operating systems, computer configurations (including the emerging small supercomputers known as “compute clusters”), multiple servers under their control, and access to all of their resources at any time from any place in the world. They are increasingly involved in cross-institutional research groups that require everything from sharing large data sets to remote digital access to supercomputers and state-of-the-art research instruments.

For students and faculty, the number and kinds of information technologies they expect the university to support is more diverse than ever before and more rapidly changing than most would have imagined when universities began their commitment to IT as a fundamental infrastructure. Documents such as the National Science Foundation report Revolutionizing Science and Engineering through Cyberinfrastructure highlight some of the issues. For example, faculty routinely work with colleagues at distant institutions, depending on e-mail, instant messaging, wikis, videoconferencing, file-sharing protocols, cybertrust mechanisms, and many other IT tools. Grid computing (the use of many computers working together to solve large-scale problems) will require not only high-bandwidth network connections but data architectures and authentication and authorization protocols that ensure the coordination and validity of the calculations and data.

The management challenge

Although many have noted the need to adapt to the changing IT environment, few have offered clear suggestions about how university leadership must change to deal with the complexity of the situation. We believe that some unusual management strategies are needed, and some very difficult decisions will have to be made in response to a few important trends, including:

  • The proliferation of niches of IT applications has made providing the core services for sustaining the system as a whole more complex to manage.
  • The costs of providing the levels of IT services expected by both faculty and students working in these niches are increasing because of this complexity.
  • Information technologies emerging from niche innovation are being rapidly identified as essential tools for research, teaching, and learning—tools without which success cannot be achieved—and the rate at which this is happening is accelerating.

A core business of higher education is innovation in research and teaching, which is one of the ways in which IT ecology at the university differs from that in the commercial sector. In the latter, some efficiencies in IT are produced by the standardization of hardware and software. Experimentation with new hardware and software is centrally planned and approved. In contrast, a guiding principle for IT support strategies at universities has been to encourage experimentation by both faculty and students. We keep the networks open and provide at least best-effort support for a wide range of software and hardware. Not surprisingly, the result has been a proliferation of technologies and high expectations that universities will create core services that support whatever niche users evolve. Diversity is both a value for our ecosystem and a drain on scarce resources.

That computing is a sine qua non of contemporary scientific research requires no additional arguments from us. What is less well known is how widely the dependence on IT has spread through all disciplines. It would have been hard to imagine 30 years ago that academic philosophers would be deeply involved in fields such as computational logic and computational epistemology or that the theater department would join with computer scientists to create an entirely new academic enterprise called “entertainment technology.” And what will be upsetting to some is that IT is becoming equally essential in teaching and learning. The history of technology-based or technology-enhanced learning is not one of substantial demonstrable improvement in learning outcomes. But things are changing. As those developing e-learning tools draw increasingly from the body of knowledge and techniques of cognitive science, effective technology-enhanced learning is becoming a reality.

Carnegie Mellon faculty have had measurable success in using cognitively informed computer-based instruction in several areas. The earliest results came from intelligent tutoring systems developed using John Anderson’s theory of cognition. Researchers used painstaking talk-aloud sessions to understand how novices and experts solve a problem. They then developed software that compares steps taken by the user with these cognitive models of how others solve problems to provide intelligent individualized feedback to students working their way through problems. The result was Cognitive Tutors for middle- and high-school students that have produced documented improvements in the learning of algebra, geometry, and other subjects. Expanding the application of cognitive science (and Cognitive Tutors) to online courses for postsecondary education, we have had considerable success with courses developed as part of Carnegie Mellon’s Open Learning Initiative (OLI), which is devoted to developing high-quality, openly available, online courses. The OLI’s goal is to embed in the course offering all of the instruction and instructional material that an individual novice learner needs to learn a topic at the introductory college level. These efforts are part of a worldwide movement to develop IT tools to provide effective transfer of knowledge.
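
To make the model-tracing approach concrete, here is a toy sketch; it is our own illustration under simplifying assumptions, not Carnegie Mellon’s Cognitive Tutor code. The idea is that the tutor compares each student step with the steps a cognitive model of correct problem solving would accept and responds with individualized feedback:

```python
# Toy model-tracing sketch (illustrative only). The "cognitive model" here is a
# hypothetical table of acceptable next steps for solving 2x + 3 = 11.
VALID_NEXT_STEPS = {
    "2x + 3 = 11": {"2x = 8"},   # subtract 3 from both sides
    "2x = 8": {"x = 4"},         # divide both sides by 2
}

def trace_step(current_state, student_step):
    """Classify a student's step and return feedback."""
    if student_step in VALID_NEXT_STEPS.get(current_state, set()):
        return "Correct step."
    return "That step does not follow from the current equation; try isolating the x term first."

print(trace_step("2x + 3 = 11", "2x = 8"))  # matches the model: positive feedback
print(trace_step("2x + 3 = 11", "x = 7"))   # off-path step: corrective feedback
```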

An additional development at universities is the pace at which successful IT experiments are expected to become part of the IT utility. Consider the difference between e-mail and wireless networking. E-mail was invented as part of the ARPANET in 1971. Yet it was not really an expected part of universities’ IT infrastructure until the late 1980s and early 1990s. Very early wireless networking was first deployed as an experiment at Carnegie Mellon between 1994 and 1997. By 2001, not only was the entire campus covered with a commercial 802.11b network, but students and faculty expected that network to be ubiquitous and always available. The RIM Corporation introduced the first Blackberry wireless handheld device in 1998. By 2002, many faculty and staff on campuses started to use Blackberries. Today, many expect and depend on Blackberry (or Treo or other cellular technology) connectivity to keep up with their e-mail and calendars as they travel. Students, faculty, and staff will arrive this fall with a range of wireless devices that they expect to connect to their university e-mail, course management systems, central calendars, the university portal, and other core information services provided by the university.

We should anticipate even more challenges. The expectation of our constituents is that if a technology is available and it can help them accomplish their goals, the university should provide whatever core services are required to support it.

Management strategies

We propose the following as critical strategies for addressing new challenges posed by IT ecosystems in higher education.

Creating more robust feedback loops between the members of the academic community who are generating IT innovations and those responsible for supplying a sustainable environment. We have described how IT over the past 25 years has changed from innovation and use by the few to innovation and use by the many on campus. Without gathering data from the many, no central organization can effectively predict what the university must do to provide the core services that will both sustain the everyday uses and enable the evolution of innovative uses. The diversity of sources of use and innovation requires new and more aggressive techniques for gathering data. Central IT organizations must engage in information-gathering outreach unheard of in earlier times. University leadership must create the conduits of communication and encourage active participation by faculty, students, and staff in that communication to develop services to best support their needs and expectations.

Collaborating with other universities to develop shared solutions for both intra- and interinstitutional support for academic IT. There are many examples of collaborative efforts among universities in IT. These range from regional education and research networks to consortia for software licensing to joint software development. Recent examples include the development of the National Lambda Rail, a project by a consortium of research universities, along with Internet2, to create an all-optical, extremely high-bandwidth network that will serve the bandwidth and network research needs of higher education, and “Sakai,” a project led by MIT, Indiana University, the University of Michigan, and Stanford University to develop an open-source course management system (technology to support Web posting of course materials and collaborative work) for higher education. Universities could do much more, but resources are limited. As IT organizations at our various universities are called on to provide more and more services without additional resource input, collaborative work does and will suffer. Leaders are faced with the difficult decisions entailed in choosing between keeping up with increasing daily pressures on basic IT services and supporting collaborative projects with other institutions to build sustainable and evolving IT core services for the future.

Selecting among adaptive and nonadaptive technologies when allocating resources, based on their contributions to our fundamental missions. Under resource limitations, not all IT applications can be equally supported. Thus, difficult decisions are required to select which expectations to meet and which to disappoint. Creating the robust feedback loops mentioned above will help universities better adapt to changing needs and expectations. But more than understanding use and user expectations is required. We also need to make choices based on solid data about the contribution of the many IT applications to our central research and teaching missions. Although faculty are in the best position to identify what IT effectively supports their research, even here there are questions about whether the methods they use are globally effective and efficient. For example, several universities are now encouraging faculty to forego having their compute clusters near their offices in favor of locating them in central machine rooms. The theory is that the university will incur less overhead and be able to provide better professional IT support for central farms of clusters than for clusters distributed all over campus. If this is true, it is reasonable for university leaders to find ways to encourage shared-resource strategies at the cost of some individual convenience. The same principle applies to many core services that are currently replicated across our institutions for the sake of convenience at the potential cost of global inefficiencies and lack of interoperability.

Judgments about the relative contributions of educational technologies to fulfilling the core mission of knowledge transfer are also essential. Despite being burned repeatedly by claims that technology will transform learning outcomes, central leadership is nevertheless reluctant to deny requests for potentially promising new technologies for teaching and learning. Although finding effective technologies has often been done through trial-and-error experimentation, there is increasing information from the cognitive and learning sciences about what is likely to help and what is likely to hurt that can give us guidance about where to place our bets. We have not yet really broken the pattern of deploying new technology with only the hope that we will find effective pedagogical applications for it.

Take the fairly old notion of a laptop requirement on campus. Many vendors have sold K-12 districts and universities on the notion that equipping all students with a laptop will “obviously” improve learning outcomes. Yet it is nearly impossible to find a study that reports anything more than anecdotes about use or user satisfaction at “laptop universities” or that controls for other educational innovations introduced at the same time as the laptops. In a recent study conducted by the Office of Technology for Education and the Eberly Center for Teaching Excellence at Carnegie Mellon, we learned that distributing laptops to students in a case study actually discouraged some collaborative learning behaviors, which learning sciences have shown improve learning outcomes. The study also indicated positive consequences of laptop ownership. The point is not that universal laptop ownership is not a good thing; rather, that before widely deploying a technology, we should understand both what problems we are trying to address and what we already know about how that technology might solve the problems.

In the absence of rigorous data, we cannot afford to invest in proliferating devices and software that merely appear to hold some promise to improve learning. Another example is the increasingly vociferous claim that educational software needs to exploit the fact that the current generation of learners are “natural multitaskers.” This seems an increasingly dubious or at least complicated claim in the light of developing evidence that multitasking is accompanied by reduced cognitive attention. Leadership is required to say “no” in the face of unsubstantiated claims that a technology will transform teaching and learning.

Revitalizing commitment to open standards to ensure the sustainable and evolvable development of IT in academe. Finally, education must do more to return some sanity to the IT standards movements. Our current IT ecosystem’s (somewhat fragile) stability depends now on far-sighted work in “open standards” that have allowed software and hardware to interoperate and have structured, shared services. Most of these standards originated at individual universities. Members of broader academic communities helped guide them through sanctioning bodies and lobbied for vendor acceptance. Enabling diverse IT ecosystem configurations depends on the existence of open standards that allow many different devices and pieces of software to coexist, communicate, and use common features of the environment. It has always been a challenge to convince vendors to adhere to open standards if there is any commercial advantage to be gained by creating features that step outside those protocols. The situation is not getting better. Indeed, there is a subtle deception about claims of adherence to standards. If a product mostly follows a standard, the vendor will say it “adheres.” But the hard reality faced by those implementing the products is that anything short of complete adherence often results in complete failure of the service to work properly or requires time-consuming customization. Very large vendors can often succeed by stepping outside the standards and encouraging the adoption of their proprietary systems as a “top-to-bottom” solution for an institution. But even smaller vendors will often opt to ignore open-standard options if they deem them too great an obstacle to getting their product to market in a timely fashion.

UNIVERSITY LEADERS MUST INSIST THAT STANDARDS BE USABLE, THAT THEY BE DEVELOPED AND DOCUMENTED IN A TIMELY MANNER, AND THAT THEY CAN BE EASILY ADOPTED BY COMMERCIAL VENDORS AND INDEPENDENT OPEN SOURCE DEVELOPERS.

Indeed, more than the vendors must bear the blame for open standards not playing the role they should in sustaining academia’s diverse and evolving array of applications. Standards definitions have often become too cumbersome, and the ratification process too slow. The communities developing standards seem to lose touch with the vendors, users, and open-source movements that should be using their standards in the creation of products. The effort to develop the IMS and SCORM standards to allow interoperability of course management systems and repositories of related materials has become a multiyear marathon that is producing standards so detailed that vendors and university software developers are reluctant to use them. This could result in each university opting for its own closed standard, which would make it impossible for universities to take advantage of shared core services and much more difficult for successful innovations to spread.

University leaders must insist that standards be usable, that they be developed and documented in a timely manner, and that they can be easily adopted by commercial vendors and independent open-source developers. Collectively, colleges and universities constitute a large market and are thus in a position to play a powerful role in the standards game. We can use our intellectual strengths to lobby researchers in academia and the corporate sector who are engaged in creating standards, as well as organizations such as the Institute of Electrical and Electronics Engineers, who are responsible for ratifying them. Finally, we can use our buying power to reward vendors that build features that really adhere to open standards.

The implications of the transition from IT as infrastructure to IT as ecosystem are profound for leaders in higher education. We are already making tough choices about how to structure IT organizations and allocate resources to IT. Our arguments here suggest that IT organizations should start to look and act differently than they do today: They should be part of robust feedback loops with the dynamically changing niches of innovation throughout the university community; they should be looking beyond their own walls to collaboration with other institutions; and they should help revitalize the open-standards movements that are so critical to sustaining diversity in our ecosystems. But this means hard choices for university leadership outside of IT. University leaders must partner with faculty, students, and IT leadership to make some hard choices about how to sustain those niches in the ecosystem that are most valuable for our core missions. It means some paths of innovation and some core services will be starved. It means diverting resources from services we could provide now to fuel the collaboration and development of standards that will sustain us into the future. In a context in which IT gives us all instant gratification—we want to use it now!—these will not be popular decisions. Few leadership decisions to sustain an ecosystem for the future at the cost of current expectations and needs are.

Reversing the Incredible Shrinking Energy R&D Budget

The federal government and private industry are both reducing their investments in energy research and development (R&D) at a time when geopolitics, environmental concerns, and economic competitiveness call instead for a major expansion in U.S. capacity to innovate in this sector. Although the Bush administration lists energy research as a “high-priority national need” and points to the recently passed energy bill as evidence of action, the 2005 federal budget reduced energy R&D by 11 percent from 2004. The American Association for the Advancement of Science projects a decline in federal energy R&D of 18 percent by 2009. Meanwhile, and arguably most troubling, the lack of vision on energy is damaging the business environment for existing and startup energy companies. Investments in energy R&D by U.S. companies fell by 50 percent between 1991 and 2003.

This decline occurred despite numerous calls from expert groups for major new commitments to energy R&D. A 1997 report from the President’s Committee of Advisors on Science and Technology and a 2004 report from the bipartisan National Commission on Energy Policy each recommended that federal R&D spending be doubled. The importance of energy has led several groups to call for much larger commitments, on the scale of the Manhattan Project of the 1940s.

A comparison with the pharmaceutical industry is revealing. In the early 1980s, energy companies were investing more in R&D than were drug companies; today, drug companies invest 10 times as much in R&D as do energy firms. Total private sector energy R&D is less than the R&D budgets of individual biotech companies such as Amgen and Genentech. The nation’s ability to respond to the challenge of climate change and to the economic consequences of disruptions in energy supply has been significantly weakened by the lack of attention to long-term energy planning. The current energy bill is a collection of subsidies without any such vision.

Comparison to previous major government research programs suggests that a serious federal commitment to energy R&D could yield dramatic results. Using emissions scenarios from the Intergovernmental Panel on Climate Change and a framework for estimating the climate-related savings from energy R&D programs developed by Robert Schock from Lawrence Livermore National Laboratory, we calculate that energy R&D spending of $15-30 billion/year would be sufficient to stabilize CO2 at double pre-industrial levels. This 5- to 10-fold increase in spending from current levels is not a pie-in-the-sky proposal; in fact it is consistent with the growth seen in several previous federal programs, each of which took place in response to clearly articulated national needs. In the private sector, U.S. energy companies could increase their R&D spending by a factor of 10 and would still be below the average R&D intensity of U.S. industry. Past experience indicates that this investment would be repaid several times over in technological innovations, business opportunities, and job growth.
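
As a back-of-the-envelope restatement (our arithmetic, using only the figures above), the $15 billion to $30 billion per year target together with the 5- to 10-fold framing implies current federal energy R&D spending of roughly $3 billion per year:

```latex
% Implied current spending S, derived from the figures in the text.
\[
  S \approx \frac{\$15\ \mathrm{billion/yr}}{5}
    = \frac{\$30\ \mathrm{billion/yr}}{10}
    \approx \$3\ \mathrm{billion/yr}
\]
```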

R&D investment is an essential component of a broad innovation-based energy strategy that includes transforming markets and reducing barriers to the commercialization and diffusion of nascent low-carbon energy technologies. The economic benefit of such a bold move would repay the country in job creation and global economic leadership, building a vibrant, environmentally sustainable engine of new economic growth.

Precedents for federal investment

For each of eight federal programs in which annual spending either doubled or increased by more than $10 billion during its lifetime, we calculate a baseline level of spending that would have occurred if funding had grown 4.3 percent per year (the 50-year average historical growth rate of U.S. R&D). We call the difference between actual spending and this baseline over the life of the program “extra program spending”; a minimal illustration of the calculation follows the table below. We also examined the thesis that these large programs crowd out other research and found that the evidence for this contention is weak or nonexistent. In fact, large government R&D initiatives were associated with higher levels of both private sector R&D and R&D in other federal programs.

Program | Sector | Years | Peak-year spending | Peak-year increase | Program spending | Extra program spending | Factor increase
Manhattan Project | Defense | 1940-45 | $10.0 | $10.0 | $25.0 | $25.0 | n/a
Apollo Program | Space | 1963-72 | $23.8 | $19.8 | $184.6 | $127.4 | 3.2
Project Independence | Energy | 1975-82 | $7.8 | $5.3 | $49.9 | $25.6 | 2.1
Reagan defense | Defense | 1981-89 | $58.4 | $27.6 | $445.1 | $100.3 | 1.3
Doubling of NIH | Health | 1999-2004 | $28.4 | $13.3 | $138.3 | $32.6 | 1.3
War on Terror | Defense | 2002-04 | $67.7 | $19.5 | $187.1 | $29.6 | 1.2
5x energy scenario | Energy | 2005-15 | $17.1 | $13.7 | $96.8 | $47.9 | 2.0
10x energy scenario | Energy | 2005-15 | $34.0 | $30.6 | $154.3 | $105.4 | 3.2

(All spending figures in billions of 2002 dollars. "Peak-year" columns give spending and its increase in the program's peak year; "program" columns give totals over the program's duration.)

Source: National Science Foundation, Division of Science Resources Statistics, 2004.
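
The baseline construction described before the table is simple enough to restate as a short calculation. The sketch below is illustrative only: the spending series is hypothetical, and anchoring the counterfactual baseline to the program’s first-year spending is our assumption.

```python
# Minimal sketch of the "extra program spending" calculation, assuming the
# baseline grows 4.3 percent per year (the 50-year average growth rate of U.S.
# R&D cited in the text) from the program's first-year level.
BASELINE_GROWTH = 0.043

def extra_program_spending(actual_by_year):
    """Total spending above the 4.3%-growth baseline over the program's life."""
    start = actual_by_year[0]  # assumption: baseline anchored at year one
    extra = 0.0
    for year, actual in enumerate(actual_by_year):
        baseline = start * (1 + BASELINE_GROWTH) ** year
        extra += actual - baseline
    return extra

# Hypothetical annual spending series, in billions of 2002 dollars.
spending = [3.0, 4.5, 6.5, 9.0, 11.5, 13.5, 15.0, 16.0, 16.5, 17.0, 17.1]
print(f"Extra program spending: about ${extra_program_spending(spending):.0f} billion")
```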

Declining energy R&D investment

Since 1980, energy R&D as a percentage of total U.S. R&D has fallen from 10 percent to 2 percent. Since the mid-1990s, both public and private sector R&D spending has been stagnant for renewable energy and energy efficiency, and has declined for fossil fuel and nuclear technology. The lack of industry investment suggests that the public sector needs to play a role in not only increasing investment directly but also correcting the market and regulatory obstacles that inhibit investment in new technology.

Sources: R. M. Wolfe, “Research and Development in Industry” (National Science Foundation, Division of Science Resources Statistics, 2004); M. Jefferson et al., “Energy Technologies for the 21st Century” (World Energy Council, 2001); R. L. Meeks, “Federal R&D Funding by Budget Function: Fiscal Years 2003-05,” NSF 05-303 (National Science Foundation, Division of Science Resources Statistics, 2004); R. Margolis and D. M. Kammen, “Underinvestment: The Energy Technology and R&D Policy Challenge,” Science 285, 690-692 (1999).

Patent data confirms problem

Patenting provides a measure of the outcomes of the innovation process. We use records of successful U.S. patent applications as a proxy for the intensity of innovative activity and find strong correlations between public R&D and patenting across a variety of energy technologies. Since the early 1980s, all three indicators—public sector R&D, private sector R&D, and patenting—exhibit consistently negative trends. The data include only U.S. patents issued to U.S. inventors. Patents are dated by their year of application to remove the effects of the lag between application and approval.
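
The comparison involved can be sketched directly. The series below are entirely hypothetical, since the underlying data cannot be reproduced here, but the calculation is the standard one for correlating an annual public R&D series with application-year patent counts:

```python
# Illustrative correlation of a public R&D series with annual patent counts,
# both indexed by application year. All numbers are hypothetical.

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length numeric series."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

public_rd = [220, 210, 195, 180, 170, 160, 150]   # millions of dollars per year
patent_counts = [48, 45, 41, 37, 36, 33, 30]      # successful U.S. applications

print(f"r = {pearson(public_rd, patent_counts):.2f}")
```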

Source: U.S. Patent and Trademark Office patent database.

Highly cited patents offer hope

In the same way that journal citations can be used as a measure of scientific importance, patent citation data can be used to identify “high-value” patents. In the energy sector, valuable patents do not occur randomly; they cluster in specific periods of productive innovation. In each year, between 5 percent and 10 percent of the patents examined qualified as high value. The drivers behind these clusters of valuable patents include R&D investment, growth in demand, and exploitation of technical opportunities. These clusters reflect both successful innovations and productive public policies, and mark opportunities to further energize emerging technologies and industries.
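
One way such “high-value” patents might be flagged from citation data is sketched below; the top-decile cutoff and the sample counts are our assumptions for illustration, not the criteria used in the studies cited.

```python
# Hedged sketch: flag the most-cited patents in a sample as "high value."
# The 10 percent cutoff and the citation counts are hypothetical.

def high_value_patents(citations_by_patent, top_fraction=0.10):
    """Return the patent IDs in the most-cited top_fraction of the sample."""
    ranked = sorted(citations_by_patent.items(), key=lambda kv: kv[1], reverse=True)
    cutoff = max(1, int(len(ranked) * top_fraction))
    return [patent_id for patent_id, _ in ranked[:cutoff]]

sample = {"patent-01": 2, "patent-02": 31, "patent-03": 7, "patent-04": 0,
          "patent-05": 18, "patent-06": 3, "patent-07": 1, "patent-08": 45,
          "patent-09": 5, "patent-10": 2}

print(high_value_patents(sample))  # the top ~10 percent most-cited patents
```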

Source: B. H. Hall, A. B. Jaffe, and M. Trajtenberg, “The NBER Patent Citation Data File: Lessons, Insights and Methodological Tools” (NBER, 2001).

The fuel cell exception

One bright spot in the nation’s energy innovation system is the increased investment and innovation in fuel cells. Despite a 17 percent drop in federal funding, patenting activity intensified by nearly an order of magnitude, from 47 patents in 1994 to 349 in 2001, with much of the activity driven by private sector investment fueled by rising stock prices. The relationship between fuel cell company stock prices and patenting is stronger than that between patenting and public R&D. The five firms shown account for 24 percent of patents from 1999 to 2004. Almost 300 firms received fuel cell patents between 1999 and 2004, reflecting participation by both small and large firms.

Source: U.S. Patent and Trademark Office patent database.

Federal energy patents seldom cited

Patent citations can be used to measure both the return on R&D investment and the health of the technology commercialization process, as patents from government research provide the basis for subsequent patents related to technology development and marketable products. The difference between the U.S. federal energy patent portfolio and all other U.S. patents is striking, with energy patents earning on average only 68 percent as many citations as the overall U.S. average from 1970 to 1997. This lack of development of government-sponsored inventions should not be surprising given the declining emphasis on innovation among private energy companies.

Sources: B. H. Hall, A. B. Jaffe, and M. Trajtenberg, “The NBER Patent Citation Data File: Lessons, Insights and Methodological Tools” (NBER, 2001); G. Nemet and D. M. Kammen, Energy Policy (2005), submitted.

In Agricultural Trade Talks, First Do No Harm

Trade negotiators at the World Trade Organization (WTO) are struggling to meet a self-imposed deadline of December 2005 to agree on the broad outlines of new trade rules that would cover global commerce in agricultural products, manufactured goods, and a wide array of services. Negotiations in each of these sectors pose tough political and economic choices for the 148 countries involved, but the key bottleneck is agriculture. Developing countries threaten to block progress on trade liberalization for manufactured goods and services unless their fears and interests in the agricultural sector are addressed—and with good reason. They are home to the almost 3 billion people who live on less than $2 a day, and most of the impoverished survive on small-scale farming. Unless negotiators from the United States and other wealthy countries make special provisions in the global trade regime to deal with trade’s impact on those most vulnerable farmers, the already poor will be made worse off and whole countries could slip backward economically. The United States and Europe have made vague commitments to treat these trade talks as a “development round” but have resisted translating those sentiments into practical proposals on agriculture. There is a clear solution: Treat all crops cultivated by small-scale farmers in developing countries as special products that are exempt from any further reductions in tariffs or increases in import quotas.

Developing countries’ worries about the WTO agricultural negotiations are well founded. Most are home to large numbers of subsistence farmers who have few other employment prospects. Global farm trade poses risks to them in two ways. First, government subsidies paid to farmers in wealthy countries allow them to sell their products on world markets at less than the cost of production, thus driving down the prices that poor farmers receive for the same crops. Second, many subsistence farmers cannot compete with global crop prices even without the distorting effects of subsidies, because their small landholdings, dependence on rain rather than irrigation, and lower technology in inputs such as seeds and machinery raise their production costs. If their governments cut the tariffs that now shield them from cheaper imported crops, the resulting lower prices they receive would reduce poor farmers’ already low incomes or drive them off the land altogether.

The greater the proportion of the workforce in agriculture, the greater is the risk of increasing poverty. In low-income countries, an average of 68% of the population makes its living through farming. Even in middle-income countries, 25% of the population is engaged in agriculture. In China, farmers make up about 50% of the total workforce; in India, about 60%. In countries with large numbers of subsistence farmers, it would be impossible for sufficient job opportunities to be created in other sectors in the short to medium term to absorb these displaced farmers.

Some developing countries’ farmers are globally competitive, and they would do well in a tariff-free world if wealthy countries would reduce domestic and export subsidies. Brazilian sugar and orange producers, West African cotton farmers, and Thai rice farmers fall into this category. However, even in countries where some farmers are competitive, there are many subsistence farmers who cannot compete. Also, in terms of employment, the internationally competitive crops often are land- and capital-intensive, not labor-intensive. Even if those sectors grow in response to trade liberalization, total employment in agriculture may decline if lower tariffs allow cheaper imports to displace the crops that are grown by the more numerous small farmers.

Agricultural imports have offsetting benefits if they drive down food prices for consumers. In terms of poverty, if there are more urban poor than rural subsistence farmers, overall poverty could decline because urban workers could afford to buy more. Worldwide, however, far more of the poor are in rural than in urban areas. Even in those few countries with more urban than rural poor, if displaced farmers migrate to urban areas and compete for scarce jobs there, urban wages might decline as well, again making the country, or at least the poor, worse off. It is the overall equilibrium of the diverse effects of trade liberalization that must be taken into account if poverty is to be reduced.

A long economic transition

The movement of workers from low-productivity, low-income subsistence farming to higher-productivity agriculture or to work in sectors such as manufacturing and services is a desirable step in the process of development. However, moving large numbers of people from basic farming to higher-productivity occupations has taken most countries decades or even centuries. Under the most favorable domestic and international conditions (for example, in Japan and South Korea in the post–World War II era), this transition was accomplished in one or two generations. However, in India, sub-Saharan Africa, and elsewhere, the process has moved much more slowly.

Trade advocates often point to the utility of trade-opening as a mechanism to drive forward the process of shifting labor and capital from less productive to more productive uses. However, historical evidence suggests that without complementary policies, trade alone is unlikely to foster the absorption of large agricultural workforces into more productive sectors. Countries that have experienced successful long-term development at tolerable transitional human costs have been those that were able to grow fast enough in other sectors to gradually absorb excess agricultural labor. This occurred in Japan and Korea and is now underway in China. When rural-to-urban migration has been driven primarily by worsening poverty in rural areas, the result has been increasing rural poverty and the growth of low-productivity informal sectors in urban areas. This pattern has been observed in many developing countries, such as Mexico.

Forcing poor farmers to compete with global agriculture will not hasten an increase in their productivity if they do not have sufficient land as well as access to credit, high-yield seed, water, technical assistance, and other necessary inputs. Also needed are adequate roads and other infrastructure to allow small farmers to get their products to market and government policies on taxation, tariffs, and health and educational services that do not discriminate against small farmers and rural areas. Cutting farm tariffs without these complementary measures will deepen, not alleviate, rural poverty.

If trade displaces subsistence farmers, will they find work elsewhere? Manufacturing, which traditionally has provided jobs for low-skilled farmers as countries modernized, currently faces a glut of capacity and labor supply at the global level, especially in labor-intensive low-skill manufacturing. In sectors such as textiles, apparel, toys, furniture, and simple electronics, the integration of the Chinese labor force into the global production system during the past decade has attracted additional investment in production facilities there, while global demand for these goods has not grown at a commensurate pace. The result has been intense competition among developing countries for available orders and generally falling prices. In the case of textiles and apparel, the end of a global quota system in 2005 has ushered in a period in which most developing countries will see their share of global markets for those goods shrink as China and a few other countries dominate the market.

In the service sector, employment opportunities created by liberalized trade are typically out of reach for poor farmers because of education, health, and mobility constraints that poor rural households commonly face. In India, for example, the dramatic increases in service-sector industries and jobs have improved prospects for some urban dwellers, mainly those with higher education, but have had virtually no impact on rural poverty and unemployment.

Poor farmers who are displaced will find alternative incomes out of necessity, because most developing countries have no unemployment schemes or other social safety nets. However, history shows that most will be forced to accept low-productivity work in the informal sector or to emigrate.

Wealthy countries have strong reasons to be concerned about subsistence farmers in developing countries: economic self-interest, basic decency, and global stability and security. The broad economic self-interest of developed countries lies in a stable and expanding global economy. Global growth is impeded in a world where the billions of people who live on less than $2 a day have little ability to purchase the goods and services produced by developed countries. With 70% of the developing world’s poor living in rural areas, improving the incomes of small-scale farmers is an essential step toward achieving a world of sustained growth and consumption. More immediately, it is unlikely that wealthy countries will be able to conclude the current trade talks and achieve their goals of exporting more goods and services unless the agricultural concerns of developing countries are sufficiently addressed.

Extreme and increasing poverty in numerous countries calls into question the decency and acceptability of the global economic system. In sub-Saharan Africa, the number of extremely poor people (those living on less than $1 per day) increased from 164 million in 1981 to 313 million in 2001, and their average income also fell, from 64 to 60 cents a day. They are overwhelmingly concentrated in subsistence agriculture.

With respect to global stability and security, a world in which poverty is rising in a significant number of countries is less politically stable. Political volatility and war in many parts of Africa and the Andean region have coincided with increases in poverty. Although there is no direct causal link between poverty and extremism, it cannot be ruled out that political instability, fueled by rising poverty, may provide opportunities for extremist groups. Countries that have seen increases in rural poverty and in which extremist groups have been active include Indonesia, Kenya, Morocco, Pakistan, and Sierra Leone.

The current negotiations

The international agreement that launched the current round of WTO trade talks in Doha, Qatar, in 2001 promised to place special emphasis on the needs of developing countries, which had gained much less than was promised from earlier trade deals. At Doha, wealthy countries agreed that agricultural issues and other sectors critical to the developing world would be given “special and differential treatment,” although they did not commit to any specific measures. In 2003, the United States and Europe attempted to cut a two-way deal that would continue to favor their own rich farmers. Developing countries walked out of the talks in Cancun, Mexico, in protest. The trade talks were resuscitated last summer when, among other concessions, poor countries extracted the following broad pledge from their wealthy counterparts: “Developing country Members will have the flexibility to designate an appropriate number of products as Special Products, based on criteria of food security, livelihood security and rural development needs. These products will be eligible for more flexible treatment.”

DEVELOPING COUNTRIES SHOULD BE ALLOWED TO DESIGNATE AS “SPECIAL PRODUCTS” ALL CROPS THAT ARE CULTIVATED BY THEIR SMALL-SCALE FARMERS.

The vagueness of the call for “more flexible treatment” reflects continued disagreement among the United States and other high-income countries over whether such products will be exempt from any further liberalization during this round of trade negotiations or will merely benefit from smaller tariff cuts or quota increases. This question and the criteria for designating special products form the crux of current negotiations that affect subsistence farming.

U.S. and other negotiators at the WTO should acknowledge the pivotal role of farming as a major source of employment in developing countries when they negotiate the criteria and treatment for “special products.” This employment intensity of agriculture distinguishes the situation of developing countries from that of the developed world, where only a minuscule share of the workforce is engaged in agriculture.

Developing countries should be allowed to designate as “special products” all crops that are cultivated by their small-scale farmers. These products should be exempted from any further reductions in tariffs or increases in import quotas. It is important that all crops cultivated by such farmers be covered by this special treatment, because the farmers’ livelihoods usually depend on a balance and interchangeability among different crops, depending on climatic and market conditions. There should be no numerical limit on the number of products that can be designated, provided that they are cultivated by small-scale farmers and farm workers.

Such an agreement would allow developing countries with large numbers of small farmers and farm workers to retain the policy flexibility necessary to avoid the displacement of agricultural livelihoods before there are alternative economic opportunities to absorb those who are displaced. This does not imply that they will necessarily use this flexibility with respect to all subsistence crops. It is entirely possible that governments might choose to apply lower tariffs or higher quotas in practice, as harvests vary, world prices change, and farmers gradually migrate out of agriculture. However, these determinations must be left to national policymakers, who are both familiar with the specific circumstances in their countrysides and democratically accountable to their own populations.

The main source of opposition to excluding subsistence farming from new liberalization measures comes from global actors who would benefit from increased access to developing countries’ markets for the products now grown there by small farmers. The self-interest of grain-trading firms, commercial farm sectors, and the governments that depend politically on the support of these corporations and sectors drives the opposition at the negotiating table.

In addition, some economists argue against continued protection for subsistence agriculture. The most common economic argument is built on three related propositions: global agricultural prices are held down by overproduction by subsidized farmers in wealthy countries; if these subsidies are eliminated, global agricultural prices will rise; and rising prices for agricultural crops will benefit small farmers. On examination, each of these propositions has limitations. Large subsidies by countries with substantial production capacity undoubtedly influence global agricultural prices. But prices are set by the combination of global supply and demand. Factors other than subsidies, including climate, also have profound and sometimes volatile effects on global supply. Global demand is also in flux. For example, the Asian financial crisis reduced demand and sent global prices down. Recent rapid growth in China has stimulated demand for some crops, such as soybeans, driving prices up. Future economic cycles and shocks can be expected to influence global prices separately from subsidies. If wealthy country farm subsidies are reduced, farmers in other countries may increase production, keeping global supply from falling. The assumption that food prices will increase and remain at higher levels is borne out neither by theory nor by the historical patterns for commodity prices, which generally have shown long-term declining trends. The proposition that the rural poor will benefit from higher world prices is also problematic. In practice, economies of scale and the role of marketing intermediaries mean that much of the benefit of any price increases that do occur will flow to large-scale farms and trading firms rather than to the rural poor.
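
The supply-and-demand point can be made concrete with a stylized calculation. The short Python sketch below is purely illustrative and uses hypothetical numbers rather than estimates for any real commodity; it shows only that when unsubsidized producers can expand output as prices rise, removing a block of subsidized supply produces a smaller price increase than the simple argument assumes.

# Illustrative only: a linear world market for a generic crop.
# All numbers are hypothetical; the point is the mechanism, not the magnitudes.

def world_price(subsidized_supply, demand_intercept=200.0, demand_slope=1.0,
                row_intercept=50.0, row_slope=4.0):
    """Clear a linear market where demand = demand_intercept - demand_slope * price
    and supply = subsidized_supply (price-insensitive) + row_intercept + row_slope * price,
    with "row" standing for rest-of-world, unsubsidized producers."""
    return (demand_intercept - row_intercept - subsidized_supply) / (demand_slope + row_slope)

before = world_price(subsidized_supply=60.0)  # with rich-country subsidies in place
after = world_price(subsidized_supply=40.0)   # subsidized output cut by one-third

print(f"world price before: {before:.1f}, after: {after:.1f}")
# The price change equals the supply cut divided by (demand_slope + row_slope),
# so the more readily other producers expand output (a larger row_slope),
# the smaller the price effect of removing subsidized supply.

In this stylized case, unsubsidized producers replace most of the withdrawn output, which is precisely the supply response described above.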

Developing countries must also factor in some degree of uncertainty as to whether wealthy countries actually will scale back subsidies. In the United States, highly subsidized farm groups continue to be a potent political force that resists any significant changes in their favorable agricultural treatment. Several of the crops of most interest to developing countries, including sugar, rice, cotton, corn, and other grains, are grown in politically pivotal states that determine which political party holds the presidency and a congressional majority. Similar political interest groups wield significant power in the European Union.

Some economists argue that failing to liberalize trade will simply prolong developing countries’ dependence on low-productivity agriculture. There is no question that long-term development and poverty alleviation require that low-productivity farmers move into higher-productivity occupations. However, that transition requires not just the phasing out of small-scale agriculture but also the contemporaneous creation of more productive job opportunities elsewhere in the economy. The proposal suggested here does not lock such countries into the agricultural sector. Rather, it gives them the autonomy to make decisions on agriculture and rural policies that are appropriate for their level of development and changing circumstances. They are free to accelerate agricultural liberalization as the sector evolves and farm labor is attracted to other sectors.

A final argument is that developing countries will gain more from the liberalization of their own markets (collectively) than from those of wealthy countries. A much-cited World Bank study released before the failed Cancun meeting made that claim. However, more recent studies, including the most recent modeling exercise by the World Bank, find the opposite result. The recent World Bank study finds that most developing countries see a greater improvement in their net food trade (exports minus imports) if rich countries liberalize their agricultural markets but developing countries do not. In that scenario, developing countries as a group would gain an additional $142 billion from improved export of agricultural products. In contrast, if developing countries join rich countries in liberalizing their agricultural markets, the biggest winner in terms of redistributed farm income will be the United States, not the developing world, and especially not the poor.


Sandra Polaski is director of the Trade, Equity and Development Project at the Carnegie Endowment in Washington, DC.

Flirting with Disaster

In the aftermath of catastrophes, it is common to discover prior indicators, missed signals, and dismissed alerts that preceded the event. Indeed, in reviewing the accident literature, there are many notable examples where such prior signals were observed but not recognized and understood for the threat that they posed. They include the 1979 Three Mile Island nuclear power plant accident (another U.S. nuclear plant had narrowly averted a similar accident two years before) and the 2000 Concorde crash (tires had burst and penetrated the fuel tanks on five previous flights). Probably the most famous examples are the two space shuttle disasters. After it was determined that an O-ring failure had doomed Challenger in 1986, it was recognized that on a number of other occasions, O-rings had partially failed. After the loss of Columbia on February 1, 2003, investigators found that insulating foam had become detached from the external tank and pierced the orbiter’s thermal protection system. Although managers at the National Aeronautics and Space Administration (NASA) had observed debris strikes on numerous prior missions and had recognized them as a potentially serious issue, the Columbia Accident Investigation Board concluded that over time, a degree of complacency about the importance of debris strikes had crept into NASA’s culture.

These so-called precursor events can in hindsight seem so conspicuous that it is hard to understand why they were not recognized and acted on. In practice, organizations and individuals face significant challenges in identifying and reporting precursors; filtering, prioritizing, and analyzing signals that represent significant threats; and tracking the implementation of corrective actions until completion. Multiple approaches have been developed, across industries and among firms within an industry, to use information about precursor events and economic incentives to improve the safety of technological systems.

Precursors occur more often than accidents, of course, and normally have minimal impacts compared to those of an actual event. As such, they are inexpensive learning opportunities for understanding what could go wrong. Rallying an organization to identify and report precursor events can unearth numerous instances of potentially serious safety gaps. In contrast, failure to solicit, capture, and benefit from precursor information simply wastes a valuable resource that can be used to improve safety.

In recent years, research has pinpointed a variety of approaches that should help inform organizations seeking to initiate precursor-reporting programs, as well as those seeking to revise existing ones. All of these approaches have advantages and drawbacks; there is no one-size-fits-all program. However, although effectively managing precursors is challenging, choosing not to use precursor information to improve safety is unacceptable in high-hazard industries. A precursor event is an opportunity from which to learn and improve safety; not actively trying to learn from these events borders on neglect.

Centralized versus decentralized management

Some industries have centralized bodies that collect, analyze, and disseminate information on precursor events, whereas others have more fragmented site-specific or company-run precursor programs that serve the same role, but for smaller organizational groups. In both cases, government agencies typically work with industry stakeholders to facilitate the establishment of suitable guidelines and institutional contexts within which these bodies can work.

An example of a centralized approach is the Accident Sequence Precursor program overseen by the U.S. Nuclear Regulatory Commission (NRC). The program screens and selects precursors, primarily from Licensee Event Reports that plant operators must submit to the NRC when certain defined precursors that could affect plant safety occur. Each event is analyzed to determine its severity and relevance to safety. The results are aggregated across plants and then shared with licensees.

The airline industry takes a decentralized approach to the issue. Its Aviation Safety Actions Programs (ASAPs) encourage employees to voluntarily report safety information that may be critical to accident prevention. These programs are based on memoranda of understanding among the relevant airlines, the Federal Aviation Administration (FAA), and applicable third parties, such as labor organizations. Although carriers operate the programs, they must adhere to federal guidelines and share generated safety information with the FAA.

GOVERNMENT MUST PLAY A ROLE IN FACILITATING AN INFORMED DIALOGUE WITH INDUSTRY STAKEHOLDERS TO ENSURE THAT PRECURSOR INFORMATION IS SUCCESSFULLY USED AND MANAGED.

Centralized programs may prove better at capturing and detecting trends across all participating organizations, because a single body analyzes reported precursor information. Furthermore, centralized programs may have more impact than decentralized programs if they are recognized as industry watchdogs.

In contrast, decentralized programs may benefit from a greater sense of ownership among participants and can lead to closer interaction among those who report events, those who analyze them, and those who implement corrective actions. Since American Airlines initiated its ASAP in 1999, 43 other carriers have followed, which is evidence of the success of this approach. Each program is carrier-run and allows specific carrier-related safety issues to be addressed, although the FAA and labor unions are able to share lessons learned more broadly.

Command and control versus voluntary reporting

A key issue to be addressed in the design of precursor-reporting programs is whether reporting should be mandatory or voluntary. Government regulators can play a key role in deciding whether certain types of precursors must be reported by organizations under their jurisdiction. Regulatory bodies can also provide legal safeguards, such as protection from prosecution, for those who report precursor events voluntarily. Decisions on these key issues will often inform subsequent decisions about how to either enforce or encourage reporting. Systems with mandated reporting of precursor events normally involve penalties, such as fines, for failure to report certain types of events. Voluntary reporting systems often require the development of trust, goodwill, and an organizational climate that encourages individuals to report.

The Transportation Recall Enhancement, Accountability, and Documentation (TREAD) Act, passed by Congress in response to problems with Firestone tires on Ford vehicles, is an example of a mandated precursor-reporting system. The act requires automobile and automobile-part manufacturers to report a host of precursor data, such as consumer complaints and warranty claims and adjustments, along with more serious accident data. Failure to comply can result in up to $15 million in civil penalties, as well as potential criminal penalties. Precursor data are collected as Early Warning Reports. Although the system is relatively new, warranty data from the reports have been publicized by a number of news channels.

The Aviation Safety Reporting System (ASRS), a centralized system run by NASA for the FAA, is an example of a voluntary precursor-reporting program. The program accepts and analyzes reports voluntarily submitted by pilots, air traffic controllers, flight attendants, mechanics, ground personnel, and others in the airline industry. Individuals are encouraged to report incidents or situations that compromise aviation safety. (Reports about actual accidents or possible criminal activities, such as pilot inebriation or drug trafficking, are not accepted.) To encourage reporting, any identifying information is removed before reports are entered into the ASRS database. The FAA’s commitment not to use information from ASRS reports as a basis for enforcement actions has been a key factor in the submission of more than 600,000 reports since the program’s inception.

The choice of mandatory versus voluntary reporting requires careful consideration. Many precursors result in no obvious damage and are often observed by only one or two individuals. Therefore, to some extent, the reporting of precursors is a voluntary action even if a mandatory reporting system is in place. Voluntary reporting can create a more positive collaborative safety culture among people working on the front line, management, and other stakeholders, such as government agencies and labor unions. When voluntary reporting is implemented, managers must actively solicit reports, take them seriously, and act on them as appropriate. It is important that employees receive feedback on how the organization has addressed precursor reports. Failure to take these steps will often result in a dwindling of submitted reports, because employees will come to believe that their reports are not truly valued by senior management. To further ensure a steady stream of reports, certain protections are often stipulated, such as reporter anonymity and immunity from disciplinary action. These protections can create a cooperative environment between the reporter and those analyzing precursor events and indicate that safety is valued as an organizational norm.

Nonetheless, in some cases mandatory reporting may be preferable, with corresponding punitive sanctions for noncompliance. More specifically, voluntary precursor programs may be difficult to establish after calamitous events, because public sentiment may push for stricter enforcement measures. In addition, if stakeholders routinely capture precursor information, there may be little to gain by asking for voluntary submission of precursor reports to a centralized agency. Both of these factors may have influenced the mandatory reporting requirements of the TREAD Act. After the 2000 congressional hearings into the Ford-Firestone problems, public sentiment may have encouraged mandated reporting. Furthermore, much of the warranty data that car manufacturers were asked to submit under the TREAD Act were already collected by stakeholders.

Broad versus specific definition of precursors

Defining what a precursor is can be surprisingly difficult because the attributes of a precursor can be subjective. Any definition involves specifying those events or conditions that are sufficiently unsafe to merit analysis. The number and quality of reports submitted also depend on how precursors are defined.

A definition that creates a specific and typically high threshold for reporting may result in fewer reports but potentially more significant findings than looser definitions that encourage reporting of events with a wider range of severities, some of which may involve only a minor loss of safety margin. If the threshold for reporting is set too high (an accident must be narrowly averted to merit reporting) or precursors are defined too precisely (to include only very specific events), some risk-significant events may not be reported.

Conversely, if the threshold for reporting is set too low, the system may be overwhelmed by false alarms or inconsequential events, especially if some corrective action or substantial analysis is required for every reported event. A low threshold can also lead to a perception that the reporting system is of little value. Setting the threshold therefore involves a tradeoff between type I (false positive) and type II (false negative) errors, which translate into too much investigation of issues that are not problematic and too little investigation of issues that are.
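
The tradeoff can be illustrated with a small simulation. The Python sketch below is hypothetical and not drawn from any actual reporting program: it assumes each event has a true severity, that observers see only a noisy version of it, and that anything above a chosen cutoff is reported and investigated.

# Hypothetical illustration of the reporting-threshold tradeoff.
import random

random.seed(42)

TRULY_SIGNIFICANT = 7.0  # events with true severity at or above this level "matter"
events = [(sev, sev + random.gauss(0, 1.5))              # (true severity, observed severity)
          for sev in (random.uniform(0, 10) for _ in range(10000))]

significant = sum(1 for true, _ in events if true >= TRULY_SIGNIFICANT)

for cutoff in (4.0, 6.0, 8.0):
    missed = sum(1 for true, obs in events
                 if true >= TRULY_SIGNIFICANT and obs < cutoff)        # type II errors
    false_alarms = sum(1 for true, obs in events
                       if true < TRULY_SIGNIFICANT and obs >= cutoff)  # type I errors
    print(f"cutoff {cutoff}: {missed} of {significant} significant events missed, "
          f"{false_alarms} benign events reported")

# A low cutoff catches nearly every significant event but floods the program
# with benign reports; a high cutoff keeps the workload manageable but lets
# risk-significant events go unreported.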

The NRC has chosen to limit the use of the term precursor to events that could lead to the meltdown of a nuclear reactor’s core and that exceed a specified level of severity. For instance, a precursor in a nuclear plant could be a complete failure of one safety system or a partial failure of two safety systems. Events of lesser severity, with low conditional probabilities of core damage, are either not considered precursors or not singled out as deserving of further analysis. To compensate for this very specific definition, the NRC and plant licensees also use other avenues for reporting, such as site-run incident-reporting programs, thereby reducing their exposure to potential type II errors.

In contrast, the Veterans Administration’s (VA’s) Patient Safety Reporting System allows the reporting of any safety-related event, including serious injuries and close calls, as well as lessons learned and ideas for safety improvements. In implementing such a system, type I errors can prove problematic because precursor program managers may face a deluge of reports ranging from the very serious to the quite benign.

Perceptions of precursor reporting

Is a large number of precursor reports indicative of a safe or unsafe system? There is no simple answer to this question. Organizations and government agencies must be particularly careful in making pronouncements that safety has increased or worsened based on a change in the number of precursor reports.

In some cases, an increase in precursor reports can indicate a greater concern with safety, because it suggests that employees are actively looking for flaws in the system. For example, during the inception of a voluntary reporting system, a rise in the number of reports suggests that employees are participating in the program. Such an increase was observed in the NASA-FAA ASRS program, established in 1976, which saw a 10-fold increase in reporting between 1983 and 1991 even though accident rates remained relatively constant.

In other cases, a large number of reports may indicate an unsafe system, particularly where precursors are detected automatically by an electronic surveillance system; there, a decrease in reported precursor events indicates safer operations. For example, the rail industry automatically monitors “signals passed at danger,” that is, instances in which a train runs past a red signal. In this system, every reported event is a clear and unambiguous departure from safe operation, so any increase in signals passed at danger over time indicates a less safe system.

In general, vigilance should be maintained whether few or many precursors are observed. When few precursors are observed, organizations must question whether they are actively soliciting and identifying relevant signals of potential danger. If many precursors are observed, organizations must determine whether they are giving appropriate attention to these reported events. For instance, in some past accidents, including the losses of the two space shuttles, repeated precursors without an accident led to the perception that the system was more robust than it actually was. This phenomenon, termed “the normalization of deviance” by Diane Vaughan, professor of sociology at Boston College, can result in implicit acceptance of higher and higher risks over time.

In either situation, whether only a few or many precursors are observed, the events can create a compelling case for resolving significant safety threats. Jim Bagian, director of the VA’s National Center for Patient Safety, astutely summarized why organizations should focus less on counting reports and more on follow-up efforts to reduce risks: “In the end, success is not about counting reports. It is about identifying vulnerabilities and precursors to problems and then formulating and implementing corrective actions. Analysis and actions are the keys, and success is manifested by changes in the culture and the workplace.”

Roles for government and industry

Engaging organizations in reporting and learning from accident precursors is a valuable aspect of safety management. Maintaining safety is an ongoing dynamic process that does not stop once a technology has been designed, built, or deployed. Indicators of future problems can and do arise despite the best engineering practices, strict adherence to standards, and ongoing maintenance. There is thus a strong need for mechanisms to capture and benefit from these indicators, with precursor programs being formal approaches to using signals successfully.

Government and industry must work to define new precursor programs and make ongoing improvements in existing ones. Government agencies, especially those overseeing high-hazard industries, must play a role in facilitating an informed dialogue with industry stakeholders to ensure that precursor information is successfully used and managed to maintain and improve system safety. Numerous issues, including reporter indemnity and the sharing of risk-related information between the private sector and the government, require government input. At the same time, the private sector must embrace precursor management as one vital approach in the ongoing pursuit of system safety. Indeed, it is the responsibility of the private sector to be an engaged partner with government to help ensure that precursor programs are defined, implemented, and successfully managed on an ongoing basis.

The Economic Imperative for Teaching with Technology

In 1997, management guru Peter Drucker predicted that in 30 years the big university campuses would be relics, driven out of existence by their inexorable increases in tuition and by competition from alternative education systems made possible by information technology (IT). Drucker overstates the case, but the nation’s major research universities, both the publics and the private nonprofits, will have to make fundamental changes in the way they provide education if they are to thrive, rather than merely survive, in the coming decades. Drucker’s alleged implement of destruction, IT, can actually be an essential tool in maintaining the strength of the research university.

Research universities suffer from what economist William Baumol has labeled a “cost disease”: The relative cost of delivering services in labor-intensive industries increases over time as other industries experience technological progress, employ labor-saving devices, and realize cost savings. In addition, highly selective universities operate in a market in which rankings and prestige have been more important than price to most prospective students. They prosper not by cutting costs but by convincing the public that the value of an education at a first-tier university is worth the ever-rising cost. Labor intensity and quality-based competition are driving up the costs of all universities, particularly the private nonprofits. As Drucker points out, this cannot continue indefinitely.
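
A stylized calculation shows how the cost disease compounds. The sketch below assumes, purely for illustration, that wages everywhere rise about 2% a year in line with productivity gains in technology-using sectors, while classroom teaching requires the same faculty hours per student year after year; the specific figures are hypothetical.

# Hypothetical illustration of Baumol's cost disease over 30 years.
WAGE_GROWTH = 0.02          # assumed economy-wide annual wage growth
PRODUCTIVITY_GROWTH = 0.02  # assumed annual productivity gain in technology-using sectors
YEARS = 30

# Unit cost is wages divided by output per hour of labor. In the progressive
# sector the two grow together and cancel; in teaching, hours per student stay
# fixed, so unit cost simply tracks wages.
teaching_cost_index = (1 + WAGE_GROWTH) ** YEARS
progressive_cost_index = ((1 + WAGE_GROWTH) / (1 + PRODUCTIVITY_GROWTH)) ** YEARS

print(f"teaching unit cost after {YEARS} years: {teaching_cost_index:.2f}")
print(f"progressive-sector unit cost after {YEARS} years: {progressive_cost_index:.2f}")
# Under these assumptions, teaching ends up costing roughly 80% more in real
# terms while the progressive sector's unit cost is unchanged, so the relative
# price of labor-intensive education keeps climbing.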

In the current university financial system, undergraduate education subsidizes research. State legislatures and the alumni of private universities are willing to contribute to the goal of educating undergraduates. The combination of tuition and outside contributions brings in more money than is needed to teach undergraduates, and the universities use the surplus to subsidize research. For example, state subsidies cover faculty salaries because faculty are expected to teach undergraduates, but then the faculty grant themselves light teaching loads so that they have time for research.

Unfortunately for the universities, this strategy is beginning to unravel. The states are cutting higher-education subsidies and at the same time trying to keep a lid on public university tuition. Meanwhile, members of Congress are voicing their concern about the relentless tuition increases at the private nonprofits. Although tuition at private universities might not seem to be a government problem, tax deductions for alumni contributions are a government subsidy, and the government does pay some of the tuition at private universities through grants and loans to individual students. These indirect government subsidies are highly regressive. At the 146 most selective and most heavily subsidized colleges and universities, only 3% of the students come from families in the lowest income quartile, whereas 74% come from families in the highest income quartile. This government largesse to the wealthiest citizens is hard to justify, and it is unlikely to continue forever. Public and private universities should be preparing for a period of declining revenue growth, and perhaps even falling revenue.

A future in which costs outpace revenues will cause severe financial stress for the universities. The publics are already beginning to feel the pain, and the day of reckoning for the private nonprofits is not far off. To preserve the quality of their research activities, the publics must begin immediately to use technology to make their teaching operations less labor-intensive and to develop new markets, such as continuing education, where lowering the labor intensity of teaching can increase revenue. For now, the private universities can wait, watch, and learn from the publics. Their strategy might differ in detail, but they too will soon have to use technology in ways that make teaching more cost-effective.

Where to begin

The first function of research universities is research. We like to say that it is teaching, but no reasonable person seeking to set up a teaching enterprise would organize it as a research university. Of course, a research university can be used for teaching purposes, in the same way that a burning barn can be used to roast a pig; it is just not cost-effective.

The cost of research is difficult to calculate exactly because it includes so many activities. Universities pay premium salaries for star researchers and then require them to do very little teaching; they build science buildings and purchase expensive equipment to help them attract outside funding; they carry researchers who are not receiving outside funding; they train doctoral students in an apprentice system that requires enormous faculty time for each student; they cover the cost of faculty who serve on expert advisory committees, edit journals, review articles for journals and proposals for funding, and take leadership roles in professional organizations; and they allow faculty to participate in an endless variety of other activities that keep the faculty happy or contribute to the prestige of the university.

Research is inherently labor-intensive because human creativity is an essential component, because the related activity of doctoral training requires extensive personal interaction over a long time, and because the need to maintain quality demands that people with specialized expertise devote extensive time to reviewing the work of their peers. Even though researchers have eagerly integrated IT into their work, it has not reduced the labor intensity of research, and we should not expect it to do so in the future. Although researchers will be able to explore new questions and to share their findings more quickly with colleagues, the critical need for human involvement will not diminish.

The bottom line is that there are no significant opportunities to substitute IT for the work of humans and no obvious ways to reduce financial support for research without also endangering the quantity and quality of the research.

The situation is more complicated for teaching, simply because there are so many kinds of teaching. Consider how little common ground exists in the teaching of laboratory science, creative writing, history, and basic accounting. Some kinds of teaching are inherently labor-intensive; others could go either way, but the labor-intensive form yields higher-quality results than does the technology-intensive form; yet others have the potential to achieve their highest quality in the technology-intensive form, although this potential is yet to be realized. For example, practical ethics is usually taught face-to-face in the honored tradition of the Socratic method. In the future, it might be less expensive and more effective for students to learn experientially by immersing themselves in virtual societies and exploring their own and other people’s responses in morally charged situations.

THERE ARE NO SIGNIFICANT OPPORTUNITIES TO SUBSTITUTE IT FOR THE WORK OF HUMANS AND NO OBVIOUS WAYS TO REDUCE FINANCIAL SUPPORT FOR RESEARCH WITHOUT ALSO ENDANGERING THE QUANTITY AND QUALITY OF THE RESEARCH.

Complicating matters further is concern about the quality of student learning. An acclaimed faculty lecturer might be very effective for the majority of students, but research has revealed that individuals differ widely in how they learn. Given the choice, one student will attend a lecture even as another makes a beeline for the Internet; one will learn interactively or in a group, whereas another prefers to work alone. No individual teacher or course plan can be the best choice for all students. The goal should not be to offer the perfect course and teacher, but to provide enough varied options that each student will be able to find the most effective learning environment. IT not only makes it possible to reduce costs; by offering more options tailored to the needs of individual students, it also has the potential to increase the quality of learning.

For the individual student, the most effective mix of labor-intensive and technology-intensive teaching will vary with the subject matter, the immediate goal of the course, and the student’s stage in life. A student might start out with a liberal arts education that prepares her cognitively and emotionally to cope with uncertainty and complexity and instills in her a meaningful philosophy of life and a sense of social responsibility. In the course of her working life, she would upgrade her skills as needed by taking continuing education courses with vocational content. To advance her career or to change direction, she might add a professional degree. In retirement, she would come full circle and once again take advantage of the university’s continuing education offerings, but now with a liberal arts bent. After several decades of work and family responsibilities, she would return to the Great Books and a reconsideration of life’s perennial questions. For this student, who may well be representative of the majority of tomorrow’s student body, we should be aiming for labor intensity in the early years of education, technology intensity in the middle, and a mix of labor and technology intensity at the end. This structure also makes economic sense, because it is in the middle where the numbers are, and technology-intensive teaching requires economies of scale to be cost-effective.

Weakening support

State funding of public research universities has always been a roller coaster, with the business cycle driving state tax receipts and thus state support for higher education. The recent decline in state support, however, is not just another transitory trough in a boom-and-bust cycle; it is the beginning of a long-term trend. State budgets are systematically moving out of higher education and into state employee pensions, Medicaid, K-12 schools, and prisons. The combination of shrinking state support, rising university costs, and a political climate in which middle-class voters resist increases in tuition or decreases in enrollment is setting the stage for a financial train wreck. The only option open to university administrators will be to hollow out the teaching enterprise.

Nominally, state budget cuts apply to undergraduate education, not to research, but we have already seen that research needs to be cross-subsidized with funds intended for undergraduates. Tight budgets will hurt research. A university can respond to state budget cuts by increasing the teaching load of its research professors and cutting the number of low-enrollment doctoral classes, or by hiring less-expensive non-tenure-track faculty who do not conduct research or train doctoral students. Either way, the research capacity of the university will suffer.

At first glance, it appears that the private universities will fare better because they are not formally constrained from increasing tuition. They could use tuition, endowment income, and alumni giving to meet their increasing costs. But eventually they too will hit the wall, in part because they are not completely insulated from public opinion. As cost increases attract more attention, the public will undoubtedly hear more about tax exemptions for gifts from wealthy alumni, federal loans and grants used to pay tuition, government research grants, and other government expenditures directed to private institutions that indulge their privileged students with a country club lifestyle and charge fees that are affordable for fewer families each year.

The private nonprofits have some breathing room for now, because taxpayers have not been paying much attention. But that could change quickly if the stock market crashes or the real estate bubble bursts, or if the media expose wasteful spending or fraudulent accounting in the Ivy League. Neither political party is a reliable ally. Democrats are unhappy with the social stratification of the elite universities. Republicans are impatient with the liberal political leanings of the faculty and have their disagreements over stem cell research and the teaching of evolution. The for-profit universities know how to make the most of political contributions and are eager to tout how efficient they are in comparison with traditional universities—and not without some truth. Add to this the rising budget deficit and an aging electorate that has less personal interest in higher education, and one can easily imagine congressional debates that portray research universities as bastions of economic privilege, intellectual elitism, and financial inefficiency that do not deserve to be subsidized. I consider it extremely unlikely that the private nonprofits will be able to increase their costs and tuition forever.

Time to act

The research universities will have to take dramatic action to meet the financial challenge, and the publics will have to act first. They should begin by exploring ways in which IT can enable them to fulfill their teaching mission more cost-effectively. This cannot happen overnight. It will require creative design, teams of diverse specialists, and a savvy business model. If most college teaching occurs on the scale of a campus theater production, teaching with technology will be closer to a Hollywood blockbuster. The university will need to develop a professional class of tenured faculty who will forge a second career for themselves in teaching with technology as they work with specialists in cognitive science, software development, and the like. They will have to reinvent themselves just as some faculty build second careers as academic administrators who work with specialists in student counseling, government lobbying, financial management, and fundraising.

Creating high-quality online content and upgrading it over the years is an expensive proposition, and cost savings will come about only if universities succeed in realizing significant economies of scale. This will require collaboration between the departments and divisions of the university, including university extension; between the campuses of a university system; across levels of state higher-education systems from flagship universities to community colleges; and among similarly situated universities such as the Big Ten universities in the Midwest. Urgently needed, too, is a rethinking of faculty hiring, promotion, and reward systems, which in their current form are impediments to interdisciplinary, interdepartmental, and cross-campus collaboration in teaching.

The public research university has the size, organization, and expertise to pull off this feat. But it can expect to encounter significant political opposition. University reform on this scale requires shifting a collective belief system, and there is nothing like a crisis to create an opening for such a shift. For this reason, I disagree with public university leaders, such as University of Texas Chancellor Mark Yudof, who call for tuition deregulation so that they can increase tuition to meet growing expenses. I believe, to the contrary, that state politicians need to keep a firm lid on public university tuition precisely because this is the only way that the public universities will be able to overcome the political obstacles to change. Costs will continue to increase. With tuition income constant and state subsidies decreasing, the public universities can tread water for a while by cannibalizing their research operations and hollowing out the teaching enterprise. But eventually they will start to sink, which will give university leaders and senior administrators the bureaucratic legitimacy to reorganize the university without losing political capital (or office). We can hope, of course, that everyone will see the writing on the wall well before then, which would allow for a timely and smooth transition to the new business model, but we cannot count on it. Hence, we need a tuition lid that will create cost pressure for change.

THE UNIVERSITY WILL NEED TO DEVELOP A PROFESSIONAL CLASS OF TENURED FACULTY WHO WILL FORGE A SECOND CAREER FOR THEMSELVES IN TEACHING WITH TECHNOLOGY AS THEY WORK WITH SPECIALISTS IN COGNITIVE SCIENCE, SOFTWARE DEVELOPMENT, AND THE LIKE.

For the immediate future, the elite private nonprofits are likely to continue to take the expensive labor-intensive route. The richest among them have the resources to finance the switch to teaching with technology, but there are no political forces inside or outside of them that might encourage them to act quickly to make the necessary organizational changes. Nonselective private nonprofit universities, which lack the endowment and alumni giving to cover their ever-spiraling costs, let alone to finance the switch to teaching with technology, face a dire future. Indeed, the large number of private universities that have been forced to close their doors recently indicates that the future has already arrived for some.

Ultimately, even the richest private universities face a future of austerity, if we define austerity not as a low absolute level of resources but as an excess of expenditures over income. Costs will continue to rise into the stratosphere; political forces will constrain the rate at which college tuition can increase; and political and market forces will force colleges to enroll a diverse student body and offer tuition discounts to minority and low-income students. There will inevitably come a time when the private nonprofits will be forced to follow the leadership of the public research university and expand their operations to increase the number of tuition-paying students and teach them with the help of labor-saving technology.

From the Hill – Fall 2005

Energy bill signed, little impact seen on oil and gas use

After several years of impasse in Congress, President Bush on August 8 signed into law a bill that includes a broad range of energy-related activities and R&D. However, according to most analysts, the new law is unlikely to have a significant near-term impact on the nation’s use of fossil fuels.

The new law seeks to achieve its goals of diversified energy production and conservation mainly through the use of tax credits: $2.8 billion for investment in clean coal technology, $3.2 billion for renewable energy production, $2.6 billion for oil and gas production, and $2.7 billion for conservation and energy efficiency.

The law also requires the annual use of 7.5 billion gallons of ethanol, a corn derivative, by 2012. The production of ethanol, an additive that helps gasoline burn more completely and thus helps reduce air pollution, is controversial because it largely benefits one producer, Archer Daniels Midland.

The law also authorizes more than $31 billion for basic science and applied energy technology research over three years—more than double what is currently authorized. However, it is far from clear that this amount would actually be appropriated. Energy research was funded at $3.6 billion in the fiscal year (FY) 2005 budget. President Bush proposed $3.46 billion for FY 2006.

In recognition of the fundamental role that science plays in fulfilling the missions of the Department of Energy, the bill creates an undersecretary of science in the department.

Two provisions that have sidetracked energy legislation in the past—energy production in the Arctic National Wildlife Refuge and the elimination of liability for manufacturers of MTBE, a gasoline additive that has caused groundwater contamination—were deleted from the bill.

In addition, in the conference committee that negotiated the final bill, several Senate-passed provisions—a mandatory climate change strategy, a requirement that utilities use a certain amount of renewable energy, and a direction to the president to find ways to cut the nation’s oil use by one million barrels a day—were eliminated. The new law does, however, include language that encourages voluntary efforts to reduce greenhouse gas emissions.

The new law arguably weakens nuclear nonproliferation controls by allowing operators of nuclear reactors in other countries to continue to receive U.S. supplies of highly enriched uranium if they produce medical isotopes. The major beneficiary would be Nordion, a Canadian company that supplies most of the medical isotopes used in the United States. A provision in a 1992 law had required that all recipients of uranium supplied by the United States convert to low-enriched uranium as soon as is technically feasible.

House seeks to shore up NASA science missions

The House on July 22 reauthorized the National Aeronautics and Space Administration (NASA), but only after taking steps to ensure that the space agency does not neglect its basic science missions.

The House bill requires appropriators to allocate money to four separate accounts: Science, Aeronautics and Education; Exploration Systems; Space Operations; and the Office of the Inspector General. Funds for administration and construction of facilities must be included in each of the four appropriations. However, NASA may transfer funds between these accounts with 30 days notice to Congress, and there is no mechanism for Congress to forbid the transfer of funds.

For fiscal year 2006, the total authorization of $16.471 billion includes $6.870 billion for Science, Aeronautics, and Education; $3.181 billion for Exploration Systems; $6.387 billion for Space Operations; and $32.4 million for the Office of the Inspector General. Within the Science, Aeronautics, and Education account, $962 million is allocated for aeronautics, $150 million for a Hubble Space Telescope servicing mission, and $24 million for the National Space Grant College and Fellowship Program.

The bill also requires NASA to place greater emphasis on education, technology transfer, safety, and microgravity science research unrelated to human space exploration. NASA is required to report to Congress on its safety management culture and compliance with the recommendations of the Columbia Accident Investigation Board, as well as its plan for identifying and sharing best practices. In addition, the bill removes the requirement that the space shuttle must be retired by 2010.

The bill must now be reconciled with a Senate bill that does not mandate a return of humans to the Moon and that recommends that use of the shuttle continue until the Crew Exploration Vehicle is ready for flight.

Frist backs stem cell legislation

Senate majority leader Bill Frist (R-TN) surprised Washington by announcing that he supports legislation that would loosen President Bush’s restrictions on controversial embryonic stem cell research. In a speech on July 29, Frist said, “The limitations put in place in 2001 will, over time, slow our ability to bring potential new treatments for certain diseases. Therefore, I believe the president’s policy should be modified.”

The Stem Cell Research Enhancement Act, which passed the House by a 238-194 vote in May, would expand the number of stem cell lines available to federally funded researchers. The Bush policy anticipated that many more stem cell lines would be available than has actually been the case.

Senate champions of the legislation hope that Frist’s support will provide enough political cover for other senators to provide a veto-proof majority. But President Bush remains opposed to the legislation, and a two-thirds majority to overturn a veto is unlikely in the House. In addition, there is still uncertainty over what the Senate legislation will include. For example, Frist said he intends to seek revisions to ensure “a strong ethical and oversight mechanism.”

House considers major reorganization of NIH

A July 19 House Energy and Commerce Committee hearing provided the first public unveiling of its draft bill for a major reorganization of the National Institutes of Health (NIH) that would give the director much more authority. The plan would consolidate the existing 27 institutes and centers that make up NIH into two entities—mission-specific research institutes and science-enabling institutes—in addition to the Office of the Director (OD). The mission-specific division would include the existing disease-related institutes, and the science-enabling division would contain the general medical science portfolio.

The plan would also create a Division of Program Coordination, Planning and Strategic Initiatives (DPCPSI) within OD, responsible for coordinating research among institutions. It would have the power to finance research directly.

The mission-specific and science-enabling divisions would be required to set aside an as-yet-to-be-determined portion of their funds (5% has been informally discussed) for a “common fund for the common good” within DPCPSI, which would use the funds to support multidisciplinary research that intersects institutional missions, a priority of NIH Director Elias Zerhouni. According to the draft language, the NIH director would be responsible for establishing an advisory council to provide recommendations on how to conduct and support transinstitutional research.

The House Energy and Commerce Committee proposal requires that the director submit a biennial report to Congress that includes an assessment of the “state of biomedical research,” a description of all activities “conducted or supported by the agencies,” a justification for the “priorities established by the agencies,” and a catalog of all research activities, in addition to other information. Members of Congress want this information as a way to keep tabs on how NIH makes decisions on investing in disease-related research. Patient/medical research groups, however, are concerned that this will be too onerous to maintain and take up too much time and money. Zerhouni has recommended a more “streamlined” approach to reporting to Congress, without going into details about what that means.

The draft bill has been met with mixed and subdued reaction. Patient organizations have expressed concern about the possibility that mission-specific institutes would lose authority to fund research. Research organizations have expressed concern about the vaguely defined roles of the coordinating office and advisory council.

Even members of Congress, who understand the need to give the director more power in spending decisions, are hesitant about embracing the plan. Rep. Henry Waxman (D-CA) said that if the agency is not broken, caution should be used before initiating any organizational changes.

Zerhouni, meanwhile, diplomatically stated that he agrees that the committee “should first and foremost carefully reconsider how the organizations of NIH can collectively and effectively support the core missions of the agencies. The challenge is to accomplish this goal through enhanced coordination and partnerships across the NIH institutes and centers while avoiding the pitfalls of centralization or top-down research. Achieving the right balance, the necessary autonomy, and diversity of approaches represented . . . is the central question.”

Congress critical of slow pace on bioweapons prevention

Concerned that the federal effort to thwart potential bioweapons attacks is not providing results quickly enough, Congress is considering bills to bolster the program, called Bioshield.

Members of Congress were sharply critical of Bioshield at a House Homeland Security Committee hearing on July 12, despite testimony from Department of Homeland Security (DHS) and Department of Health and Human Services (HHS) officials that the program was succeeding. Lawmakers were upset by the lack of a stockpile of antiviral drugs and by the time it took to identify priority diseases. At a July 14 House Government Reform Committee hearing, it was revealed that a year after Bioshield began, the government had entered into only three contracts for countermeasures.

Industry representatives at the second hearing said that the current program does not give their companies enough incentive to develop new countermeasures. They said that companies cannot determine whether they will earn a profit on a new countermeasure, because they do not know in advance how many doses of their products the government will buy.

In response, two bills to revamp Bioshield by guaranteeing the profitability of countermeasures have been introduced in the Senate. Sen. Judd Gregg (R-NH) introduced S. 3, the Biopreparedness Act of 2005, which is supported by the Republican leadership. Sen. Joseph Lieberman (D-CT) introduced S. 975, the Project BioShield II Act of 2005, which is cosponsored by Sens. Orrin Hatch (R-UT) and Sam Brownback (R-KS).

Both bills extend the term of the patent for a countermeasure by the amount of time between when the patent was issued and when the countermeasure was approved by the Food and Drug Administration. They also permit the secretary of HHS to grant an extension of 6 to 24 months to the term of an unrelated patent owned by a company that is developing a countermeasure. Both create tax credits for conducting research or building factories for countermeasures or vaccines.

Each bill also contains a number of other measures intended to improve Bioshield. These include accelerated approval of countermeasures, partial immunity for harm caused by a remedy for a pandemic or epidemic, new National Institutes of Health and DHS offices to coordinate responses to bioterrorism, grants and scholarships for bioterrorism research, new procurement procedures, and limits on the ability of states to require drug safety or warning label information not mandated by federal law.

Both bills have been criticized by consumer advocates and generic drug manufacturers for being too generous to the pharmaceutical industry.

The Case for Carbon Capture and Storage

Human activity spills about 25 billion tons of carbon dioxide (CO2) into the atmosphere every year, building up the levels of greenhouse gases that bring us ever closer to dangerous interference with Earth’s climate system. The world’s forests take up about 2 or 3 billion tons of that output annually, and the ocean absorbs 7 billion tons. Experts estimate that another 5 to 10 billion tons of this greenhouse gas—as much as 40% of human-made CO2—could be removed from the atmosphere and tucked safely away.
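
The arithmetic behind these figures, using only the numbers cited above, is simple; the sketch below is a rough tally, not a carbon-cycle model.

# Rough annual CO2 budget using only the figures cited in the text,
# in billions of tons of CO2 per year.
emissions = 25.0
forest_uptake = (2.0, 3.0)   # range given in the text
ocean_uptake = 7.0
capturable = (5.0, 10.0)     # range given in the text

left_in_atmosphere = (emissions - forest_uptake[1] - ocean_uptake,
                      emissions - forest_uptake[0] - ocean_uptake)
capturable_share = tuple(c / emissions for c in capturable)

print(f"accumulating in the atmosphere: {left_in_atmosphere[0]:.0f} to "
      f"{left_in_atmosphere[1]:.0f} billion tons per year")
print(f"potentially capturable share: {capturable_share[0]:.0%} to {capturable_share[1]:.0%}")
# Capturing 5 to 10 of the 25 billion tons emitted is 20% to 40% of
# human-made CO2, which is where the "as much as 40%" figure comes from.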

Advancing the technologies needed to capture and store CO2 is a sensible strategy. Alongside increasing renewable energy and promoting energy efficiency and conservation, CO2 capture and storage (CCS) should be an easy strategy to embrace for Americans who acknowledge that even though fossil fuels will be needed for a long time to come, the U.S. government at some point must confront the climate change problem by setting limits on CO2 emissions.

Capturing and storing CO2 is a cost-competitive and safe way to achieve large-scale reductions in emissions. CCS technology offers a unique opportunity to reconcile limits on CO2 emissions with society’s fossil fuel–dominated energy infrastructure. In order to continue using the United States’ vast domestic coal resources in a world where CO2 must be constrained, the country will need to rely on technology that can capture CO2 generated from coal-fired power plants and store it in geologic formations underground. However, the integration and scaling up of existing technologies to capture, transport, and store CO2 emitted from a full-scale power plant have not yet been demonstrated. The technical feasibility of integrating a complete CCS system with a commercial-scale power plant is not in doubt, but it is necessary to build up experience by advancing early deployment.

In addition to the environmental benefits, more aggressive support of CCS technology is critical to maintaining U.S. leadership and competitiveness in both CCS and global energy-technology markets. The United States has played a leading role in nearly all R&D related to the use of fossil fuels and has always had particular expertise in coal-based power-production technologies. Yet despite the great potential of CCS, the U.S. government is not investing in it aggressively. The current administration emphasizes the importance of advanced technologies, including CCS, in addressing climate change, but is not effectively promoting its demonstration and deployment. U.S. industry is already beginning to lose ground, because the handful of existing large-scale CCS projects are not in the United States.

The private sector has shown substantial interest in CCS and has begun investing in development and demonstration projects. But progress will be slow without government-created incentives. The challenge for the government is to harness the private sector’s interest by developing policies that reward investment in and early deployment of CCS systems.

The state of the art

Large stationary sources of CO2 are good candidates for CCS. Power plants are the largest emitters, generating 29% of CO2 emissions. The gas also can be captured from some large-scale industrial processes that release lots of CO2, including the production of iron, steel, cement, chemicals, and pulp; oil refining; natural gas processing; and synthetic fuels production. Small nonpoint sources of CO2, including emissions from vehicles, agriculture, and heating systems in buildings, are not good candidates for CCS because there currently is no way to capture the CO2 from these dispersed sources.

A complete CCS system relies on three technological components: capture, transport, and storage. Technologies that are used commercially in other sectors are available for each of these components. CO2 capture technology is already widely used in ammonia production and other industrial manufacturing processes, as well as oil refining and gas processing. CO2 gas has been transported through pipelines and injected underground for decades, most notably in west Texas, where it is used to enhance oil recovery from wells in which production is declining. In addition, some 3 to 4 million tons of CO2 per year is stored underground at several locations in other countries.

CO2 capture technologies can be divided into three categories: post-combustion or “end-of-pipe” CO2 capture, relying on chemical or physical absorption of CO2; pre-combustion CO2 capture technologies that separate CO2 from a syngas fuel (produced from coal, oil, or natural gas) before the fuel is burned; and oxyfuel combustion, in which oxygen instead of air is introduced during the combustion process to produce a relatively pure stream of CO2 emissions.

Of these options, pre-combustion capture is currently the most efficient and therefore the cheapest. In the case of coal-fired power plants, however, pre-combustion capture can be applied only when coal gasification technology is employed, such as in integrated gasification combined-cycle coal-fired power plants.

Once the CO2 is separated and captured, it must be compressed to reduce the volume of gas for transportation to an appropriate storage location. Compressing gas uses a lot of energy, so this part of the CCS system adds to the overall implementation costs. CO2 can be best transported by pipeline or ship. Ships are cost-effective only if the CO2 must be moved more than 1,000 miles or so. A network of CO2 pipelines is already being used in several areas of the United States for enhanced oil recovery, so building and operating CO2 pipelines is unlikely to pose technical or safety challenges. But regional siting limitations are possible.

Three alternative approaches to storing CO2 in a reservoir other than the atmosphere have been proposed: geologic storage, ocean storage, and aboveground land storage. Geologic storage is currently the most promising approach. It involves direct injection of CO2 into underground geologic formations, including depleted oil and gas reservoirs, unminable coal seams, and deep saline aquifers. Public opposition to the idea of injecting CO2 directly into the deep ocean has prevented some research on this option, despite the ocean’s natural capacity to store most of the CO2 currently emitted into the atmosphere. The potential of aboveground land storage is limited by the impermanence and short (decade-long) time-scale of carbon storage in biomass and the slow reaction rates associated with the formation of carbonate minerals.

The oil industry has substantial commercial experience with CO2 injection to enhance oil production. That experience provides support for exploiting the many opportunities for coupled enhanced oil recovery/CO2 storage. At Weyburn in Saskatchewan, CO2 has been injected underground since 2000 for the dual purpose of enhancing oil recovery and storing CO2. Interest in storing CO2 in other underground reservoirs, including aquifers, has been increasing rapidly. Both the private and public sectors have contributed support to a handful of underground CO2 storage projects that are not intended to enhance oil recovery. At Sleipner in the North Sea, the Norwegian national oil company Statoil has been injecting CO2 separated out from the production of natural gas into a saline aquifer since 1996. In the fall of 2004, at In Salah in Algeria, BP similarly began injecting CO2 separated from the extracted natural gas back into the gas reservoir. Several other projects of even greater scale than these existing ones are planned in Australia, Germany, and the United States in the next few years.

Risks and uncertainties

The dominant safety concern about CCS is potential leaks, both slow and rapid. Gradual and dispersed leaks will have very different effects than episodic and isolated ones. The most frightening scenario would be a large, sudden, catastrophic leak. This kind of leak could be caused by a well blowout or pipeline rupture. A sudden leak also could result from a slow leak if the CO2 is temporarily confined in the near-surface environment and then abruptly released.

CO2 is benign and nontoxic at low concentrations. But at high concentrations it can cause asphyxiation, primarily by displacing oxygen. The most noteworthy natural example of a catastrophic CO2 release was in the deep tropical Lake Nyos in Cameroon in 1986. Lake water that was gradually saturated with CO2 from volcanic vents suddenly turned over and released a huge amount of the gas; the CO2 cloud killed 1,700 people in a nearby village. An event like this can occur only in deep tropical lakes with irregular turnover, but it is conceivable that leaking CO2 could infiltrate caverns at shallow depths and then suddenly be vented to the atmosphere. CO2 is denser than air, so when released it tends to accumulate in shallow depressions. This increases the risk in confined spaces close to the ground, such as buildings and tents, more than it does in open terrain, where CO2 will diffuse quickly into the air.

Before any CO2 storage project is allowed to begin, it will have to be demonstrated to regulators that the likelihood of rapid leakage is negligible, that any gradual leakage will be extremely slow, and that monitoring and verification procedures will be able to detect potential leaks.

In addition to undermining the purpose of a storage project, CO2 leakage from an underground reservoir into the atmosphere could have local effects: ground and water displacement, groundwater contamination, and biological interactions. Monitoring technology that can measure CO2 concentrations in and around a storage location to verify effective containment of the gas has demonstrated that leakage back to the atmosphere has not been a problem in current CO2 storage projects.

Leakage from a naturally occurring underground reservoir of CO2 in Mammoth Mountain, California, provides some perspective on the potential environmental effects. The leaking led to the death of plants, soil acidification, increased mobility of heavy metals, and at least one human fatality. This site is a useful natural analog for understanding potential leakage risks, but Mammoth Mountain is situated in a seismically active area, unlike the sedimentary basins where engineered CO2 storage would take place. Still, we should be wary of undue optimism and continue to question the safety of artificial underground CO2 storage. Given potential risks and uncertainties, the implementation of effective measurement, monitoring, and verification tools and procedures will play a critical role in managing the potential leakage risks of all CO2 storage projects.

Because of the high degree of heterogeneity among different geologic formations, the current set of CO2 storage projects is not necessarily representative of other likely storage locations. More demonstration projects are needed in different geologic areas. Some preliminary work has been done to understand the global distribution of appropriate underground reservoirs. But the regional availability of storage locations has not yet been well characterized, although this will be critical in determining the extent of possible CCS deployment throughout the world. Significant expansion of the number of CO2 storage projects and continued research on the mobility of the injected CO2 (and the risks associated with its leakage) should be high priorities.

To reduce the risks associated with CO2 leaks, it is possible to choose “smart storage” sites first. Aquifers and depleted oil and gas fields under the North Sea, for example, provide a relatively safe opportunity for initial large-scale deployment. Risks associated with leakage from geologic reservoirs beneath the ocean floor are less than risks of leakage from reservoirs under land, because if the containment falters, the dissipating CO2 would diffuse into the ocean rather than reentering the atmosphere.

The United States is doing little to advance the deployment of CCS technologies, but the government did spend about $75 million in 2004 on R&D. The primary goal of the core CCS R&D program is to support technological developments that will reduce implementation costs. In addition, the Department of Energy supports—with a $100 million budget over four years—a Regional Sequestration Partnership program that stimulates region-specific research designed to determine the most suitable CCS technologies, regulations, and infrastructures, as well as to assess best management practices and public opinion issues. It will also develop a database on potential geologic storage sites. The purpose of this partnership is to bring CCS technology from the laboratory to the field-testing and validation stage.

The United States has also initiated one large-scale demonstration project, FutureGen, to investigate the technical feasibility and economic viability of integrating coal gasification technology with CCS. FutureGen, launched in 2003 with a projected budget of some $1 billion, was supposed to be the first demonstration of a commercial-scale coal-fired power plant that captures and stores CO2. But no site has yet been selected, and funding for the construction phase has not been allocated. FutureGen’s future is uncertain.

Thus, although the government is supporting some CCS R&D and has initiated planning of one large-scale CCS demonstration project, none of the current efforts provide the incentives necessary for the private sector to begin deployment.

Economics will largely determine whether CCS can compete with carbon-mitigating energy alternatives. Despite extensive commercial experience with the individual technological components in other applications, there is so far only minimal experience in integrating capture, transport, and storage into a single system, which makes current cost projections quite uncertain.

For power plants using modern coal gasification combined-cycle technology or a natural gas combined cycle, the costs of capturing, transporting, and storing carbon dioxide are estimated at about $20 to $25 per metric ton of CO2. For plants using traditional pulverized coal steam technology, these costs could double.

CO2 capture technology is itself energy-intensive and requires a substantial share of the electricity generated. Accounting for the corresponding power plant efficiency reduction (up to 30%) by expressing costs in dollars per ton of CO2 avoided, the costs of CCS in power plants range from $25 to $70 per ton of CO2 avoided. These figures imply an additional 1 or 2 cents per kilowatt hour (kWh) for new coal gasification power plants, which have a baseline cost of about 4 cents per kWh.
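For readers who want to check the arithmetic, the added generation cost follows directly from the cost per ton of CO2 avoided and the amount of CO2 a plant avoids per unit of electricity. The short sketch below is an illustration, not a calculation from the article; the avoided-emissions intensity assumed for a new coal gasification plant with roughly 90% capture is a round placeholder value.

```python
# Illustrative conversion from $/ton of CO2 avoided to added cents/kWh.
# The avoided-emissions intensity (tons of CO2 avoided per MWh) is an assumed
# placeholder typical of a new coal gasification plant with ~90% capture.

def added_cost_cents_per_kwh(dollars_per_ton_avoided, tons_avoided_per_mwh=0.65):
    """Convert a cost per ton of CO2 avoided into added cents per kWh."""
    dollars_per_mwh = dollars_per_ton_avoided * tons_avoided_per_mwh
    return dollars_per_mwh / 10.0  # $1 per MWh equals 0.1 cent per kWh

for cost in (20, 25, 70):
    print(f"${cost}/ton avoided -> about {added_cost_cents_per_kwh(cost):.1f} cents/kWh added")
```

At the low end of the cost range, this roughly reproduces the 1 to 2 cents per kWh cited above for new gasification plants; higher per-ton costs scale the added cost proportionally.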

Currently, these cost estimates are dominated by the cost of capture (including compression). If transport distances are less than a few hundred miles, the cost of capture constitutes about 80% of the total costs. The broad range of current cost estimates for CCS systems results from a high degree of variability in site-specific considerations. Among these are the particular power plant technology, transportation distance, and storage site characteristics. For all power plant alternatives and components, costs are expected to decline as new technologies are developed and as more knowledge is gained from demonstration projects and early deployment efforts.

But if the United States does not step up efforts to advance CCS deployment, the cost reductions from learning by doing will not emerge soon. In addition, growing experience and expertise in other countries could reduce U.S. competitiveness in CCS.

European leadership in CCS deployment began in 1996, when the Norwegian government instituted a tax on CO2 emissions equivalent to about $50 per ton of carbon avoided. This tax motivated Statoil to capture the CO2 emitted from its Sleipner oil and gas field and inject it into an underground aquifer. More recently, the British government has taken the lead on CCS deployment by announcing £40 million to support CO2 storage in depleting North Sea oil and gas fields. This effort was initiated a month before the July 2005 G8 summit, where its chairman, British Prime Minister Tony Blair, advocated increased governmental support for developing carbon-abatement technology as a critical part of confronting climate change.

Injecting CO2 to enhance oil and gas recovery in the North Sea is not commercially viable without government support. But given the high costs of decommissioning these production wells and the economic benefits associated with retaining jobs and extracting more oil and gas, the British have recognized an opportunity. By providing support for CO2 storage, the British are simultaneously advancing CCS technology, potentially offsetting decades of European CO2 emissions, and extending production of the oil and gas fields for several more decades. Opportunities for early deployment of CCS also exist for other European nations with declining production of oil and gas wells in the reservoirs beneath the North Sea. Denmark, the Netherlands, and Norway may initiate programs similar to those of the British soon.

ECONOMICS WILL DETERMINE WHETHER CCS CAN COMPETE WITH CARBON-MITIGATING ENERGY ALTERNATIVES.

Climate policies being implemented and refined in the European Union (EU) and other countries that signed the Kyoto agreement to reduce greenhouse gas emissions provide some incentive for CCS, so more large-scale CCS projects are imminent. The practical experience obtained through this deployment will elevate the countries and companies involved in these projects to leaders in developing, improving, and exporting CCS technologies.

The United States has recognized efforts elsewhere in the world to advance CCS, so in 2003 it initiated the Carbon Sequestration Leadership Forum, a venue for international collaboration among its 17 member states. It facilitates joint projects and communication about the latest developments in CCS technology. But the United States could be doing a lot more in the international arena than simply facilitating communication. The stakes are too high not to adopt aggressive strategies for domestic CCS deployment as well.

In the absence of large-scale domestic CCS implementation, the United States is likely to lose its current leadership position in frontier fossil-fuel expertise. Even if no meaningful climate policy materializes in the United States in the near term, the CCS market will grow. If the United States wants to maintain a leadership role in CCS technology, it will need to begin deployment soon.

Balancing R&D, demonstration, and deployment

CCS is only one set of technologies with the potential to contribute to stabilizing CO2 emissions. Two seemingly polar opinions predominate about technology for reducing emissions. One side argues persuasively that because humanity already possesses the technological know-how to begin solving the climate problem, the focus should be on implementing all existing methods and technologies. The other side argues that because we have not yet developed sufficient non–CO2-emitting energy technologies, what we need are revolutionary changes in energy production.

But in fact this is a misleading dichotomy. It exists largely because different time scales implicit in each view have not been appropriately reconciled and correctly associated with corresponding parts of the CO2 concentration stabilization path. During the first 50 years of CO2 stabilization, emissions need only be maintained at their current level. Existing technologies can be implemented to achieve this initial part of the stabilization path. Beyond 50 years, atmospheric CO2 stabilization requires steep emission reductions. We do not yet have the technologies to achieve this, so undertaking intensive energy R&D is a necessity.

Thus, the view that we have a portfolio of existing technologies that allow us to start today on the path toward stabilization actually complements the view that new technologies will be required to maintain that stabilization path beyond 50 years. The critical question now is not whether to combine R&D, demonstration, and deployment efforts, but rather how to balance limited resources among these.

Interestingly, technologies associated with CCS can fit comfortably into the spectrum of opinion about how to achieve reduced emissions. Three of the 15 potential changes involving existing technology proposed by Stephen Pacala and Robert Socolow in a 2004 Science article involve capturing CO2 and storing it in underground formations. Geologic storage of carbon released from fossil-fueled energy production is one of the potential carbon-emission–free primary energy sources identified in a 2002 Science article by Martin Hoffert et al. as needing R&D to overcome limiting deficiencies in existing technology. CCS is thus a set of technologies and concepts at varying stages of readiness.

Although the federal government should continue and increase its support for R&D to improve CCS technology and identify the best storage sites, the most critical and immediate requirement for advancing CCS technology is incentives for companies to begin early deployment. Private-sector interest in CCS is growing rapidly, demonstrating an increasing acceptance of the idea that CCS technologies are going to play a role in future energy production. Many companies in the oil and gas industry are already beginning to invest in and prepare for CCS deployment. The most recent move came in June 2005, when BP and several industry partners announced plans to build the world’s first integrated commercial-scale power plant with CO2 capture and storage. The project involves a 350-megawatt power plant in Scotland, from which CO2 will be captured and then transported to the North Sea, to be injected into underground reservoirs for storage plus enhanced oil recovery.

Most current industry projects receive some government support. The most recently initiated CO2 storage project, at In Salah in Algeria, is a joint venture of BP, Sonatrach (the Algerian national energy company), and Statoil. The Sleipner CO2 storage project has some support from the EU, and the Weyburn project is funded by the Canadian government. Because of the level of interest and the fact that there is so much to learn, various organizations have been involved in many of the existing projects.

In the United States there are similar opportunities. But without a regulatory framework to provide incentives for reducing CO2 emissions, companies remain hesitant. The high cost of CCS deployment, as well as the large scale and the complex integration of CCS technologies with existing energy infrastructures, means that the private sector will not act without clear government rules and support. Just as in Europe, government-coordinated demonstration or early-deployment CCS projects coupled with commercially motivated enhanced oil recovery would provide a cost-effective early opportunity for advancing CCS in the United States.

Unlike other emerging energy technologies that would compete with the existing fossil fuel infrastructure, CCS provides a way for fossil fuel industries to reconcile continued use of these fuels with climate change mitigation. But to guide the fossil fuel industries toward CCS, the government must set limits on CO2 emissions. The fossil fuel industries have evolved and grown under the assumption that there is no cost associated with emitting CO2. Government limits will force them to reorder their priorities and investments.

There are various approaches to regulating CO2, but the broadest support exists for a cap-and-trade system. This approach was recently endorsed by the National Commission on Energy Policy and is also the approach incorporated into the proposed McCain-Lieberman Climate Stewardship Act.

THE MOST CRITICAL AND IMMEDIATE REQUIREMENT FOR ADVANCING CCS TECHNOLOGY IS INCENTIVES FOR COMPANIES TO BEGIN EARLY DEPLOYMENT.

Incentives for early deployment are the most critical requirements for advancing CCS. Given the increasing technical feasibility of CCS, the country needs experience at the commercial scale and the associated learning by doing as soon as possible. Right now, early deployment is more important than widespread deployment, because much will be learned from an initial set of full-scale CCS operations, and those lessons will influence more advanced deployment.

Demonstration is a critical component of technology innovation, so increased funding for demonstration projects is essential to the advancement of CCS technology. Large-scale demonstration projects should be government/industry partnerships, with seed money coming from government and substantial contributions coming from participating companies. The goals and parameters of each project, as well as the mechanisms for learning from the experience and evaluation methods, should be developed jointly by government and industry. Companies should take the lead on implementation because of their expertise in large-scale projects.

The increasingly precarious FutureGen project was developed to be the first fully integrated demonstration effort, but its slow progress and uncertain future have been frustrating. This discouraging history suggests that earmarking such a large proportion of the government’s total CCS funds for one ambitious commercial-scale power plant is not an efficient use of resources. The demonstration of capture technology in existing power plants or the integration of capture technology in the design of several of the new power plants built in the next few years could be a cheaper way to achieve operational proficiency and to realize overall CCS cost reductions through learning by doing.

In addition to demonstrating CO2 capture technology in commercial-scale power plants, we need more large-scale CO2 storage demonstration projects in a diverse set of locations in order to get experience in geologically heterogeneous reservoirs, both in the United States and elsewhere. The lessons derived from these additional projects would strengthen the case for geologic storage and provide additional information on safety and environmental concerns.

Detailed regional maps of storage potential need to be developed throughout the world. Geologic assessments are needed particularly in China, India, and other coal-rich developing countries. Their rapid economic growth is associated with dramatic increases in CO2 emissions, and the potential for CCS in these countries has not yet been assessed systematically. The level of U.S. action on CCS technology will affect global understanding of the potential for CCS in developed and developing countries and could influence energy technology choices in these countries.

Although the U.S. government continues to abstain from setting limits on CO2 emissions, companies are making critical technical decisions that will have enormous impacts on future emissions. In the United States, the high prices of natural gas and the abundance of domestic coal have increased pressure to build more coal-fired power plants. If CO2 reduction regulations are instituted soon, these will encourage the deployment of technologies to capture and store CO2 from those plants. The United States could begin to reduce its global CO2 burden while at the same time becoming a leader in the rapidly emerging market for carbon-abatement technologies.

Green Urbanism

In Native to Nowhere, Timothy Beatley builds on the rich literature about the importance of the built environment in fostering and strengthening a sense of place in a community. Thanks to the efforts of urban planners, architects, politicians, and many ordinary citizens, some remarkable successes have been achieved during the past three decades. New public plazas, art museums, waterfront promenades, and retail centers have helped revitalize cities and towns across the United States, helping to boost community identity and pride. At the same time, however, powerful economic and social forces have worked to drive Americans apart, often into sprawling, anonymous communities that are indistinguishable from one another. A major challenge for policymakers today, Beatley says, is to help create civic environments that allow more Americans to become “native to somewhere.”

Beatley argues for the need for vital public places with genuine zeal, although this is fortunately tempered by the many practical examples he provides for achieving his vision. He believes that the lack of meaningful public places is one of the great crises in American life today. His thinking complements the views of Robert Putnam in Bowling Alone about today’s diminished level of social engagement and public and political involvement, as well as those of Juliet Schor, who complained in The Overworked American and The Overspent American about the long hours Americans spend working to support increasingly high levels of consumption. In Beatley’s view, the creation of better public spaces could help to slow life down and make it more meaningful. He applauds the “slow city” philosophy of a group of cities in Italy, which aims to elevate the unique and special qualities of place. “We need places that provide healthy living environments and also nourish the soul—distinctive places worthy of our loyalty and commitment, places where we feel at home, places that inspire and uplift and stimulate us and that provide social and environmental sustenance,” Beatley writes.

Beatley, the Theresa Heinz Professor of Sustainable Communities in the School of Architecture at the University of Virginia, Charlottesville, provides a workmanlike review of the literature on place-building and place-strengthening: the role of history, of vital pedestrian places, and of public art and civic celebrations. He also reviews ideas and strategies for overcoming sprawl and reducing sameness in community building. None of this discussion is original, but Beatley enlivens it with many interesting examples drawn from his extensive travels in Europe and the United States. The immense Landscape Park Duisburg-Nord in Germany, crafted around and among industrial ruins, including the superstructure of a blast furnace, demonstrates, Beatley writes, “the importance of building on the unique and particular histories of places and creatively utilizing them as strategies for overcoming the sameness that exists in so many regions and communities.” Several U.S. cities, including Portland, Oregon, and Oakland, California, have prepared pedestrian master plans, putting pedestrians on a more equal footing with cars and creating comprehensive visions of a walking city. Pedestrian places are experienced and lived at a slower pace. “I’m not sure we can truly love places that we only drive by at high speeds,” Beatley writes.

Nature and place

The best and most forward-thinking chapters in the book are the ones on the roles of natural environments and locally sustainable energy policies in strengthening commitment to place. These are areas that until relatively recently have been largely unexamined in the literature. Beatley helped to pioneer this thinking with his 2000 book Green Urbanism. In Native to Nowhere, he builds on that work.

Humans need nature, and green neighborhoods and green cities will instill both greater love of and commitment to these places, he argues. “One of the primary goals of . . . community-building should be to find ways to make urban life rich with the experiences of nature,” he writes. This will be a major challenge for planners, because modern Americans have become so physically disconnected from natural processes. Yet much original thinking has been done in recent years on applying the lessons of nature to buildings, commerce, and other aspects of society. To be sure, little of this thinking has actually been applied to the functioning of communities. Still, Beatley provides a road map of how it can be done.

Some projects now under way aim to reconnect citizens, even those in dense urban environments, to natural landscapes. A prominent example is an initiative called Chicago Wilderness, an effort of more than 160 environmental and community organizations to document the richness of the biodiversity in the Chicago region, to educate the public about it, and to work to restore and protect it. In 2004, the organization released the Chicago Green Infrastructure Vision, a strategic plan aimed at eventually protecting and restoring nearly 2 million acres of land.

Most of the projects Beatley writes about are much smaller than this, and many are being done for nakedly economic reasons in addition to their ecological benefits. Indeed, there is a growing realization that moving from a gray to a green infrastructure can actually save communities money. An innovative example is taking place in Seattle, where residential streets are being redesigned to incorporate a system of rainwater swales, trees, and native vegetation designed to handle all storm water on site, promising substantial savings in long-term maintenance costs compared with conventionally engineered storm-water collection systems.

The local-global connection

Beatley believes that any real solutions to our current environmental and sustainability challenges will by necessity be local. He does not use the now-clichéd phrase “think globally, act locally,” but this is what he means. This is not wooly thinking on his part, because he demonstrates, again through a wealth of examples, the ways in which communities all over the world are using advanced technology to reduce pollution and energy use and beginning to live in more sustainable ways. Indeed, energy, Beatley convincingly argues, represents a significant opportunity for every community to strengthen its place qualities.

Some of the innovations are far-reaching and eye-opening. For example, in the London suburb of Hackbridge, a neighborhood is being created that is designed to be energy- and carbon-neutral (it will produce as much energy as it uses and will produce no net increase in carbon emissions). “Cities ought to aspire to be energy-neutral, to produce the basic energy they need in renewable, nonpolluting ways,” Beatley writes.

Although Europe appears to be moving much faster toward place-based energy strategies than is the United States, Beatley also cites some innovative U.S. cities, notably Chicago. “Few large cities in the world have taken as many steps toward building a renewable-based energy strategy as has Chicago,” he writes. For example, the city has agreed to purchase electricity produced from renewable sources to meet 20 percent of municipal demand, and it is retrofitting 15 million square feet of public building space to be more energy-efficient.

Native to Nowhere concludes with a slightly utopian call for a new “politics of place.” “Long-term commitment to sustainable places will require a politics in which people and organizations work together to create a positive future, not simply to oppose specific projects or decisions,” he writes. In short, we need to shun NIMBY (not in my back yard) tendencies and embrace our better YIMBY (yes in my back yard) instincts. But he is certainly correct that a local politics that encourages and makes it easier for neighbors to come together to, for example, reconceive a bridge as a forest (as they did for a pedestrian bridge in London that is designed with trees and shrubs) is a positive step forward. Those with their heads in the clouds, as well as those with their feet firmly planted on the ground, will find plenty of inspiration and many practical ideas for change in Native to Nowhere.

Cartoon – Summer 2005

I’m on board for microbrews, but nanopizza is taking technology a step too far.

Forum – Summer 2005

Future oil supplies

As Robert L. Hirsch, Roger H. Bezdek, and Robert M. Wendling themselves point out, their “Peaking Oil Production: Sooner Rather Than Later?” (Issues, Spring 2005) repeats an often-heard warning. Who knows, maybe they are right this time. Right or wrong, however, the best reasons for getting off the oil roller-coaster in the 21st century have little or nothing to do with geology. Rather, national security and global warming have everything to do with the need to act, and the time is now.

The national security problem, as several bipartisan expert groups have recently pointed out, is rooted in the growth of terrorist activity. A terrorist attack at any point in the oil production and delivery system can cause major economic and political disruption. Such disruptions have been a concern for years, of course, because oil reserves are concentrated in the Middle East. What’s new and dangerous is the prospect that terrorists will make good on the threat. Unlike the members of OPEC, terrorist groups have little or no economic incentive to keep oil revenues flowing.

A related problem is that some of the cash we pay for oil gets redistributed to terrorist organizations, thus creating another risk for our national security. Even if paying this ransom persuades terrorists not to disable oil-production facilities in the Middle East, for example, it enhances their ability to cause trouble elsewhere.

Climate change is the other reason to act now. Although some uncertainties remain about the specifics, almost all scientists (and not a few business executives) agree that the likely prospect of climate change justifies taking steps soon to mitigate greenhouse gas emissions. However, it’s hard to capture the carbon dioxide produced by a moving vehicle. Therefore, the only way to reduce greenhouse gas emissions from oil burned in the transportation sector is to use less oil.

In short, even if its production doesn’t soon peak, we should start reducing our dependence on oil now. Fortunately, there is no shortage of good ideas about how to do so. As the recent National Research Council report on auto efficiency standards shows, technology is available to reduce the use of gasoline in existing internal combustion engines. Coupling ethanol produced from cellulose with hybrid engines is a very promising avenue for creating a domestic carbon-neutral fuel. Harder but still worth pursuing as a research program is the hydrogen economy.

This isn’t the doom-and-gloom scenario that Hirsch et al. conjure up, but it runs headlong into the same question: Will our leaders act now or continue to dither? If they are to be convinced, it seems to me that a reprise of the tenuous arguments about limits to the growth of oil production isn’t going to do the job. We can only hope that our national security and the plain risks of climate change are reasons enough to take direct action on oil.

ROBERT FRI

Visiting Scholar

Resources for the Future

Washington, D.C.


Robert L. Hirsch, Roger H. Bezdek, and Robert M. Wendling have produced an excellent analysis of the peak oil issue, which is attracting increasing interest in many quarters around the world, and rightfully so.

The world is coming to the end of the first half of the Age of Oil. It lasted 150 years and saw the rapid expansion of industry, transport, trade, agriculture, and financial capital, which allowed the population to expand sixfold, almost exactly in parallel with oil production. The financial capital was created by banks that lent more than they had on deposit, charging interest on it. The system worked because there was confidence that tomorrow’s expansion, fueled by cheap oil-based energy, was adequate collateral for today’s debt.

The second half of the Age of Oil now dawns and will see the decline of oil and all that depends on it. The actual decline of oil after peak is only about 2 to 3 percent per year, which could perhaps be managed if governments introduced sensible policies. One such proposal is a depletion protocol, whereby countries would cut imports to match the world depletion rate. It would have the effect of moderating world prices to prevent profiteering from shortages by oil companies and producer governments, especially in the Middle East. It would mean that the poor countries of the world could afford their minimal needs and would prevent the massive and destabilizing financial flows associated with high world prices. More important, it would force consumers to face reality as imposed by Nature. Successful efforts to cut waste, which is now running at monumental levels, and to improve efficiency could follow, as well as a move to renewable energy sources to the extent possible.

Public data on oil reserves are grossly unreliable and misunderstood, allowing a vigorous debate on the actual date of peak. Whether it comes this year, next year, or in 5 years pales in significance compared with what will follow. Perhaps the most serious risk relates to the impact on the financial system, because in effect the decline removes the prospect of economic growth that provides the collateral for today’s debt. Whereas the physical decline of oil is gradual, the perception of the post-peak decline could come in a flash and lead to panic. The major banks and investment houses are already becoming aware of the situation, but face the challenge of placing the mammoth flow of money that enters their coffers every day. In practice, they have few options but to continue the momentum of traditional practices, built on the outdated mind-sets of the past, with their principal objective being to remain competitive with each other whether the markets rise or fall. Virtually all companies quoted on the stock exchanges are overvalued to the extent that their accounts tacitly assume a business-as-usual supply of energy, essential to their businesses. In contrast, independent fund managers with more flexibility are already reacting by taking positions in commodities and renewable energies and by trying to benefit from short-term price fluctuations.

Governments are in practice unlikely to introduce appropriate policies, finding it politically easier to react than prepare. It is admittedly difficult for them to deal with an unprecedented discontinuity defying the principles of classical economics, which proclaim that supply must always match demand in an open market and that one resource seamlessly replaces another as the need arises. But failure to react to the depletion of oil as imposed by Nature could plunge the world into a second Great Depression, which might reduce world population to levels closer to those that preceded the Age of Oil. There is much at stake.

COLIN J. CAMPBELL

Oil Depletion Analysis Centre

London, England


As Robert L. Hirsch, Roger H. Bezdek, and Robert M. Wendling eloquently describe, a peak in global oil production is not a matter of if but when. Yet in order to assess how far we are from peak and to prepare ourselves for the moment in which supply begins to decline, we need to gather the most accurate available data. Unfortunately, reserve data are sorely lacking, and the global energy market suffers from the absence of an auditing mechanism to verify the accuracy of data provided by producers. The reason is that over three-quarters of the world’s oil reserves are concentrated in the hands of governments rather than public companies.

Unlike publicly traded oil companies, which are accountable to their stockholders, OPEC governments are accountable to no one. In recent years, we have seen that even public companies sometimes fail to provide accurate data on their reserves. In 2004, Shell had to write down its reserve figures by 20 percent. Government reporting standards are far poorer. In many cases, OPEC countries have inflated their reserve figures in order to win higher production quotas or attract foreign investment. In the 1980s, for example, most OPEC members doubled their reserve figures overnight, despite the fact that exploration activities in the Persian Gulf declined because of the Iran-Iraq War. These governments, many of them corrupt and dictatorial, allow no access to their field-by-field data. The data situation worsened in 2004 when Russia, the world’s second largest oil producer and not a member of OPEC, declared its reserve data a state secret.

Our ability to create a full picture of the world’s reserve base is further hindered by the fact that in recent years, exploration has been shifting from regions where oil is ubiquitous to regions with less potential. According to the 2004 World Energy Outlook of the International Energy Agency (IEA), only 12 percent of the world’s undiscovered oil and gas reserves are located in North America, yet 64 percent of the new wells drilled in 1995–2003 were drilled there. On the other hand, 51 percent of undiscovered reserves are located in the Middle East and the former Soviet Union, but only 7 percent of the new wells were drilled in those regions. The reason for that is, again, the reluctance of many producers to open their countries for exploration by foreign companies.

These issues require behavioral changes by the major producers, changes that they are unlikely to enact of their own volition. The major consuming countries should form an auditing mechanism under the auspices of the IEA and demand that OPEC countries provide full access to their reserve data. Without such information, we cannot assess our proximity to peak and therefore cannot make informed policy decisions that could mitigate the scenarios described by the authors.

ANNE KORIN

Director of Policy and Strategic Planning

Institute for the Analysis of Global Security

Washington, D.C.

www.iags.org


I was greatly impressed by “Peaking Oil Production: Sooner Rather Than Later?” because I felt it gave a very fair and balanced analysis of the issue. However, because I am quoted as projecting peak oil as occurring “after 2007,” I think your readers are entitled to know how I came to this somewhat alarming conclusion.

My analysis is based on production statistics, which although not perfect are subject to a much smaller margin of error than reserves statistics. The world is looking for a flow of oil supply, and in an important sense is less interested in the stock of oil (reserves), except insofar as these can be taken as a proxy for future flows.

By listing all the larger oil projects where peak flows are 100,000 barrels per day or more, we can gain a good idea of the upcoming production flows. The magazine I edit, Petroleum Review, publishes these listings of megaprojects at intervals, the most recent being in the April 2005 issue. We have now done it for long enough to be confident that few if any large projects have been missed. Stock exchange rules and investor relations mean that no company fails to announce new projects of any size.

Because these large projects average 5 to 6 years from discovery to first oil, and even large onshore projects take 3 to 4 years, there can be few surprises before 2010 and not much before 2012. Simply adding up future flows, however, misleads, because depletion has now reached the point where 18 major producers and 40 minor producers are in outright decline, meaning that each year they produce less than the year before.

The buyers of oil from the countries in decline are (obviously) unable to buy the production that is no longer there. Replacing this supply and meeting demand growth both constitute new demand for the countries where production is still expanding. By the end of 2004, just less than 29 percent of global production was coming from countries in decline. This meant that the 71 percent still expanding had to meet all the global demand growth as well as replace the “lost” production. In the next 2 or 3 years, the countries in decline will be joined by Denmark, Mexico, China, Brunei, Malaysia, and India. By that point, the world will be hovering on the brink, with the nearly 50 percent of production coming from countries in decline just barely offset by the 50 percent where production is still expanding.

Once the expanders cannot offset the decliners, global oil production peaks.

If global demand growth (the least predictable part of the equation) averages 2 to 2.5 percent per year (somewhat slower than in the past 2 years), then we find that supply and demand can more or less balance until 2007, but after 2008 only minimal oil demand growth can be met, and by 2010 none at all. Because major oil developments take so long to come to production, new discovery now will be too late to affect the peak.
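To make the arithmetic behind this reasoning concrete, the sketch below works through a simplified version of the decliners-versus-expanders balance. The decline rate and demand-growth figures are illustrative assumptions, not the project-by-project data described above.

```python
# Back-of-the-envelope version of the decliners-vs-expanders balance.
# All parameter values are illustrative assumptions:
# s = share of world output from countries in decline,
# d = average decline rate of that output, g = world demand growth.

def required_expander_growth(s, d=0.05, g=0.02):
    """Annual growth the still-expanding producers must deliver to replace
    the decliners' lost output and meet demand growth."""
    return (s * d + g) / (1.0 - s)

for share in (0.29, 0.40, 0.50, 0.60):
    print(f"decliners' share {share:.0%}: expanders must grow "
          f"{required_expander_growth(share):.1%} per year")
```

Under these assumptions, the required growth rises from roughly 5 percent per year at a 29 percent decliners’ share to about 9 percent once half of world output comes from countries in decline, which is the sense in which the expanders eventually cannot offset the decliners.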

The best we can hope for is that the development of known discoveries plus some new discovery can draw the peak out into an extended plateau while human ingenuity races to cope with the new realities.

CHRIS SKREBOWSKI

Editor, Petroleum Review

Energy Institute

London, England


Worries about the future supply of oil began many years ago, but recently a serious concern has arisen that an imminent peak in world oil production will be followed by decline and global economic chaos. Yet the world and the United States have weathered many past peaks, which have been mostly ignored or forgotten.

World oil production dropped 15 percent after 1980. Cause: high prices from the Iran-Iraq war. Earlier, the Arab oil embargo cut global oil production by 7 percent in 1974. Much earlier, the Great Depression cut U.S. oil demand and production by 22 percent after 1929. And even earlier, U.S. oil production dropped 44 percent after 1880, 20 percent after 1870, and 30 percent after 1862, as prices fluctuated wildly.

In every case, the law of supply and demand worked, and the “oil supply problem” solved itself. The solution wasn’t necessarily comfortable. In the United States, the net effect of the 1973–1980 oil price hikes was a doubling of unemployment, a fourfold increase in inflation, and the worst recession since 1929.

Yet future oil supplies, now as in the past, continue to be underestimated for at least three reasons: the U.S. Securities and Exchange Commission (SEC), technology, and oil price increases. The SEC effectively defines “proved oil reserves” worldwide; it is a decidedly and deliberately conservative definition. The effect, now and historically, is a serious understating of oil reserves. This is shown by a continuing growth in U.S. proved oil reserves. They were 36 billion barrels in 1983. Twenty years later, they were 31 billion barrels, an apparent drop of 5 billion barrels. In the interim, the United States produced 64 billion barrels. Simply adding these back to the 2003 reserves shows that the 1983 numbers were understated by a factor of 2.64, or 164 percent.
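That factor can be checked directly from the figures just given; the snippet below simply restates the arithmetic.

```python
# Reserve arithmetic as stated above (billions of barrels).
reserves_1983 = 36
reserves_2003 = 31
produced_1983_to_2003 = 64

# Oil that must actually have been recoverable in 1983: what was still booked
# in 2003 plus everything produced in between.
actual_1983_minimum = reserves_2003 + produced_1983_to_2003   # 95
factor = actual_1983_minimum / reserves_1983                  # ~2.64
print(f"understated by a factor of {factor:.2f}, i.e. {100 * (factor - 1):.0f} percent")
```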

Technology is a continuous process, enabling discovery and production from places and by means unimaginable in the past. The most famous of oil-supply pessimists, M. King Hubbert, predicted correctly in 1956 that U.S. oil production would peak in 1970 and then rapidly decline. But the oil potential in the deep Gulf of Mexico and in Alaska and the impact of a host of technology developments assured continued U.S. oil development. Hubbert necessarily ignored all these factors, because he based his projections only on history.
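For readers unfamiliar with Hubbert’s approach, it amounts to fitting a logistic curve to cumulative historical production, so that annual output traces a symmetric bell that peaks when roughly half of the ultimately recoverable resource has been produced. The sketch below is a generic illustration of such a curve; the resource size, peak year, and steepness are placeholder values, not Hubbert’s published estimates.

```python
import math

# Generic Hubbert-style production curve: the derivative of a logistic in
# cumulative production. URR (ultimately recoverable resource), the peak year,
# and the steepness below are placeholder values for illustration only.

def hubbert_production(year, urr=200.0, peak_year=1970, steepness=0.06):
    """Annual production (same units as urr per year) implied by a logistic
    cumulative-production curve peaking at peak_year."""
    x = math.exp(-steepness * (year - peak_year))
    return urr * steepness * x / (1.0 + x) ** 2

for year in (1950, 1960, 1970, 1980, 1990):
    print(year, round(hubbert_production(year), 2))  # symmetric about 1970
```

Because such a curve is constrained only by past production and an assumed recoverable total, it has no way to register later additions such as deepwater or Alaskan discoveries or improved recovery technology, which is precisely the limitation described above.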

Increasing oil prices also enable the exploitation of resources once thought uneconomical, and they accelerate the development and application of new technologies to find and produce oil, induce conservation by consumers, and drive the development and use of alternative fuel supplies. All help force supply and demand into balance.

ARLIE M. SKOV

Retired petroleum engineer

Santa Barbara, California

Arlie M. Skov is past president (1991) of the International Society of Petroleum Engineers and a past member (1996-2002) of the U.S. National Petroleum Council.


Syndromic surveillance

Michael A. Stoto’s “Syndromic Surveillance” (Issues, Spring 2005) catalogs numerous reasons why these early warning systems for large outbreaks of disease will disappoint the thousands of U.S. counties and cities planning to implement them. Besides suffering from high rates of expensive false positive and useless false negative results, syndromic surveillance algorithms work only when very large numbers of victims show up within a short period of time. When they produce a true positive, they don’t tell you what you’re dealing with, so they must be coupled with old-fashioned epidemiology, which trims the earliness of the early warning. Most disturbingly, when challenged by historical or simulated data, they tend to fail as often as they pass. It seems that, like missile defense but on a vastly smaller scale, syndromic surveillance has become yet another public expenditure, in the name of defending the homeland, on something that is not expected to work. The upside is that epidemiologists will have to be hired to investigate the syndromic surveillance alarms. They might find something interesting while investigating false positives and, if there were a bioterrorist attack, the extra workers would come in handy.

ELIZABETH CASMAN

Associate Research Professor

Department of Engineering and Public Policy

Carnegie Mellon University

Pittsburgh, Pennsylvania


Syndromic surveillance systems monitor pre-diagnostic data (such as work absenteeism or ambulance dispatch records) to identify, as early as possible, trends that may signal disease outbreaks caused by bioterrorist attacks. Michael A. Stoto summarizes some of the main questions being asked about syndromic surveillance and the lines of research currently being pursued in order to supply the answers.

Syndromic surveillance is a relatively young field. It was given a push by the events of 9/11, which generated a huge amount of investment in developing systems designed to detect the effects of bioterrorist attacks. The acute necessity for counterterrorism measures meant that the proof of concept often required of new technologies before public investment is made was initially ignored. This has resulted in the proliferation of surveillance systems, all with similar aims and all now understandably looking to justify several years of investment.

In the absence of any known bioterrorist attacks since 2001, it is instructive to examine the benefits of syndromic surveillance (some mentioned in Stoto’s article), which go beyond bioterrorism preparedness. They include the early detection of naturally occurring infectious disease outbreaks (the majority being viral in nature); the provision of timely epidemiological data during outbreaks; the strengthening of public health networks as a result of the follow-up of syndromic surveillance signals; the identification of secular disease trends; the flexibility to be “tweaked” in response to new threats (such as forest fires, heat waves, or poisonings from new market products); and reassurance that no attack has taken place during periods of increased risk.

There is some discussion in Stoto’s article about the distinction between the generation of surveillance signals and the use of those signals thereafter. This distinction is crucial to the ultimate success of syndromic surveillance systems. A simple and widely accepted definition of disease surveillance is that it is “information for action.” In other words, although syndromic surveillance systems excel at generating signals from noisy data sets, the challenge is how to filter out the false positives and turn the remaining true positives into effective public health action (a reduction in morbidity and mortality). To facilitate public health action, surveillance teams must wrestle with a complex array of jurisdictional, legal, and ethical issues as well as epidemiological ones. A multitude of skills are needed to do this.

Public health practitioners accustomed to traditional laboratory surveillance may resist, or simply not understand, the new technology. Education and careful preparation must be used to ensure that syndromic surveillance employees communicate effectively with colleagues in other areas, that timely but nonspecific syndromic signals are validated with more definitive laboratory analyses, and that those on the receiving end of surveillance signals know how to respond to them. It is essential that investigations of signals that turn out to be spurious, as well as those representing real outbreaks, be published so that a portfolio of best practice can be developed. Finally, there must be a recognition that the benefits of syndromic surveillance fall not only in infectious disease monitoring and biosurveillance but also in other areas, some potentially not yet realized. The discussion of this topic within this journal’s broad scientific remit is therefore welcomed.

DUNCAN COOPER

Senior Scientist

Health Protection Agency West Midlands

Birmingham, England


Michael A. Stoto confidently straddles the ambivalent line between those who extol the virtues of syndromic surveillance and those who question the resources dedicated to its development. As a public health practitioner, I find it refreshing to have the concerns articulated so clearly, yet without minimizing a new public health tool that potentially offers actionable medical information.

The author makes several points that are beyond dispute, including that many city, county, and state health agencies have begun spending substantial sums of money to develop and implement syndromic surveillance systems, and that too many false alerts generated from a system will desensitize responders to real events. Our impatience to build systems is rooted in the desire to improve our nation’s capacity to rapidly detect and respond to infectious disease scourges. Although it is difficult to argue with this logic, Stoto appropriately clarifies cavalier statements, such as the claim that syndromic surveillance is cost-effective because it relies on existing data. There is a high cost to these systems, too often without any evaluation of real benefits as compared to those of existing disease detection systems. Protection of patient privacy is another concern. Patient support for these systems could falter if their identities are not protected.

Public health departments need surveillance systems that are strategically integrated with existing public health infrastructure (many systems are outside public health, in academic institutions), and, more importantly, that are accompanied by capacity-building—meaning sufficient, permanent, and well-trained staff that can perform a range of functions. Ignoring these issues may prove to be self-destructive, as fractionated systems multiply and limited public health resources are spent to respond repeatedly to phantom alerts (or health directors begin to downplay alerts).

The offering of a possible supplementary “active syndromic surveillance” approach is appealing and strengthens the agreed-on benefits of syndromic surveillance: new opportunities for collaboration and data sharing between health department staff and hospital staff along with their academic partners, as well as reinforcement of standards-based vocabulary to achieve electronic connectivity in health care.

A stronger message could be sent. Although syndromic surveillance systems are intuitively appealing and growing in number each year, their appeal does not relieve ambitious developers or funders of the obligation to build systems that are better integrated (locally and regionally), properly evaluated, and staffed with the skilled personnel needed to detect and respond to alerts. We must strike a better balance between strengthening what is known to be helpful (existing infectious disease surveillance systems, improved disease and laboratory reporting, and distribution of lab diagnostic agents) and the exploration of new technology.

LOUISE S. GRESHAM

Senior Epidemiologist

San Diego County, Health and Human Services

Associate Research Professor, Epidemiology

Graduate School of Public Health

San Diego State University


We wholeheartedly support Michael A. Stoto’s thesis that syndromic surveillance systems should be carefully evaluated and that only through continued testing and evaluation can system performance be improved and surveillance utility assessed. It is also undeniable that there is a tradeoff between sensitivity and specificity and that it is highly unlikely that syndromic surveillance systems will detect the first or even fifth case of a newly emerging outbreak. However, syndromic surveillance does allow for population-wide health monitoring in a time frame that is not currently available in any other public health surveillance system.

The track record of reliance on physician-initiated disease reporting in public health has been mixed at best. Certainly, physician reporting of sentinel events, such as occurred with the index cases of West Nile virus in New York City in 1999 and with the first case of mail-associated anthrax in 2001, remains the backbone of acute disease outbreak detection. However, the completeness of medical provider reporting of notifiable infectious diseases to public health authorities between 1970 and 1999 varied between 9 and 99 percent, and for diseases other than AIDS, sexually transmitted diseases, and tuberculosis, it was only 49 percent. The great advantage of using routinely collected electronic data for public health surveillance is that it places no additional burden on busy clinicians. The challenge for syndromic surveillance is how to get closer to the bedside, to obtain data of greater clinical specificity, and to enable two-way communication with providers who can help investigate signals and alarms.

The solution to this problem is not a return to “active syndromic surveillance,” which would require emergency department physicians to interrupt their patient care duties to enter data manually into a standalone system. Such a process would be subject to all the difficulties and burdens of traditional reporting without its strengths, and is probably not where the field is headed.

The development of regional health information organizations and the increasing feasibility of secure, standards-based, electronic health information exchange offer the possibility of real-time syndromic surveillance and response through linkages to electronic health records. The future of public health surveillance may lie in a closer relationship with clinical information systems, rather than a step away from them.

KELLY J. HENNING

Special Advisor

Office of the Commissioner

FARZAD MOSTASHARI

Assistant Commissioner

Division of Epidemiology

New York City Department of Health and Mental Hygiene

New York, New York


Michael A. Stoto’s article performs a useful service by subjecting a trendy new technology to evidence-based analysis and discovering that it comes up short. All too often, government agencies adopt new technologies without careful testing to determine whether they actually work as advertised and do not create new problems. Tellingly, Stoto writes that one reason why state and local public health departments find syndromic surveillance systems attractive is that “personnel ceilings and freezes in some states . . . have made it difficult for health departments to hire new staff.” Yet he points out later in the article that syndromic surveillance merely alerts public health officials to possible outbreaks of disease and that “its success depends on local health departments’ ability to respond effectively.” Ironically, because epidemiological investigations are labor-intensive and syndromic surveillance can trigger false alarms, the technology may actually exacerbate the personnel shortages that motivated the purchase of the system in the first place.

JONATHAN B. TUCKER

Senior Research Fellow

Center for Nonproliferation Studies

Monterey Institute of International Studies

Washington, D.C.


Problems with nuclear power

Rather than provide a careful exploration of the future of nuclear technologies, Paul Lorenzini’s “A Second Look at Nuclear Power” (Issues, Spring 2005) rehashes the industry’s own mythical account of its stalled penetration of the U.S. energy market.

Does Lorenzini really believe that the runaway capital costs, design errors, deficient quality control, flawed project management, and regulatory fraud besetting nuclear power in the 1970s and 1980s were concocted by environmental ideologues? The cold hard fact is that the Atomic Energy Commission, reactor vendors, and utilities grossly underestimated the complexity, costs, and vulnerabilities of the first two generations of nuclear power reactors. Indeed, the United States has a comparatively safe nuclear power industry today precisely because “environmentalist” citizen interveners, aided by industry whistleblowers, fought to expose dangerous conditions and practices.

Lorenzini shows little appreciation of the fact that during the past decade, the regulatory process has been transformed in the direction he seeks: It now largely shuts out meaningful public challenges. But even with the dreaded environmentalists banished to the sidelines—and more than $65 billion in taxpayer subsidies—Wall Street shows no interest. It has rushed to finance new production lines for solar, wind, and fuel cells in recent years, but not nuclear. Why? Lorenzini never addresses this key question. He fails to acknowledge the prime barrier to construction of U.S. nuclear plants for the past three decades: their exorbitant capital costs relative to those of other energy sources.

Meanwhile, Lorenzini devotes only one paragraph to assessing weapons proliferation and terrorism risks, conceding in passing that “reprocessing generates legitimate concerns.” But this concern is immediately overridden by his assertion that a key to solving the nuclear waste isolation problem is to “reconsider the reprocessing of spent fuel.” This is disastrous advice. There is no rational economic purpose to be served by separating quantities of plutonium now, when a low-enriched uranium fuel cycle can be employed for at least the next century at a fraction of the cost—and security risk—of a “closed” plutonium cycle.

The best way to determine whether nuclear power can put a dent in global warming is to foster competition with other energy sources on a playing field that has been leveled by requiring all producers to internalize their full environmental costs. For the fossil-fueled generators, that means a carbon cap, carbon capture, and emissions standards that fully protect the environment and public health. For the nuclear industry, it means internalizing and amortizing the full costs of nuclear waste storage, disposal, security, and decommissioning, while benefiting from tradeable carbon credits arising from the deployment of new nuclear plants. For coal- and uranium-mining companies, it means ending destructive mining practices. For renewable energy sources, it means new federal and state policies for grid access allowing unfettered markets for distributed generation.

Reasonable people ought to be able to agree on at least two points. The first is that the problem of long-term underground isolation of nuclear wastes is not an insuperable technical task. It remains in the public interest to identify an appropriate site that can meet protective public health standards.

The second point is that new nuclear plants should be afforded the opportunity to compete in the marketplace under a tightening carbon cap. Whether a particular technology also should enjoy a subsidy depends on whether that subsidy serves to permanently transform a market by significantly expanding the pool of initial purchasers, driving down unit costs and enabling the technology to compete on its own, or merely perpetuates what would likely remain unprofitable once the subsidy ends.

The nuclear power industry has already enjoyed a long and very expensive sojourn at the public trough. No one has convincingly demonstrated how further subsidizing nuclear power would lead to its market transformation.

Without resolution of its waste disposal, nuclear weapons proliferation, capital cost, and security problems, such a market transformation for nuclear power is highly unlikely, with or without the megasubsidies it is now seeking. It’s not impossible, but certainly unlikely on a scale that would appreciably abate the accumulation of global warming pollution. That would require a massive mobilization of new investment capital for nuclear power on a scale that seems improbable, even in countries such as Russia, China, and India, where nuclear state socialism is alive and well.

During the next decade, increased public investment in renewable energy sources and efficiency technologies makes more sense and would have a higher near-term payoff for cutting emissions.

THOMAS COCHRAN

CHRISTOPHER PAINE

Natural Resources Defense Council

Washington, D.C.

www.nrdc.org


“A Second Look at Nuclear Power” provides an example of why the debate over nuclear power is unlikely to be resolved in the context of U.S. policy any time soon. As someone who agrees with Lorenzini’s cost/benefit arguments for greater replacement of hydrocarbon-based energy sources with forms of nuclear power generation, I find parts of his article convincing.

At the same time, the article shows symptoms of the “talk past” rather than “talk with” problem, caused by the overly crude division of the nuclear power conversation into two warring camps (science versus environmentalists) that afflicts energy policy. Using technological determinist arguments to paint nuclear energy opponents as hysterical Luddites is tired and ill thought out from the standpoint of helping promote nuclear power strategies. It is equivalent to labeling pro-nuclear arguments as being simply the product of lobbyist boosterism and “Atomic Energy for Your Business” techno-lust. In any case, it ignores the current opportunity for progress (which Lorenzini notes), given that several major environmental figures have announced a willingness to discuss nuclear options.

Put plainly, the nuclear power industry has earned a tremendous deficit of public trust and confidence. This is not the fault of mischaracterization of waste or an ignorant desire to return to a mythical nontechnological Eden. Previous incarnations of commercial nuclear power technologies largely overpromised and failed to deliver. Missteps were made in ensuring that the public perceptions of regulatory independence and vigilance would be fostered and that operators could be trusted to behave honorably. There are unfortunate, but rational, reasons to doubt industry commitment to full disclosure of hazards, as well as process fairness in siting past generations of reactors and waste facilities. The industry has also displayed overconfidence in engineering ability and a desire to hide behind the skirts of government secrecy and subsidies.

Until friends of nuclear power decide to enter into vibrant, open debate with those who disagree, I fear this problem of public policy will continue to be diagnosed as a struggle over rationality versus unreason. Are there groups who attempt to exploit emotions to promote their own interests instead of “the facts”? Of course: In the past, both the industry and its opposition have fallen short of the pure pursuit of truth. Lorenzini raises a number of excellent points as to why nuclear power must be part of future energy choices. However, progress depends on its supporters dealing with the real question: For many in our democracy, it does not matter what advantages we claim for nuclear power if they do not trust the messenger; after all, no one cares what diseases the snake-oil medicine salesman promises to cure.

ANDREW KOEHLER

Technical Staff Member

Los Alamos National Laboratory

Los Alamos, New Mexico


The length to which Paul Lorenzini goes to selectively use data to support his position in “A Second Look at Nuclear Power” is astonishing. Some real numbers on renewables are called for. It is true that the total contribution from renewables has not increased significantly in the United States during the past 30 years, but that is because of the size and stability of the fraction due to wood and hydropower. It is also true that the International Energy Agency has predicted slow growth of wind, solar, and biofuels, but they have also been predicting stable oil and gas prices for the past 7 years while these have been increasing at an average rate of 28 percent annually.

Biodiesel use in the United States has been doubling every five quarters for the past 2 years, and that growth rate is projected to continue for at least the next 3 years. Wind energy has been growing at roughly 30 percent annually for the past 5 years, both in the United States and globally, and those growth rates are expected to continue for at least the next decade. Solar has seen annual global growth of about 30 percent for the past decade, and General Electric is betting hundreds of millions of dollars that that growth rate will continue or increase over the coming decade. The cost of the enzyme cellulase, needed to produce cellulosic ethanol, has decreased by more than an order of magnitude in the past 3 years. The energy balance for corn ethanol currently exceeds 1.7, and that for cellulosic ethanol from waste woody materials may soon exceed 10. Brazil is producing more than 5 billion gallons of ethanol annually at a cost of about $0.60 per gallon, and the annual growth rate of ethanol production there is projected to remain above 20 percent for at least the next 5 years.
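
The compound-growth arithmetic behind figures like these is easy to check. The short sketch below, which is purely illustrative and introduces no new data or forecasts, converts a doubling time into an annual rate and compounds annual rates over several years:

    # Compound-growth arithmetic behind the figures cited above.
    # Illustrative only; no new data or forecasts are introduced here.

    def annual_rate_from_doubling(doubling_time_years: float) -> float:
        """Annual growth rate implied by a given doubling time (in years)."""
        return 2 ** (1 / doubling_time_years) - 1

    def growth_factor(annual_rate: float, years: float) -> float:
        """Total multiple after compounding annual_rate for the given number of years."""
        return (1 + annual_rate) ** years

    # Doubling every five quarters (1.25 years) implies roughly 74% annual growth.
    print(f"{annual_rate_from_doubling(1.25):.0%} per year")

    # 30% annual growth sustained for a decade is roughly a 14-fold increase.
    print(f"{growth_factor(0.30, 10):.1f}x over 10 years")

    # 28% annual increases over 7 years compound to roughly a 5.6-fold rise.
    print(f"{growth_factor(0.28, 7):.1f}x over 7 years")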

On the subject of nuclear power, only a small minority of scientists would take issue with Lorenzini on the subjects of regulation and waste storage, but spent fuel reprocessing and breeder reactors are not nearly as straightforward as he implies. In fact, despite four decades of worldwide efforts, the viability of the breeder reactor cycle has not yet been demonstrated; and without it, nuclear energy is but a flash in the pan. The International Atomic Energy Agency concludes that the total global uranium reserves (5 million tons) of usable quality are sufficient to sustain nuclear power plants, with a 2 percent annual growth rate, only through 2040. Others have recently concluded that even with near-zero growth, the high-grade ores (those greater than 0.15 percent uranium) will be depleted within 25 years. Moreover, 15 years after the high-grade ores are depleted, we’ll be into the low-grade ores (below 0.02 percent uranium), which may have negative energy balance and result in more carbon dioxide emissions (during ore refining, processing, disposal, etc.) than would be produced by gas-fired power plants. The price of natural uranium has tripled during the past 4 years, and it seems likely that its price will triple again in the next 6 to 10 years, as the finitude of this resource becomes more widely appreciated by those controlling the mines.

F. DAVID DOTY

President

Doty Scientific

Columbia, South Carolina


Childhood obesity

We have failed our children, pure and simple. Nine million just in America are heavy enough to be in immediate health danger. And obesity is but one of many problems brought on by poor diet and lack of physical activity. For reasons woven deeply into the economics, social culture, and politics of our nation, unhealthy food prevails over healthy choices because it is more available and convenient, better-tasting, heavily marketed, and affordable. It would be difficult to design worse conditions.

A few shining events offer up hope. One is the release of Preventing Childhood Obesity by the Institute of Medicine (IOM). “Preventing Childhood Obesity” by Jeffrey P. Koplan, Catharyn T. Liverman, and Vivica I. Kraak (Issues, Spring 2005), all of whom are key figures in the IOM report, captures much of the strong language and bold recommendations of the report. An authoritative report like this is needed to shake the nation’s complacency and to pressure institutions like government and business to make changes that matter.

Profound changes are needed in many sectors. I would begin with government. Removing nutrition policy from the U.S. Department of Agriculture is essential, given the conflict of interest with its mission to promote food. The Centers for Disease Control and Prevention would be the most logical place for a new home, although substantial funding is necessary to have any hope of competing with industry. Agriculture subsidies and international trade policies must be established with health as a consideration. The nation cannot afford a repeat of the tobacco history, where the federal government was very slow to react because of industry influence. Many other institutional changes are necessary as well, with food marketing to children and foods in schools as prime examples.

Much can also occur at the grassroots, but programs must be supported, nurtured, evaluated, and then disseminated. Considerable creativity exists in communities. Examples are programs to create healthier food environments in schools, community gardens in inner cities, and movements to build walking and biking trails.

In January 2005, two government agencies released the newest iteration of the Dietary Guidelines for Americans. It is difficult to find a single adult, much less a child, who could list the guidelines. Yet vast numbers could recall that one goes cuckoo for Cocoa Puffs. Such is the nutrition education of our children.

We step in time and again to protect the health and well-being of children. Mandatory immunization, required safety restraints in cars, and restrictions on tobacco and alcohol promotion begin the list. In each case, there is a perceived benefit that outweighs the costs, including reservations about big government and perceptions that parents are being told how to do their job. Each citizen must decide whether the toll taken by poor diet and inactivity warrants a similar protective philosophy, but in so doing must consider the high cost that will be visited on America’s children by failing to act.

KELLY D. BROWNELL

Chair and Professor of Psychology

Director, Rudd Center for Food Policy and Obesity

Yale University

New Haven, Connecticut


State science policy

Knowledge of science and technology (S&T) has become critical to public policymaking. With increasing incorporation of technology in society, there is hardly a public policy decision that does not have some element rooted in scientific or technical knowledge. Unfortunately, at the state and local level, there are very few mechanisms for bringing S&T expertise to the policymaking process. For states that regard themselves as the high-tech centers of the world, this is a contradictory and economically unhealthy situation.

Heather Barbour raises this point in “The View from California” (Issues, Spring 2005) and emphasizes the scarcity of politically neutral, or at least politically balanced, S&T policy expertise in the state government. I could not agree more with the observation, and I join in expressing a serious concern that good policymaking is almost impossible without S&T experts being deeply involved in providing sound advice during the legislative process. Bruce Alberts, in his final annual address to the National Academy of Sciences, remarked that many states are “no better than developing nations in their ability to harness science advice.”

Of all the high-technology states, it is in California that the issue is being addressed most aggressively and effectively. Currently, the California Council on Science and Technology (CCST) provides a framework for a solution, and with sustained commitment from the government, CCST could be the solution.

CCST is the state analog of the National Academies for California and for the past decade has been bridging the gulf between S&T experts and the state government. This is not an easy process. The two cultures are so disparate that there is often a fundamental communication gap to cross before any meaningful dialogue can occur. Policymakers want clear-cut decisions that will solve problems during their brief stay in office. Some typical responses to S&T advice are: “Don’t slow me down with facts from another study,” “Just give me the bumper-sticker version,” or “Whose budget will be affected by how much?” S&T experts, on the other hand, look at data, at possibilities, at both sides, at pros and cons. Although often at home in Washington, D.C., they do not usually have experience with the state government. More often than not, when newly appointed CCST council members meet in the state capital, it is their first trip there. In spite of these and other challenges, CCST, with a decade of experience in addressing these issues, has worked successfully with the state government on genetically modified foods, renewable energy, nanotechnology, hydrogen vehicles, homeland security, and the technical workforce pipeline.

A look at the agenda items from the latest CCST meeting (May 6 and 7, 2005, in Sacramento) shows the list of issues that CCST addressed:

  • Who owns intellectual property created from research funded by the state?
  • What are the ethical and social implications of stem cell research?
  • How are we going to produce and retain enough science and math teachers to meet our needs?
  • How can our Department of Energy and NASA labs be best used for getting Homeland Security technology into the hands of first responders?
  • How can our health care information technology system better serve our citizens?

Already there is, or soon will be, state legislation in the works on all of these topics. Effective, technically and economically sound policies in these areas are important to California and to us all. CCST’s relationship with public policymakers has been excellent overall, but, because of the organization’s limited resources, insufficient to meet the increasing demands of an ever more technological society. This situation has not been missed by the National Academies, who are currently strengthening CCST capabilities while considering CCST counterparts in other states. I agree that the time is ripe for California and other states to build on the constructive steps already taken toward effective and robust S&T advising.

SUSAN HACKWOOD

Executive Director

California Council on Science and Technology

Sacramento, California


Heather Barbour’s article rehearses an oft-repeated lament about the lack of capacity of state governments—even of the mega-states like California, New York, Florida, and Texas—to grapple with the substantive and procedural complexities of science and technology (S&T) policy. What may have been lost behind the host of Barbour’s generally appropriate recommendations for capacity-building is the more subtle point about policymaking by the ballot box, exemplified by California’s stem cell initiative: that such “[c]omplex issues . . . are not suited to up or down votes.” Out of context, this claim might be taken as critical of any efforts to increase the public role in S&T policymaking. But Barbour’s point is that the stem cell initiative included such a large array of sub-issues, including the constitutional amendment, the financing scheme, the governing scheme, etc., that it was inappropriate to decide all of them with a single vote. Since the landslide, advocates of the initiative have deployed public support as a shield against critics of its governance aspects. But the complex nature of the initiative logically yields no such conclusion. Although we know that the voters of California prefer the entire pro-stem cell menu to the alternative, we have little idea what they would choose if given the options a la carte.

For those of us interested in the more complete democratization of S&T policymaking, Barbour’s list of reforms will be helpful to the extent that they do not graft a corporatist structure atop California’s science politics. But as Barbour only hints, the California initiative as implemented for stem cells was a distorted reflection of democracy.

DAVID H. GUSTON

Professor of Political Science

Associate Director, Consortium for Science, Policy, and Outcomes

Arizona State University

Tempe, Arizona


Genetics and public health

In “Genomics and Public Health” (Issues, Spring 2005), Gilbert S. Omenn discusses the crucial role of public health sciences in realizing “the vision of predictive, personalized and preventive health care and community health services.” Here, I expand on three critical additional processes needed to integrate emerging genomic knowledge to produce population health benefits: 1) systematic integration of information on gene/disease associations, 2) development of evidence-based processes to assess the value of genomic information for health practice, and 3) development of adequate public health capacity in genomics.

Advances in genomics have inspired numerous case-control studies of genes and disease, as well as several large longitudinal studies of groups and entire populations, or biobanks. Collaboration will be crucial to integrate findings from many studies, minimize false alarms, and increase statistical power to detect gene/environment interactions, especially for rarer health outcomes. No one study will have adequate power to detect gene/environment interactions for numerous gene variants. Appropriate meta-analysis will increase the chance of finding true associations that are of relevance to public health.

To develop a systematic approach to the integration of epidemiologic data on human genes, in 1998, the Centers for Disease Control and Prevention (CDC) and many partners launched the Human Genome Epidemiology Network (HuGE Net) to develop a global knowledge base on the impact of human genome variation on population health. HuGE Net develops and applies systematic approaches to build the global knowledge base on genes and diseases. As of April 2005, the network has more than 700 collaborators in more than 40 different countries. Its Web site features 35 systematic reviews of specific gene/disease associations and an online database of more than 15,000 gene/disease association studies. The database can be searched by gene, health outcomes, and risk factors. Because of the tendency for publication bias, HuGE Net is developing a systematic process for pooling published and unpublished data through its work with more than 20 disease-specific consortia worldwide.

In addition, the integration of information from multiple disciplines is needed to determine the added value of genomic information, beyond current health practices. The recent surge in direct-to-consumer marketing of genetic tests such as genomic profiles for susceptibility to cardiovascular disease and bone health further demonstrates the need for evidence-based assessment of genomic applications in population health. In partnership with many organizations, the CDC has recently established an independent working group to evaluate and summarize evidence on genomic tests and identify gaps in knowledge to stimulate further research. We hope that this and similar efforts will allow better validation of genetic tests for health practice.

Finally, the use of genomic information in practice and programs requires a competent workforce, a robust health system that can address health disparities, and an informed public, all crucial functions of the public health system. Although the Institute of Medicine (IOM) identified genomics as one of the eight cross-cutting priorities for the training of all public health professionals, a survey of schools of public health shows that only 15 percent of the schools require genomics to be part of their core curriculum, the lowest figure for all eight cross-cutting areas. The CDC continues to promote the integration of genomics across all public health functions, including training and workforce development. Examples include the development of public health workforce competencies in genomics, the establishment of Centers for Genomics and Public Health at schools of public health, and the funding of state health departments.

In a 2005 report, the IOM defined public health genomics as “an emerging field that assesses the impact of genes and their interaction with behavior, diet and the environment on the population’s health.” According to the IOM report, the functions of this field are to “accumulate data on the relationships between genetic traits and diseases across populations; use this information to develop strategies to promote health and prevent disease in populations; and target and evaluate population-based interventions.” With the emerging contributions of the public health sciences to the study of genomic data, the integration of such information from multiple studies both within disciplines and across disciplines is a crucial methodological challenge that will need to be addressed before our society can reap the health benefits of the Human Genome Project in the 21st century.

MUIN J. KHOURY

Director

Office of Genomics and Disease Prevention

Centers for Disease Control and Prevention

Atlanta, Georgia


Public health and the law

Lawrence O. Gostin’s “Law and the Public’s Health” (Issues, Spring 2005) is of significant interest to public health authorities. Gostin presents the framework of public health law, describing the profound impact it had on realizing major public health achievements in the 20th century, before the field of population health “withered away” at the end of the century. Revitalization of the field of public health law is needed now, as we witness unprecedented changes in public health work, particularly in the expansion of public health preparedness for bioterrorism and emerging infectious diseases and in increased awareness of health status disparities and chronic disease burden attributable to lifestyle and behaviors. Fortunately for the public health workforce, Gostin’s concise and salient assessment of public health law and its tools shows promise that once again law can play an important role in improving the public’s health.

The events surrounding September 11, 2001, and the effects of the anthrax attacks (both real and suspected hazards) catalyzed public health. Public health preparedness became a priority for health officials across the country, with an emphasis on surveillance and planning for procedures for mandated isolation and quarantine—measures that have been largely unused during the past 100 years. Not only were public health practitioners unprepared for these new roles, but the laws needed to provide the framework for these actions were outdated and inconsistent and varied by jurisdiction. Gostin eloquently outlines the importance of surveillance and quarantine/isolation while juxtaposing concerns for personal privacy and liberty.

These issues are not just theoretical exercises in legal reasoning or jurisprudence. The recent outbreak of severe acute respiratory syndrome (SARS) in Toronto is a case in point. Individuals with known contact with infectious patients were asked to remain at home (voluntary quarantine). Compliance was monitored, and involuntary measures were used when necessary. In New York City, the involuntary isolation of a traveler from Asia suspected of having SARS generated widespread media attention.

Unfortunately, our processes to perform these functions are rusty. In reviewing the history of our jurisdiction, Syracuse, New York, we found that the only example of public health orders to control an epidemic (other than in cases involving tuberculosis) was the influenza pandemic of 1918. Even then, the process was not clearly established. When a health officer issued an order to prevent public assembly of large groups, the mayor was asked by the press how it would be enforced. He simply answered “we have the authority.” Despite the order, public assemblies continued; no record of enforcement action can be found. Almost 100 years later, we are in discussion with the chief administrative judge of the district to again examine the authority available and to draft protocols needed to implement mandated measures should they become necessary because of bioterrorism or the next pandemic flu.

Gostin, in this article and in other work, speaks to these issues. We eagerly await the upcoming revision of his book Public Health Law: Power, Duty, Restraint, which will further detail the complexities of protecting the public’s health through legal interventions.

LLOYD F. NOVICK

Commissioner of Health

CYNTHIA B. MORROW

Director of Preventive Services

Onondaga County

Syracuse, New York


Reducing teen pregnancy

I appreciate the many issues touched on by Sarah S. Brown in her interview with Issues in the Spring 2005 issue. As a former National Campaign staffer, I think the organization plays a significant role in bringing needed attention to teen pregnancy and the many lives affected by it. In particular, in the area of research, the campaign’s work is stellar.

However, the central policy message as articulated by Brown is that on one side of the policy divide are those who push safer sex and contraception and on the other are those who push abstinence only. This simply is not the case. Those of us who believe in a comprehensive approach do not choose either approach in isolation, but embrace both and encourage young people to abstain, abstain, abstain (emphasis intentional), but when they do become sexually active, to be responsible and protect themselves and their partners by using contraception. As Brown says, “there is no need to choose between the two.”

On the side of those who advocate abstinence only, however, pragmatism and moderation are completely absent from the discussion. When the issues of sexual activity and use of contraceptives are addressed, the worst possible picture is painted. One program tells young people that sex outside of marriage is likely to lead to suicide. Others repeatedly cite a resoundingly refuted study that says condoms fail more than 30 percent of the time, and others compare condom use to Russian roulette. Their bottom-line message: Condoms do not work.

The persistence of this extremism is all the more puzzling when the research about the efficacy of comprehensive programs is taken into account. According to research published by the National Campaign and authored by the esteemed researcher Doug Kirby, several studies of comprehensive programs say that they “delay the onset of sex, reduce the frequency of sex, reduce the number of sexual partners among teens, or increase the use of condoms and other forms of contraception” (Emerging Answers, 2001). In other words, there is no discernible downside to providing instruction that is comprehensive in approach. Every desired behavior is boosted by a comprehensive approach.

The same cannot be said for programs that focus only on abstinence. Eleven states have now conducted evaluations of their programs, and all of them have arrived at a similar conclusion: These programs have no long-term beneficial impact on young people’s behavior. Recently published data on the effect of virginity pledges—a key component of most abstinence-only-until-marriage programs—actually found negative outcomes. Among these outcomes are decreased use of contraception when sex occurs and an increase in oral and anal sex among pledgers who believe this type of “non-sex” keeps their virginity intact. Finally, nearly 90 percent of pledgers will still go on to have sex before they marry.

Sadly, the ongoing debate has much to do with politics and very little to do with the health of our young people. And unfortunately, the central policy position of the National Campaign exacerbates the situation by mischaracterizing the approach of comprehensive sexuality education proponents when, in fact, our message is the same as theirs: “There is no need to choose between the two.”

WILLIAM A. SMITH

Vice President for Public Policy

The Sexuality Information and Education Council of the United States

Washington, D.C.

From the Hill – Summer 2005

House bucks Bush, votes to relax stem cell restrictions

Bucking the opposition of President Bush, the House of Representatives in May passed the Stem Cell Research Enhancement Act (H.R. 810), which would relax federal restrictions on embryonic stem cell research by allowing federally funded research to use newly created stem cell lines rather than just those created before the administration’s policy was announced on August 9, 2001.

The bill, championed by Representatives Mike Castle (R-Del.) and Diana DeGette (D-Colo.), comes in the wake of complaints by researchers that fewer than two dozen cell lines are currently available and that many of those are contaminated with animal cultures. By allowing access to new uncontaminated lines derived from excess embryos from in vitro fertilization clinics, the bill aims to advance research into potential therapies for a host of diseases.

The 238-194 vote, supported by 50 Republicans, fell well short of the two-thirds majority that would be needed to overturn a presidential veto. In the Senate, a companion bill has been introduced by Senators Orrin Hatch (R-Utah) and Tom Harkin (D-Iowa). Sen. Sam Brownback (R-Kan.) has threatened to filibuster the bill if it reaches the floor.

The debate on the House floor divided Republicans and Democrats alike over the scientific promise of stem cell research and the ethical dilemma of an area of research that some view as morally abhorrent. Proponents of embryonic stem cell research argued that more human embryos are created than are needed during the course of an in vitro fertility treatment and that the excess embryos are often simply discarded. Rep. Jim Langevin (D-R.I.), who is partly paralyzed, stated, “What could be more life-affirming than using what would otherwise be discarded to save, extend, and improve lives?”

Opponents objected to this argument, however, saying that such research would still condone the destruction of embryos. They argued that research on stem cells obtained from adults is just as promising and renders embryonic stem cell research unnecessary. Most scientists, however, dispute this claim. Although adult stem cells have potential, scientists say, they also have severe limitations and drawbacks.

Bush plan for earth-penetrating nuclear weapon hits roadblock

Despite a continuing strong push from the Bush administration, the House of Representatives is providing a cool reception for a program to study earth-penetrating nuclear weapons. The recently approved House fiscal year (FY) 2006 Energy and Water Development appropriations bill does not include money for the administration’s Robust Nuclear Earth Penetrator (RNEP), also known as a “bunker buster,” which is designed to put at risk deeply buried targets beyond the range of conventional weapons. In addition, the House-passed version of the FY 2006 Defense Authorization bill removed the nuclear component from a study of earth-penetrating weapons and shifted the proposed work to the Department of Defense (DOD) from the Department of Energy.

Meanwhile, a recent National Research Council report concluded that earth-penetrating nuclear weapons cannot penetrate to the depths required to contain all of the effects of a nuclear explosion. According to the report, “For attacks near or in densely populated urban areas using a nuclear earth-penetrator weapon on hard and deeply buried targets (HDBTs), the number of casualties can range from thousands to more than a million, depending primarily on weapon yield. For attacks on HDBTs in remote, lightly populated areas, casualties can range from as few as hundreds at low weapon yields to hundreds of thousands at high yields and with unfavorable winds.”

Although the Senate has not yet taken action this year, the Senate Armed Services Subcommittee on Strategic Forces addressed the issue in the context of the country’s overall nuclear capability during an April hearing. Only administration officials were invited. Marine Corps Gen. James E. Cartwright, commander of U.S. Strategic Command (StratCom), framed today’s nuclear weapon programs in the context of a broad reconfiguration of DOD. He said that in order to respond to post-Cold War threats, a realignment of various combat commands must take place, with emphasis on a quick, agile, and precise response to worldwide threats. He noted that whereas most of DOD’s commands focus on threats in specific regions, StratCom’s role is to supply worldwide “enablers” to these localized commands, including missile defense, sensors, reconnaissance, and surveillance capabilities. Nuclear weapons are another such enabler, and StratCom is working to update the U.S. nuclear arsenal to reflect today’s threats.

Cartwright testified that a leading force in shaping U.S. nuclear weapons policy is the Moscow Treaty, signed in 2002 by President Bush, which limits the U.S. nuclear arsenal to between 1,700 and 2,200 warheads by the year 2012. A critical means for protecting the reliability of the arsenal and the sufficiency of the number of warheads is to sustain target precision to meet emerging needs. As it has argued in previous years, StratCom believes that hardened, underground bunkers are one area where conventional weapons are not useful, and thus the RNEP program is being pursued.

Subcommittee Chairman Jeff Sessions (R-Ala.) challenged the witnesses to refute claims that the RNEP study will disrupt the world’s nuclear balance. Ambassador Linton F. Brooks, administrator at the Department of Energy’s National Nuclear Security Administration, responded by discussing four classes into which the defense world is currently divided.

First, he argued, current nuclear powers will not be affected if the United States develops a new nuclear weapon. Second, aspiring nuclear states, such as North Korea or Iran, won’t be influenced either, because they already feel threatened by the sheer size of U.S. conventional forces. Third, terrorists are unlikely to be deterred regardless of U.S. activities, so they should not drive the decision. The final class, and the only one affected by RNEP development, Brooks argued, would be the non-nuclear states that help the United States uphold the Nuclear Non-Proliferation Treaty through their cooperation. Brooks admitted that these nations have reason to worry but contended that the RNEP study alone won’t convince any of them to go nuclear.

Brooks’ arguments in favor of the RNEP study received no dissent from the two senators presiding over the subcommittee hearing. However, Sen. Bill Nelson (D-Fla.), the ranking member on the subcommittee, noted that the RNEP would not move past its study phase without explicit consent from Congress.

Fraud allegations roil Yucca Mountain project

Nevada’s representatives in the House are using a report alleging fraud as a new means of scuttling the construction of a permanent repository site for high-level nuclear waste in their state. On April 5, Rep. Jon Porter (R-Nev.), chairman of the House Government Reform Subcommittee on the Federal Workforce and Agency Organization, convened a hearing to probe fraud allegations at the Yucca Mountain project. The charges stemmed from a March report by Secretary of Energy Samuel Bodman that several U.S. Geological Survey (USGS) employees may have falsified documents relating to their work on the project.

At the hearing, House members from Nevada expressed outrage and called for a criminal investigation into the matter. The committee was particularly concerned with recently disclosed email messages that seemed to portray attempts by federal employees to circumvent quality assurance (QA) procedures. One USGS employee wrote in 1999, “In the end I keep track of 2 sets of files, the ones that will keep QA happy and the ones that were actually used.”

Rep. Shelley Berkley (D-Nev.) painted the allegations as a continuation of problems that have plagued the project to build a nuclear waste repository and urged members to consult the fault-finding 2004 Government Accountability Office report (http://www.gao.gov/new.items/d04460.pdf). Asserting that “Yucca Mountain is based on a lie,” Berkley said she believed that the project was finally “collapsing before our very eyes.” She called for a complete halt to the Yucca Mountain project.

Rep. James Gibbons (R-Nev.) warned the Department of Energy (DOE) not to downplay the incident as “paperwork problems” or to continue in its mindset of “damage control.” Stating that Yucca Mountain was selected for “purely political reasons,” he compared the allegations to chief executive officers “cooking the books” at Enron and WorldCom. Just as no one would fly in an airplane that had not undergone quality assurance, he claimed, people in Nevada would refuse a waste site they perceived to be improperly screened.

Pointing out that Las Vegas is the fastest growing city in the United States and that the Yucca Mountain site is no longer remote, Porter argued the project is based on “science fiction,” not “sound science.”

The two men assigned to the hot seat at the hearing were USGS Director Charles Groat and Ted Garrish, deputy director of DOE’s Yucca program. Whereas Groat deferred to a current Department of Interior investigation, Garrish assured the committee that the Nuclear Regulatory Commission would properly assess the matter once the formal license application for Yucca had been filed.

Dissatisfied with their answers, Gibbons demanded to know how plans for Yucca Mountain could continue without an assurance that the allegations would not undermine the whole project. Along with his colleagues, he suggested a thorough independent investigation.

B. John Garrick, chairman of the Nuclear Waste Technical Review Board, the body charged with reviewing the scientific progress of Yucca Mountain, determined that it was too early to draw conclusions about the effects of the allegations on the overall project. But Nevada Attorney General Brian Sandoval reminded the committee that the unflattering emails would not have been released if not for Nevada’s incessant lawsuits. He called for further disclosure of information relating to the allegations.

Senate Minority Leader Harry Reid (D-Nev.), who called the allegations a “lesson in what’s bad about the government,” offered a possible solution. He proposed legislation to store nuclear waste on site at U.S. nuclear facilities.

Senators clash over terrorist priorities

At a May 18, 2005, hearing of the Senate Committee on Environment and Public Works, senators clashed over the potential terrorist threat of two loosely organized groups, the Animal Liberation Front (ALF) and the Earth Liberation Front (ELF), which have used violent acts and harassment as a means of advancing their agendas.

Chairman James Inhofe (R-Okla.) noted that the activities of these groups, which are conducted by autonomous individuals or cells, have been designated the number one domestic terrorist threat by the FBI. That title was not so warmly embraced by Inhofe’s colleagues on the other side of the aisle, and the hearing grew heated at times.

Senators James Jeffords (I-Vt.), Frank Lautenberg (D-N.J.), and Barack Obama (D-Ill.) joined in a statement for the record, objecting to designating ALF/ELF as terrorist groups. Lautenberg claimed that the acts committed were merely the product of “crazy” individuals. The senators argued that the acts should be placed in context, maintaining that hate crimes, right-wing militias, abortion-clinic bombers, and potential attacks against nuclear and chemical facilities should be given a higher priority by law enforcement officials.

John Lewis and Carson Carroll, deputy assistant directors of the FBI and the Bureau of Alcohol, Tobacco, Firearms and Explosives, respectively, disagreed. They testified that threats from right-wing militias and hate crimes are less serious than those posed by ALF/ELF in terms of coordination, planning, and geographic range. Further, ALF/ELF are sophisticated users of the Internet, conveying information, encouraging recruits, and posting pictures of the laboratories they have damaged on their Web site, constantly changing servers to avoid tracking by law enforcement.

Both witnesses said that the problem is getting worse, with Carroll testifying about the increasing use of explosives and incendiary devices, ranging from crudely made to sophisticated and electronically ignited. Furthermore, the violent rhetoric used by ALF/ELF and their supporters has also grown. Lewis cited a remark by one ALF supporter that if people who kill animals can be stopped only by violence, then it is morally justifiable.

David Skorton, president of the University of Iowa, provided a perspective on the impact that these activists have had. He testified that in a November 2004 attack at his university, 18 individuals claiming responsibility on behalf of ALF destroyed and poured acid on equipment and papers and released more than 300 animals. “Not only was research disrupted,” Skorton stated, “but the academic activities and careers of faculty, undergraduate and graduate students, and postdoctoral trainees were impaired, in some cases adding months to the conduct of federally funded, peer-reviewed research.”

Furthermore, the group posted the names, addresses, and phone numbers of faculty and their spouses, graduate students, and laboratory assistants on the Web. Calling these latter efforts “blatant intimidation,” Skorton reported that university-affiliated individuals are still being harassed and that the fears created have altered the environment on the campus.


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science (www.aaas.org/spp) in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

Facing the Global Competitiveness Challenge

The United States today faces a new set of economic challenges. Indeed, for the first time since the end of World War II, U.S. global leadership in innovation is being brought into question.

During the past 15 years, the rise of China, reform in India, and the end of the Soviet Union have added more than 2.5 billion relatively well-educated but low-wage people to the world labor force. China, India, Russia, and Central Europe are all making significant investments in higher education, emphasizing mathematics, science, and engineering. The spread of the Internet and the digital revolution have combined to introduce international competition to a range of service occupations that were previously shielded from overseas rivals.

New competitors in Asia and the old Soviet sphere of influence as well as old rivals in Europe and Japan are making strides in adapting the U.S. model of innovation to their own economic institutions and traditions. New and old competitors are now seeking to attract the students, professors, engineers, and scientists who for years viewed the United States as the principal land of opportunity.

The United States has met and overcome economic challenges before. In the 1980s, this country struggled with severe inflation, stagnant productivity growth, and rising international competition. Driven in part by a fear that Japan was becoming the world’s major industrial power, government and private-sector actors worked in tandem to forge changes that once again made the United States the world’s dominant economy.

When the United States entered the 21st century, it faced yet another set of difficulties: the collapse of a financial bubble, a slowing economy, the tragic attacks of 9/11, and a series of corporate scandals. Yet a public-sector focus on stimulating the economy and a still more productive private sector combined to move the economy back toward growth by 2003.

Too many Americans see the 1980s and the country’s recent return to economic health as an all but effortless inevitability, a simple demonstration of the United States’ unique economic prowess. But it is a dangerous mistake to take economic growth for granted. Meeting the challenge of one major competitor does not prevent other rivals from emerging. Short-term recovery can mask longer-term challenges and lead to a comforting yet corrosive complacency.

In the face of emerging and still unforeseen challenges, the United States can find promise in the strategy that helped bring it past success. The experience of the 1980s led to public policies that helped propel the prosperity of the 1990s. At the heart of those policies was a systematic focus on innovation as the key to future growth and prosperity. Yet today the nation seems to have lost that focus. It needs to regain it and to update and broaden the types of competitiveness policies that served it so well in the past.

A competitiveness strategy emerges

The 1970s’ combination of stagflation and growing international competition gave rise to a competitiveness movement that emphasized a complementary set of policies to foster long-term productivity growth. The competitiveness strategy can be traced to the 1979 and 1980 reports of the Joint Economic Committee, chaired by Senator Lloyd Bentsen. It received added definition and political support in Rebuilding the Road to Opportunity, published in 1982 by the House Democratic Caucus.

The 1985 report of the President’s Commission on Industrial Competitiveness (known as the Young Commission after its chairman John Young, then the chief executive officer of Hewlett-Packard) fully embraced the focus on long-term productivity growth and also advocated a series of specific steps to be taken by the private and public sectors. The Young Commission report spelled out four complementary public policies that have set the broad outlines of a national competitiveness strategy:

Investment: Create a climate that encourages public and private investment.

Learning: Emphasize education, training, and the continuous acquisition of new skills.

Science and technology: Add an active technology policy to the existing commitment to basic science.

Trade: Promote U.S. exports and adopt a foreign economic policy that seeks to open markets around the world.

Almost four years of congressional work culminated in the Omnibus Trade and Competitiveness Act of 1988, which incorporated and expanded on the broad strategic focus of the Young Commission.

Economic policy in the 1990s followed the outlines of the competitiveness strategy. Presidents Bush and Clinton reduced deficits to improve the climate for investment, supported higher standards for K-12 education, and actively negotiated trade agreements. Clinton added an aggressive approach to promoting U.S. exports. Bush provided initial funding for the technology initiatives created by Congress in 1988, and Clinton sought sharply increased funding for technology programs, added impetus to the 1980s initiatives designed to speed the transfer of technologies from universities and the national laboratories to the commercial market, and actively promoted the spread of the Internet. Beyond education, Clinton emphasized training and lifelong learning as necessary to prepare Americans for an era of rapid change and global competition.

How much did the competitiveness strategy’s focus on long-term productivity growth contribute to the prosperity of the 1990s? Corporate investment rose sharply. Millions of Americans took advantage of new opportunities for education and training. Small manufacturers nationwide sought advice from government-supported manufacturing extension centers on everything from plant layouts to new technologies. By the end of the 1990s, Americans were referring to the Goldilocks economy, in which everything was just right.

The government’s competitiveness strategy did not stand alone. Corporate America became much more efficient and more innovative in the 1980s. Just as corporations were poised to grow in the 1990s, information technology and the Internet created attractive investment opportunities. A series of what economists call favorable supply shocks, including lower oil prices and a strong currency, combined to help keep inflation low.

But the contribution of public policy should not be understated. Not only did it contribute to record-setting prosperity in the 1990s, it also created a useful framework for future economic policy. An economic climate that encourages investment, an outward-looking international economic policy, and a focus on lifelong learning are the elements of a successful and sustainable strategy. Although the seeds of the Internet were sown decades in the past, it serves as a powerful daily reminder of the central role of science, technology, and continuous innovation in the United States’ economic future.

New challenges and new conditions

Entering the 21st century, the United States faces a changed geopolitical situation and a new set of economic challenges. The emergence of the digital economy and the global spread of broadband capacity have opened a host of service occupations to international competition. China, India, Russia, much of Central Europe, and other parts of the world are competing successfully for what were once stable, well-paying U.S. jobs. What started with call centers is spreading to encompass everything from chip design to radiology.

It is time to create a National Institute of Innovation that would provide a more predictable flow of funds to support the development of innovative products and processes.

Unfortunately, less attention is being paid to the new challenges to the United States’ longstanding position as the world’s innovation leader. Today, many governments are seeking to adapt the U.S. innovation model to their own economic conditions. They are providing increased support for research universities and attempting to introduce other aspects of the U.S. innovation system, including venture capital funds, incubation centers for new businesses, and an expanded role for innovative small companies.

Innovation, of course, requires innovators and entrepreneurs. In the 20th century, the United States benefited from the fortuitous combination of immigrant and homegrown talent. Starting with the rise of fascism in Europe, the United States became home to many talented scientists and engineers. In recent years, the flow of international talent has taken on a thoroughly global cast, with many students, professors, and skilled professionals coming from China, India, and the old Soviet Union. Here in the United States, the GI Bill and a national commitment to science boosted the development of homegrown talent. Now there is a global competition for international talent. More universities in China and India are offering a quality education. Their growing economies offer technical challenges and prosperity at home, and their promising students no longer need to leave their families or familiar culture to find intellectual challenge or financial prosperity. Australia, Japan, and the European Union are aggressively courting foreign talent. The global competition for talent has come just as the United States tightened visa requirements in the wake of 9/11. As a result, universities, businesses, and even the national laboratories have encountered difficulties in maintaining the flow of foreign talent.

The growing international commitment to innovation holds great promise for the United States as well as for the entire world. In the coming decades, we can expect to make greater progress in fighting disease, facilitating communication, improving the environment, and fighting hunger. The trend, however, poses increasing challenges to the current U.S. innovation system. The United States will need to boost investment in research, sharply improve its own education system, and work to remain a magnet for talented students, researchers, and professionals from around the world.

A 21st-century competitiveness strategy

How can the United States remain a leading innovator? What steps should we take today to build on a system that has contributed to health, prosperity, and national security?

The pace and direction of innovation will inevitably affect the health of the whole economy. A supportive climate for public and private investment remains critical. The country also needs to take a long step beyond the philosophy of lifelong learning to think in terms of an integrated U.S. learning system that will extend from before birth to well after conventionally defined retirement. New technologies and global competition will transform education at virtually every level. An outward-looking international economic policy will need to become a policy of full global engagement. As other countries become more innovative, the U.S. public and private sectors will once again need to become adept at borrowing and adapting new technologies as well as innovating at home.

To pave the way for continued worldwide leadership in innovation, the United States needs to take the following six steps:

Regularly assess the U.S. innovation system. The government and the public must have a clear sense of how federal support for R&D fits into the larger national economic system and how both are linked to an increasingly international process of innovation. The White House Office of Science and Technology Policy should prepare a quadrennial report on innovation that would be linked to the federal budget cycle. Much of the necessary data is already collected by the federal government or is available through private-sector surveys. The reports of international bodies and some foreign governments provide a useful overview of international trends. The government should add to its understanding of overseas trends by emphasizing science and technology (S&T) reporting by the State Department’s Foreign Service officers and the Department of Commerce’s Foreign Commercial Service.

Congress should add force to the quadrennial report by giving it a legislative mandate, requiring periodic testimony by the president’s adviser on science and technology and using oversight hearings to encourage government coordination of research efforts.

Build sustained political support for S&T. The president should deliver an annual “State of the Innovation System” address that assesses the current state of U.S. innovation in light of international as well as domestic developments. The president and other elected officials, business leaders, and university presidents need to articulate a clear rationale for S&T that links innovation to long-term prosperity, national security, improved health care, and a clean environment.

Leaders in public and private life need to make an added effort to respond to the two most frequent attacks on technology policy: that it is a veiled form of industrial policy and that it is little more than a fancy name for corporate welfare. The economic case for public spending on basic technologies is much the same as it is for basic science. When markets are not going to spur investment, the public sector needs to decide whether, when, where, and how to fund research. Major companies often have the resources to pursue a research project but cannot justify the effort because the cost is too high, the payoff is distant, or there is a risk that benefits could be appropriated by competitors. The prospect of sharing the costs with other partners, including the federal government, can lead to innovations that foster industrywide or economywide benefits as well as specific corporate ones.

Maintain national strength in a global economy. Innovation was already going global in the late 20th century. Barring a major disaster, that trend will accelerate in this century. A host of countries are investing in the building blocks to become major innovators. Research-based multinational companies are investing around the world to take advantage of scientific talent and to adapt to local markets.

Although growing interdependence is creating enormous opportunities to deepen research and speed the development of key technologies, it also carries risks for foreign policy and could disrupt the domestic industrial basis for innovation. In the decades ahead, leading states will remain an important force in the conduct of foreign policy.

In this century even more than in the last, national security will draw on the successful development and application of a wide variety of new technologies. At times, the federal government has intervened to help key industries regain their strength. For instance, in the mid-1980s, the government used trade policy and helped fund an industry consortium to respond to the Japanese challenge to the U.S. semiconductor industry. Where it proves too difficult or too costly to maintain a U.S.-based industry, government collaboration with private-sector partners can stimulate the development of new technologies or seek out more than one source of supply overseas.

There is also a risk that the erosion of the U.S. industrial base will result in innovations being commercialized overseas without the intervening boost to domestic job creation. In addition, as U.S. manufacturers concentrate in overseas locations, they may turn to research universities in Europe and Asia, thereby shifting intellectual and financial resources away from the U.S. base. In a word, manufacturing still matters.

Some of the change is driven by the logic of companies seeking lower costs or promising global opportunities. In other cases, the overseas advantage is the product of the long-term growth strategies of foreign governments. The public and private sectors need to monitor and evaluate these government strategies as well as international trends as part of a strategy that supports long-term innovation as well as foreign policy independence.

Shore up weaknesses in the innovation chain. U.S. leadership in innovation has been built on a mix of public and private strengths. However, weaknesses continue to exist in several areas.

The post-Cold War decline in defense spending had the unintended consequence of limiting R&D funding for the physical sciences and engineering. During much of the past two decades, spending on the physical sciences has been flat and has actually declined relative to the size of the country’s gross domestic product. Not only is R&D in the physical sciences important in itself, but it also provides knowledge that helps to underpin advances in the life sciences, where federal research support has continued to grow.

Venture capital has emerged as one of the nation’s great strengths. As other countries attempt to adapt the U.S. innovation model to their own economic structures, they often import U.S. venture capital. Venture capital is, however, volatile. At times, venture capital will concentrate on a specific industry—for example, biotechnology (one of the current favorites)—at the expense of other fields. The government could act as a balance wheel, providing a steadier flow of risk capital to a variety of industries. It is time to create a National Institute of Innovation that, like the National Institutes of Health, would provide a more predictable flow of funds to support the development of innovative products and processes.

In 1983, President Reagan’s Secretary of Education, Terrel Bell, issued A Nation at Risk, a scathing assessment of the state of U.S. education. By some measures, the country has not made much overall progress since that time. International comparisons of math and science achievement tell an equally bleak tale. The public and private sectors, universities, and local communities will have to dedicate the funds, the time, and the technologies needed to prepare all Americans to be productive citizens in the 21st century. Hardest of all, the country will need to shake up a culture that neither celebrates science nor understands its fundamental importance.

Support great missions that drive innovation. In the S&T world, great goals drive innovation and attract talent. A generation of scientists and engineers that is nearing retirement was drawn to science by the challenge of Sputnik and the lure of a pledge to put a man on the Moon.

We need to develop 21st-century missions that will meet key national goals, drive innovation, and attract a new generation of scientists and engineers. There are a host of potential projects, including the following three.

First, the United States should make a national commitment to develop new sources of energy that will support economic growth, protect the environment, and achieve the foreign policy flexibility that comes with greater energy independence. Highlighting the link between energy research and national security, reducing poverty around the world, and improving the environment will help capture public support and the imagination of a new generation.

Second, the country should attack new and existing diseases on a global basis. Every nation is only a plane ride away from any disease. The appearance of HIV/AIDS, Ebola, SARS, and, more recently, the avian flu has highlighted the peril of the new. Statistics on deaths from tuberculosis and malaria show the devastation that can be wrought by the old. To nature’s diseases have been added the threat of human-made pathogens. Evaluations of U.S. homeland security consistently rank biological terrorism at or near the top of the list of domestic threats. With the potential to save millions of lives around the world, a national commitment to eradicate the threat posed by new and old diseases will kindle a spark of excitement among young Americans choosing their professional futures.

Third, the nation should apply technology to the challenge of an aging population. The creative use of information technology can help keep seasoned workers active and reduce health care costs at the same time. New technologies can lead to early detection of disease, added mobility, and new cures. The idea of better lives for grandparents will draw another set of talented young people toward science and engineering.

There are many other national goals that could drive innovation. President Bush has announced a new vision for future missions to the Moon and eventually to Mars. In the developing world, there is a tremendous need for clean water and for crops adapted to tropical conditions. The effort to secure the homeland will drive new innovations in information technology, materials, and biology. The United States must choose major goals, provide sufficient funding, and build sustained popular and political support.

Continue to improve the climate for commercialization. Throughout the 1970s and 1980s, the United States was often first in Nobel Prizes but too often second in moving technologies from the laboratory to the living room. Industry responded by adapting lean production methods to U.S. conditions and shifting its research activities closer to the market. The public sector responded by encouraging collaboration among the private sector, national laboratories, and universities; supporting research; and helping facilitate the spread of new technologies.

The public role in fostering a favorable climate for commercialization requires a philosophy that encompasses the entire innovation system and a government that is agile in responding to changing economic circumstances. It will also require the adoption of a comprehensive competitiveness strategy. Steady growth, low interest rates, price stability, adequate research funding, innovation-oriented regulations, and a more flexible government are all important pieces of the commercialization puzzle, as they are for competitiveness in general.

The Bush record on innovation

In 2000, President Bush campaigned as a compassionate conservative promising improved education, a smaller government, and a more humble foreign policy. Instead of the comprehensive competitiveness strategy of the Clinton years, Bush returned to the supply-side emphasis on the power of reducing marginal tax rates. There was little emphasis on S&T in his approach to the economy, and science and technology were largely absent from his initial energy and environmental bills.

The 9/11 attacks and a weakening economy had a major impact on Bush’s economic as well as his foreign policy. What has been their impact on innovation? How has the administration done in terms of funding research, creating new structures for innovation, adopting innovation-driving missions, and strengthening education and other basic elements of an innovative future? The record is mixed.

Funding: The administration has continued to emphasize the life sciences, despite the country’s need to recommit itself to adequate research funding in the physical sciences and engineering. The proposed fiscal year 2006 budget takes positive steps in support of R&D in advanced manufacturing but, depending on the agency, either reduces funding for the physical sciences and engineering or increases it only modestly.

Structures for innovation: For the most part, the administration has not focused on creating new or strengthening old structures for innovation. For instance, there has been a consistent effort to reduce or even eliminate funding for the Advanced Technology Program and the Manufacturing Extension Partnership. There is one notable exception. At the insistence of Congress, the new Department of Homeland Security has its own Homeland Security Advanced Research Projects Agency. Modeled after the very successful Defense Advanced Research Projects Agency, the new agency is still too young to have developed much of a track record.

New missions: The president did seek advice from his administration about broad new missions. He proposed a mission to the Moon as the first step toward reaching Mars. The mission to Mars, however, comes at the expense of proposed cuts in NASA’s aeronautical research. Although President Clinton’s clean car initiative has been continued, the program has been narrowed to focus on hydrogen fuel. In 2004, the president called for broadband links to speed the spread of the Internet throughout the country, but this vision has not been followed by concerted public action. In sum, the administration has not adopted a major technological challenge as a way of either driving innovation or attracting the next generation of scientists and engineers.

Long-term competitiveness: Instead of the synergistic elements of a competitiveness strategy, the president appears to rely on tax cuts, market forces, and the United States’ entrepreneurial spirit to assure future growth. Understandably, fiscal policy and monetary policy were initially focused on limiting the economic downturn and fostering recovery. Since the economic rebound in 2003, however, there has not been a renewed emphasis on basic S&T. Nor has there been any concerted action to reduce long-term budget deficits or record trade and current account deficits.

In terms of building blocks for the future, the president has been most active in seeking to reform elementary and secondary education. His No Child Left Behind policy builds on the standard-raising efforts of his father and President Clinton. Although this approach has attracted some criticism, Bush has at least attacked one of the central weaknesses threatening the innovative future of the United States. In his second term, the president has indicated his determination to spread the same philosophy to the nation’s high schools. The administration has also adjusted its excessively restrictive visa policy for overseas students in response to complaints from universities and business.

The administration has been very active on the international trade front, simultaneously pursuing major multilateral, regional, and bilateral trade negotiations. It has not discouraged depreciation of the dollar and has made public efforts to encourage the appreciation of key Asian currencies. So far, a weakening dollar has provided some benefits to the manufacturing sector without producing added inflation.

Like the White House occupants during the Cold War, the administration has its hands full with questions of national security, checking the spread of weapons of mass destruction, and spreading the seeds of democracy. Today, there is the added fear of terrorists crossing porous borders and the use of biological agents, chemical weapons, and/or suitcase-sized nuclear weapons. National and homeland security concerns have certainly stimulated R&D on new technologies. Some of the new approaches will have near-term commercial applications and unknown long-term potential.

The administration, however, has not yet made innovation a key element of its long-term growth strategy. To some observers, the second-term emphasis on Social Security and tax reform, the proposed extension of the No Child Left Behind policy to high schools, and the announced desire for a new approach to immigration do not suggest much room for an innovation initiative.

Yet the potential is there. Concerns about national security could stimulate interest in everything from advanced sensors to an accelerated effort to explore whether hydrogen truly is the fuel of the future. New technologies could transform education, helping the president reach his goals in the nation’s classrooms. If the Social Security debate leads to an overall assessment of the U.S. retirement system, the president and Congress may turn to technologies as a way of improving the lives of the elderly as well as containing costs.

The president’s determination to spread democracy could lead the administration to focus on many of the questions of political stability and economic sufficiency that help lay the basis for democratic governments. The focus on development could in turn create an interest in research on diseases, clean water, and new crops.

Since the mid-1950s work of Nobel laureate Robert Solow, economists have stressed the role of innovation in fostering economic growth and rising standards of living. Engineers often propose an even simpler “look-at-what-has-changed-your-life” test—from medicines to the Internet to new materials. During the 20th century, the United States became the world’s most successful economy by adopting and adapting ideas from around the world while becoming the leader in research and innovation. To deal with the rising global competitiveness challenge, we must now make a renewed effort to bolster our national system of innovation. By doing so, we can help ensure a prosperous future.

Games, Cookies, and the Future of Education

In February 1990, President George H. W. Bush joined Governor Bill Clinton of Arkansas to embrace the goal that by the year 2000, “U.S. students will be first in the world in mathematics and science achievement.” The two leaders were reacting, in part, to a study stating that education in the United States was so bad that it would have been considered an “act of war” had it been imposed by a foreign power. The ensuing decade was marked by the release of a series of major studies by government or business highlighting the problem, with ever sharper rhetoric. Yet when the year 2000 rolled around, U.S. students ranked 22nd among 27 industrialized countries in math skills, according to a well-regarded international comparison. In 2003, a similar study ranked U.S. students 24th of 29 countries.

Today, even in the contentious atmosphere of political Washington, there is near-universal agreement that this situation must be remedied. No one doubts that a world-class U.S. workforce, skilled in math, science, and technology, is needed to maintain or improve the competitiveness of U.S. companies, ensure national security, and meet critical needs in health care, energy, and the environment. There also is growing concern that U.S. wages and living standards are at risk as companies and investors must choose between training underprepared U.S. employees and finding a way to do the job using better-trained employees in other countries.

The dilemma, of course, is that consensus about the magnitude of the problem has not translated into agreement on what to do. Holding students and school systems to high standards is necessary, as called for by the federal No Child Left Behind Act, but there is widespread concern that this alone is not sufficient. It also will be necessary to understand better how students learn and to design and implement new tools to take advantage of this understanding. This job is doable, especially with the help of advanced information technologies. But meeting the challenge will take concerted efforts that begin at the national level and extend into state and local governments, school systems, and businesses.

Affordable solutions

Recent advances in learning and cognitive research provide a solid basis for believing that real progress is possible in improving learning outcomes for anyone studying any subject. Among the recommendations for improving education are the use of individualized instruction, subject-matter experts, and rich curricular activities. But few of the proposals have been widely adopted, and in many quarters they seem hopelessly unaffordable within our traditional approach to teaching.

Advanced information technologies offer real hope that many of the recommendations can be implemented without an unrealistic increase in spending. We are all familiar with the unexpected ways in which information technology has improved our lives in other areas: instant messaging, sophisticated software that helps firms personalize online shopping, efficient systems for answering consumer questions, eye-popping simulations on inexpensive computer game consoles. These tools have the potential to reshape learning through interactive simulations, “question management” systems that combine automated and human responses, and powerful continuous assessments. Computer simulations could let learners tinker with chemical reactions in living cells, practice operating or repairing expensive equipment, or experiment with marketing techniques, making it easier to grasp complex concepts and transfer this understanding quickly to practical problems. New communication tools could enable learners to collaborate in complex projects and ask for help from teachers and experts from around the world. Learning systems could adapt to differences in student interests, backgrounds, learning styles, and aptitudes.

Despite huge investments in communications and computer hardware made by universities, schools, and training institutions, most formal teaching and learning still use methods familiar in the 19th century: reading texts, listening to lectures, and participating in infrequent—and usually highly scripted—laboratory experiences. The cookies on children’s computers might know more about what they like and do not like than do their teachers.

Given current conditions, it will take a significant and sustained investment in research to invent and test new approaches to learning. It took years of experiment and failure for other service businesses and the entertainment industry to find ways to improve the quality of their services and increase their productivity through the effective use of information technology. The gains required decades of research, an unforgiving review of cherished management approaches, and a dramatic redefinition of many jobs. Education can benefit from what these companies learned if it is willing to undertake a serious process of research, evaluation, and redesign.

It is difficult to see how the kind of patient, long-term research and evaluation needed to bring these concepts alive can be done without a new, aggressive, large-scale federal program for research, development, demonstration, and testing. Once proven, of course, research results can be translated rapidly by individuals and companies into commercial products that can be used across the country by instructional institutions with innovative leaders. Although the implementation will be a bottom-up process, research should be conducted at the national level. A small fraction of total federal investment in education and training devoted to research, design, and development in this area would pay huge dividends. The research should not face political barriers, if only because it would yield tools that give more power to the nation’s diverse education and training institutions, enabling them to tailor instructional systems to unique local needs.

The federal research effort should be guided by an effective management plan that includes a clear definition of goals and ways to measure progress toward them. Progress can be measured in four different ways, based on the extent to which a new tool or approach 1) increases the speed at which expertise is acquired and depth of understanding achieved; 2) increases a learner’s ability to transfer expertise acquired to the solution of practical tasks; 3) decreases the range of outcomes among learners; and 4) makes learning more motivating (and more fun), if only to get more time on task.

The technology alone obviously is not sufficient to meet the goals and cannot substitute for talented teachers and experts. But taken together with skillful use of human instructors, the technology can do two dramatically new things. First, it can provide accurate, compelling simulations of physical phenomena and virtual environments for exploration and discovery. These can be used to illustrate complex concepts through the ancient art of talking and showing and can be used to build challenging assignments and games. Second, it can combine artificial intelligence techniques and rapid connections to real experts that together can reproduce many of the benefits of one-on-one tutoring.

The essential first step is to define key research challenges and organize the research on learning technology into manageable components. The division must be somewhat arbitrary, because the pieces are obviously interdependent, but some form of structure is essential to expand the research efforts beyond the current cottage-industry approach. An effective research structure also will link learning research with other information technology research working on similar problems.

The Learning Federation Learning Science and Technology R&D road map provides a well-defined structure for organizing the R&D around core research challenges. The road map was developed over a three-year period using the methods pioneered by the SEMATECH consortium, which built and revised a research plan that helped guide the revival of U.S. semiconductor manufacturing. More than 70 leading researchers from industry, academia, and government helped develop this road map through their participation in focused workshops, interviews, and preparation of technical plans. The road map organizes research into the following four topic areas: new approaches to teaching and learning enabled by new technologies, peer-reviewed simulations and virtual environments, systems that will make it easy for students to pose questions and receive answers, and assessment.

Improving teaching and learning

It may seem obvious, but one of the most important lessons learned by commercial service companies trying to make effective use of information technology is that the place to start is not with the question “what can computers do?” but “what do we really want to accomplish?” It is essential to begin by understanding how learning can best be achieved and then ask technologists how much of the ideal can be achieved at an acceptable price. Fortunately, a now-classic 1999 report from the National Research Council (NRC), How People Learn, provided a superb answer to the “what do we want to accomplish” question in a comprehensive review of what is known to work in improving learning.

The report argued for approaches that give the learner lots of practical experience and opportunities to apply facts and theories in practical situations. It also cited the need to continue efforts as long as they remain challenging and reinforce expertise. Not surprisingly, some of the most powerful learning strategies also are the most ancient: struggling to accomplish a difficult but highly motivating task that requires new knowledge; carefully scanning a complex, changing environment; and seeking individualized help from experts and friends.

At the time of the report, some critics worried that it would not be feasible to provide large numbers of students with the kinds of experiences and challenges suggested or to monitor each student to find out whether he or she was prepared for the next level of complexity. But the recent spectacular success of computer games provides a tantalizing example of what might now be accomplished. Well-designed, highly interactive simulations can provide a wide range of experiences, such as navigating difficult terrain, operating complex vehicles, and collaborating with colleagues to overcome obstacles. They have an almost frightening ability to capture and hold interest. Gamers will spend literally hundreds of hours mastering obscure details of new weapons systems in order to meet the motivating goals established by the artifice of the games.

Obviously, significant research is needed to find out how best to achieve the goals of How People Learn using new technologies. But where investments have been made, the results have been impressive. Training experts in the U.S. Department of Defense (DOD) are convinced that expertise gained through the use of flight simulators and large-scale military computer simulations has a high rate of transfer to practical skills in the field. They point to changes in the shape of the learning curve: phenomena well documented in training fighter pilots, surgeons, and algebra students. Novices make many more mistakes when they encounter their first practical applications than they do after a few dozen real experiences. Something in the experience reshapes the formal information and begins to approach real expertise. In many cases, simulated experiences can have much the same learning impact as real experience. Research can tell us when this is true and when it is not.

Even if we fully understand how best to use simulated environments, the challenge of actually building technically accurate and visually compelling simulated environments is enormous. Ideally, someone with a compelling idea for creating a simulation that required mastering a new set of concepts in biology, to choose one example, could draw on libraries of pre-tested software, such as simulations of biological systems, vehicles, and landscapes, to implement the idea, instead of being forced to design new software from scratch. The task of building such a library and providing the needed peer review and updates is clearly enormous and must be the work of many hands. But for the system to be useful, the components built by different groups must be reusable and able to work together or interoperate; my simulated knee bone must recognize your simulated thigh bone. Most major computer-aided design formats are interoperable, meaning that General Motors can build an engine design from software elements provided by the vendors supplying cylinder heads and fuel injectors. However, full interoperability of simulations built from robust peer-reviewed software is still an elusive goal in all domains.
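
To make the idea of reusable, interoperable components concrete, here is a minimal sketch of the kind of shared contract that would be needed. The names (SimComponent, ports, step, connect) are hypothetical illustrations, not drawn from any existing standard; the point is only that components written by different groups can be snapped together if they agree on a small common interface.

    # Illustrative only: a minimal, hypothetical interface that reusable
    # simulation components could share so that pieces built by different
    # groups interoperate (the "knee bone meets thigh bone" problem).
    from abc import ABC, abstractmethod

    class SimComponent(ABC):
        """A simulated part that exposes named ports for exchanging state."""

        @abstractmethod
        def ports(self) -> dict:
            """Return the quantities this component reads ("inputs") and writes ("outputs")."""

        @abstractmethod
        def step(self, inputs: dict, dt: float) -> dict:
            """Advance the component by dt seconds and return its outputs."""

    def connect(producer: SimComponent, consumer: SimComponent) -> bool:
        """Check that what one component produces covers what the other expects."""
        provided = set(producer.ports().get("outputs", []))
        required = set(consumer.ports().get("inputs", []))
        return required.issubset(provided)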

The explosion of “in silico” experiments now under way in almost every branch of science should be a gold mine for developers of educational simulations. Several major federal research funding organizations have taken a critical first step by recognizing that a more systematic approach to software engineering is essential. The complexity and importance of software for academic research have outstripped the tradition of having self-taught graduate students build code with little thought to documentation, reusability, or interoperability with code developed by other organizations. The Defense Advanced Research Projects Agency (DARPA) has taken an early lead in this area with projects that are building a community of practice around software written by different teams for biological simulations. The National Institutes of Health’s (NIH’s) new National Institute of Biomedical Imaging and Bioengineering has also begun to encourage interoperability and the development of new tools for peer review, error reporting, and managing intellectual property. This “digital human” movement can give rise to software models that can be used as the basis of powerful, accurate, and up-to-date instruction in biology. But the specialized research software will not move automatically into learning environments, if only because neither DARPA nor NIH has a mission in education.

Building better tutors

Studies comparing individual tutoring with classroom instruction suggest that tutoring has spectacular impact. In a landmark series of studies, Benjamin S. Bloom and colleagues demonstrated that one-on-one tutoring improved student achievement by 2 standard deviations over group instruction. This is roughly equivalent to raising the achievement of the 50th-percentile students to the 98th-percentile level. In addition, the study found that the range of outcomes (the gap separating the best and worst students) was greatly reduced.
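
The percentile figure follows directly from the arithmetic of the normal curve. Assuming achievement scores are roughly normally distributed, and writing \(\Phi\) for the standard normal cumulative distribution function,

    \[ \Phi(2) \approx 0.977 \qquad \text{and} \qquad \Phi(1) \approx 0.841, \]

so a two-standard-deviation gain moves a median student to roughly the 98th percentile, and a one-standard-deviation gain (a figure that appears below for computer tutors) to roughly the 84th percentile.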

A combination of well-designed computer systems and careful use of human resources can approach the impact of good tutoring. Consider the success of two educational tools, Algebra Tutor and Geometry Tutor, designed by the company Carnegie Learning for use in a traditional teacher-led classroom. The computer tutors, which incorporate nearly two decades of research in artificial tutoring, augment the teacher with tutoring software that adjusts to the individual learner’s competency level. In a number of studies, these tools have produced an improvement of 1 standard deviation over conventional classroom instruction. One interpretation of these results is that the artificial tutor is twice as effective as typical classroom instruction (although it is only about half as effective as the best human tutors).

New instructional systems can do much to encourage learners to ask questions and to get those questions answered. First, they can create many situations where learners are highly motivated to ask questions—including deep questions about things they do not understand—instead of being embarrassed. And second, they can provide timely, accurate answers without the need for one tutor per learner, offering the best mix of automated answers and opportunities to talk with teachers and experts. If you keep crashing your airplane in a flight simulator, or if your patient dies in a simulated surgery, then you are likely to be highly motivated to ask questions about how to improve your performance. Witness the “hint books” sold to people who spend hours boning up on expertise valuable only for meeting the artificial objectives of a game.

Businesses and defense agencies have a large amount of work under way in the area of question management. The intelligence community, for example, has mounted a major research effort to create effective question-answering technologies. Existing systems have not fully succeeded, but the progress is significant. Search engines are becoming increasingly sophisticated and are by default the principal question-management tools for students. Business help desks provide as much automated advice as possible and connect clients to human experts only when the automated system is inadequate. In the case of systems developed for learning, of course, the answer should reflect the instructor’s pedagogical strategy. In many cases, the best answer is another question—instead of Ask Jeeves, Ask Socrates.

One important feature of the practical systems that are emerging is that most involve both automated systems and humans. Early “artificial intelligence” projects set out to pass the famous Turing test: to build an automated system so good in conversation that the user cannot tell that it is not human. This is an interesting, but perhaps unattainable, goal. In the meantime, part of the challenge is designing a system that knows what it cannot do and then links the questioner quickly to the right human being.
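
A minimal sketch of such a “knows what it cannot do” policy appears below, under the assumption that the automated answerer can attach a confidence score to each candidate answer; the names and the threshold are purely illustrative.

    # Illustrative question-management policy: answer automatically when the
    # system is confident, otherwise route the learner to a human expert.
    from dataclasses import dataclass

    @dataclass
    class Answer:
        text: str
        confidence: float  # 0.0 to 1.0, as estimated by the automated system

    def route_question(question: str, auto_answer: Answer,
                       experts: list, threshold: float = 0.8):
        """Return an automated reply or hand the question to a person."""
        if auto_answer.confidence >= threshold:
            # Confident enough to reply directly (perhaps with a Socratic prompt).
            return ("automated", auto_answer.text)
        if experts:
            # The system knows what it cannot do: escalate to the right human.
            return ("human", f"Forwarded '{question}' to {experts[0]}")
        return ("deferred", "No expert available; question queued for follow-up.")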

Sharpening assessments

The No Child Left Behind Act has put testing and assessment squarely at the center of national educational policy. There has been overwhelming bipartisan support for the idea that education, like all other endeavors, can succeed only if there are clear ways of measuring quality and holding students, teachers, and school systems accountable. The consensus fragments when it comes to the details on how to do this. In particular, if the tests are not measuring the right skills and knowledge, then accountability is warped by rewarding the wrong behavior.

Ideally, the learning goals should make sense to the learner, to the instructors, and to an employer or another teacher interested in an accurate measure of the individual’s expertise. A good and timely assessment can be highly motivating. Prospective surgeons, for example, are presumably highly motivated to be able to perform surgeries correctly and are eager to get feedback on how well they are doing. A series of NRC reports has offered a number of recommendations in this direction. The reports call for assessments that are integrated seamlessly into instruction and provide continuous and unobtrusive feedback. They also call for assessments that focus on complex aspects of expertise, not simply on short-term memory of facts. In such assessments, the learner’s thinking needs to be made visible in ways that can help the learner and the instructor make timely adjustments to the learning process.

But as in the case of so many other recommendations of cognitive scientists, this advice has been difficult to put into practice in general education because of limitations on teachers’ time. Although technology should be able to provide powerful help, such examples are hard to find in education. On the other hand, designers of computer games have intuitively implemented assessment strategies that meet many of the NRC’s recommendations, albeit in highly limited domains. A good game continuously evaluates a player’s skill level, knowing that if players stay at a given level of expertise too long, they will become bored, and if they are allowed to advance too fast, they will become frustrated—both disasters for future sales. A good game keeps the player just at the edge of anxiety.
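
The adjustment rule that game designers apply intuitively can be written down in a few lines. The sketch below is a toy version, with invented thresholds, of the kind of logic a learning system could borrow: raise the difficulty when recent success is high (the boredom risk) and lower it when success is low (the frustration risk).

    # Toy difficulty controller in the spirit of game-style continuous assessment.
    # The window size and thresholds are illustrative, not taken from any product.
    def adjust_difficulty(level: int, recent_outcomes: list, window: int = 10) -> int:
        """Nudge difficulty so the learner stays near the edge of their ability."""
        recent = recent_outcomes[-window:]
        if not recent:
            return level
        success_rate = sum(recent) / len(recent)  # outcomes are 1 (success) or 0 (failure)
        if success_rate > 0.85:                   # too easy: boredom risk
            return level + 1
        if success_rate < 0.50:                   # too hard: frustration risk
            return max(1, level - 1)
        return level                              # at the "edge of anxiety": hold steady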

Many businesses also carry out sophisticated assessments whenever someone goes to their commercial Web sites. Each visitor’s action is carefully evaluated to ensure that the information presented on the screen is helpful and attractive to the individual. The evidence available from commercial products suggests that a rich new set of assessment tools is possible. These tools can support learning environments that:

Make adjustments and guide individual learning based on accurate models of what students have mastered (formative assessments) and on other student characteristics (including mood, level of interest, and learning styles) that are revealed by their behavior during the learning. DOD conducted one of the most ambitious efforts of this type, in a system to help military personnel diagnose and repair the complex hydraulic systems on F-15 aircraft. The system continuously observed the decisions being made during a simulated repair session and used a sophisticated filter to develop theories about what the individual did and did not understand.

Communicate what each individual has mastered at key milestones in the learning process (summative assessment) in ways that are understandable and credible to the individual, future instructors (and automated instruction systems), employers, and others. Sophisticated assessments can provide multidimensional records of the levels of a learner’s expertise, including specific examples of how the person has performed in a simulation that illustrates this mastery in a practical way. In principle, records would be maintained in two forms: a public record available to employers and other interested parties, and a private set of records that would provide detailed information about the learner’s background, strengths and weaknesses, interests, preferences, and other information needed by an automated or real tutor. These private records are the functional equivalent of personal medical records and should be carefully protected and available only at the learner’s discretion. (A minimal sketch of such a two-part record appears after this list.)

Assess the performance of the learning system itself in ways that permit comparison with alternative learning systems and provide information useful to perfecting system components. Such assessments could tell designers whether learners are spending an unusually long time mastering a particular skill, or whether the instruction generates large numbers of bewildered questions.
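
To make the public/private split described above concrete, here is a minimal sketch of what such a two-part learner record might contain. The field names are hypothetical and meant only to illustrate the separation, not to define a standard.

    # Hypothetical learner record illustrating the split between a shareable
    # summative record and a protected private record. Field names are
    # illustrative only.
    from dataclasses import dataclass, field

    @dataclass
    class PublicRecord:                # summative; available to employers and instructors
        learner_id: str
        competencies: dict = field(default_factory=dict)  # skill -> level achieved
        evidence: list = field(default_factory=list)      # e.g., simulation transcripts

    @dataclass
    class PrivateRecord:               # released only at the learner's discretion
        learner_id: str
        background: str = ""
        strengths: list = field(default_factory=list)
        weaknesses: list = field(default_factory=list)
        preferences: dict = field(default_factory=dict)   # interests, learning styles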

Managing innovation in schools

Research to design and test new approaches to learning will be pointless if educational institutions are not willing or able to use them. Education markets are notoriously difficult to enter; they are highly fragmented and often highly political. Investors lost considerable amounts of money during the 1990s as companies greatly exaggerated what could be done quickly and underestimated the effort needed to sell learning-technology products to these unique markets. Many companies now have simply abandoned the field, and no private firm is making an investment in learning-technology research that approaches the scale needed for a serious development effort.

Marketing novel products in education is difficult for a number of reasons. Even sophisticated instructional institutions, such as research universities or major government training operations, have no tradition of managing innovation. The new technologies cannot have a significant impact on learning outcomes unless they are accompanied by systematic changes in approach to instruction and new roles for faculty and staff. It is likely that new specialties will be created, such as instructors who spend more time as tutors or members of design teams that build and test simulations than as classroom instructors. But the culture of most learning institutions resists the exploration of such options.

The problem is highlighted by the huge difference between the way in which educational institutions use new technologies and the way in which successful service industries, such as banking and insurance, have adapted to new technologies. The first corporate attempts to use new technology were largely efforts to automate existing work without recognizing that dramatic improvements in the process were possible. During the early 1990s, the economic literature was replete with studies showing that the investment was a wasteful fad—and a lot of it was. But tough competitive pressure forced businesses to rebuild around the new tools and economics, and these companies are now showing substantial productivity gains.

Unfortunately, education seems stuck in the first phase of this process. Massive public investments over the past few years have succeeded largely in providing most students with access to computers and connectivity to the Internet. But for the most part, little has been done to capture the potential of technology. Progress is likely to be slow, because the mechanisms that drive innovation in business simply do not work well in education markets.

New information technologies designed for delivering tailored financial services, customer-friendly support, and spectacular computer games have clearly demonstrated their potential to make education and training services more effective and affordable. Conventional markets, however, have failed to stimulate the research and testing needed to exploit these opportunities in education. Investors are doubly hesitant: they face a notoriously difficult market for educational products, and they are concerned that they will not be able to appropriate the benefits of basic research. This is a near-classic definition of a problem requiring public investment in research.

Federal help

Compared with the need, current federal research funding for learning technology is small and fragmented. For many years the National Science Foundation (NSF) has managed a small but highly effective set of programs in the field and recently has funded three “science of learning centers.” These new centers will study and model behavioral and brain processes; provide test beds for evaluating the use of computerized intelligent tutoring systems; and study neural processes and principles associated with the cognitive, linguistic, and social dimensions of learning. Appropriate to its mission, NSF concentrates its funding on cognitive science and other areas of basic research. No federal agency has a clear mission to support the applied research needed to move from theory to the development, testing, and implementation of innovations in learning technology. DOD has by far the best record in making effective use of learning technology, through support from DARPA and the Army-funded Institute for Creative Technologies for several small but promising applied research programs in learning technology. The Army, Navy, and intelligence services all have ambitious programs specialized for their unique needs. But taken as a whole, the federal research programs are small and poorly coordinated.

Pressured by industry and academic groups, the Department of Education and NSF joined Microsoft, Hewlett-Packard, and several major foundations to sponsor the development of a research plan detailing what would be needed to achieve ambitious long-term goals in enhancing education through technology. To build on this effort, the Departments of Education and Commerce held a major summit of corporate and academic leaders to identify ways to strengthen federal education-technology research programs. President Bush’s National Science and Technology Council is starting to follow up on these recommendations by conducting a careful inventory of federal research already under way in relevant areas. But even before the results are in, it is clear that there are major holes in the fabric.

The Digital Opportunity Investment Trust (DO IT) Act (S. 1023, H.R. 2512), introduced with bipartisan support in early 2005, proposes an entirely new approach. The DO IT bill would create an independent federal agency charged with managing an ambitious research program built around the research priorities identified by corporate and academic groups during the past few years. The program’s primary focus would be on applied research and on the testing needed to ensure that the innovations actually translate into improved learning. Progress in this regard can be achieved by following the model of “spiral development” that has worked well in other fields of applied research, building pilot applications and thoroughly evaluating them. This effort will require close collaboration with educational institutions and the academic and commercial research community. The effort also will require confronting a host of thorny policy issues, including the management of intellectual property and development of technical standards for interoperability of records and software and reusable learning objects.

The task of creating practical markets for innovations in learning technology will fall primarily on state and corporate program managers. It is hoped that some of the federal demonstration programs can be conducted in ways that encourage participating organizations to explore basic changes in the way in which they approach education and training.

The most powerful tool available to the federal government is wise management of its own training programs. DOD invests at least $50 billion annually in education and training. The overwhelming majority of this spending has nothing to do with skills needed in battle but supports training in areas such as financial management and engineering that are essentially identical to civilian equivalents. But DOD has not managed to design or implement a coherent plan to develop and deploy learning technology over the coming decade. Lacking a commitment to such a plan, DOD has not been able to use its market power effectively to drive change in learning-technology products, and it has never organized an R&D effort commensurate with the need.

The Department of Homeland Security (DHS) is in even worse shape, despite the flexibility it enjoys by being a new organization. New security needs create enormous training challenges, because many different kinds of people need new skills and regular retraining. Simulations are particularly important for reinforcing skills that are seldom, if ever, required during routine public health and safety operations. Unfortunately, DHS has not yet acknowledged that research in improving training should be an integral part of its R&D mission.

Powerful economic forces are driving spectacular advances in computer processor power, mobile devices, and the software needed to deliver entertainment, answer consumer questions, and run simulations for science and engineering. But these pieces will not self-assemble into the tools needed for education without an adequately funded, well-managed program of federal research, development, and demonstration in learning science and technology. The absence of a coherent national program to search for solutions in this area is, without question, the largest single gap in the nation’s R&D program.

Bolstering U.S. Supercomputing

The nation’s needs for supercomputers to strengthen defense and national security cannot be satisfied with current policies and spending levels.

In November 2004, IBM’s Blue Gene/L, developed for U.S. nuclear weapons research, was declared the fastest supercomputer on the planet. Supercomputing speed is measured in teraflops: trillions of calculations per second. Blue Gene/L achieved 70.72 teraflops on one computation, nearly double the 35.86 teraflops of Japan’s Earth Simulator, the previous record holder. Despite Blue Gene/L’s blazing speed, however, U.S. preeminence in supercomputing, which is imperative for national security and indispensable for scientific discovery, is in jeopardy.

The past decade’s policies and spending levels are inadequate to meet the growing U.S. demand for supercomputing in critical national areas such as intelligence analysis, oversight of nuclear stockpiles, and tracking climate change. There has been little long-term planning for supercomputing needs and inadequate coordination among relevant federal agencies. These trends have reduced opportunities to make the most of this technology. The federal government must provide stable, long-term funding for supercomputer design and manufacture, as well as support for vendors of supercomputing hardware and software.

Supercomputers combine extremely fast hardware with software that can solve the most complex computational problems. Among these problems are simulating and modeling physical phenomena such as climate change and explosions, analyzing massive amounts of data from sources such as national security intelligence and genome sequencing, and designing intricate engineered products. Supercomputers lead not only in performance but also in cost: The price tag on the Earth Simulator has been estimated at $500 million.

Supercomputing has become a major contributor to the economic competitiveness of the U.S. automotive, aerospace, medical, and pharmaceutical industries. The discovery of new techniques and substances, as well as cost reduction through simulation rather than physical prototyping, underlies progress in a number of economically important areas. Many technologies initially developed for supercomputers have enriched the mainstream computer industry. For example, multithreading and vector processing are now used on personal computer chips. Application codes that required supercomputing performance when they were developed are now routinely used in industry. This trickle-down process is expected to continue and perhaps even intensify.

But progress in supercomputing has slowed in recent years, even though today’s computational problems require levels of scaling and speed that stress current supercomputers. Many scientific fields need performance improvements of up to seven orders of magnitude to achieve well-defined computational goals. For example, performance measured in petaflops (thousands of teraflops) is necessary to conduct timely simulations that, in the absence of real-world testing, will certify to the nation that the nuclear weapons stockpile is safe and reliable. Another example is climate modeling for increased understanding of climate change and to enable forecasting. A millionfold increase in performance would allow reliable prediction of regional and local effects of certain pollutants on the atmosphere.
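
The arithmetic behind such targets is straightforward. For a hypothetical simulation requiring 10^21 floating-point operations (the workload is illustrative, not taken from any particular code), the difference between today’s record-setting 70.72-teraflops machine and a petaflops-class machine is the difference between an impractical run and a manageable one:

    \[ \frac{10^{21}\ \text{operations}}{70.72 \times 10^{12}\ \text{ops/s}} \approx 1.4 \times 10^{7}\ \text{s} \approx 164\ \text{days}, \qquad \frac{10^{21}\ \text{operations}}{10^{15}\ \text{ops/s}} = 10^{6}\ \text{s} \approx 12\ \text{days}. \]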

The success of the killer micros

The disheartening state of supercomputing today is largely due to the swift rise of commodity-based supercomputing. That is clear from the TOP500, a regularly updated list of the 500 most powerful computer systems in the world, as measured by performance on the LINPACK dense linear algebra benchmark (an imperfect but widely used measure of performance on real-world computational problems). Most systems on the TOP500 list are now clusters, systems assembled from commercial off-the-shelf processors interconnected by off-the-shelf switches. Fifteen years ago, almost all TOP500 systems were custom supercomputers, built of custom processors and custom switches.

Cluster supercomputers are a prime example of Moore’s law, the observation that transistor density, and with it processing power, doubles roughly every 18 months. They have benefited from the huge investments in commodity processors and rapid increases in processor performance. For many applications, cluster technology offers supercomputing performance at the cost/performance ratio of a personal computer. For applications with the characteristics of the LINPACK benchmark, the cost of a cluster can be an order of magnitude lower than the cost of a custom supercomputer with the same performance. However, many important supercomputing applications have characteristics that are very different from those of LINPACK; these applications run well on custom supercomputers but achieve poor performance on clusters.
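
Stated as a formula (using the popular 18-month doubling period cited above, and treating it as a rough empirical trend rather than a law), commodity processor performance grows approximately as

    \[ P(t) \approx P_0 \cdot 2^{t/1.5}, \qquad \text{with } t \text{ in years, so that } 2^{10/1.5} \approx 100 \text{ over a decade.} \]

That roughly hundredfold improvement per decade, inherited nearly for free from the mass market, is why clusters built from off-the-shelf parts have overtaken custom designs on price/performance for cache-friendly workloads.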

The success of clusters has reduced the market for custom supercomputers so much that its viability is now heavily dependent on government support. At less than $1 billion annually, the market for high-end systems is a minuscule fraction of the total computer industry, and according to International Data Corporation, more than 80 percent of high-end system purchases in 2003 were made by the public sector. Historically, the government has ensured that supercomputers are available for its missions by funding supercomputing R&D and by forging long-term relationships with key providers. Although active government intervention has risks, it is necessary in situations like this, where the private market is nonexistent or too small to ensure a steady flow of critical products and technologies. This makes sense because supercomputers are public goods, an essential component of government missions ranging from basic research to national security.

Yet government support for the development and acquisition of such platforms has shrunk. And computer suppliers are reluctant to invest in custom supercomputing, because the market is so small, the financial returns are so uncertain, and the opportunity costs of moving skilled personnel away from products designed for the broader IT market are considerable. In addition, the supercomputing market has become unstable, with annual variations of more than 20 percent in sales. Consequently, companies that concentrate primarily on developing supercomputing technologies have a hard time staying in business. Currently, Cray, which almost went out of business in the late 1990s, is the only U.S. firm whose chief business is supercomputing hardware and software and the only U.S. firm that is building custom supercomputers. IBM and Hewlett-Packard produce commodity-based supercomputer systems as one product line among many. Most supercomputing applications software comes from the research community or from the applications developers themselves.

The limits of clusters

For increasingly important problems such as computations that are critical for nuclear stockpile stewardship, intelligence analysis, and climate modeling, an acceptable time to solution can be achieved only by custom supercomputers. Custom systems can sometimes reduce computation time by a factor of 10 or more, so that a computation that would take a cluster supercomputer a month is completed in a few days. Slower computation might cost less, but it also might not meet deadlines in intelligence analysis or allow research to progress fast enough.

This speed problem is getting worse. As semiconductor and packaging technology gets better, different components of a supercomputer improve at different rates. In particular, processor speed increases much faster than memory access time. Custom supercomputers overcome this problem with a processor architecture that can support a very large number of concurrent memory accesses to unrelated memory locations. Commodity processors support a modest number of concurrent memory accesses but reduce the effective memory access time by adding large and often multilevel cache memory systems. Applications that cannot take good advantage of the cache scale in performance with memory speed, not processor speed. As the gap between processor and memory performance continues to grow, more applications that now make good use of a cache will be limited by memory performance. The problem affects all applications, but it affects scientific computing and supercomputing sooner because commercial applications usually can take better advantage of caches. A similar gap affects global communication: Although processors run faster, the physical dimensions of the largest supercomputers continue to increase, whereas the speed of light, which bounds the speed of interprocessor communication, does not increase.
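To make the memory-gap problem concrete, consider a small illustrative C program (a sketch of our own, not drawn from the committee's report) that sums the same large array twice: once by streaming through it in order, which caches and hardware prefetchers handle well, and once by chasing a random permutation through it, which forces roughly one main-memory access per element. On a typical commodity processor the second loop runs many times slower; that is the behavior that limits cache-unfriendly scientific applications on clusters.

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N (1 << 24)   /* 16 million elements: far larger than any cache */

int main(void)
{
    size_t *next = malloc((size_t)N * sizeof *next);
    double *data = malloc((size_t)N * sizeof *data);
    if (!next || !data) return 1;

    for (size_t i = 0; i < N; i++) { next[i] = i; data[i] = 1.0; }

    /* Sattolo's shuffle yields a single cycle that visits every element,
       so the pointer chase below cannot settle into a small, cache-resident
       loop. Two rand() calls are combined so indices span the whole array. */
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = (((size_t)rand() << 15) + (size_t)rand()) % i;
        size_t t = next[i]; next[i] = next[j]; next[j] = t;
    }

    clock_t t0 = clock();
    double sum = 0.0;
    for (size_t i = 0; i < N; i++)           /* streaming: cache-friendly */
        sum += data[i];

    clock_t t1 = clock();
    double sum2 = 0.0;
    for (size_t i = 0, k = 0; i < N; i++) {  /* pointer chase: roughly one cache miss per step */
        sum2 += data[k];
        k = next[k];
    }
    clock_t t2 = clock();

    printf("streaming: %.2f s   random chase: %.2f s   (checks: %.0f %.0f)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC, sum, sum2);
    free(next);
    free(data);
    return 0;
}

Custom supercomputers attack exactly this problem by keeping very many memory references in flight to hide latency, whereas commodity processors depend on the cache working well.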

Continued leadership in essential supercomputing technologies will require an industrial base of multiple domestic suppliers.

As transistors continue to shrink, hardware fails more frequently; this affects very large, tightly coupled systems such as supercomputers more than smaller or less-coupled systems. Also, the ability of microprocessor designers to translate the increasing number of transistors on a chip into faster individual processors seems to have reached its limits; clock rates are no longer rising as rapidly as they once did, so vendors now leverage the increased transistor count by putting an increasing number of processor cores on each chip. As a result, the number of processors per system will need to increase rapidly in order to sustain past rates of supercomputer performance improvement. But current algorithms and applications do not scale easily to systems with hundreds of thousands of processors.

Although clusters have reduced the hardware cost of supercomputing, they have increased the programming effort needed to implement large parallel codes. Scientific codes and the platforms on which they run have become more complex, but the application development environments and tools used to program complex parallel scientific codes are generally less advanced and less robust than those used for general commercial computing. As a result, software productivity is low. Custom systems could support more efficient parallel programming models, but this potential is largely unrealized. No higher-level programming notation that adequately captures parallelism and locality (the two main algorithmic concerns of parallel programming) has emerged. The reasons include the very low investment in supercomputing software such as compilers for parallel systems, the desire to maintain compatibility with prevalent cluster architecture, and the fear of investing in software that runs only on architectures that may disappear in a few years. The software problem will worsen as higher levels of parallelism are required and as global interprocessor communication slows down relative to processor performance.
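To illustrate the programming burden described above, the following sketch (our own illustration, not taken from the report) computes a distributed dot product in C using MPI, the message-passing interface that dominates cluster programming today. Even for this trivial kernel, the programmer must manage data distribution (locality) and collective communication (parallelism) explicitly; these are precisely the details a good higher-level notation would capture.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Each process owns only its local slice of the global vectors. */
    const long local_n = 1000000;
    double *x = malloc(local_n * sizeof *x);
    double *y = malloc(local_n * sizeof *y);
    if (!x || !y) MPI_Abort(MPI_COMM_WORLD, 1);
    for (long i = 0; i < local_n; i++) { x[i] = 1.0; y[i] = 2.0; }

    /* Local partial sum ... */
    double local_dot = 0.0;
    for (long i = 0; i < local_n; i++)
        local_dot += x[i] * y[i];

    /* ... combined across all processes with an explicit collective. */
    double global_dot = 0.0;
    MPI_Allreduce(&local_dot, &global_dot, 1, MPI_DOUBLE, MPI_SUM,
                  MPI_COMM_WORLD);

    if (rank == 0)
        printf("dot product across %d processes: %.1f\n", nprocs, global_dot);

    free(x);
    free(y);
    MPI_Finalize();
    return 0;
}

Compiled with mpicc and launched with mpirun, the program runs unchanged on 4 or 4,000 processes, but every decision about where data lives and when processes communicate remains the programmer's responsibility.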

Thus, there is a clear need for scaling and software improvements in supercomputing. New architectures are needed to cope with the diverging improvement rates of various components such as processor speed versus memory speed. New languages, new tools, and new operating systems are needed to cope with the increased levels of parallelism and low software productivity. And continued improvements are needed in algorithms to handle larger problems; new models that improve performance, accuracy, or generality; and changing hardware characteristics.

It takes time to realize the benefits of research into these problems. It took more than a decade from the creation of the first commercial vector computer until vector programming was well supported by algorithms, languages, and compilers. Insufficient funding for the past several years has emptied the research pipeline. For example, the number of National Science Foundation (NSF) grants supporting research on parallel architectures has been cut in half over little more than 5 years; not coincidentally, the number of scientific publications on high-performance computing has been reduced by half as well. Although many of the top universities had large-scale prototype projects exploring high-performance architectures a decade ago, no such effort exists today in academia.

Making progress in supercomputing

U.S. needs for supercomputing to strengthen defense and national security cannot be satisfied with current policies and levels of spending. Because these needs are distinct from those of the broader information technology (IT) industry, it is up to the government to ensure that the requisite supercomputing platforms and technologies are produced. Government agencies that depend on supercomputing, together with Congress, should take primary responsibility for accelerating advances in supercomputing and ensuring that there are multiple strong domestic suppliers of both hardware and software.

The federal agencies that depend on supercomputing should be jointly responsible for the strength and continued evolution of the U.S. supercomputing infrastructure. Although the agencies that use supercomputers have different missions and requirements, they can benefit from the synergies of coordinated planning, acquisition strategies, and R&D support. An integrated long-range plan—which does not preclude individual agency activities and priorities—is essential to leverage shared efforts. Progress requires the identification of key technologies and their interdependencies, roadblocks, and opportunities for coordinated investment; in other words, it requires a technology roadmap. The government agencies responsible for supercomputing should underwrite a community effort to develop and maintain this roadmap. It should be assembled with wide participation from researchers, developers of both commodity and custom technologies, and users. It should be driven both top-down from application needs and bottom-up from technology barriers. It should include measurable milestones to guide the agencies and Congress in making R&D investment decisions.

If the federal government is to ensure domestic leadership in essential supercomputing technologies, a U.S. industrial base of multiple domestic suppliers that can build custom systems must be assured. Not all of these suppliers must be vertically integrated companies such as Cray that design everything from chips to compilers. The viability of these vendors depends on stable long-term government investments at adequate levels; both the absolute investment level and its predictability matter because there is no alternative support. Such stable support can be provided through government funding of R&D expenses, through steady procurements, or both. The model proposed by the British UKHEV initiative, whereby the government solicits and funds proposals for the procurement of three successive generations of a supercomputer family over 4 to 6 years, is one way to reduce that instability.

The creation and long-term maintenance of the software that is key to supercomputing require the support of the federal agencies that are responsible for supercomputing R&D. That software includes operating systems, libraries, compilers, software development and data analysis tools, application codes, and databases. Larger and more coordinated investments could significantly improve the productivity of supercomputing platforms. The models for software support are likely to be varied— vertically integrated vendors that produce both hardware and software, horizontal vendors that produce software for many different hardware platforms, not-for-profit organizations, or software developed on an open-source model. No matter which model is used, however, stability and continuity are essential. The need for software to evolve and be maintained over decades requires a stable cadre of developers with intimate knowledge of the software.

Because the supercomputing research community is small, international collaborations are important, and barriers to international collaboration on supercomputer research should be minimized. Such collaboration should include access to domestic supercomputing systems for research purposes. Restrictions on supercomputer imports have not benefited the United States, nor are they likely to do so. Export restrictions on supercomputer systems built from widely available components that are not export-controlled do not make sense and might damage international collaboration. Loosening restrictions need not compromise national security as long as appropriate safeguards are in place.

Supercomputing is critical to advancing science. The U.S. government should ensure that researchers with the most demanding computational requirements have access to the most powerful supercomputing systems. NSF supercomputing centers and Department of Energy (DOE) science centers have been central in providing supercomputing support to scientists. However, these centers’ missions have broadened even as their budgets have remained flat, and they are under pressure to support an increasing number of users. They need stable funding, sufficient to support an adequate supercomputing infrastructure. Finally, science communities that use supercomputers should have a strong say in and a shared responsibility for providing adequate supercomputing infrastructure, with budgets for acquisition and maintenance of this infrastructure clearly separated from the budgets for IT research.

In fiscal year (FY) 2004, the aggregate U.S. investment in high-end computing was $158 million, according to an estimate in the 2004 report published by the High-End Computing Revitalization Task Force. (This task force was established in 2003 under the National Science and Technology Council to provide a roadmap for federal investments in high-end computing. The proposed roadmap has had little impact on federal investments so far.) This research spending included hardware, software, and systems for basic and applied research, advanced development, prototypes, and testing and evaluation. The task force further noted that federal support for high-end computing activities had decreased from 1996 to 2001. The report of our committee estimated that an investment of roughly $140 million annually is needed for supercomputing research alone, excluding the cost of research into applications using supercomputing, the cost of advanced development and testbeds, and the cost of prototyping activities (which would require additional funding). A healthy procurement process for top-performing supercomputers that would satisfy the computing needs of the major agencies using supercomputing was estimated at about $800 million per year. Additional investments would be needed for capacity supercomputers in a lower performance tier.

The High-End Computing Revitalization Act passed by Congress in November 2004 is a step in the right direction: It called for DOE to establish a supercomputing software research center and authorized $165 million for research. However, no money has been appropriated for the recommended supercomputing research. The High-Performance Computing Revitalization Act of 2005 was introduced in the House Committee on Science in January 2005. This bill amends the High-Performance Computing Act of 1991 and directs the president to implement a supercomputing R&D program. It further requires the Office of Science and Technology Policy to identify the goals and priorities for a federal supercomputing R&D program and to develop a roadmap for high-performance computing systems. However, the FY 2006 budget proposed by the Bush administration does not provide the necessary investment. Indeed, the budget calls for DOE’s Office of Science and the department’s Advanced Simulation and Computing program to be reduced by 10 percent from the FY 2005 level.

Immediate action is needed to preserve the U.S. lead in supercomputing. The agencies and scientists that need supercomputers should act together and push not only for an adequate supercomputing infrastructure now but also for adequate plans and investments that will ensure that they have the tools they need in 5 or 10 years.


Susan L. Graham and Marc Snir cochaired the National Research Council (NRC) committee that produced the report Getting Up to Speed: The Future of Supercomputing. Graham is Pehong Chen Distinguished Professor of Electrical Engineering and Computer Science at the University of California, Berkeley, and former chief computer scientist of the multi-institutional National Partnership for Advanced Computational Infrastructure, which ended in October 2004. Snir is the Michael Faiman and Saburo Muroga Professor and head of the Department of Computer Science at the University of Illinois, Urbana-Champaign. Cynthia A. Patterson was the NRC committee’s study director.

Secrets of the Celtic Tiger: Act Two

Ireland’s brilliant catch-up strategy of the 1990s offers important lessons for countries that want to build a modern technology-based economy. But Ireland is not growing complacent. It knows that a decade of steady and strong economic growth, high employment, and success in recruiting foreign investment hardly guarantees future results. Ireland is now supporting R&D activities designed to help it prosper not simply for years but for generations. This new effort might be instructive for the United States and other technology leaders.

Ireland’s growth in the 1990s was remarkable. From 1990 to 2003, its gross domestic product (GDP) more than tripled, from €36 billion to €138 billion, as did its GDP per capita, which rose from €10,400 to €35,200. During the same period, merchandise exports grew 4.5 times, from €18 billion to €81 billion. Meanwhile, Ireland’s debt as a percentage of GDP fell from 96 percent to 33 percent; the total labor force rose nearly 50 percent, to 1.9 million; and the rate of unemployment dropped from 12.9 percent to 4.8 percent. And inflation barely changed, from 3.4 percent to 3.5 percent.

Perhaps even more impressive was Ireland’s growth relative to the United States and the European Union (EU). Between 1994 and 2003, average annual real economic growth in the EU was just over 2 percent, in the United States just over 3 percent, and in Ireland 8 percent. Whereas Ireland’s income levels once stood at 60 percent of the EU average, they now stand at 135 percent of the average.

This rapid progress is no doubt due in part to the fact that Ireland had farther to travel than many other countries to become a 21st-century economic competitor. When the 1990s began, tourism and agriculture still dominated Ireland’s economy. All the while, however, Ireland possessed a combination of strengths that, it seems clear in retrospect, had long been ready to blossom once a more knowledge-based economic system began to emerge.

Four sources of growth

First among these attributes is Ireland’s excellent educational system. Though it is not perfect, and passionate debates continue about how the system can best serve Ireland’s changed society, it is in some respects a model. Today, 48 percent of the Irish population has attained college-level education, compared with less than 40 percent in countries such as the United Kingdom, United States, Spain, Belgium, and France, and less than 25 percent in Germany.

Ireland’s success in the 1990s, in fact, would not have been possible if the country had not taken a crucial step 40 years ago, when it began a concerted effort to increase educational participation rates and introduce programs that would match the abilities of students to the needs of a global economy and advanced, even high-tech, enterprises. At the same time, the country started making its already demanding K-16 education system more rigorous, creating links between industry and education and formalizing and supporting workplace education.

What happened thereafter is no coincidence. In the mid-1960s, fewer than 20,000 students were attending college in Ireland. By 1999, the number had risen sixfold, to 112,000. In 1984-1985, only 40 percent of 18-year-olds in Ireland were engaged in full-time education. Ten years later, the figure was 64 percent. During the first 5 years of the 1990s, the total number of students engaged in college-level programs grew by 51 percent. By 1995, Ireland had a larger share of its population with science-related qualifications than any other country in the 30-member Organization for Economic Cooperation and Development (OECD). A long-term commitment to education provided the foundation for the boom that followed.

Second, Ireland was ready to prosper when the knowledge-based economy emerged because of a combination of benefits it enjoyed as one of 15 members (the total is now 25) of the EU and as a nation with a historically strong cultural and political connection to the United States, where 40 million people trace some part of their heritage to Ireland. The EU made massive investments in Ireland, as the single-market system followed its plan of shifting a portion of EU contributions from richer members to those in need of development, on the principle that growing markets would benefit all members. This investment transformed infrastructure, including roads, ports, and communications, and gave overseas investors reason to look to Ireland as a haven of opportunity. Meanwhile, as an English-speaking country with a unique bond to the United States, Ireland was already an enticing marketplace for U.S. enterprises. When the technology boom of the 1990s took hold in the United States, conditions were ripe for it to spread to Ireland as well, especially given the country’s other advantages.

Third, a consistent political and public commitment to investment has existed in Ireland for decades. The country’s investment agency, IDA Ireland, for example, was established in 1969 and has played an important role in recruiting U.S. corporations. Today, there is hardly a leading U.S. manufacturer of computer software or hardware, pharmaceuticals, electronics, or medical equipment, among other knowledge-based businesses, without thriving operations in Ireland. IDA Ireland has meanwhile been able to develop relationships with overseas companies across the globe and has established offices in the United States and several other countries to serve clients and attract further investment.

Fourth, Ireland was shrewd enough to capitalize on these strengths by lowering corporate taxes. By 2003, Ireland’s corporate tax rate was 12.5 percent, covering both manufacturing and services (with a rate of 25 percent applying to passive income, such as that from dividends). This change, along with sustained efforts to reduce payroll and other business taxes, gave U.S. manufacturers operating in Ireland a financially competitive platform from which to serve the EU single market of 470 million people. In 1990, about 11,000 companies were exporting from Ireland. By 2002, the number had risen to 70,000.

What next?

Ireland now accounts for roughly one-quarter of all U.S. foreign direct investment (FDI) in Europe, including almost one-third of all FDI in pharmaceuticals and health care. Nine of the world’s top 10 drug companies have plants in Ireland. One-third of all personal computers sold in Europe are manufactured in Ireland, and OECD data indicate that the country is the world’s biggest software exporter, ahead of the United States. These statistics become even more impressive when one realizes that Ireland’s population is just over four million, about the same as that of Kentucky.

But Ireland knows that 10 to 15 years of growth, however important, must be seen only as a beginning. Its political and business leaders well remember the days of economic stagnation and relative poverty, and they are not ready to relax. In fact, across all government departments, there is an impressive commitment to policies, programs, and investments designed to make Ireland an enduring knowledge society. The investments include support for R&D in scientific and engineering areas that capitalize on the same conditions that benefited the country in the past decade: a strong educational system, aggressive economic strategies, partnerships with nations rich in knowledge-based businesses, and, above all, highly skilled talent.

Watching Ireland imitate what is best in the U.S. system might be a helpful reminder to U.S. policymakers to preserve and strengthen their own government efforts.

With this philosophy, the Irish government established Science Foundation Ireland (SFI) in 2000 as part of the National Development Plan 2000–2006 (NDP). This investment followed a year-long study by a group of business, education, and government leaders appointed by the prime minister and deputy prime minister. The group’s job was to examine how an infusion of government funding could best improve Ireland’s long-term competitiveness and growth. To its credit, the leadership saw the promise in what the group’s report described.

The NDP funding that is focused on research, technological development, and innovation programs will total approximately €1 billion by 2006, a considerable sum for a country of Ireland’s size. SFI’s portion totals €635 million, or approximately $820 million, and SFI’s investment does not have to stand alone. To the contrary, 18 months before SFI came into being, the government began building R&D infrastructure and it continues to do so. Through the Higher Education Authority, Ireland has already committed almost €600 million to creating new labs and research space. As a result, SFI has been able to be aggressive in helping higher-education institutions recruit researchers to build programs in these facilities.

SFI’s ultimate goal is to foster an R&D culture by investing in superb individual researchers and their teams. We want them to uncover ideas that attract grants and inspire patents. We want them to recruit and train academic scientists and engineers from home and abroad and to explore daring ideas as well as ideas that can lead to knowledge-based businesses, create jobs, and generate exports. Perhaps most important, we want them to help inspire Irish students to pursue careers in science and engineering.

Importing ideas

The model for SFI, not surprisingly, is the U.S. National Science Foundation (NSF). Ireland recognized the contribution that NSF has made to U.S. science, economic growth, and talent development, among other areas. Because Ireland cannot afford the comprehensive scope of NSF, the Irish government has to target research investments in areas with the most likely scientific and economic impact, and where the country already has concentrated skills and industrial interest, such as computers, electronics, pharmaceuticals, and medical equipment. SFI’s initial mission was therefore to support world-class research programs in science and engineering fields that underpin biotechnology and information and communications technology. The SFI mission recently expanded to include the newest areas of science and engineering. This expanded mission will help Ireland build the talent needed for the long term and respond even better to creativity.

We also interpret our mission broadly. Our selection process lets innovation and imagination earn grants, using international experts to judge proposals and the likelihood of success. To its credit, SFI’s international board has insisted that we not make selections based on an averaging of reviewer scores, but instead aim to invest in performance and excellence and take risks. We follow the NSF model by having technical staff make final grant decisions based on the outside review, rather than having reviewers’ scores dictate the result. This approach helps ensure that all questions are addressed before final rankings are made and ensures the accountability of the technical staff, an essential feature of the NSF system.

We also share with teams whose proposals we deny a summary of the weaknesses that the experts found in the submissions. This feedback loop has generated successful proposals from rejected ones and, in so doing, created stronger research measured against an objective standard. It is worth noting that as common as such practices might be in the United States, they seem uncommon in European research programs.

In the biotechnology areas, we are interested in work in a range of fields, from DNA chips to drug delivery, from biosensors to bioremediation. At the same time, we have particular interest in research that draws on special capabilities in Ireland’s academic and industrial system. We currently give special emphasis to agri-food, cell cycle control, enabling technologies, medical biotechnology/ biopharmaceuticals/therapeutics, microbiology, and neuro/developmental biology. But we are determined to stay open to the best ideas of the best researchers.

The same is true for our grants in information and communications technology (ICT). We take ICT to include broadband, wireless, and mobile transmission; parallel processing systems; engineering for reliability of data transfer; wearable sensors; computer modeling; distributed networking; computer-based training; nanoscale assembly; and human language understanding. Our specific focus is currently on the following areas:

  • Novel adaptive technologies for distributed networking of people, machines and sensors, and other devices.
  • Software engineering for improved reliability, security, and predictability of all software-based systems.
  • Machine learning, semantic web technologies, and image processing to extract information from massive data sets and to enable adaptive systems and significant future applications.
  • Nanotechnology breakthroughs in device design and information processing.

In both ICT and biotech, what drives us to specific areas is the sense that they will produce the greatest prospects for technological and economic development in the next few decades. But as the goals of researchers evolve, so will the proposals that get our attention. We also are open to, and actually encourage, proposals that recognize that the next major leaps could occur in areas where ICT and biotechnology overlap, in what is sometimes called digital genetics.

We support research aggressively. Our portfolio includes professorships that range up to €2.5 million over 5 years to help attract outstanding scientists and engineers from outside the country to Irish universities and institutes of technology, and principal investigator grants that are normally worth at least €250,000 per year over 3 or 4 years for researchers who are working in or will work in Ireland. We are now funding 450 projects through grants totaling €450 million. These projects include more than 1,190 individuals, research teams, centers, and visiting researchers from Australia, Belgium, Canada, England, Germany, Japan, Russia, Scotland, Slovakia, South Africa, Switzerland, and the United States.

This funding includes the Centers for Science, Engineering, and Technology (CSET) program, which connects researchers in academia and industry through grants worth as much as €20 million over 5 years, renewable for an additional term of up to 5 years. The idea is to fund centers that can exploit opportunities for discovery and innovation as no smaller research project can, link academic and industry researchers in promising ways, generate products of value in the marketplace, and contribute to the public’s interest in science and technology. These centers give Ireland another recruitment tool, again building on the relationships it has established around the world. Already, the CSETs have led to research partnerships in Ireland with Bell Labs, HP, Intel, Medtronic, and Procter & Gamble.

As such work suggests, SFI has special obligations as Ireland’s first extended national commitment to a public R&D enterprise. The foundation must prove itself a reliable partner with the other sectors crucial to this enterprise, notably related government enterprises, the education system, and the industrial and business sectors. We engage with these sectors constantly, including by having their representatives on review committees, and, as with the CSET program, offering grants that can both draw interest among researchers and help in industrial R&D recruitment. Our board also includes leaders from every sector, both within and outside Ireland, and has been pivotal in determining the emphasis and structure of our programs.

The foundation’s work is not occurring in a vacuum, either. Investments in the science and technology infrastructure are continuing, and the government last year established the position of national science adviser. This adviser will report to a cabinet-level committee dedicated to science. At the highest levels of Ireland’s government, there is a deep conviction that R&D is crucial to the country’s future.

Europe’s next step?

Ireland’s early results have not gone unnoticed by its neighbors. Intrigued by what Ireland and SFI have begun, the European Commission asked me to lead an expert group in evaluating a potential EU-wide research-funding scheme. The program would pit researchers in Europe against one another for certain EU grants, using competition to drive up the value and number of ideas, patents, and products. The report, Frontier Research: The European Challenge, was published in April 2005. It places considerable weight on the value of independent outside expert review teams and the importance of having technical staff make final grant decisions. As in SFI, this process will allow for appropriate follow-up on issues raised by technical experts and, in principle, encourage more risk-taking. Europe has never tried a pan-national competitive approach before, or widely employed this decision model, so it will be interesting to see whether the EU can apply the NSF approach across multiple countries with a common interest in challenging U.S. R&D dominance.

Obviously, though, Europe is not alone in chasing the United States, and Ireland is not the only country using an NSF prototype to guide its research investments. Other nations, too, have seen how research can become the basis for a competitive knowledge-based system, innovation, and growth. Indeed, watching Ireland imitate what is best in the U.S. system might be a helpful reminder to U.S. policymakers to preserve and strengthen the government efforts that have contributed so much to the nation’s economic success.

Readers of this magazine know that higher education in India and China is advancing at a stunning pace (“Asian Countries Strengthen Their Research,” Issues, Summer 2004). In 1999, U.S. universities awarded 220,000 bachelor’s degrees in science and engineering. China awarded 322,000, and India awarded 251,000. Just two decades ago, these countries awarded only a small fraction as many. China’s college enrollment grew by two-thirds between 1995 and 2000. India’s increased by more than a third between 1996 and 2002, to 8.8 million.

In 2003, the United States lost its status as the world’s leading recipient of foreign direct investment. The new leader is China—a worrisome sign of how the market now judges future opportunity.

At the same time, U.S. struggles in education—the basis of any society’s innovation culture—continue. It is remarkable to consider that at present rates, only 18 out of every 100 U.S. ninth-graders will graduate in 10 years with either a bachelor’s or an associate’s degree. Can any country afford to continue squandering such talent?

In a January 2005 study of U.S. education and competitiveness, the Business–Higher Education Forum—an organization of top executives from U.S. businesses, colleges, universities, and foundations—observed, “[T]he United States is losing its edge in innovation and is watching the erosion of its capacity to create new scientific and technological breakthroughs. Increased global competition, lackluster performance in mathematics and science education, and a lack of national focus on renewing its science and technology infrastructure have created a new economic and technological vulnerability as serious as any military or terrorist threat.”

Ireland, I believe, appreciates the new severity of competition. Its recent experience with innovation and growth built on education has given it reason not to slip backward. It has begun to believe the potential truth of what Juan Enriquez of Harvard University wrote at the start of this decade, “The future belongs to small populations who build empires of the mind.”

Ireland is acting as if it knows that empires of the mind emerge from the commitment to science and engineering that R&D requires, that powerful R&D starts with talent, and that a successful education system allows talent to flourish. The United States showed countries such as Ireland, among many others, the power of these connections. Other large and growing countries are now applying this lesson. I hope that the United States does not forget it.

A Reality Check on Military Spending

For fiscal year (FY) 2005, military spending will be nearly $500 billion, which is greater in real terms than during any of the Reagan years and surpassed only by spending at the end of World War II in 1945 and 1946 and during the Korean War in 1952. The White House is asking for an FY 2006 Department of Defense (DOD) budget of $413.9 billion, which does not include funding for military operations in Iraq and Afghanistan.

The administration argues that increased military spending is a necessary part of the war on terrorism. But such logic assumes that the war on terrorism is primarily a military war to be fought by the Army, Navy, Air Force, and Marines. The reality is that large conventional military operations will be the exception rather than the rule in the war on terrorism. Instead, the military’s role will mainly involve special operations forces in discrete missions against specific targets, not conventional warfare aimed at overthrowing entire regimes. The rest of the war aimed at dismantling and degrading the al Qaeda terrorist network will require unprecedented international intelligence and law enforcement cooperation, not expensive new planes, helicopters, and warships.

Therefore, an increasingly large defense budget (DOD projects the budget to grow to more than $487 billion by FY 2009) is not necessary to fight the war on terrorism. Nor is a huge budget necessary to protect the United States from traditional nation-state military threats, because the United States has no major military rivals, is relatively secure from conventional military attack, and has a strong nuclear deterrent force. Major cuts in the defense budget would be possible if the United States substantially reduced the number of troops stationed abroad and embraced a “balancer-of-last-resort” strategy, in which it would intervene overseas only if its vital interests were threatened and in which countries in Europe, Asia, and elsewhere would take greater responsibility for their own regional security.

According to the International Institute for Strategic Studies (IISS), in 2003 (the last year for which there is comparative worldwide data) total U.S. defense expenditures were $404.9 billion. This amount exceeded the combined defense expenditures of the next 13 countries and was more than double the combined defense spending of the remaining 158 countries in the world (Figure 1). The countries closest to the United States in defense spending were Russia ($65.2 billion) and China ($55.9 billion). The next five countries—France, Japan, the United Kingdom, Germany, and Italy—were all U.S. allies. In fact, the United States outspent its North Atlantic Treaty Organization (NATO) allies by nearly 2 to 1 ($404.9 billion versus $221.1 billion) and had friendly relations with 12 of the 13 countries, which included another NATO ally, Turkey, as well as South Korea and Israel. Finally, the combined defense spending of the remaining “Axis of Evil” nations (North Korea and Iran) was about $8.5 billion, or 2 percent of U.S. defense expenditures.

Such lopsided U.S. defense spending needs to be put in perspective relative to the 21st-century threat environment. With the demise of the Soviet Union, the United States no longer faces a serious military challenger or global hegemonic threat. President Vladimir Putin has charted a course for Russia to move closer to the United States and the West, both politically and economically, so Russia is not the threat that the former Soviet Union was. Indeed, Russia now has observer status with NATO, a dramatic change given that the NATO alliance was created to contain the former Soviet Union. And in May 2002, Russia and the United States signed the Strategic Offensive Reductions Treaty (SORT) to reduce their strategic nuclear arsenals to between 1,700 and 2,200 warheads each by December 2012. According to IISS, “despite disagreement over the U.S.-led action in Iraq, the bilateral relationship between Washington and Moscow remains firm.”

Even if Russia were to change course and adopt a more hostile posture, it lacks the capacity to challenge the United States, either economically or militarily. In 2003, Russia’s gross domestic product (GDP) was a little more than one-tenth of U.S. GDP ($1.3 trillion versus $10.9 trillion). Although a larger share of Russia’s GDP was devoted to defense expenditures (4.9 percent versus 3.7 percent), in absolute terms the United States outspent Russia by more than 6 to 1. To equal the United States, Russia would have to devote more than 20 percent of its GDP to defense, which would exceed what the Soviet Union spent at the height of the Cold War during the 1980s.

Certainly, Chinese military developments bear watching. Although many see China as the next great threat, even if China modernizes and expands its strategic nuclear force (as many military experts predict it will), the United States will retain a credible nuclear deterrent with an overwhelming advantage in warheads, launchers, and variety of delivery vehicles. According to a Council on Foreign Relations task force chaired by former Secretary of Defense Harold Brown, “The People’s Republic of China is pursuing a deliberate and focused course of military modernization but . . . it is at least two decades behind the United States in terms of military technology and capability. Moreover, if the United States continues to dedicate significant resources to improving its military forces, as expected, the balance between the United States and China, both globally and in Asia, is likely to remain decisively in America’s favor beyond the next twenty years.”

Like Russia, China may not have the wherewithal to compete with and challenge the United States. In 2003, U.S. GDP was almost eight times China’s ($10.9 trillion versus $1.4 trillion). China spent a fractionally larger share of its GDP on defense than the United States did (3.9 percent versus 3.7 percent), but in absolute terms U.S. defense expenditures were more than seven times China’s ($404.9 billion versus $55.9 billion). To equal the United States, China would have to devote more than one-quarter of its GDP to defense.
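These shares follow from simple division. The short sketch below (illustrative only, using the 2003 figures already cited) reproduces the arithmetic.

#include <stdio.h>

int main(void)
{
    const double us_defense = 404.9;   /* total U.S. defense expenditures, $ billions */
    const double russia_gdp = 1300.0;  /* Russia's GDP: $1.3 trillion */
    const double china_gdp  = 1400.0;  /* China's GDP:  $1.4 trillion */

    /* Share of GDP each country would need to match total U.S. spending. */
    printf("Russia: %.0f%% of GDP (versus the 4.9%% it actually spent)\n",
           100.0 * us_defense / russia_gdp);   /* roughly 31 percent */
    printf("China:  %.0f%% of GDP (versus the 3.9%% it actually spent)\n",
           100.0 * us_defense / china_gdp);    /* roughly 29 percent */
    return 0;
}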

If the Russian and Chinese militaries are not serious threats to the United States, so-called rogue states, such as North Korea, Iran, Syria, and Cuba, are even less of a threat. Although these countries are unfriendly to the United States, none have any real military capability to threaten or challenge vital U.S. security interests. The GDP of these four countries was $590.3 billion in 2003, or less than 5.5 percent of U.S. GDP. Military spending is even more lopsided: $11.3 billion compared to $404.9 billion, or less than 3 percent of U.S. defense spending. Even if North Korea and Iran eventually acquire a long-range nuclear capability that could reach the United States, the U.S. strategic nuclear arsenal would continue to act as a powerful deterrent.

Downsizing the U.S. military

According to DOD, before Operation Iraqi Freedom the total number of U.S. active-duty military personnel was more than 1.4 million troops, of which 237,473 were deployed in foreign countries. With an all-volunteer force, maintaining those deployments requires at least twice as many additional troops to be deployed in the United States so that the overseas force can be rotated at specified intervals. Thus, one way to measure the cost to the United States of maintaining a global military presence is the expense of supporting more than 700,000 active-duty troops along with their associated force structure. But without a great-power enemy that might justify an extended forward deployment of military forces, the United States could dramatically reduce its overseas commitments, and U.S. security against traditional nation-state military threats could be achieved at significantly lower costs.

If the United States adopted a balancer-of-last-resort strategy, most overseas military commitments could be eliminated and the defense budget substantially reduced.

Instead of a Cold War-era extended defense perimeter and forward-deployed forces (intended to keep in check an expansionist Soviet Union), the very different 21st-century threat environment affords the United States the opportunity to adopt a balancer-of-last-resort strategy. Such a strategy would place greater emphasis on allowing countries to take responsibility for their own security and, if necessary, to build regional security arrangements, even in important areas such as Europe and East Asia. Instead of being a first responder to every crisis and conflict, the U.S. military would intervene only when truly vital security interests were at stake. This strategy would allow the United States to eliminate many permanent foreign bases and substantially reduce the large number of troops deployed at those bases.

Although it is counterintuitive, forward deployment does not significantly enhance the U.S. military’s ability to fight wars. The comparative advantage that the U.S. military possesses is airpower, which can be dispatched relatively quickly and at very long ranges. Indeed, during Operation Enduring Freedom in Afghanistan, the Air Force was able to fly missions from Whiteman Air Force Base in Missouri to Afghanistan and back. The ability to rapidly project power, if necessary, can also be made easier by pre-positioning supplies and equipment at strategic locations. Troops can be deployed faster if their equipment does not have to be deployed simultaneously.

If U.S. ground forces are needed to fight a major war, they could be deployed as necessary. It is worth noting that both Operation Enduring Freedom and Operation Iraqi Freedom were conducted without significant forces already deployed in either theater of operation. In the case of Operation Enduring Freedom, the military had neither troops nor bases adjacent to Afghanistan. Yet military operations commenced less than a month after the September 11 attacks. In the case of Operation Iraqi Freedom, even though the military had more than 6,000 troops (mostly Air Force) deployed in Saudi Arabia, the Saudi government denied the use of its bases to conduct military operations. Instead, the United States used Kuwait as the headquarters and the jumping-off point for military operations. Similarly, the Turkish government prevented the U.S. Army’s 4th Infantry Division from using bases in that country for military operations in northern Iraq, forcing some 30,000 troops to be transported via ship through the Suez Canal and Red Sea to the Persian Gulf, where they arrived too late to be part of the initial attack against Iraq. Despite these handicaps, U.S. forces swept away the Iraqi military in less than four weeks.

Consider also that in the case of South Korea, the 31,000 U.S. troops deployed there are insufficient to fight a war. Operation Iraqi Freedom, against a smaller and weaker military foe, required more than 100,000 ground troops to take Baghdad and topple Saddam Hussein (and more to occupy the country afterward). If the United States decided to engage in an offensive military operation against North Korea, the troops stationed in South Korea would have to be reinforced. This would take almost as much time as deploying the entire force from scratch. If North Korea (with a nearly one-million-man army) decided to invade South Korea, the defense of South Korea would rest primarily with that country’s 700,000-man military, not 31,000 U.S. troops. Nor does the U.S. military presence in South Korea alter the fact that North Korea is believed to have tens of thousands of artillery weapons that can hold the capital city of Seoul hostage. At best, U.S. forces are a tripwire for defending South Korea.

Not only does the post-Cold War threat environment give the United States the luxury of allowing countries to take responsibility for security in their own neighborhoods, but the economic strength of Europe and East Asia means that friendly countries in those regions can afford to pay for their own defense rather than relying on the United States to underwrite their security. In 2003, U.S. GDP was $10.9 trillion and total defense expenditures were 3.7 percent of that. In contrast, the combined GDP of the 15 European Union countries in 2003 was $10.5 trillion, but defense spending was less than 2 percent of GDP. Without a Soviet threat to Europe, the United States does not need to subsidize European defense spending. The European countries have the economic wherewithal to increase their military spending, if necessary.

Likewise, U.S. allies in East Asia are capable of defending themselves. North Korea, one of the world’s last bastions of central planning, is an economic basket case. North Korea’s GDP in 2003 was $22 billion, compared to $605 billion for South Korea. South Korea also outspends North Korea on defense by nearly 3 to 1: $14.6 billion versus $5.5 billion. Japan’s GDP was $4.34 trillion (more than 195 times larger than North Korea’s) and its defense spending was $42.8 billion (almost eight times that of North Korea). South Korea and Japan certainly have the economic resources to defend themselves adequately against North Korea. They even have the capacity to act as military balancers to China (if China is perceived as a threat). In 2003, China had a GDP of $1.43 trillion and an officially reported defense budget of $22.4 billion.

Consider, in very rough terms, what it would mean to adopt a balancer-of-last-resort strategy. Virtually all U.S. foreign military deployments could be eliminated; the exceptions include Marine Corps personnel assigned to embassies. If the country also eliminated the troops maintained at home to rotate into those deployments (roughly twice the number deployed abroad), the total active-duty force would be reduced by about half, to 699,000, which would break down as follows:

Army: 189,000, a 61 percent reduction, which would result in a force strength of four active-duty divisions or their equivalent in a brigade force structure.

Navy: 266,600, a 31 percent reduction, which would result in an eight-carrier battle group force.

Marine Corps: 77,000, a 56 percent reduction, which would result in one active Marine Expeditionary Force and one Marine Expeditionary Brigade.

Air Force: 168,000, a 54 percent reduction, which would result in 11 active-duty tactical fighter wings and 93 heavy bombers.

Admittedly, this is a macro approach that assumes that the current active-duty force mix is appropriate. Interestingly enough, this top-down approach yields a force structure not markedly different from what the most recent DOD bottom-up review determined would be needed to fight a single major regional war: four to five Army divisions, four to five aircraft carriers, four to five Marine expeditionary brigades, and 10 Air Force tactical fighter wings and 100 heavy bombers. Therefore, it is a reasonable method for assessing how U.S. forces and force structure could be reduced by adopting a balancer-of-last-resort strategy. And as shown in Figure 2, the size of the defense budget correlates rather strongly with the number of U.S. troops deployed overseas.

According to DOD, the FY 2005 personnel budget for active-duty forces is $88.3 billion out of a total of $104.8 billion for military personnel. A 50 percent reduction in active-duty forces would translate into a FY 2005 active-duty military personnel budget of $44.1 billion and a total military personnel spending budget of $60.6 billion.

If U.S. active-duty forces were substantially reduced, it follows that the associated force structure could be similarly reduced, resulting in reduced operations and maintenance (O&M) costs. Using the same percentage reductions applied to active-duty forces, the O&M budget for the active Army force could be reduced from $26.1 billion to $12.8 billion, the active Navy force could be reduced from $29.8 billion to $20.6 billion, the active Marine Corps force could be reduced from $3.6 billion to $1.6 billion, and the active Air Force could be reduced from $28.5 billion to $13.1 billion. The total savings would be $39.9 billion, and the total spent on O&M would fall from $140.6 billion to $100.7 billion.

The combined savings in military personnel and O&M costs would total $84 billion, or about 21 percent of the total defense budget. Because military personnel and O&M are the two largest portions of the defense budget—26 percent and 35 percent, respectively—significant reductions in defense spending can be achieved only if these costs are reduced. And the only way to reduce these costs is to downsize active-duty military forces.
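For readers who want to trace the arithmetic, the sketch below (illustrative only; all dollar figures, in billions, are those quoted above) reproduces the personnel and O&M calculations.

#include <stdio.h>

int main(void)
{
    /* Military personnel: halve the active-duty portion of the account. */
    double active_personnel  = 88.3;   /* active-duty personnel budget, FY 2005 */
    double total_personnel   = 104.8;  /* total military personnel budget */
    double personnel_savings = 0.50 * active_personnel;   /* about 44.1 */

    /* O&M: the service-by-service reductions described above. */
    double om_savings = (26.1 - 12.8)    /* Army         */
                      + (29.8 - 20.6)    /* Navy         */
                      + (3.6  - 1.6)     /* Marine Corps */
                      + (28.5 - 13.1);   /* Air Force    */
    double total_om = 140.6;

    printf("personnel: %.1f -> %.1f ($%.1fB saved)\n",
           total_personnel, total_personnel - personnel_savings, personnel_savings);
    printf("O&M:       %.1f -> %.1f ($%.1fB saved)\n",
           total_om, total_om - om_savings, om_savings);
    printf("combined savings: about $%.0f billion\n",
           personnel_savings + om_savings);   /* roughly $84 billion */
    return 0;
}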

Unneeded weapon systems

Further savings could be realized by eliminating unneeded weapon systems, which would reduce the procurement budget ($74.9 billion) and the research, development, test, and evaluation (RDT&E) budget ($68.9 billion). The Pentagon has already canceled two major weapon systems: the Army’s Crusader artillery piece and Comanche attack helicopter, with program savings of $9 billion and more than $30 billion, respectively. But this is simply a good start. Other weapon systems that could be canceled include the F-22 Raptor, F/A-18 E/F Super Hornet, V-22 Osprey, and Virginia-class attack submarine.

The Air Force’s F/A-22 Raptor was originally designed for air superiority against Soviet tactical fighters that were never built. It is intended to replace the best fighter in the world today, the F-15 Eagle, which no current or prospective adversary can seriously challenge for air superiority. The Navy’s F/A-18E/F Super Hornet is another unneeded tactical fighter. The Marine Corps V-22 Osprey’s tilt-rotor technology is still unproven and is inherently more dangerous than helicopters that can perform the same missions at a fraction of the cost. The Navy’s Virginia-class submarine was designed to counter a Soviet nuclear submarine threat that no longer exists.

Canceling the F-22, F/A-18E/F, V-22, and Virginia-class attack submarines would save a total of $12.2 billion in procurement and RDT&E costs in the FY 2005 budget and a total of $170 billion in future program costs. Combined with the military personnel and O&M savings previously discussed, a revised FY 2005 defense budget would be $305.8 billion, a reduction of roughly 24 percent (Figure 3). Of course, it is not realistic to say that the defense budget could be reduced immediately. But the budget could be reduced in increments to this proposed level over a five-year period.

Tools against terror

The defense budget can be reduced because the nation-state threat environment is markedly different than it was during the Cold War, and because a larger military is not necessary to combat the terrorist threat. It is important to remember that a large military with a forward-deployed global presence was not an effective defense against 19 hijackers. In addition, the shorthand phrase “war on terrorism” is misleading. First, as the National Commission on Terrorist Attacks upon the United States (also known as the 9/11 Commission) points out: “The enemy is not just ‘terrorism,’ some generic evil. This vagueness blurs the strategy. The catastrophic threat at this moment in history is more specific. It is the threat posed by Islamist terrorism—especially the al Qaeda network, its affiliates, and its ideology.” Second, the term “war” implies the use of military force as the primary policy instrument for waging the terrorism fight. But traditional military operations should be the exception in the conflict with al Qaeda, which is not an army with uniforms that operates in a specific geographic region but a loosely connected and decentralized network with cells and operatives in 60 countries. President Bush is right: “We’ll have to hunt them down one at a time.”

Although the president is also correct to be skeptical about treating terrorism “as a crime, a problem to be solved mainly with law enforcement and indictments,” the reality is that dismantling and degrading the network will largely be accomplished through unprecedented international intelligence and law enforcement cooperation. The military aspects of the war on terrorism will largely be the work of special operations forces in discrete operations against specific targets rather than large-scale military operations. Instead of spending hundreds of billions of dollars to maintain the current size of the armed forces and buy unneeded weapons, the United States should invest in better intelligence gathering, unmanned aerial vehicles (UAVs), special operations forces, and language skills.

Intelligence gathering. Better intelligence gathering about the threat is critical to fighting the war on terrorism. Although the budgets for the 15 agencies with intelligence gathering and analysis responsibilities are veiled in secrecy, the best estimate is that the total spent on intelligence is about $40 billion. As with the defense budget, it is not necessarily a question of needing to spend more money on intelligence gathering and analysis, but how to best allocate spending and resources. About 85 percent of the estimated $40 billion spent on intelligence activities goes to DOD and only about 10 percent to the Central Intelligence Agency, with the remainder spread among the other intelligence agencies. If the war on terrorism is not primarily a military war, perhaps the intelligence budget could be reallocated between DOD and other intelligence agencies, with less emphasis on nation-state military threats and more on terrorist threats.

Questions about the right amount of intelligence spending and its allocation aside, the war on terrorism requires:

Less emphasis on spy satellites as a primary means of intelligence gathering. That does not mean abandoning the use of satellite imagery. Rather, it means recognizing that spy satellite images might have been an excellent way to monitor stationary targets such as missile silos or easily recognizable military equipment, such as tanks and aircraft, but might not be as capable in locating and tracking individual terrorists.

Recognizing the problems involved with electronic eavesdropping. According to Loren Thompson of the Lexington Institute, “The enemy has learned how to hide a lot of its transmissions from electronic eavesdropping satellites.” The difficulty of finding and monitoring the right conversations is compounded by the sheer volume of terrorist chatter that must be sifted to determine which bits of information are useful.

Greater emphasis on human intelligence gathering. Spies on the ground are needed to supplement and sometimes confirm or refute what satellite images, electronic eavesdropping, interrogations of captured al Qaeda operatives, hard drives on confiscated computers, and other sources are indicating about the terrorist threat. Analysis and interpretation need to be backed up with as much inside information as possible. This is perhaps the most critical missing piece in the intelligence puzzle in terms of anticipating future terrorist attacks. Ideally, the United States needs “moles” inside al Qaeda, but it will be a difficult task (and likely take many years) to place someone inside al Qaeda who is a believable radical Islamic extremist and will be trusted with the kind of information U.S. intelligence needs. The task is made even more difficult because of the distributed and cellular structure of al Qaeda and the fact that the radical Islamic ideology that fuels the terrorist threat to the United States has expanded beyond the al Qaeda structure into the larger Muslim world.

Language skills. Directly related to intelligence gathering is having a cadre of experts to teach and analyze the relevant languages of the Muslim world: Arabic, Uzbek, Pashtu, Urdu, Farsi (Persian), Dari (the Afghan dialect of Farsi), and Malay, to name a few. But according to a Government Accountability Office (GAO) report, in FY 2001 only half of the Army’s 84 positions for Arabic translators and interpreters were filled, and there were 27 unfilled positions (out of a total of 40) for Farsi. Undersecretary of Defense for Personnel and Readiness David Chu admits that DOD is having a “very difficult time . . . training and keeping on active duty sufficient numbers of linguists.” As of March 2004, FBI Director Robert Mueller reported that the bureau had only 24 Arabic-speaking agents (out of more than 12,000 special agents). According to a congressionally mandated report, the State Department had only five linguists fluent enough to speak on Arab television (out of 9,000 Foreign Service and 6,500 civil service employees).

According to the GAO, the Pentagon estimates that it currently spends up to $250 million per year to meet its foreign language needs. The GAO did not indicate whether the $250 million (about six-hundredths of 1 percent of the FY 2005 defense budget) was adequate. Whether or not DOD and other government agencies are spending enough, this much is certain: Language skills for the war on terrorism are in short supply. According to a 9/11 Commission staff report, "FBI shortages of linguists have resulted in thousands of hours of audiotapes and pages of written material not being reviewed or translated in a timely manner." Increasing that supply will not be easy or quick, especially since two of the four most difficult languages for Americans to learn are Arabic and Farsi.
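A quick back-of-the-envelope check puts the "six-hundredths of 1 percent" figure in perspective. The sketch below is illustrative only; the roughly $420 billion FY 2005 defense budget is an assumed round number, not a figure drawn from the GAO report cited above.

```python
# Rough check: language spending as a share of the FY 2005 defense budget.
# The ~$420 billion budget figure is an assumption for illustration.
language_spending = 250e6   # dollars per year on foreign language needs (from the text)
defense_budget = 420e9      # assumed FY 2005 defense budget, dollars

share = language_spending / defense_budget
print(f"Language spending share of defense budget: {share:.2%}")
# -> roughly 0.06%, i.e., about six-hundredths of 1 percent
```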

UAVs. The potential utility of UAVs for the war on terrorism has been demonstrated in Afghanistan and Yemen. In February 2002, a Predator UAV (armed with Hellfire missiles and operated by the CIA) in the Tora Bora region of eastern Afghanistan attacked a convoy and killed several people, including a suspected al Qaeda leader. In November 2002, a Predator UAV in Yemen destroyed a car containing six al Qaeda suspects, including Abu Ali al-Harithi, one of the suspected planners of the attack on the USS Cole in October 2000.

If parts of the war on terrorism are to be fought in places such as Yemen, Sudan, Somalia, and Pakistan, and especially if it is not possible for U.S. ground troops to operate in those countries, UAVs could be key assets for finding and targeting al Qaeda operatives because of their ability to cover large swaths of land for extended periods of time in search of targets. A Predator UAV has a combat radius of 400 nautical miles and can carry a maximum payload of 450 pounds for more than 24 hours. The Congressional Budget Office states that UAVs can provide their users with “sustained, nearly instantaneous video and radar images of an area without putting human lives at risk.”

Armed UAVs offer a cost-effective alternative to ground troops or piloted aircraft for missions against identified terrorist targets. One can only wonder what might have happened if the spy Predator that, in the fall of 2000, took pictures of a tall man in white robes surrounded by a group of people believed by many intelligence analysts to be Osama bin Laden had instead been an armed Predator capable of immediately striking the target.

In addition to their utility, UAVs are particularly attractive because of their relatively low cost, especially when compared to that of piloted aircraft. Developmental costs for UAVs are about the same as those for a comparable piloted aircraft, but procurement costs are substantially lower. In FY 2003, the government purchased 25 Predator UAVs for $139.2 million ($5.6 million each), and in FY 2004, it purchased 16 Predators for $210.1 million ($13.1 million each). The FY 2005 budget includes the purchase of nine Predators for $146.6 million ($16.3 million each). (The increases in per-unit cost reflect the arming of more Predators with Hellfire missiles.) Although UAVs are unlikely to completely replace piloted aircraft, a Predator UAV costs a fraction of a tactical fighter aircraft such as an F-15 or F-22, which have unit costs of $55 million and $257 million, respectively. Operation and maintenance costs for UAVs are also expected to be lower than those for piloted aircraft.
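The per-unit figures above follow directly from the procurement totals. A minimal sketch of the arithmetic, using only numbers cited in the text:

```python
# Per-unit Predator costs implied by the procurement totals cited above,
# plus a rough comparison with the fighter unit costs mentioned in the text.
purchases = {          # fiscal year: (quantity, total cost in millions of dollars)
    2003: (25, 139.2),
    2004: (16, 210.1),
    2005: (9, 146.6),
}

for year, (quantity, total) in purchases.items():
    print(f"FY {year}: ${total / quantity:.1f} million per Predator")
# -> about $5.6M, $13.1M, and $16.3M, matching the figures in the text

f15_cost, f22_cost = 55.0, 257.0          # unit costs in millions, from the text
predator_fy2005 = 146.6 / 9
print(f"A Predator costs roughly 1/{f15_cost / predator_fy2005:.0f} of an F-15 "
      f"and 1/{f22_cost / predator_fy2005:.0f} of an F-22")
```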

Thus, the $10 billion in planned spending on UAVs in the next decade (compared to $3 billion in the 1990s) is a smart investment in the war on terrorism. Even doubling the budget to $2 billion a year on average over the next 10 years would make sense and would represent less than 1 percent of an annual defense budget based on a balancer-of-last-resort strategy. The bottom line is that UAVs are a very low-cost weapon that could yield an extremely high payoff in the war on terrorism.

Special operations forces. The al Qaeda terrorist network is a diffuse target with individuals and cells operating in 60 or more countries. U.S. special operations forces are ideally suited for counteracting this threat. Indeed, according to the Special Operations Forces posture statement, counterterrorism is the number one mission of these forces, and they are “specifically organized, trained, and equipped to conduct covert, clandestine, or discreet CT [counterterrorism] missions in hostile, denied, or politically sensitive environments,” including “intelligence operations, attacks against terrorist networks and infrastructures, hostage rescue, [and] recovery of sensitive material from terrorist organizations.”

Secretary of Defense Donald Rumsfeld has been a strong advocate of using special operations forces against terrorist targets. In August 2002, he issued a classified memo to the U.S. Special Operations Command (SOCOM) to capture or kill Osama bin Laden and other al Qaeda leadership. Rumsfeld has also proposed sending these forces into Somalia and Lebanon’s Bekaa Valley, because these lawless areas are thought to be places where terrorists can hide and be safe from U.S. intervention.

As with UAVs, special operations forces are relatively inexpensive. The FY 2005 budget request for SOCOM is $6.5 billion, only about 1.6 percent of the total defense budget. Thus, the budget for special operations forces could also be significantly increased without adversely affecting the overall defense budget. Given the importance and unique capabilities of these forces relative to those of the regular military in the war on terrorism, it would make sense to increase funding for special forces, perhaps by doubling SOCOM's budget.

Ever-increasing defense spending is being justified as necessary to fight the war on terrorism. But the war on terrorism is not primarily a conventional military war to be fought with tanks, planes, and ships, and the military threat posed by nation states to the United States does not warrant maintaining a large, forward-deployed military presence around the world. Indeed, there is a relationship between U.S. troops deployed abroad and acts of terrorism against the United States, as the Bush administration has acknowledged. According to former Deputy Defense Secretary Paul Wolfowitz, U.S. forces stationed in Saudi Arabia after the 1991 Gulf War were “part of the containment policy [of Iraq] that has been Osama bin Laden’s principal recruiting device, even more than the other grievances he cites.”

Therefore, a better approach to national security policy would be for the United States to adopt a less interventionist policy abroad and pull back from the Cold War-era extended security perimeter (with its attendant military commitments overseas). Rather than being the balancer of power in disparate regions around the world, the United States should allow countries in those regions to establish their own balance-of-power arrangements. A balancer-of-last-resort strategy would help the United States distinguish between crises and conflicts vital to its interests and those that do not threaten U.S. national security.

The Global Water Crisis

People living in the United States or any industrialized nation take safe drinking water for granted. But in much of the developing world, access to clean water is not guaranteed. According to the World Health Organization, more than 1.2 billion people lack access to clean water, and more than 5 million people die every year from contaminated water or water-related diseases.

The world’s nations, through the United Nations (UN), have recognized the critical importance of improving access to clean water and ensuring adequate sanitation and have pledged to cut the proportion of people without such access by half by 2015 as part of the UN Millennium Development Goals. However, even if these goals are reached, tens of millions of people will probably perish from tainted water and water-borne diseases by 2020.

Although ensuring clean water for all is a daunting task, the good news is that the technological know-how exists to treat and clean water and convey it safely. The international aid community and many at-risk nations are already working on a range of efforts to improve access to water and sanitation.

It is clear, however, that more aid will be needed, although the estimates of how much vary widely. There is also considerable debate about the proper mix of larger, more costly projects and smaller, more community-scale projects. Still, it seems that bringing basic water services to the world’s poorest people could be done at a reasonable price—probably far less than consumers in developed countries now spend on bottled water.

The global water crisis is a serious threat, and not only to those who suffer, get sick, and die from tainted water or water-borne disease. There is also a growing realization that the water crisis undercuts economic growth in developing nations, can worsen conflicts over resources, and can even affect global security by worsening conditions in states that are close to failure.

Mounting death toll

According to a Pacific Institute analysis, between 34 and 76 million people could perish because of contaminated water or water-related diseases by 2020, even if the UN Millennium Development Goals are met.
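One rough way to read this projection is as an annual death toll accumulated over the roughly 15 years to 2020. The sketch below is illustrative only; the 2-to-5-million-per-year range is an assumption chosen to bracket figures cited elsewhere in this article, not a number taken from the Pacific Institute analysis.

```python
# Rough consistency check on the cumulative projection cited above.
years_to_2020 = 15                    # assumed horizon, roughly 2005 through 2020
low_annual, high_annual = 2e6, 5e6    # assumed water-related deaths per year

low_total = low_annual * years_to_2020
high_total = high_annual * years_to_2020
print(f"Projected cumulative deaths: {low_total / 1e6:.0f} to {high_total / 1e6:.0f} million")
# -> roughly 30 to 75 million, in line with the 34-76 million estimate
```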

The spending gap

Despite the toll of the global water crisis, industrialized nations spend little on overseas development efforts such as water and sanitation projects. Only 5 of 22 nations have met the modest UN goal of spending 0.7 percent of gross national income on overseas development assistance. And only a fraction of all international assistance is spent on water and sanitation: From 1999 to 2001, an average of only $3 billion annually was provided for water supply and sanitation projects.

Water from a bottle

Although tap water in most of the developed world is clean and safe, millions of consumers drink bottled water for taste, convenience, or because of worries about water quality. Comprehensive data on bottled water consumption in the developing world are scarce. However, some water experts are worried that increased sales of bottled water to the developing world will reduce pressure on governments to provide basic access to non-bottled water. Others are concerned that the world’s poorest people will have to spend a significant amount of their already low incomes to purchase water.

Too dear a price?

Consumers spend nearly $100 billion annually on bottled water, according to Pacific Institute estimates. Indeed, consumers often pay several hundred to a thousand times as much for bottled water as they do for reliable, high-quality tap water, which costs about $0.50 per cubic meter in California. The disparity is often worse in developing nations, where clean water is far out of reach for the poorest people.
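The size of that markup is easy to see with a simple unit conversion. The sketch below assumes a retail bottled-water price of about $0.50 per liter; that price is an assumption for illustration, not a figure from the Pacific Institute estimate.

```python
# Illustrative markup: bottled water versus California tap water.
tap_cost_per_m3 = 0.50          # dollars per cubic meter of tap water (from the text)
bottled_cost_per_liter = 0.50   # assumed retail price, dollars per liter
bottled_cost_per_m3 = bottled_cost_per_liter * 1000   # 1,000 liters per cubic meter

markup = bottled_cost_per_m3 / tap_cost_per_m3
print(f"Bottled water: ${bottled_cost_per_m3:.0f} per cubic meter, "
      f"about {markup:.0f} times the cost of tap water")
# -> $500 per cubic meter, roughly 1,000 times the cost of tap water
```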

Harnessing Nanotechnology to Improve Global Equity

Developing countries usually find themselves on the sidelines watching the excitement of technological innovation. The wealthy industrialized nations typically dominate the development, production, and use of new technologies. But many developing countries are poised to rewrite the script in nanotechnology. They see the potential for nanotechnology to meet several needs of particular value to the developing world and seek a leading role for themselves in the development, use, and marketing of these technologies. As the next major technology wave, nanotechnology will be revolutionary in a social and economic as well as a scientific and technological sense.

Developing countries are already aware that nanotechnology can be applied to many of their pressing problems, and they realize that the industrialized countries will not place these applications at the top of their to-do list. The only way to be certain that their needs are addressed is for less industrialized nations themselves to take the lead in developing those applications. In fact, many of these countries have already begun to do so. The wealthy nations should see this activity as a potential catalyst for the type of innovative research and economic development sorely needed in these countries. Strategic help from the developed world could have a powerful impact on the success of this effort. Planning this assistance should begin with an understanding of developing-country technology needs and knowledge of the impressive R&D efforts that are already under way.

To provide strategic focus to nanotechnology efforts, we recently carried out a study using a modified version of the Delphi method, working with a panel of 63 international experts, 60 percent of whom were from developing countries, to identify and rank the 10 applications of nanotechnology most likely to benefit the less industrialized nations in the next 10 years. The panelists were asked to consider the impact, burden, appropriateness, feasibility, knowledge gaps, and indirect benefits of each proposed application. Our results, shown in Table 1, reveal a high degree of consensus on the top four applications: Every panelist cited at least one of the top four applications in his or her personal top-four ranking, and a majority cited at least three.

Table 1.
Top 10 Applications of Nanotechnology for Developing Countries

1. Energy storage, production, and conversion
2. Agricultural productivity enhancement
3. Water treatment and remediation
4. Disease diagnosis and screening
5. Drug delivery systems
6. Food processing and storage
7. Air pollution and remediation
8. Construction
9. Health monitoring
10. Vector and pest detection and control

Source: F. Salamanca-Buentello et al., “Nanotechnology and the Developing World,” PLoS Medicine 2 (2005): e97.
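To make the consensus measure behind Table 1 concrete, the sketch below tallies how many panelists cite at least one, or at least three, of the overall top four applications in their personal top-four lists. The panelist data are invented for illustration; the actual study used a modified Delphi process with the multiple criteria described above, not this simple tally.

```python
# Hypothetical illustration of the consensus tally described in the text.
from typing import List

overall_top_four = {
    "energy", "agriculture", "water treatment", "disease diagnosis",
}

# Each inner list is one hypothetical panelist's personal top-four ranking.
panelist_rankings: List[List[str]] = [
    ["energy", "water treatment", "drug delivery", "agriculture"],
    ["disease diagnosis", "energy", "construction", "water treatment"],
    ["agriculture", "food processing", "energy", "health monitoring"],
]

cited_any = sum(1 for r in panelist_rankings if overall_top_four & set(r))
cited_three_plus = sum(1 for r in panelist_rankings
                       if len(overall_top_four & set(r)) >= 3)
print(f"{cited_any} of {len(panelist_rankings)} panelists cited at least one "
      f"top-four application; {cited_three_plus} cited at least three")
```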

To further assess the impact of nanotechnology on sustainable development, we asked ourselves how well these nanotechnology opportunities matched up with the eight United Nations (UN) Millennium Development Goals, which aim to promote human development and encourage social and economic sustainability. We found that nanotechnology can make a significant contribution to five of the eight goals: eradicating extreme poverty and hunger; ensuring environmental sustainability; reducing child mortality; improving maternal health; and combating AIDS, malaria, and other diseases. A detailed look at how nanotechnology could be beneficial in the three most commonly mentioned areas is illustrative.

Energy storage, production, and conversion. The growing world population needs cheap noncontaminating sources of energy. Nanotechnology has the potential to provide cleaner, more affordable, more efficient, and more reliable ways to harness renewable resources. The rational use of nanotechnology can help developing countries to move toward energy self-sufficiency, while simultaneously reducing dependence on nonrenewable, contaminating energy sources such as fossil fuels. Because there is plenty of sunlight in most developing countries, solar energy is an obvious source to consider. Solar cells convert light into electric energy, but current materials and technology for these cells are expensive and inefficient in making this conversion. Nanostructured materials such as quantum dots and carbon nanotubes are being used for a new generation of more efficient and inexpensive solar cells. Efficient solar-derived energy could be used to power the electrolysis of water to produce hydrogen, a potential source of clean energy. Nanomaterials also have the potential to increase by several orders of magnitude the efficiency of the electrolytic reactions.

One of the factors limiting the harnessing of hydrogen is the need for adequate storage and transportation systems. Because hydrogen is the smallest element, it can escape from tanks and pipes more easily than can conventional fuels. Very strong materials are needed to keep hydrogen at very low temperature and high pressure. Novel nanomaterials can do the job. Carbon nanotubes have the capacity to store hydrogen at up to 70 percent of their weight, an amount 20 times larger than that stored in currently used compounds. Additionally, carbon nanotubes are 100 times stronger than steel at one-sixth the weight, so theoretically, a 100-pound container made of nanotubes could store at least as much hydrogen as could a 600-pound steel container, and its walls would be 100 times as strong.
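The container comparison follows from simple proportional reasoning if the stated ratios are taken at face value. The sketch below is a back-of-the-envelope check of that scaling only, not an engineering analysis.

```python
# Back-of-the-envelope check of the container comparison above, taking the
# stated ratios at face value: nanotube material at one-sixth the weight of
# steel and 100 times the strength, for walls of the same dimensions.
steel_container_weight = 600   # pounds, from the text
weight_ratio = 1 / 6           # nanotube weight relative to steel, from the text
strength_ratio = 100           # nanotube strength relative to steel, from the text

nanotube_container_weight = steel_container_weight * weight_ratio
print(f"Equivalent nanotube container: {nanotube_container_weight:.0f} pounds, "
      f"with walls about {strength_ratio} times as strong as the steel ones")
# -> 100 pounds, consistent with the comparison in the text
```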

Agricultural productivity enhancement. Nanotechnology can help develop a range of inexpensive applications to increase soil fertility and crop production and thus to help eliminate malnutrition, a contributor to more than half the deaths of children under five in developing countries. We currently use natural and synthetic zeolites, which have a porous structure, in domestic and commercial water purification, softening, and other applications. Using nanotechnology, it is possible to design zeolite nanoparticles with pores of different sizes. These can be used for more efficient, slow, and thorough release of fertilizers; or they can be used for more efficient livestock feeding and delivery of drugs. Similarly, nanocapsules can release their contents, such as herbicides, slowly and in a controlled manner, increasing the efficacy of the substances delivered.

Water treatment and remediation. One-sixth of the world’s population lacks access to safe water supplies; one-third of the population of rural areas in Africa, Asia, and Latin America has no clean water; and 2 million children die each year from water-related diseases, such as cholera, typhoid, and schistosomiasis. Nanotechnology can provide inexpensive, portable, and easily cleaned systems that purify, detoxify, and desalinate water more efficiently than do conventional bacterial and viral filters. Nanofilter systems consist of “intelligent” membranes that can be designed to filter out bacteria, viruses, and the great majority of water contaminants. Nanoporous zeolites, attapulgite clays (which can bind large numbers of bacteria and toxins), and nanoporous polymers (which can bind 100,000 times more organic contaminants than can activated carbon) can all be used for water purification.

Nanomagnets, also known as “magnetic nanoparticles” and “magnetic nanospheres,” when coated with different compounds that have a selective affinity for diverse contaminating substances, can be used to remove pollutants from water. For example, nanomagnets coated with chitosan, a readily available substance derived from the exoskeleton of crabs and shrimps that is currently used in cosmetics and medications, can be used to remove oil and other organic pollutants from aqueous environments. Brazilian researchers have developed superparamagnetic nanoparticles that, coated with polymers, can be spread over a wide area in dustlike form; these modified nanomagnets would readily bind to the pollutant and could then be recovered with a magnetic pump. Because of the size of the nanoparticles and their high affinity for the contaminating agents, almost 100 percent of the pollutant would be removed. Finally, the magnetic nanoparticles and the polluting agents would be separated, allowing for the reuse of the magnetic nanoparticles and for the recycling of the pollutants. Also, magnetite nanoparticles combined with citric acid, which binds metallic ions with high affinity, can be used to remove heavy metals from soil and water.

Developing-country activities

Understanding how selected developing countries are harnessing nanotechnology can provide lessons both for other countries and for the countries studied themselves. These lessons can give heads of state and science and technology ministers in less industrialized countries specific guidance and good practices for implementing innovation policies that direct the strengths of the public and private sectors toward the development and use of nanotechnology to address local sustainable development needs. The actions of developing countries themselves will ultimately determine whether nanotechnology is successfully harnessed in the developing world.

DEVELOPED-COUNTRY GOVERNMENTS SHOULD PROVIDE INCENTIVES FOR THEIR COMPANIES TO DIRECT A PORTION OF THEIR R&D TOWARD THE DEVELOPMENT OF NANOTECHNOLOGY IN LESS INDUSTRIALIZED NATIONS.

We found little useful existing information on nanotechnology research in developing countries, so we conducted our own survey. This preliminary study drew on information we could collect from the Internet, from e-mail exchanges with experts, and from other publicly available documents. We categorized countries according to the degree of government support for nanotechnology, the presence or absence of a formal government funding program, the level of industry involvement, and the amount of research being done in academic institutions and research groups. Our results revealed a surprising amount of nanotechnology R&D activity (Table 2). Our plan now is to conduct individual case studies of developing countries to obtain a greater depth of understanding. Below is some of the detailed information we acquired in the preliminary study.

China. China has a strong and well-developed Nanoscience and Nanotechnology National Plan, a National Steering Committee for Nanoscience and Nanotechnology, and a National Nanoscience Coordination Committee. Eleven institutes of the Chinese Academy of Sciences are involved in major nanotechnology research projects funded partly by the Knowledge Innovation Program. The Chinese Ministry of Science and Technology actively supports several nanoscience and nanotechnology initiatives. The Nanometer Technology Center in Beijing is part of China’s plan to establish a national nanotechnology infrastructure and research center; it involves recruiting scientists, protecting intellectual property rights, and building international cooperation in nanotechnology. China’s first nanometer technology industrial base is located in the Tianjin economic and development area. Haier, one of the country’s largest home appliance producers, has incorporated a series of nanotechnology-derived materials and features into refrigerators, televisions, and computers. Industry and academic researchers have worked together to produce nanocoatings for textiles that render silk, wool, and cotton clothing water- and oil-proof, prevent clothing from shrinking, and protect silk from discoloration. Nanotech Port of Shenzhen is the largest manufacturer of single-walled and multi-walled carbon nanotubes in Asia. Shenzhen Chengying High-Tech produces nanostructured composite anti-ultraviolet powder, nanostructured composite photocatalyst powder, and high-purity nanostructured titanium dioxide; the last two nanomaterials are being used to catalyze the destruction of contaminants using sunlight.

Table 2.
Selected Developing Countries and Their Nanotechnology Activity

Front Runner (China, South Korea, India): national government funding program; nanotechnology patents; commercial products on the market or in development.

Middle Ground (Thailand, Philippines, South Africa, Brazil, Chile): development of a national government funding program; some form of existing government support (e.g., research grants); limited industry involvement; numerous research institutions.

Up and Comer (Argentina, Mexico): organized government funding not yet established; industry not yet involved; research groups funded through various science and technology institutions.
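The categorization scheme behind Table 2 can also be written as a simple mapping from tier to countries and defining characteristics. The sketch below merely restates the table; it is not part of the survey methodology.

```python
# Table 2 restated as a data structure: each tier maps to the countries
# placed in it and the characteristics that define the tier.
nanotech_tiers = {
    "Front Runner": {
        "countries": ["China", "South Korea", "India"],
        "characteristics": [
            "national government funding program",
            "nanotechnology patents",
            "commercial products on the market or in development",
        ],
    },
    "Middle Ground": {
        "countries": ["Thailand", "Philippines", "South Africa", "Brazil", "Chile"],
        "characteristics": [
            "development of a national government funding program",
            "some form of existing government support (e.g., research grants)",
            "limited industry involvement",
            "numerous research institutions",
        ],
    },
    "Up and Comer": {
        "countries": ["Argentina", "Mexico"],
        "characteristics": [
            "organized government funding not yet established",
            "industry not yet involved",
            "research groups funded through various science and technology institutions",
        ],
    },
}

for tier, info in nanotech_tiers.items():
    print(f"{tier}: {', '.join(info['countries'])}")
```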

India. Indian nanotechnology efforts cover a wide spectrum of areas, including microelectromechanical systems (MEMS), nanostructure synthesis and characterization, DNA chips, quantum computing electronics, carbon nanotubes, nanoparticles, nanocomposites, and biomedical applications of nanotechnology. The Indian government catalyzed, through the Department of Science and Technology, the National Nanotechnology Program, which is funded with $10 million over 3 years. India has also created a Nanomaterials Science and Technology Initiative and a National Program on Smart Materials; the latter will receive $15 million over 5 years. This program, which is focused on materials that respond quickly to environmental stimuli, is jointly sponsored by five government agencies and involves 10 research centers. The Ministry of Defence is developing projects on nanostructured magnetic materials, thin films, magnetic sensors, nanomaterials, and semiconductor materials. India has also formed a joint nanotechnology initiative with the European Union (EU). Several academic institutions are pursuing nanotechnology R&D, among them the Institute of Smart Materials Structures and Systems of the Indian Institute of Science; the Indian Institute of Technology; the Shanmugha Arts, Science, Technology, and Research Academy; the Saha Institute of Nuclear Physics; and the Universities of Delhi, Pune, and Hyderabad. The Council for Scientific and Industrial Research, India’s premier R&D body, holds numerous nanotechnology-related patents, including novel drug delivery systems, production of nanosized chemicals, and high-temperature synthesis of nanosized titanium carbide. In the industrial sector, Nano Biotech Ltd. is doing research in nanotechnology for multiple diagnostic and therapeutic uses. Dabur Research Foundation is involved in developing nanoparticle delivery systems for anticancer drugs. Similarly, Panacea Biotec has made advances in novel controlled-release systems, including nanoparticle drug delivery for eye diseases, mucoadhesive nanoparticles, and transdermal drug delivery systems. CranesSci MEMS Lab, a privately funded research laboratory located at the Department of Mechanical Engineering of the Indian Institute of Science, is the first privately funded MEMS institution in India; it carries out product-driven research and creates intellectual property rights in MEMS and related fields with an emphasis on social obligations and education.

Brazil. The government of Brazil considers nanotechnology a strategic area. The Brazilian national nanotechnology initiative started in 2001, bringing together existing high-level nanotechnology research groups from a number of academic institutions and national research centers. Four research networks have been created with initial funds provided by the Ministry of Science and Technology through the National Council for Scientific and Technological Development. Two virtual institutes operating in the area of nanoscience and nanotechnology have also been created through the national program. The total budget for nanoscience and nanotechnology for 2004 was about $7 million; the predicted budget for 2004-2007 is around $25 million. About 400 scientists are working on nanotechnology in Brazil. Activities include a focus on nanobiotechnology, novel nanomaterials, nanotechnology for optoelectronics, biosensors, tissue bioengineering, biodegradable nanoparticles for drug delivery, and magnetic nanocrystals.

South Africa. South African research in nanotechnology currently focuses on applications for social development and industrial growth, including synthesis of nanoparticles, development of better and cheaper solar cells, highly active nanophase catalysts and electrocatalysts, nanomembrane technology for water purification and catalysis, fuel-cell development, synthesis of quantum dots, and nanocomposite development. The South African Nanotechnology Initiative (SANi), founded in 2003, aims to build a critical mass of universities, science councils, and industrial companies that will focus on those areas of nanotechnology in which South Africa can have an advantage. To this end, SANi has an initial budget of about $1.3 million; total spending on nanotechnology in South Africa is about $3 million. SANi is also interested in promoting public awareness of nanotechnology and assessing the impact of nanotechnology on the South African population. There are currently 11 universities, 5 research organizations (including the Water Research Commission), and 10 private companies actively participating in this initiative. The areas of interest of the private sector in South Africa appear to be chemicals and fuels, energy and telecommunications, water, mining, paints, and paper manufacturing.

ADDRESSING THE LEGITIMATE CONCERNS ASSOCIATED WITH NANOTECHNOLOGY CAN FOSTER PUBLIC SUPPORT AND ALLOW THE TECHNOLOGY PLATFORM TO PROGRESS IN A SOCIALLY RESPONSIBLE MANNER.

Mexico. Mexico has 13 centers and universities involved in nanotechnology research. In 2003, the National Council of Science and Technology spent $12.5 million on 62 projects in 19 institutions. There is strong interest in nanoparticle research for optics, microelectronics, catalysis, hard coatings, and medical electronics. Several groups have focused on fullerenes (in particular, carbon nanotubes), nanowires, molecular sieves for ultrahard coatings, catalysis, nanocomposites, and nanoelectronics. Novel polymer nanocomposites are being developed for high-performance materials, controlled drug release, nanoscaffolds for regenerative medicine applications, and novel dental materials. Last year, Mexican researchers, along with the Mexican federal government and private investors, unveiled a project for the creation of the $18 million National Laboratory of Nanotechnology, which will be under the aegis of the National Institute of Astrophysics, Optics, and Electronics. The initiative was funded by the National Council of Science and Technology, several state governments, and Motorola.

Biotechnology lessons

Lessons from successful health-biotechnology innovation in developing nations, by analogy, might also offer some guidance for nanotechnology innovation. We recently completed a 3-year empirical case study of the health-biotechnology innovation systems in seven developing countries: Cuba, Brazil, South Africa, Egypt, India, China, and South Korea.

The study identified many key factors involved in each of the success stories, such as the focus on the use of biotechnology to meet local health needs. For instance, South Africa has prioritized research on HIV/AIDS, its largest health problem, and developments are under way for a vaccine against the strain most prevalent in the country; Egypt is responding to its insulin shortage by focusing its R&D efforts on the drug; Cuba developed the world’s only meningitis B vaccine as a response to a national outbreak; and India has reduced the cost of its recombinant hepatitis B vaccine to well below that in the developed world. Publications on health research in each of these countries follow the same trend of focusing on local health needs.

Political will is another important factor in establishing a successful health-biotechnology sector, because long-term government support was integral in all seven case studies. In efforts to promote health care biotechnology, governments have developed specific policies, provided funding and recognition of the importance of research, responded to the challenges of brain drain, and given biotechnology enterprises incentives to overcome problematic economic conditions. Close linkages and active communication are important as well. Whereas in some countries, such as Cuba, strong collaboration and linkages yielded successful health-biotechnology innovation, the lack of these factors in China, Brazil, and Egypt has resulted in less impressive innovation. Defining niche areas, such as vaccines, emerged as another key factor in establishing a successful health-biotechnology sector. Some of the countries have also exploited their competitive advantages, such as India’s strong history of generic drug development.

Our study identified private-sector development as essential for the translation of health-biotechnology knowledge into products and services. South Korea significantly surpassed all other countries in this respect, with policies in place to assist technology transfer and allow university professors to create private firms. China has also promoted enterprise formation, converting existing research institutions into companies. To further explore the role of the private sector, we are currently examining how the domestic health-biotechnology sector in developing countries contributes to addressing local health needs and what policies or practices could make that contribution more effective.

The 2004 report of the UN Commission on the Private Sector and Development, Unleashing Entrepreneurship: Making Business Work for the Poor, emphasized the important economic role of the domestic private sector in developing countries. The commission, co-chaired by Paul Martin and Ernesto Zedillo, highlighted how managerial, organizational, and technological innovation in the private sector, particularly in small and mid-sized enterprises, can improve the lives of the poor by empowering citizens and contributing to economic growth. The commission’s work also emphasized the lack of knowledge about best practices and the need for more sustained research and analysis of what works and what does not when attempting to harness the capabilities of the private sector in support of development.

The catalytic challenge

Although the ultimate success of harnessing nanotechnology to improve global equity rests with developing countries themselves, there are significant actions that the global community can take in partnership with developing countries to foster the use of nanotechnology for development. These include:

Addressing global challenges. We have proposed an initiative called Addressing Global Challenges Using Nanotechnology, which can catalyze the use of nanotechnology to address critical sustainable development problems. In the spirit of the Grand Challenges concept, we are issuing a call to arms for investigators to confront one or more bottlenecks in an imagined path to solving a significant development problem (or preferably several) by seeking very specific scientific or technological breakthroughs that would overcome these obstacles. A scientific board with strong developing-country representation, similar to the one created for the Foundation for the U.S. National Institutes of Health/Bill and Melinda Gates Foundation Grand Challenges in Global Health, would need to be established to provide guidance and oversee the program. The top 10 nanotechnology applications identified above can be used as a roadmap for defining the grand challenges.

Helping to secure funding. Two sources of funding, private and public, would finance our initiative. In February 2004, Canadian Prime Minister Paul Martin proposed that 5 percent of Canada’s R&D investment be used to address developing world challenges. If all industrialized nations adopted this target, part of these funds could be directed toward addressing global challenges using nanotechnology. In addition, developed-country governments should provide incentives for their companies to direct a portion of their R&D toward the development of nanotechnology in less industrialized nations.

Forming effective North-South collaborations. There are already promising examples of North-South partnerships. For instance, the EU has allocated 285 million euros through its 6th Framework Programme (FP6) for scientific and technological cooperation with third countries, including Argentina, Chile, China, India, and South Africa. A priority research area under FP6 is nanotechnology and nanoscience. Another example is the U.S. funding of nanotechnology research in Vietnam, as well as the U.S.-Vietnam Joint Committee for Science & Technology Cooperation. IndiaNano, a platform created jointly by the Indian-American community in Silicon Valley and Indian experts involved in nanotechnology R&D, aims to establish partnerships among Indian academic, corporate, government, and private institutions in order to support nanotechnology R&D in India and to coordinate the academic, government, and corporate sectors with entrepreneurs, early-stage companies, investors, joint ventures, service providers, startup ventures, and strategic alliances.

Facilitating knowledge repatriation by diasporas. We have recently begun a diaspora study to understand in depth how emigrants can more systematically contribute to innovation and development in their countries of origin. A diaspora is formally defined as a community of individuals from a specific developing country who left home to attend school or find a better job and now work in industrialized nations in academia, research, or industry. This movement of highly educated men and women is often described as a “brain drain” and is usually seen as having devastating effects in the developing world. Rather than deem this migration, which is extremely difficult to reverse, an unmitigated disaster, some developing countries have sought ways to tap these emigrants’ scientific, technological, networking, management, and investment capabilities. India actively encourages its “nonresident Indians” diaspora to contribute to development back home, and these people have contributed greatly to the Indian information technology and communications sector. We foresee a significant role for diasporas in the development of nanotechnology in less industrialized nations.

Emphasizing global governance. We propose the formation of an international network on the assessment of emerging technologies for development. This network should include groups that will explore the potential risks and benefits of nanotechnology, incorporating developed- and developing-world perspectives, and examine the effects of a potential “nanodivide.” The aim of the network would be to facilitate a more informed policy debate and to advocate for the interests of those in developing countries. Addressing the legitimate concerns associated with nanotechnology can foster public support and allow the technology platform to progress in a socially responsible manner. Among the issues to be discussed: Who will control the means of production, and who will assess the risks and benefits? What will be the effects of military and corporate control over nanotechnology? How will the incorporation of artificial materials into human systems affect health, security, and privacy? How long will nanomaterials remain in the environment? How readily do nanomaterials bind to environmental contaminants? Will these particles move up through the food chain, and what will be their effect on humans? There are also potential risk management issues specific to developing countries: displacement of traditional markets, the imposition of foreign values, the fear that technological advances will be extraneous to development needs, and the lack of resources to establish, monitor, and enforce safety regulations. Addressing these challenges will require active participation on the part of developing countries. In developing these networks, the InterAcademy Council of the world’s science academies could play a role in convening groups of experts who can provide informed guidance on these issues.

The inequity between the industrialized and developing worlds is arguably the greatest ethical challenge facing us today. By some measures, the gap is even growing. For example, life expectancies in most industrialized nations are 80 years and rising, whereas in many developing nations, especially in sub-Saharan Africa where HIV/AIDS is rampant, life expectancies are 40 years and falling.

Although science and technology are not a magic bullet and cannot address problems such as geography, poor governance, and unfair trade practices, they have an essential role in confronting these challenges, as explained in the 2005 report of the UN Millennium Project Task Force on Science, Technology and Innovation (http://www.unmillenniumproject.org/documents/Science-complete.pdf). Some will argue that the focus on cutting-edge developments in nanotechnology is misplaced when developing countries have yet to acquire more mature technologies and are still struggling to meet basic needs such as food and water availability. This is a short-sighted view. All available strategies, from the simplest to the most complex, should be pursued simultaneously. Some will deal with the near term, others the long-term future. What was cutting-edge yesterday is low-tech today, and today’s high-tech breakthrough will be tomorrow’s mass-produced commodity.

Each new wave of science and technology innovation has the potential to expand or reduce the inequities between industrialized and developing countries in health, food, water, energy, and other development parameters. Information and communication technology produced a digital divide, but this gap is now closing; genomics and biotechnology spawned the genomics divide, and we will see if it contracts. Will nanotechnology produce the nanodivide? Resources might be directed primarily to nanosunscreens, nanotrousers, and space elevators to benefit the 600 million people in rich countries, but that path is not predetermined. Nanotechnology could soon be applied to address the critical health, food, water, and energy needs of the 5 billion people in the developing world.