Will Government Programs Spur the Next Breakthrough?
Economy-changing technologies often originated in government research. Are today’s federal programs sufficiently ambitious to catalyze the next big thing?
The future health of the U.S. economy depends on faith: the faith that a new general-purpose technology will emerge that will enable the tech-savvy United States to maintain its pace of rapid productivity growth. In the 20th century, these technological breakthroughs—jet aircraft, satellite communications, computers—always seemed to emerge magically when they were needed. Why should we not continue to believe in the magic of human ingenuity?
Although human ingenuity is indeed a wonder, a closer look at the history of the emergence of new technologies reveals that government R&D spending played an important role in the development of almost every general-purpose technology in which the United States was internationally competitive. In particular, defense-related research, development, and procurement played a pervasive role in the development of a number of industries—aircraft, nuclear power, computer, semiconductor, Internet and satellite communication, and Earth-observing systems—that account for a substantial share of U.S. industrial production.
Identifying this force behind the magic would be reassuring were it not that changes in government policy have reduced the type of federal R&D that spurred technological breakthroughs. At the same time, the private-sector laboratories, such as Bell Labs, that performed much of this defense R&D while also supporting long-range research of their own have refocused their efforts on incremental technology improvements with a shorter-term payoff. The result is that although one can always maintain the hope that magical technological progress will occur, the government and industrial investments that made previous breakthroughs possible are shrinking. Human ingenuity is an abstract concept that will always be with us, but technological innovation is a more mundane activity that requires financial resources as well as inspiration. It is not obvious where the resources to propel the cutting edge of innovation will come from. And without that innovation, it is obvious that the United States will not be able to maintain the rate of productivity growth necessary to sustain its global economic leadership.
After initially experiencing rapid or even explosive development, general-purpose technologies often experience a period of maturity or stagnation. One indicator of technological maturity has been a rise in the scientific and technical effort required to achieve incremental gains in a performance indicator. In some cases, renewed development has occurred along a new technological trajectory.
Measurable impact of a new general-purpose technology on productivity in an industry or sector often does not occur until the technology is approaching maturity. Nobel economist Robert Solow famously remarked in 1987 that the computer age could be seen everywhere except in the productivity statistics.
The electric utility industry is a classic example. Although the first commercially successful system for the generation and distribution of electricity was introduced by Thomas A. Edison in 1882, it was not until well into the 20th century that the electrification of factories began to have a measurable impact on productivity growth. Between the early 1920s and the late 1950s, the electric utility industry was the source of close to half of U.S. productivity growth.
Electric power generation from coal-fired plants reached technological maturity between the late 1950s and early 1960s, with boiler-turbine units in the 1,000-megawatt range. The exploitation of renewable energy resources or development of other alternative energy technologies could emerge over the next several decades as a possible new general-purpose technology. However, none of the alternative technologies, including nuclear power, appear at present to promise sufficient cost reduction to enable the electric power industry to again become a leading rather than a sustaining source of economic growth in the U.S. economy.
Aircraft production is an example of an industry in which a mature technological trajectory was rapidly followed by transition to a new technological trajectory. Propeller aircraft reached technological maturity in the late 1930s. The scientific and technological foundation for a transition to a jet propulsion trajectory was well under way by the late 1930s, but the transition to commercial jet aircraft would have occurred much more slowly without military support for R&D during World War II and military procurement during the Korean War. Thanks to these government efforts, the industry boomed in the 1950s and 1960s. Growth reached a plateau in 1969, when the launch of the Boeing 747 marked the technological maturity of commercial jet transport.
A similar story can be found in computer development. By the late 1960s, there were indications that mainframe computer development was approaching technological maturity, but new trajectories were opened up by the development of the microprocessor. The personal computer replaced the mainframe as the most rapidly growing segment of the computer industry and as an important source of output and productivity growth in the U.S. economy.
However, support from defense and space agencies contributed to continuing advances in supercomputer speed and power for high-end scientific and military use into the early 1990s. By the late 1990s, substantial concern was being expressed about the sources of future advances in all computer performance.
A continuing concern in the field of computers and information technology (IT) is how long microprocessors will continue their “Moore’s law” pace of doubling capacity every 18 months. It may be premature to characterize the computer and IT industries as approaching maturity, but the collapse of the dot-com stock market bubble in 2000 and the continuing consolidation of the industry suggest some caution about the expectation that this pace of progress can continue indefinitely.
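Read literally, an 18-month doubling period implies exponential growth whose scale is easy to underestimate. A minimal arithmetic sketch of that claim (the function name and parameters are illustrative, not from the article):

```python
def moores_law_factor(years: float, doubling_months: float = 18.0) -> float:
    """Capacity multiple after `years`, assuming one doubling every `doubling_months`."""
    return 2.0 ** (years * 12.0 / doubling_months)

# With the 18-month doubling period cited above:
print(moores_law_factor(3))          # two doublings in three years -> 4.0
print(round(moores_law_factor(10)))  # roughly a hundredfold gain in a decade
```

The point of the exercise is simply that sustaining such a pace indefinitely requires ever-larger absolute gains each period, which is why doubts about its continuation matter.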
Historically, new general-purpose technologies have been the drivers of productivity growth across broad sectors of the U.S. economy. It cannot be emphasized too strongly that if either scientific and technological limitations or cultural and institutional constraints should delay the emergence of new general-purpose technologies over the next several decades, the result will surely be a slowing of U.S. productivity growth. Endless novelty in the technical elaboration of existing general-purpose technologies can hardly be sufficient to sustain a high rate of economic growth. In the case of the general-purpose technologies that emerged as important sources of growth in the United States during the second half of the 20th century, it was primarily military and defense-related demand that initially drove these emerging technologies rapidly down their learning curves.
As the general-purpose technologies that were induced by defense R&D and procurement during the past half century mature, one must ask whether military and defense-related R&D and procurement will continue to be important sources of commercial technology development.
During the first two decades after World War II, it was generally taken as self-evident that substantial spinoff of commercial technology could be expected from military procurement and defense-related R&D. Although this assumption seemed reasonable in the 1950s and 1960s, the slowing of U.S. economic growth that began in the early 1970s called it into question.
Beginning in the mid-1980s and continuing into the mid-1990s, the new conventional wisdom argued that “dual-use” military-commercial technology would resolve the problem of rising cost and declining quality in post–Cold War military procurement at the same time that it stimulated the civilian economy. The Clinton administration initially embraced, at least at the rhetorical level, the dual-use concept.
Clinton administration actions, however, helped to undermine the dual-use strategy. In 1993, Deputy Secretary of Defense William Perry announced an end to a half-century effort by the Department of Defense (DOD) to maintain rivalry among defense contractors producing comparable products such as tanks, aircraft, and submarines. The change in policy set off a flurry of mergers and acquisitions that reduced the ranks of the largest contractors (those with sales of over $1 billion) from 15 in 1993 to 4 in 1997. With substantially reduced competition in their defense and defense-related markets, the remaining contractors felt less pressure to pursue a dual-use technology development path.
In retrospect it seems clear that the dual-use and related efforts were badly underfunded. They encountered substantial resistance from both DOD and the large defense contractors. The 1994 Republican Congress, as part of a general attack on federal technology development programs, eliminated the budget for DOD’s Technology Reinvestment Program, which was intended to help convert defense-only R&D activities to a dual-use focus.
By the early 1990s, it was becoming clear that changes in the structure of the U.S. economy, of the defense industries, and of the defense industrial base had induced substantial skepticism that the military and defense-related R&D and procurement could continue to play an important role in the generation of new general-purpose commercial technologies. By the turn of the century, the share of output in the U.S. economy accounted for by the industrial sector had declined to less than 15%. Defense procurement had become a smaller share of an economic sector that itself accounted for a smaller share of national economic activity. The absolute size of defense procurement had declined to less than half of the 1985 Cold War peak.
Since the end of the Cold War, the objectives of the defense agencies have shifted toward enhancing their capacity to respond to shorter-term tactical missions. Procurement shifted from a primary emphasis on new advanced technology to an emphasis on developing new processes and systems and to retrofitting legacy technologies. This trend was reinforced by an emerging consensus that the threat of system-level war ended with the Cold War. Many defense intellectuals had come to believe that major interstate wars among the great powers had virtually disappeared. The effect has been to reduce incentives to make long-term investments in defense and defense-related “big science” and “big technology.”
Would it take a major war, or threat of war, to induce the U.S. government to mobilize the necessary scientific, technological, and financial resources to develop new general-purpose technologies? If the United States were to attempt to mobilize the necessary resources, would the defense industries and the broader defense industrial base be capable of responding? It was access to large and flexible resources that enabled powerful bureaucratic entrepreneurs such as Leslie Groves (nuclear weapons), Hyman Rickover (nuclear submarines), J.C.R. Licklider (computers), and Del Webb (satellites) to mobilize the scientific and technological resources necessary to move new general-purpose technologies from initial innovation toward military and commercial viability. The political environment that made this possible no longer exists for defense-related agencies and firms.
Can private-sector entrepreneurship be relied on as a source of major new general-purpose technologies? Probably not. Most major general-purpose technologies have required several decades of public or private support to reach the threshold of commercial viability. Private firms see little value in investing in expensive high-risk research that might produce radical breakthroughs when the gains from advances in broadly useful technology are so diffuse that they are difficult to capture.
Decisionmakers in the private sector rarely have access to capital that can wait decades or even a single decade for a return. Lewis Branscomb and his Harvard University colleagues note in Understanding Private Sector Decision Making for Early Stage Technology Development that many of the older research-intensive firms have almost completely withdrawn from the conduct of basic research and are making only limited investments in early-stage technology development.
Entrepreneurial firms have often been most innovative when they have had an opportunity to capture the economic rents opened up by complementary public investment in research and technology development. The U.S. commercial aircraft industry was unwilling to commit to jet aircraft until the reliability and fuel efficiency of the jet engine had been demonstrated by more than a decade of military experience. The development of the ARPANET in the early 1970s was preceded by more than a decade of R&D by the Advanced Research Projects Agency’s Information Processing Techniques Office. It took another two decades of public support before a successful commercial system was developed. Even the most innovative firms often have great difficulty pursuing more than a small share of the opportunities opened up by their own research. It is difficult to imagine how the private sector will, without substantial public support for R&D, become an important source of new general-purpose technologies over the next several decades.
The conclusion that neither defense R&D and procurement nor private-sector entrepreneurship can be relied on as an important source of new general-purpose technologies forces a third question onto the agenda. Could a more aggressive policy of public support for R&D directed to commercial technology development become an important source of new general-purpose technologies?
Since the mid-1960s, the federal government has made a series of efforts to create programs in support of the development and diffusion of commercial technology. Except in the fields of agriculture and health, these efforts have had great difficulty in achieving economic and political viability. Funding of the programs authorized by the 1965 State Technical Services Act, which provided support for universities to provide technical assistance to small and medium-sized businesses, was a casualty of the Vietnam War. The very successful federal/private cooperative Advanced Technology Program of the National Institute of Standards and Technology barely survived the congressional attacks on federal technology programs that took place after the 1994 midterm elections, and it has been under constant attack since. The SEMATECH semiconductor equipment consortium is another model of successful public/private cooperation in technology development, but it has not been replicated in other industries. The United States has not yet designed a coherent set of institutional arrangements for public support of commercial technology development. Furthermore, even the successful programs referred to here have been designed to achieve short-term incremental gains rather than the development of new general-purpose technologies.
R&D in molecular genetics and biotechnology is a major exception. I argued in Technology, Growth, and Development that molecular biology and biotechnology will be the source of the most important new general-purpose technologies of the early decades of the 21st century. For more than three decades, beginning in the late 1930s, the molecular genetics and biotechnology research leading to the development of commercial biotechnology products in the pharmaceutical and agricultural industries was funded almost entirely by private foundations, the National Science Foundation, the National Institutes of Health, and the national energy laboratories, and was performed largely at government and university laboratories.
When firms in the pharmaceutical and agricultural industries decided to enter the field in the 1970s, they found it necessary to make very substantial grants to and contracts with university laboratories to obtain a “window” on the advances in the biological sciences and in the techniques of biotechnology that were already under way in university laboratories. When defense agencies in the United States and the Soviet Union began to explore the development of bioweapons and their antidotes, they also found it necessary to tap expertise available only in university and health agency laboratories.
The fact that I do not see any general-purpose technology revolution on the horizon does not mean that one has not begun to develop. If I had been writing this article in the mid-1970s, I would not have noticed or appreciated the commercial potential of research on artificial intelligence that had been supported by the Defense Advanced Research Projects Agency’s Information Processing Techniques Office since the early 1960s. I certainly would not have anticipated the emergence or development of the Internet and its dramatic commercial and cultural effects. It is possible that one or more of the nanotechnologies will produce powerful new general-purpose technologies, perhaps in materials science or in the health sciences, but at this stage I find it difficult to separate solid scientific and technical assessment from the hype about nanotechnology’s promise.
If forced to guess the source of the next economy-rattling technological earthquake, I would name two scientific and technological challenges that are likely candidates because each is likely to attract the substantial public investment that I believe is essential to develop a new general-purpose technology.
One is in the area of infectious disease: the demand to develop the knowledge and technology to confront the coevolution of pests, pathogens, and disease with control agents. We have been increasingly sensitized to the effects of this coevolution by the resurgence of tuberculosis and malaria, the emergence of new diseases such as AIDS and Ebola, and the threat of a new global influenza epidemic. The coevolution of human, nonhuman animal, and crop plant pests, pathogens, and diseases with control technologies means that chemical and biological control technologies often become ineffective within a few years or decades. This means, in turn, that maintenance research—the research necessary to sustain present levels of health or protection—must rise continuously as a share of a constant research budget.
At present, health R&D tends to be highly pest- and pathogen-specific. It is not apparent that current research will generate broad general-purpose medical and health-related technologies that are capable of addressing the demand for long-term sustainable protection, but at least the possibility exists.
The second is the threat of climate change. Measurements taken in the late 1950s indicated that carbon dioxide (CO2) was increasing in the atmosphere. Beginning in the late 1960s, computer simulations indicated possible changes in temperature and precipitation that could occur due to human-induced emission of greenhouse gases into the atmosphere.
By the early 1980s, a fairly broad consensus had emerged in the climate change research community that greenhouse gas emissions could, by 2050, result in a rise in global average temperature by 1.5° to 4.5°C (about 2.7° to 8.1°F) and a complex pattern of worldwide climate changes. By the early 2000s, it was clear, from increasingly sophisticated climate modeling exercises and careful scientific monitoring of Earth surface changes such as the summer melting of the north polar ice cap, that what oceanographer Roger Revelle had characterized as a “vast global experiment” was well under way. It was also apparent that an alternative to the use of carbon-based fossil fuels would have to be found.
Modest efforts have been made since the mid-1970s to explore renewable energy technologies. Considerable progress has been made in moving down the learning curves for photovoltaics and wind turbines. The Bush administration has placed major emphasis on the potential of hydrogen technology to provide a pollution-free substitute for carbon-based fuels by the second half of this century. The environmental threats and economic costs of continued reliance on fossil fuel technologies are sufficiently urgent to warrant substantially larger public support in the form of private-sector R&D incentives and a refocusing of effort by the national energy laboratories on the development and diffusion of alternative energy technologies. A major effort could yield a technological surprise with widespread application.
To be realistic, however, I do not foresee the seeds of a technological revolution in these efforts. Although immensely important, the health and energy technologies that government is likely to pursue will not resolve the problem of achieving rapid U.S. economic growth. In both cases, the emphasis is likely to be on maintenance technologies, which are necessary to prevent the deterioration of health and environment but unlikely to transform the entire economy.
The United States is going to continue investing in basic research that will produce revolutions in scientific understanding, but preeminence in scientific research is only loosely linked to preeminence in technology development. In a number of U.S. high-technology industries, it has been military procurement that enabled firms to move rapidly down their technology learning curves. If defense procurement is not going to force the development of new general-purpose technologies, the United States will need to develop a new strategy for catalyzing radical technological progress.