Strategic Computing: DARPA and the Quest for Machine Intelligence, 1983-1993, by Alex Roland with Philip Shiman. Cambridge, Mass.: MIT Press, 2002, 455 pp.
John A. Alic
The Defense Advanced Research Projects Agency’s (DARPA’s) Strategic Computing program was a 10-year, billion-dollar initiative to advance “machine intelligence” by pushing forward, in coordinated fashion, technologies for powerful computer systems that could support human intelligence or, in some cases, act autonomously. DARPA, part of the Department of Defense (DOD), supported R&D on computer architectures and gallium arsenide integrated circuits, as well as on systems intended to appeal to the military services: a “pilot’s associate” for the Air Force, battle management software for the Navy, and robotic vehicles for the Army. Many different firms and university groups participated.
Alex Roland, a Duke University history professor, and Philip Shiman provide a detailed narrative of events inside the Strategic Computing program, drawing from interviews and archival sources on its origins, management vagaries, and contracting. As signaled by the titles of the first three chapters, which are named for DARPA managers, the authors focus on individuals more than on institutions or policies. Technologies themselves are not treated in much depth. Readers with some knowledge of computer science or artificial intelligence will find signposts adequate to situate the Strategic Computing program with respect to the technological uncertainties of the time, such as the need for massive amounts of “common sense” knowledge to support expert systems software. Other readers may wish for more background.
There are a few errors. We are told, disconcertingly, that President Ronald Reagan “practically doubled defense spending in his first year.” In fact, defense outlays rose by 18 percent from 1981 to 1982 (9 percent in inflation-adjusted dollars) and by lesser increments in subsequent years. More significantly for the Strategic Computing program itself, the authors appear to be unaware that the Office of the Secretary of Defense had awarded a near-monopoly on R&D on silicon integrated circuits (the “target” which gallium arsenide circuits would have to surpass) to a different DOD initiative, the Very High Speed Integrated Circuit program. Most observers took this bureaucratic directive, rather than gallium arsenide’s intrinsically greater resistance to the ionizing radiation created by nuclear blasts, to be the reason why DARPA steered its program funding to gallium arsenide chips.
These and other small blemishes detract only a little from a book that seems exemplary for what it is: a product of the history of technology, a field in which Roland (also a military historian) is widely known. Like much else from historians, it will probably not satisfy readers interested in government policy. For example, the authors do not try to give a sense of funding levels for similar work within DARPA, much less work sponsored by other agencies, either before or after the injection of “new money” appropriated by Congress–perhaps the first question readers concerned with policy would ask.
More generally, the authors’ analytical apparatus seems attuned to academic debates over technological determinism and “the complex question of how and why great technological change takes place.” The idea seems to be that readers will, at the book’s end, be able to reach their own interpretative conclusions. Those lacking the shared assumptions of historians of technology may be unequal to the task. And, after more than 300 pages, many of them concerned with the twists and turns of DARPA management, I was left unsure what to make of the program itself. The authors, for their part, do not give a concise, plainly stated verdict, perhaps because historians tend to view such attempts as reductive oversimplifications. Readers who know little about DARPA will pick up a good deal of incidental knowledge. Otherwise, I suspect the book will be of greatest interest to those who wish to know more about this particular program, along with scholars who share the authors’ academic concerns.
Strategic Computing contrasts sharply with another of Roland’s books, Model Research, published in 1985, which provided a history of the National Advisory Committee for Aeronautics (NACA). In the earlier book, the accretion of historical detail builds to a clear picture of an agency that self-destructed through excessive conservatism. Roland showed why, long before the near-panic that followed the 1957 Soviet Sputnik launches, NACA had lost the trust of high-level federal officials. In the aftermath of Sputnik, few policymakers thought NACA a viable candidate for the task of bringing order to the many competing U.S. missile and space programs. Instead, the government created two new agencies: DARPA, administratively established by Defense Secretary Neil H. McElroy over opposition by the military services (which did not want an R&D organization outside their control), and the legislatively established National Aeronautics and Space Administration, which absorbed NACA. After DARPA, unable to hold on to its coordinating role within DOD, was forced to invent a new mission, the agency became the home for visionary, long-term military R&D and prototyping.
By the 1980s, DARPA was widely viewed as one of the government’s most effective R&D organizations, credited with substantial contributions to military technologies, including stealth, and to dual-use technologies, especially in computing. Yet when Roland and Shiman take us inside the agency’s celebrated Information Processing Techniques Office–known for sponsorship of the ARPANet, mother of the Internet–the reality is unsettling. The Strategic Computing program’s goals changed again and again: “At least eight different attempts were made to impose a management scheme,” the authors say. What began as a broad effort to push forward technologies related to artificial intelligence ended, after about 1990, submerged within the multiagency High Performance Computing and Communications initiative. This effort was structured around development of supercomputers and networks for number-crunching–a far cry from advancing machine intelligence.
What measure of success?
Despite the waste motion associated with the Strategic Computing program’s changes in direction and the unanswerable question of what its outcomes might have been if DARPA had stuck closer to the original structure and objectives, it would be misleading to label the undertaking a failure. There are few metrics for judging the effectiveness of technology programs with such heterogeneous agendas. The relationships between the program’s technical goals, intended to be mutually supporting, were complicated and necessarily shifted over time as earlier uncertainties were resolved and new problems appeared. DARPA’s task was doubly complicated by the need to satisfy the military services, which have limited tolerance for R&D unless closely coupled with foreseeable applications to major weapons systems.
The Strategic Computing program included some research and some development. For research in disciplines that are reasonably well characterized and have frontiers marked by professional consensus, social processes provide the equivalent of evolutionary selection in nature: Scientists vote with their citations and their choices of further research directions. In design and development, where the objective is a concretely conceived good or service delivered to some final customer, selection takes place through market mechanisms. In the case of military systems for which no market exists, experience in the field and the integration of technical systems with military doctrine and operational planning accomplish the eventual winnowing. (A case in point: After the Air Force declined to fly its B-1B bombers during the 1991 Gulf War, few observers could believe that this particular fruit of the Reagan defense buildup had been worth the $30-plus billion expended.) The Strategic Computing program was all of these things, yet none of them to the extent needed for straightforward evaluation.
It is too early to look for many results in DOD’s procurement pipeline: It takes 15 or 20 years for new systems to reach the field, and the Strategic Computing program ended only a decade ago. So far, civilian spin-offs appear to be few. Of what the authors call the program’s quintessential artifact–the massively parallel computers built by Thinking Machines Corporation before its bankruptcy–fully 90 percent went to federal agencies or DOD contractors rather than commercial customers. Nonetheless, spin-offs may lie ahead. Long lead times continue to characterize the maturing of complex technologies, notwithstanding short technology cycles in microelectronics. After all, the ARPANet and its successors grew slowly for three decades, virtually unknown outside the user community, until the Internet burst into public consciousness in the mid-1990s. That artificial intelligence has been disappointing enthusiasts for four decades does not necessarily mean that its future holds nothing but further frustration.
Since World War II, the U.S. government has supported computing and related technologies through the policies and programs of many different agencies and subagencies both inside and outside of DOD. These policies and programs go far beyond R&D. Defense spending fostered the early creation of a flourishing research-oriented intellectual community centered on electrical engineering and what is now known as computer science, along with occupational communities of programmers and practice-oriented computer engineers and systems analysts. Regulatory interventions through antitrust spurred the growth of independent software firms from the late 1960s. From the beginning, government agencies have been major purchasers of both hardware and software; although no accounting is available, DOD may now be spending $50 billion or more per year on software alone, including maintenance and upgrades.
As hard as they are to weigh, the impacts (positive and negative) and interactions of these and other policies, which are often uncoordinated and sometimes contradictory, have been a powerful force for innovation in computing and information technologies over more than 50 years. The Strategic Computing program, with its multiple shifts in direction and internal conflicts, resembles the larger U.S. policy structure in microcosm. To some observers, federal policies and programs in their profusion and confusion may seem not only unsystematic but wasteful. Yet as demonstrated most recently by the Internet, the government’s actions remain a major spur, as they have been since the 1940s, to the sprawling family of technologies so often acclaimed as the source of a third industrial revolution (or a first postindustrial revolution). Fruits of the program may show up in unexpected places in the years ahead. Strategic Computing will provide a starting point for identifying them.
John A. Alic (JAlic@att.net) spent many years on the staff of the congressional Office of Technology Assessment.