Science and the Entrepreneurial University

During the second half of the 20th century, research universities in the United States remade themselves into an important engine of the modern economy. Everyone has heard of the technological miracles wrought by Silicon Valley in California and Route 128 in Massachusetts. Less well known is that high-technology activity, much of it stimulated by research institutions, is estimated to account for 65% of the difference in economic growth among U.S. metropolitan regions, according to a new book by sociologist Jonathan R. Cole of Columbia University, The Great American University: Its Rise to Preeminence, Its Indispensable National Role, Why It Must Be Protected. Further, 80% of leading new industries may derive from university-based research. Although research universities represent only a small fraction of the higher-education system—fewer than 200 of over 4,000 postsecondary institutions—they are now recognized as essential to U.S. economic leadership.

Yet this is not a moment for self-congratulation. The U.S. economy is beset with difficulties, and as a result universities, especially public universities, are experiencing a painful disequilibrium of their own. Today’s climate of economic dislocation is reinforcing the pressures on research universities to play a more direct and active role in fostering innovation than ever before. Can they do it? The short answer is a conditional yes. But the nation faces several key challenges in making that happen.

In looking ahead, it is first useful to recognize four broad developments that have shaped the role, distinctive to the United States, of research universities in the economy. These developments are:

  • The historic decision to establish a comprehensive federal policy on the role of science in the post–World War II era. This policy, in large part the creation of President Franklin Roosevelt’s science adviser Vannevar Bush, was embodied in Bush’s 1945 report Science, The Endless Frontier.
  • The 1980 Bayh-Dole Act, which allowed universities to keep the patent rights to inventions resulting from federally funded research at their institutions.
  • Economic analyses that have validated the central role of knowledge in economic growth, influencing both government and university policy on industry/university partnerships.
  • Current experiments with new forms of industry/university collaborative research.

The postwar paradigm

Vannevar Bush’s historic report grew out of the pivotal contributions that science and engineering had made to the U.S. effort during World War II. That effort required Bush and his colleagues to organize scientists and engineers to work toward a common goal on a scale never attempted before, and he and President Roosevelt feared the gains would be lost without a deliberate blueprint for supporting science in the postwar world. Bush’s intention was to provide industry and the military with a permanent pool of scientific knowledge to ensure economic growth and defense. His strategy was to define the different roles of government, industry, and universities in the scientific enterprise.

The federal government’s role would be to support basic science generally, not its applications. Industry would be responsible for applied research. Bush reasoned that industry had little incentive to invest heavily in basic research because its results were not proprietary and might be profitably applied by rival firms. Research universities, he decided, should be responsible for producing the pool of fundamental knowledge on which industry could draw. Federal support for university research would be channeled through a system of grants to individual researchers. Each grant would be awarded to projects whose scientific merit had been endorsed through a process of peer review. Congress established the National Science Foundation (NSF) in 1950 to serve as an independent federal agency devoted to supporting basic research and education in all scientific and engineering disciplines.

The most far-reaching premise of Bush’s report was never explicitly stated in that document. In arguing for the primacy of basic research, Science, The Endless Frontier defined the national research system as residing in its research universities, the locus of most basic scientific research and all graduate and postgraduate education in the United States. Before World War II, the federal government provided virtually no support for research in universities; the very concept of such funding was viewed as a radical idea. In the postwar world, the government committed itself to becoming the major sponsor of scientific research in universities. It was an extraordinary reversal of direction.

Bush’s model—a national scientific enterprise in which basic research, supported with federal funds and conducted by universities, would be implemented by private industry—was a highly simplified version of what actually happens in the discovery and application of new ideas. But his enduring accomplishment was to create a vast system of scientific and technological research organized to produce regular and systematic innovation in the service of economic growth and national security. This is why Science, The Endless Frontier remains to this day the single most important document on U.S. science policy ever written.

Tweaking the paradigm

Vannevar Bush’s report was a landmark of federal policymaking, but by the 1970s the innovative engine it created seemed in need of repair. Strong competition from a reinvigorated Europe and Asia, declining U.S. productivity growth, and rising unemployment made economic competitiveness a major national preoccupation. U.S. universities were producing a rich array of potentially useful research, but innovations were not moving into the private sector as quickly or efficiently as the economy required. The weak link in Bush’s model was at the point of transfer from the public to the private sector. The search for better, faster, and more efficient ways of moving university discoveries to market was under way.

The new urgency surrounding technology transfer was in part an unintended consequence of Bush’s report. University research partnerships with industry had flourished in the early years of the 20th century. But these partnerships dimmed in the years after World War II, eclipsed by the sheer volume of federal research funding that poured into research universities in the 1950s and 1960s.

During the 1970s, the U.S. government embarked on a series of actions to rebuild the nation’s competitiveness, including establishing tax credits for research, funding public/private research centers, and easing antitrust regulations to encourage research partnerships. For universities, the most far-reaching of these actions was the Patent and Trademark Law Amendments Act of 1980, better known as the Bayh-Dole Act. Bayh-Dole was intended to invigorate the process of technology transfer from universities and federal laboratories to business and industry. It accomplished this through a fundamental shift in government patent policy.

Before the 1980 legislation, the federal government owned the rights to any patentable discovery coming out of research supported with federal funds. Yet few research results ever made it to market under this arrangement. Bayh-Dole transferred the government’s patent rights to universities, leaving it to each institution to decide whether income derived from a patented invention went to individual researchers or the university, or was shared by both. Although the result was to open a new income stream for universities, this was secondary to Bayh-Dole’s primary aim: to see that the public investment in basic research served national economic growth.

The influence of Bayh-Dole has been profound, making it far more attractive for universities and industry alike to partner in the commercialization of scientific discoveries. Between 1988 and 2003, U.S. patents awarded to university faculty increased fourfold, to 3,200 from 800. Technology transfer offices on research university campuses are now ubiquitous. Most patent income flows from a few hugely successful discoveries, such as the basic technique for DNA recombination or, more recently, the development of pioneering new drugs. Not all technology transfer offices make money, and only a few make a great deal. Nonetheless, they are key organizations on university campuses, because they offer a ready means for faculty to move research results into the commercial sector.

Thirty years after the passage of Bayh-Dole, some critics complain that universities still do not do enough active technology transfer, either sitting on patents they own or demanding unrealistic value for proprietary rights to university inventions. One proposed solution would allow faculty members to bypass campus technology transfer offices entirely and negotiate their own licensing agreements. From the outset, a broader objection was that Bayh-Dole would be a step down the road to transforming research universities into job shops for private industry, a threat to the integrity of their research and educational missions. This has occurred in some cases when universities have conducted proprietary research funded by industry. The more common experience, however, is that universities and their industrial partners have managed to negotiate successful research arrangements that respect their differences in mission and culture.

Validating new knowledge

NSF was deeply involved in the activities generated by the competitiveness crisis of the 1970s. It analyzed the technology transfer process and, based on its findings, prepared the draft legislation that laid the foundation for the Bayh-Dole Act. It also examined other incentives for investing in research, such as tax credits and industry/university partnerships. These studies led NSF to establish the Industry-University Cooperative Research Program, which supported joint research projects between industry and universities: industry funded its part of each project, and NSF funded the university side. The program was novel at the time and raised some concerns in the research community, but the quality of the proposals and the excellence of the work quickly established its value, and it has since been replicated at other agencies. NSF also established an extramural research program, funding projects to study the relationship between investments in R&D and various types of economic growth.

Economists have long recognized that new inventions and techniques can spur economic growth and productivity. But for many years, most members of the profession assumed that new technology was less important than labor and capital in driving economic growth. In the 1950s, Robert Solow of the Massachusetts Institute of Technology challenged this view with a mathematical model demonstrating that only half of economic growth can be traced to labor and capital. The remainder, he argued, was due to technical progress.
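Solow’s argument can be stated compactly in the standard growth-accounting identity (a textbook formulation offered here for illustration, not Solow’s original notation): output growth is split into the contributions of capital, labor, and a residual,

\[
\frac{\dot{Y}}{Y} \;=\; \alpha\,\frac{\dot{K}}{K} \;+\; (1-\alpha)\,\frac{\dot{L}}{L} \;+\; \frac{\dot{A}}{A},
\]

where Y is output, K is capital, L is labor, \alpha is capital’s share of income, and the residual term \dot{A}/A captures technical progress. The residual is whatever growth remains after the measured contributions of capital and labor are accounted for, which is why Solow’s finding that it was so large challenged the conventional view.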

But relatively little quantitative work on exactly how R&D connects to the economy had been done at the time NSF launched its studies in the 1970s. Edwin Mansfield, an economist at the University of Pennsylvania and an important contributor to these studies, coauthored a landmark 1977 paper on the social and private rates of return on industrial innovations; that is, the benefits that private firms gain from investing in new products and processes as compared to the benefits that accrue to society. Mansfield and his colleagues found that the social rate of return was much higher than the rate of return to the firms themselves. The paper provided empirical evidence for Vannevar Bush’s argument that private industry has little financial incentive to invest in basic research, which should instead be supported by government as a public good.
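The distinction Mansfield measured can be made concrete with a simplified sketch (the general logic of rate-of-return calculations, not Mansfield’s exact accounting). Each rate is the discount rate that equates the discounted stream of an innovation’s benefits to its cost C; the two rates differ only in whose benefits are counted:

\[
\text{private: } \sum_{t} \frac{\pi_t}{(1+r_p)^t} = C,
\qquad
\text{social: } \sum_{t} \frac{\pi_t + s_t}{(1+r_s)^t} = C,
\]

where \pi_t is the innovating firm’s net return in year t and s_t is the spillover benefit captured by consumers and other firms. Because s_t is typically positive and often large, r_s exceeds r_p, which is the gap Mansfield documented and the reason a purely private calculus underinvests in research.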

Toward the end of his career, Mansfield turned his attention to how basic research in universities stimulates technological change. He wrote an influential 1995 paper assessing how academic research contributed to industrial innovation in 69 firms in seven major and diverse manufacturing areas, including information processing, pharmaceuticals, and petroleum. Mansfield found that academic research was responsible for about 11% of the new products and about 9% of the new processes in the companies he studied. His analysis was a systematic attempt to document the sources, funding, and characteristics of academic research that yields industrial applications. This and many other Mansfield studies helped shape government policy on technology and economic growth. Later studies have provided further evidence of Mansfield’s thesis that publicly supported research is a significant source of industrial applications. A 1997 analysis of U.S. industrial patents found that 73% of the papers cited were written by researchers at publicly funded institutions (universities, government laboratories, and other public agencies) in the United States or foreign countries.

Another development, New Growth Theory, has translated broad intuitive ideas about innovation and economic growth into explicit and elegant mathematical models. Stanford University economist Paul Romer has been a major figure in this domain. His seminal 1990 paper, Endogenous Technological Change, begins with a question: Why has U.S. productivity—output per hour worked—increased 10-fold over the past century, when conventional economic theory would predict that growth should peak at some point and then level off or decline?

Romer’s answer: technological change. The example he gives is iron oxide. A century ago, the only way to elicit visual pleasure from iron oxide was to use it as a pigment. Today, it is applied to plastic tape to make videocassette recordings. Incremental improvements such as these lie “at the heart of economic growth,” according to Romer, and in this respect his model resembles Solow’s. Technical progress occurs at an increasingly rapid rate because successive generations of scientists and engineers learn from the accumulated knowledge of their predecessors. Further, technological change is driven, in large part, by market incentives. Even if you are a professor on a federal grant with no interest in applying your discoveries, should commercialization occur it will be because an individual or a private firm wants to make a profit. This is why Romer describes technological change not as some external quantity injected into economic activity, but as something endogenous—internal—to the economic system itself. Unlike land, labor, and capital, technological change created by human ingenuity holds out the potential of ever-increasing expansion in the wealth of nations. “The most interesting positive implication of the model,” he concludes, “is that an economy with a larger total stock of human capital will experience faster growth.”
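The formal core of Romer’s model can be conveyed in a single equation (a simplified statement of the paper’s research sector, with the other sectors omitted): new knowledge is produced by the human capital devoted to research working with the existing stock of knowledge,

\[
\dot{A} = \delta\, H_A\, A,
\]

where A is the stock of ideas, H_A is the human capital employed in research, and \delta is a productivity parameter. The growth rate of knowledge, \dot{A}/A = \delta H_A, therefore rises with the size of the research workforce rather than tapering off, which is the mechanism behind Romer’s conclusion that an economy with a larger stock of human capital will grow faster.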

New Growth Theory and subsequent economic and management analyses have provided a greater degree of sophistication in ideas about how the economic innovation process works; society now has a more complex understanding of the relationship between discovery and application than the 1945 Bush model affords. Recent scholars have emphasized the central role of entrepreneurship and the individual entrepreneur in this process. Of particular note, Lynne G. Zucker and Michael R. Darby of the University of California, Los Angeles, have looked at the origins of the first U.S. biotechnology companies. They found that the active, hands-on involvement of “star” scientists—scientists who had made original discoveries in the field and understood how to apply techniques of working with recombinant DNA—was indispensable to the early expansion of the biotechnology industry. In biotechnology and certain other fields, the leap from basic science to innovative product is short and getting shorter.

Carl Schramm and Robert E. Litan of the Kauffman Foundation argue that small entrepreneurial companies are key to pulling the United States out of the current recession. Between 1980 and 2005, firms less than five years old were responsible for almost all of the 40 million net new jobs (the jobs left after subtracting positions eliminated by downsizing) created nationwide. In this light, the United States should be creating an environment that is favorable to entrepreneurship generally, but especially to small discovery-based companies. These companies have the capacity not only to grow quickly and generate jobs but also to spread transformative innovations, such as the automobile or Internet search engines, that have a deep influence on national prosperity over the long term.

The economic importance of entrepreneurs, startups, and small companies has not been lost on state and local governments, many of which are working directly with universities to advance regional economic growth. A California example is the CONNECT program, initially established at the University of California, San Diego, and now an independent nonprofit agency. As its name implies, CONNECT brings together university researchers with entrepreneurs, angel investors, and venture capitalists from around the country. It has helped launch hundreds of successful startup companies in the San Diego area.

CONNECT and similar efforts reflect the competitive realities of the national and global marketplace and the new demands they are bringing to research universities. One is the demand for more, and more interdisciplinary, research, conducted with industrial partners to help translate basic science into new products, processes, and startup companies. Another is the expectation that research universities will make explicit efforts to take a longer view about the scientific and technological discoveries that will prove essential to the economy 10 or 20 years down the road. A third is that they will educate students who are proficient not only in science and technology but also in entrepreneurship. Some of these goals can be accomplished using the traditional methods and approaches employed in research universities. But others will require a shift in some longstanding attitudes and assumptions about conducting scientific investigation and working with industry. Research universities, in other words, are being asked to become more entrepreneurial themselves.

Promoting a new model

One of the clearest illustrations of this trend is a California initiative to create the next generation of industry/university research partnerships. In 2000, Gray Davis, then the state’s governor, announced four new interdisciplinary research institutes on University of California (UC) campuses, to be chosen by a competitive process and funded through a three-way partnership among government, industry, and the university system. The California Institutes for Science and Innovation conduct fundamental and applied research across many disciplines to achieve the scientific breakthroughs and new technologies that will drive the state’s economy and improve its society. Educating future scientific leaders is part of their mandate as well, which means that students participate in all phases of research. Each institute involves two or more campuses, with one campus taking the lead. The institutes are:

  • The California Institute for Telecommunications and Information Technology, with UC San Diego as the lead campus, in partnership with UC Irvine.
  • The California Institute for Quantitative Biosciences, with UC San Francisco as the lead campus, in partnership with UC Berkeley and UC Santa Cruz.
  • The California Nanosystems Institute, with UCLA as the lead campus, in partnership with UC Santa Barbara.
  • The Center for Information Technology Research in the Interest of Society, with UC Berkeley as the lead campus, in partnership with UC Davis, UC Merced, and UC Santa Cruz.

Each institute collaborates with a variety of researchers, students, and private companies. State government contributed $100 million in capital support for each institute, with the requirement that each institute raise two dollars in matching funds for every dollar of state capital support. Today, the state provides $4.75 million annually in operating funds, and the university system provides $5.25 million. The rest of the institutes’ support comes from federal grants and industry partnerships.

An inspiration for the California institutes was the renowned corporate research giant AT&T Bell Laboratories, responsible in its heyday for such key scientific advances as the transistor and fiber optics. The era of the big industrial laboratories (Xerox and RCA as well as Bell Labs) is over. But a key lesson of Bell Labs’ phenomenal success was the utility of scale in making rapid progress toward the solution of large scientific or technical questions. Whereas other research enterprises might have a dozen or more scientists focused on a particular problem, Bell Labs could marshal hundreds. What if a series of laboratories were created within the UC system, staffed by a critical mass of researchers from many disciplines, institutions, and industries, all dedicated to creating the scientific discoveries in major fields required for the economic and social prosperity of California?

In part, the California institutes are an effort to repair the weak link in the 1945 Bush model: technology transfer. In part, they are one state’s answer to the innovative vacuum left by the decline of the great industrial laboratories. But they also are an experiment in creating a new research model. This model goes well beyond technology transfer to a closer integration of universities and industry. Its most important assumption is that innovation will be faster and better in institutions that can successfully draw on the different strengths of academic and industrial science.

An example: The California Institute for Telecommunications and Information Technology (more conveniently called Calit2) focuses on the innovative potential of the continual exponential growth of the Internet and telecommunication. Calit2 provides advanced facilities, expert technical support, and state-of-the-art equipment: an environment intentionally designed to allow researchers to work in new ways on new kinds of projects. Fiber-optic cables link the institute to research centers throughout California and around the world.

Most of the space in Calit2 is open, with few private offices. Media artists, cognitive scientists, computer engineers, biochemists, and medical doctors are not normally found working together. At Calit2, it is an everyday occurrence. Most projects last three to five years; some of the technologies being explored will take more than a decade to develop, others a year or so. At any particular moment, there are some 30 grant-funded projects under way. As soon as one project is completed, another takes its place, as researchers rotate in and out of their academic departments.

Calit2’s unique organization and world-class facilities enable it to shift focus rapidly in accord with the research goals of faculty and of private industry. The institute has worked with more than 200 firms, from startups to established giants, to provide them with many services they cannot produce themselves in a cost-efficient fashion. Sometimes a company wants to support a specific project or have a device tested. Or it may choose to invest in longer-term research, fund a chair in that area, and later hire a graduate student mentored by the faculty member who held the chair. Other companies may establish corporate sponsorships that bring together researchers to think about the most fundamental of problems. Seven Hollywood studios have joined the Calit2 CineGrid project, which conducts experiments in its state-of-the-art, optical fiber–linked visualization facilities. Ten years after its founding, Calit2 is a leader in green technology, information theory, photonics and optical networks, digital biology, and technologies that integrate art, science, and computing.

Collectively, the California institutes have received high marks to date for the innovative importance of their research accomplishments. Their progress toward the goal of creating the technological infrastructure for the next economy is harder to assess, and it is probably too early to try. At the moment, their most pressing problem is operating funds. Although they have more than succeeded in attaining a two-to-one match for state funds, the Great Recession has left California on the edge of a fiscal precipice.

Need for institutional change

The new research model envisioned for the California institutes required a profound institutional change within the university system itself. One of the institutes’ most striking aspects is the way they are shifting traditional academic boundaries. The co-location of researchers from university, industry, and public agencies (the institutes work on societal challenges as well as economic ones) generates a dynamic environment for thinking about old problems in new ways. It has also created new kinds of learning and career opportunities for students. The institutes are a magnet for both undergraduate and graduate students interested in combining traditional in-depth knowledge of a single field with broad experience of one or two other fields as well. Business students seeking an education in entrepreneurship and innovation find the institutes a rich source of ideas, advisors, and research mentors.

The cross-disciplinary mandate of the institutes has required them to challenge the faculty specialization and physical isolation within a department that are typical of research universities. Calit2, for example, achieved its leadership in taking Internet technologies to the next level because 24 academic departments work across disciplines to tackle complex problems, work that often carries intellectual discoveries into the marketplace. The institutes’ state-of-the-art facilities do more than enable state-of-the-art research. They create an experimental laboratory of innovation that is open to a broad cross-section of individuals and institutions, whether public or private, profit or nonprofit.

The future of innovation

Research universities are one of the best reasons why the United States can have confidence in its economic future. But they are under threat, and that is why the question of whether they will succeed in contributing more to economic growth must be answered with a conditional rather than an unconditional yes. It goes without saying that research universities must have more state and federal funding as soon as budgetary realities allow—if not before. They are facing other important challenges as well. To mention a few:

  • Even before the Great Recession, funding increases for academic research were skewed toward just a few fields, principally the health and biological sciences and engineering. The nation has been underinvesting in the physical sciences; the earth, atmospheric, and ocean sciences; and the social sciences. This imbalance has been exacerbated by earmarking. Although the annual total of such appropriations is small compared to other kinds of congressional earmarks, the practice damages the peer-review process that has been a cornerstone of the research university system.
  • In recent years, the level of all federal research funding for universities has increased very slowly. Most contracts and grants do not include sufficient support to recruit graduate students in the numbers we need for national economic competitiveness in key industries or to provide enough postgraduate fellowships. Federal funding constraints have imposed an especially heavy burden on younger faculty members. The overall success rate for proposals submitted to NSF is approximately 30%; the success rate for proposals from newly appointed Ph.D.s is closer to 20%. The fierce competition for funding may discourage faculty, including younger faculty, from submitting proposals that are out of the mainstream and could yield major breakthroughs.
  • The federal government should make it easier for foreign-born students who have earned advanced degrees at U.S. universities to stay in this country after their education is finished. According to research by AnnaLee Saxenian, professor and dean of the UC Berkeley School of Information, and her colleagues, one-quarter of all engineering and technology firms established in the United States between 1995 and 2005 had at least one immigrant founder. A follow-up study revealed that over half of the immigrants who had started engineering and technology companies had received their highest degrees at U.S. universities. The United States has a record of integrating foreign-born students into its science and technology system that few other nations can match. The nation should build on that foundation even as it steps up efforts to recruit more U.S.-born students into scientific and technological fields.

State and federal policies have encouraged universities to become more active in the development of human capital, entrepreneurship, and industry/university collaboration. Economic analyses have given this trend a theoretical and empirical framework and made a compelling case for the benefits to society. Above all, it would be hard to overestimate the transformative influence of Vannevar Bush and his sweeping redirection of U.S. science policy. In making research universities the core of the U.S. system of scientific and technological innovation, he set them on the path to their current, and still evolving, role in economic growth. The age of the entrepreneurial university has only begun.

Science’s Uncertain Authority in Policy

Scientists view science as the ultimate authority on the laws of the universe, but that authority has no special standing when it comes to the laws of nations. The rigors of the scientific method may be humanity’s most reliable approach to attaining rational and objective “truth,” but the world’s leaders very often follow other routes to policy conclusions. Society’s decisionmakers acknowledge the power of science and invoke its support for their decisions, but they differ greatly from scientists in the way they understand and use science. My years as White House science advisor made me aware that science has no firm authority in government and public policy. Scientists might wish that it were otherwise, but if they want to play an effective role in policymaking, they need to understand the political process as it is. A few examples will illustrate my point.

In November 2001, following what were then regarded as incidents of terrorism involving mailed anthrax, Homeland Security Advisor Tom Ridge called me seeking urgent advice on what to do with a very large quantity of anthrax-laden U.S. mail. Working with my staff at the White House Office of Science and Technology Policy, we formed an interagency task group to evaluate and recommend methods to neutralize the spores. The answer we were seeking could not be found in the literature, so we commissioned some research and delivered what was truly “applicable science on demand.” We were able to give the U.S. Postal Service precise instructions on how to employ electron-beam irradiation with equipment normally used for food sterilization. Our directions addressed all aspects of the procedure, including the setting for radiation intensity. The Postal Service officials were delighted, and they enthusiastically went to work destroying anthrax—perhaps too enthusiastically. They reported back to us that some of the first batches of mail burst into flame.

We discovered that our guidance, which I would describe as a narrow form of policy advice, was accepted as to method, but not as to degree. Someone surmised that if five on the intensity dial were good, ten would be better. That agent substituted his or her judgment for a well-defined policy recommendation based on careful science and unambiguous data. Much, of course, was at stake. The Postal Service was responsible for delivering mail that would not be lethal. Better to be safe than sorry. When the intensity was throttled back to our recommended level, the treatment worked just fine. You may smile at this minor episode, but it is a relatively benign example of a potentially disastrous behavior.

A serious consequence of ignoring expert technical advice occurred in January 1986, when the Challenger space shuttle launch rocket failed, killing seven astronauts. The best brief account I know of this tragedy is contained in Edward Tufte’s 1997 Visual Explanations, which includes a detailed analysis of the manner in which the advice was given. “One day before the flight, the predicted temperature for the launch was 26° to 29° [F]. Concerned that the [O-rings] would not seal at such a cold temperature, the engineers who designed the rocket opposed launching Challenger the next day.” Their evidence was faxed to the National Aeronautics and Space Administration, where “a high-level NASA official responded that he was ‘appalled’ by the recommendation not to launch and indicated that the rocket-maker, Morton Thiokol, should reconsider… Other NASA officials pointed out serious weaknesses in the [engineers’] charts. Reassessing the situation after these skeptical responses, the Thiokol managers changed their minds and decided that they now favored launching the next day. They said the evidence presented by the engineers was inconclusive.”

Even more was at stake when secret Central Intelligence Agency (CIA) reports to the White House starting in April 2001 advanced the opinion of an analyst—by reasonable standards a well-qualified analyst—that certain aluminum tubes sought by Iraq were likely for use in a nuclear weapons program. That claim was challenged immediately by Department of Energy scientists, probably the world’s leading experts in such matters, and later by State Department analysts, who refuted the claim with many facts. The administration nevertheless decided to accept the CIA version in making its case for war. Thanks to a thorough July 2004 report by the Senate Select Committee on Intelligence, the aluminum tubes case is very well documented. This episode is another example of policy actors substituting their subjective judgment in place of a rather clear-cut scientific finding. Did the small group of senior officials who secretly crafted the case for war simply ignore the science? I was not invited to that table, so I cannot speak from direct experience. But I suspect that the process was more complicated than that.

From the evidence that has become available it appears the decision to invade Iraq was based more on a strong feeling among the actors that an invasion was going to be necessary than on a rigorous and systematic investigation that would objectively inform that decision. I will not speculate about the basis for this feeling, but it was very strong. My interest is in how the policy actors in this case regarded science. They were obviously not engaged in a process of scientific discovery. They were attempting to build a case, essentially a legal argument, for an action they believed intuitively to be necessary, and they therefore evaluated the conflicting testimony of credentialed experts from a legal, not a scientific, perspective. The case against the CIA conclusion, although overwhelming from a scientific viewpoint, was nevertheless not absolutely airtight based on material provided to the decisionmakers. It was reported to the policymaking group by nonscientists who were transmitting summary information in an atmosphere of extreme excitement, stress, and secrecy. I assume that the highly influential CIA briefings on the aluminum tubes did make reference to the Energy Department objections, but this information was transmitted to the decisionmakers in a way that left a small but real opening for doubt. From a strict legal perspective, seriously limited by the closed and secret nature of the process, that loophole was enough to validate the proposition in their minds as a basis for the desired action.

What is important about these examples is that, as a point of historical fact, the methods of science were weaker than other forces in determining the course of action. The actors had heavy responsibilities, they were working under immense pressure to perform, and the decisions were made within a small circle of people who were not closely familiar with the technical issues. Scientists, and many others, find the disregard of clear technical or scientific advice incomprehensible. Most of us share a belief that the methods of science are the only sure basis for achieving clarity of thought. They are not, unfortunately, the swiftest. The methods of science, as even their articulate champion C.S. Peirce himself observed, do have their disadvantages. Peirce, an eminent logician and the founder of the philosophical school of pragmatism, argued in his famous essays that there are four ways to make our ideas clear and that science is ultimately the only reliable one. However, to quote the Wikipedia entry, “Peirce held that, in practical affairs, slow and stumbling ratiocination is often dangerously inferior to instinct, sentiment, and tradition, and that the scientific method is best suited to theoretical research, which in turn should not be bound to the other methods [of settling doubt] and to practical ends.” That the physical evidence for Saddam’s hypothetical nuclear program was virtually nonexistent, that its significance was appallingly exaggerated in statements by high public officials, and that the consequences of the action it was recruited to justify were cataclysmic, is beside the point. The fact is that although many factors influenced the decision to invade Iraq, science was not one of them, and it is a fair question to ask why not.

To my knowledge, no nation has an official policy that requires its laws or actions to be based on the methods of science. Nor is the aim of science to provide answers to questions of public affairs. That science nevertheless does carry much weight in public affairs must be attributed to something other than the force of law. It is worth asking why advocates of all stripes seek to recruit science to their cause and why we are so offended by actions that “go against science.” Studying the source from which science derives its legitimacy may shed some light on conditions under which it is likely to be superseded.

Max Weber, the father of sociology, lists three “pure types of legitimate domination” based on different grounds as follows: (1) “Rational grounds—resting on a belief in the legality of enacted rules and the right of those elevated to authority under such rules to issue commands.” This Weber calls legal authority, and he furnishes it with all the bureaucratic trappings of administration and enforcement of what we would call the rule of law. In this case the authorities themselves are rule-bound. (2) “Traditional grounds—resting on an established belief in the sanctity of immemorial traditions and the legitimacy of those exercising authority under them.” This is the traditional authority of tribes, patriarchies, and feudal lords. And (3) “Charismatic grounds—resting on devotion to the exceptional sanctity, heroism or exemplary character of an individual person, and of the normative patterns or order revealed or ordained by him.” Weber applies the term charisma “to a certain quality of an individual personality by virtue of which he is considered extraordinary and treated as endowed with supernatural, superhuman, or at least specifically exceptional powers or qualities. These are such as are not accessible to the ordinary person.”

Weber intended these types to be exhaustive. It is an interesting exercise to attempt to fit the authority of science in society into one or more of these categories. If we admit that science is not sanctioned by law, then of the two remaining choices charismatic authority seems the best match. But to a scientist this is an absurd conclusion. It is precisely because the operation of science does not require charismatic authorities that we should trust it to guide our actions. We tend to accept the authority of science as uniquely representing reality, and to regard acting against it as a mild form of insanity. Experience shows, however, that such insanity is widespread. (Consider only public attitudes toward demonstrably risky behavior such as smoking or texting while driving.) Unless it is enforced through legal bureaucratic machinery, the guidance of science must be accepted voluntarily as a personal policy. Science is a social phenomenon with no intrinsic authoritative force.

The fact that science has such a good track record, however, endows its practitioners with a virtue that within the broad social context closely resembles Weber’s “exceptional powers or qualities” that accompany charismatic authority. And indeed the public regard for science is linked in striking ways to its regard for scientists. Contemporary Western culture gives high marks for objectivity, and science, as Peirce compellingly argued, is unique among the ways of making our ideas clear in arriving at objective, publicly shareable results. In the United States, at least, there is broad but voluntary public acceptance of science as a source of authority. Its authority is not mandated, but those who practice it and deliver its results are endowed with charismatic authority.

The National Academies and the National Research Council inherit this charismatic quality from the status of their members. I was never more impressed with the power of the Academies and their reports than in a series of events associated with the development of the proposed Yucca Mountain nuclear waste repository. The story began with a 1992 law requiring the Environmental Protection Agency (EPA) to base its safety regulations for the facility on a forthcoming National Research Council report. When the report appeared in 1995, it implied that science did not preclude drafting radiological safety guidelines extending over very long times—up to a million years!—related to the half-lives of certain radioactive components of spent nuclear fuel. Rule-making required estimating the impact of potential radiological contamination of groundwater on populations living in the vicinity of Yucca Mountain over more than a hundred thousand years. The science of such regulations requires constructing scenarios for both the physical processes of the storage system and the human population over that time period. There is no credible, empirically testable scientific approach for such long times, and the EPA acknowledged this through a change in its methodology after 10,000 years. When the regulations were challenged in court, the U.S. Court of Appeals, to my amazement, ruled that the EPA had not adhered to the letter of the NRC report as required by law and told EPA to go back to the drawing board. A member of the committee that produced the report, a respected scientist, said that he never expected the report to be used this way. It had become a sacred text. In 2008 both the secretary of energy and the EPA administrator asked my advice on how to proceed, but the issue had passed far beyond the bounds of science. I speculated that in far fewer than a thousand years advances in medical science would have altered completely the significance of hazards such as exposure to low-level ionizing radiation. But such speculations play no role in the formal legal processes of bureaucratic regulation. Yucca Mountain has become a social problem beyond the domain of science.
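The time scales at issue follow from the arithmetic of radioactive decay (an illustrative aside using round figures, not drawn from the NRC report). The inventory of an isotope remaining after time t is

\[
N(t) = N_0\, 2^{-t/t_{1/2}},
\]

where t_{1/2} is the half-life. For plutonium-239, with a half-life of roughly 24,000 years, about three-quarters of the original inventory remains after 10,000 years; for longer-lived components such as neptunium-237, whose half-life is on the order of two million years, the inventory is essentially undiminished over any regulatory horizon. Guidelines tied to half-lives are therefore pushed toward the million-year time frames the report contemplated.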

What emerges from these reflections is that the authority of science is inferior to statutory authority in a society that operates under the rule of law. Its power comes entirely from voluntary acceptance by a large number of individuals, not by any structured consensus that society will be governed by the methods and findings of science. At most, science carries a kind of charismatic power that gives it strength in public affairs but in the final analysis has no force except when embedded in statute. Advocates who view their causes as supported by science work hard to achieve such embedding, and many examples exist of laws and regulations that require consultation with technical expert advisory panels. The Endangered Species Act, for example, “requires the [Fish and Wildlife Service and National Marine Fisheries Service] to make biological decisions based upon the best scientific and commercial data available.” Also, “Independent peer review will be solicited … to ensure that reviews by recognized experts are incorporated into the review process of rulemakings and recovery plans.” The emphasis on “experts” is unavoidable in such regulations, which only sharpens the charismatic aspect of scientific authority. The law typically invokes science through its practitioners except when adopting specific standards, which are often narrowly prescriptive. Standards too, however, are established by expert consensus.

At this point the question of the source of scientific authority in public affairs merges with questions about the nature of science itself, and its relation to scientists. That society does not automatically accept the authority of science may not come as a surprise. But in my conversations with scientists and science policymakers there is all too often an assumption that somehow science must rule, must trump all other sources of authority. That is a false assumption. Science must continually justify itself, explain itself, and proselytize through its charismatic practitioners to gain influence on social events.

Reassessing Conservation Goals in a Changing Climate

Climate change poses a hierarchy of significant challenges for conservation policy. First, the sheer scale of climate change calls for conservation efforts to be vastly stepped up. Second, the pace and extent of expected climate change will probably undermine the effectiveness of traditional conservation tools focused on protecting designated areas from human intrusion. The search for novel conservation strategies that will stand up to global shifts in climate highlights a third challenge: New conditions and new tools require a reassessment of our conservation goals. This third challenge has so far not been the subject of much debate, but merits closer and more systematic attention. The debate may be uncomfortable, but avoiding it complicates the tasks of prioritizing conservation efforts and choosing conservation tools. More important, the failure to explicitly identify conservation goals that acknowledge climate change is likely to lead to failure to achieve those goals.

The threat of climate change to conservation policy is daunting. Climate change is altering habitats on a grand scale. Species around the world are shifting their ranges to accommodate warming trends. Under any reasonable projection of greenhouse gas emissions, the rate of change will accelerate in coming decades. For species with small populations or specialized habitat requirements, climate change poses special challenges. Although the U.S. Fish and Wildlife Service recently declined to list it as endangered or threatened, the American pika remains an excellent example. The pika, a heat-sensitive mammal that is native to the mountaintops of the American West, can only move so far uphill and cannot migrate to higher or more northerly mountains because it cannot survive the intervening low-elevation habitat.

Unfortunately, the magnitude of impending climate change also worsens the prospects for species whose conservation status is not currently directly tied to climatic limitations. For example, the Florida torreya is an endangered conifer found only in a handful of stands along a 35-mile stretch of the Apalachicola River in Florida and Georgia. These populations are currently threatened by an outbreak of a thus-far unidentified disease. Species such as torreya, currently threatened by multiple stresses such as disease, invasive species, and human development, are common throughout the world. Climate change will make their conservation more difficult.

Overall, climatic shifts will place at risk many more species, communities, and systems than are currently protected. The magnitude and details of the extinction threat are uncertain, but that uncertainty is itself a challenge to conservation efforts, because conservation planning and implementation are long-term efforts. There is little doubt that future demands will strain the resources available for conservation, which have long been stretched thin.

Inadequate conventional tools

Our dominant conservation strategy, the designation of reserves, is mismatched to a world that is increasingly dynamic. The reserve strategy rests on the assumption that nature can be protected in sanctuaries walled off from human effects. But no reserve is immune to changes in atmospheric composition, temperature, and rainfall. Furthermore, no reserve manager has or is likely to be given authority to control the entire spectrum of activities that produce greenhouse gas emissions. In a changing climate, reserves may become inhospitable to the resources they are intended to protect. For example, reserves designed to conserve Florida torreya may, even now, not be adequate to protect and promote the growth of existing populations, which are declining. Large climate shifts could further undermine their effectiveness. Similarly, mountain reserves will do little to save pika populations, because no reserve can hold back the changing climate.

For what proportion of the world’s species will existing conservation strategies prove inadequate? Estimates of climate-driven extinction range as high as one-third of all species, including plants, vertebrates, invertebrates, fungi, and microbes. In addition to species extinctions, climate change threatens genetic diversity within species, as well as the ecosystem functions performed by species and ecological communities, such as providing fresh water and controlling pest populations. Currently, we do not have the ecological knowledge to forecast the magnitude of these effects accurately. However, static reserve systems will probably not be able to accommodate the biotic shifts projected to occur in coming decades.

The expectation that current reserves will prove ineffective has spurred the development of more exotic and novel conservation tools. Ex-situ conservation efforts aimed at preserving rare species and genotypes that have been lost in the wild are enjoying new popularity. New strategies have also been proposed, notably including managed relocation, defined as the deliberate relocation of species, genotypes, or ecological communities to new locations where they have a greater chance of persisting under emerging climate conditions. The Torreya Guardians, a grassroots conservation group, has introduced torreya to forests in North Carolina, far north of the species’ historical range. They argue that northern climates are superior for the growth of torreya and for its resistance to disease. Opponents of managed relocation respond that the deliberate introduction of non-native species, even for conservation purposes, courts the disastrous consequences associated with invasive species. A robust dialogue about the appropriate use of these new tools has been sparked, although it is still in its early stages.

Reevaluating existing goals

The third challenge of climate change—the challenge to established conservation goals—has not yet received enough sustained attention. Confronting this challenge is essential to effectively dealing with the first two, because the priorities necessary in a resource-constrained world cannot be sensibly set, nor can conservation strategies be selected and evaluated without reference to the underlying goals. Many observers have noticed that climate change will make it difficult to achieve established conservation goals. None, however, has grappled in a concrete way with how those goals might need to be reconsidered. In fact, existing conservation goals are insufficiently examined under current conditions. Climate change simply makes the consequences of continuing to avoid that examination more apparent.

There are, of course, many different conservation goals and many different conservation contexts. Here we focus on public conservation efforts, for which both the goals and the strategies are necessarily open to public debate. Public conservation goals are complex and often surprisingly opaque. It is useful to distinguish between the “why” and the “what” aspects of those goals; that is, between the guiding principles that motivate public conservation efforts and the conservation targets selected.

The principles can be quite abstract. They include maintaining useful resources for present and future generations, providing durable opportunities for nonconsumptive environmental experiences, and protecting elements of nature for their own sake. Frequently, multiple principles motivate political action. The potential for tension between those principles is often ignored.

Conservation policy targets give tangible form to the general motivating principles. They must be concrete and identifiable to make policy implementation and enforcement practicable. Conservation targets are commonly tied to the preservation of existing conditions or elements of nature, or the restoration of conditions thought to have existed at some historical reference point. Examples include the preservation or recovery of viable populations of native species; the preservation of iconic landscapes in what is believed to have been their historical condition; and the maintenance of existing ecosystem services. Targets may serve one or more motivating principles. Species protection, for example, simultaneously preserves useful or potentially useful resources, protects the opportunity for specific experiences, and fulfills ethical obligations to protect species for their own sake.

Climate change complicates the achievement of conventional conservation targets in ways that make it necessary to unpack their relationship to the underlying motivating principles, and to sort out priorities among those principles. Changing climate conditions may, for example, make it impossible to maintain the combination of goals we have come to expect of landscapes designated as reserves: historical continuity, protection of current features, and “naturalness,” in the sense that ecological processes occur with only limited human direction or assistance.

Managed relocation

Proposals for managed relocation bring those tensions sharply into focus. The world is becoming a very different climatic place. It will not be possible to preserve some current species in the wild if the climate envelope they require disappears. Other species may survive only if they are moved to locations to which they have no known historic tie. That in turn may affect the native biota of the receiving location in ways that are difficult to predict.

Conservation targets, therefore, may need to change, or new mechanisms for making tradeoffs between conflicting targets may need to be developed. The prospects are daunting, because if targets such as historical continuity or protecting existing constituents of natural systems are relaxed, new end points are not obvious. In other words, it may appear that there is no viable substitute for current conservation targets. Especially because of the political pressures that can be brought to bear against conservation, reopening discussion on those targets presents real risks for conservation advocates. It is not surprising that the conservation community has not been anxious to debate goals.

But there are also costs to avoiding the goals debate. The issue of managed relocation illustrates the kinds of questions that must be confronted when setting priorities or choosing strategies, and why those questions can’t be answered without a clearer understanding of the principles behind conservation policy.

First, if it is sometimes desirable to move species beyond their historic range, what differentiates “good” translocations from “bad” ones? A 1999 presidential executive order allows the deliberate introduction of exotic species if the benefits are thought to outweigh the harms. Is that the right test for managed relocation? If so, how should harm be evaluated? Should relocation be acceptable if, for example, it affects the abundance of native species at the target site but does not rapidly eliminate any of those species?

Second, if historic conditions are no longer the touchstone, how can we identify desirable end points? Is the goal to help species disperse as they would do on their own without anthropogenic habitat fragmentation or anthropogenic acceleration of the rate of climate change? Is that an appropriate and measurable objective? Are more culturally defined end points needed in light of the challenges that climate change presents to traditional understandings of naturalness, nativeness, and wilderness? Can such end points be made operational?

Third, how should the effects of managed relocation on receiving communities be distributed? Given resource limitations, it is highly unlikely that a candidate species will be transferred to all potential receiving habitats. Governments will almost certainly have to choose. That raises the possibility that decisions will be made solely on the basis of the political power, or lack thereof, of target communities. Should an environmental justice analysis be an element of conservation decisions? Should such an analysis be applied only to strategies that alter historic conditions, or to conventional preservation strategies as well?

Fourth, who should make these choices, and through what processes? Decisions about who decides are particularly complex and uncertain, given the prevailing piecemeal nature of natural resource governance. For the many migrations and translocations that are likely to cross political borders, a host of local, state, federal, multinational, and international authorities with different and often incompatible objectives may be implicated. This is true regardless of whether the primary threat to species survival is climate change, overhunting, habitat loss, the spread of disease, or a synergistic combination of anthropogenic threats and rapid environmental change. For example, some conservationists recently proposed relocating the Iberian lynx, an endangered species that scientists fear will become the first large cat to go extinct in at least 2,000 years. Translocations are proposed from its rapidly shrinking and fragmenting habitat in Spain to what is viewed as a more suitable range in Scotland. Yet the current international system of fragmented and generally uncoordinated authority was not designed to manage broad shifts in climatic and environmental conditions—or to facilitate such long-term, active management of landscape-scale movements of vegetation and wildlife across different jurisdictions.

In addition, what roles should scientists, regulators, direct stakeholders, and the public play in untangling and resolving these difficult tradeoffs? Making a historic baseline the target limits management discretion to some extent, at least if the baseline is reasonably clear. By limiting discretion, a historic baseline keeps the decisions about tradeoffs in the hands of the public. Removing the constraint of historical continuity gives unelected regulators more discretion, so that decisions become more technocratic. At the same time, those decisions may become less scientific, if unconstrained regulators decide to use cost/benefit analysis or tools other than science to make those choices.

We do not pretend to have the answers to these difficult questions. We believe they should be the subject of broad discussion involving, at least, conservation scientists (who can help society understand the biological consequences of different choices), lawyers and resource managers (who can help ensure that policy targets can be put into effect), ethicists (who can help clarify the motivations for conservation and their relationship to choices of targets and tools), and the public (who will inevitably be affected by both conservation outcomes and the costs of implementing conservation programs). Such a discussion is needed now, before resource managers implement potentially irreversible strategies with poorly understood consequences.

One way to begin the discussion might be through the creation of an interdisciplinary advisory group that could provide a forum for a “deliberative community” of the sort recommended by Ben Minteer and Jim Collins of Arizona State University in a December 6, 2005, article in Conservation Biology. Such a committee could develop a set of principles and a broad domestic policy framework under the auspices of the National Academy of Sciences, or perhaps in a parallel effort at the international level, under the International Union for the Conservation of Nature. In any case, the composition of the group is more important than the convener, because the group would be asked to deal with the inherently interdisciplinary tasks of exploring the challenges that climate change poses to conservation principles and targets, identifying potential conflicts, and suggesting frameworks for evaluating tradeoffs and choosing among options. Although decisions on these questions ultimately require value choices that are the province of democratically accountable authorities, such an interdisciplinary advisory group could provide valuable guidance and help put the issues on the political agenda.

Although humans have long claimed a stewardship role in the management of the world’s natural resources, the precise contours of that role are called into question by climate change. Managed relocation and similar interventionist strategies to conserve species signal a shift to a far more activist and hands-on approach to conservation. That shift may make many environmental scientists and conservation professionals uncomfortable. Indeed, the heavy human hand required for such efforts, and the possibility of destructive meddling in ecological systems that it creates, opens this approach to charges of managerial arrogance, especially from those who place a premium on ecological integrity as a policy goal. Such criticisms of aggressive anthropocentrism and technological dominance have become a mainstay of contemporary preservationist thought. Yet the emerging effects of climate change on natural systems make clear that substantial human intervention has already occurred, and the ecological and human costs of failing to intervene to sustain and promote ecological health and function may be immense.

Ultimately, then, climate change forces us to decide whether we want to be curators seeking to restore and maintain resources for their historical significance; gardeners trying to maximize aesthetic or recreational values; farmers attempting to maximize economic yield; or trustees attempting to actively manage and protect wild species from harm even if that sometimes requires moving them to a more hospitable place.

Making College Affordable by Improving Aid Policy

Higher education plays an important role in U.S. society. In addition to providing numerous public benefits, such as an increased tax base and greater civic engagement, it helps individuals attain economic and social success. Experiences and skills acquired from postsecondary education reverberate throughout life in terms of higher earnings, a lower likelihood of unemployment, and better decisions about health. Yet research demonstrates that one of the primary barriers to college enrollment, especially for low-income students, is the financial outlay required to attend. For this reason, the federal and state governments spent tens of billions of dollars in 2008-09 on student grants, such as Pell Grants, with the hope of encouraging enrollment.

Although there is a belief that financial aid could greatly improve educational outcomes, there also are many reasons to question the efficacy of the nation’s current system of financial aid. After decades of financial aid policy, there are still significant gaps in college access by income, even after accounting for differences in academic preparation and achievement by income. Low-income high school graduates in the top academic quartile attended college at only the same rate as high-income high school graduates in the bottom quartile of achievement. Such gaps, which are also evident in terms of race and ethnicity, suggest that the aid system has not equalized access to higher education. A 2006 review of the aid system by the federal Commission on the Future of Higher Education concluded what many observers have voiced for years: The financial aid system is not addressing the problems facing students. Although financial aid can dramatically reduce the overall cost of college, many students still have significant unmet need. Moreover, the receipt of financial aid is predicated on navigating a lengthy, complicated process. As noted by the commission, some students “don’t enter college because of inadequate information and rising costs, combined with a confusing financial aid system.”

Although the financial aid system is imperfect, years of research support the notion that financial aid can influence students’ postsecondary decisions. Research has identified effective financial aid policies that improve college enrollment and choice, and the lessons learned from these studies could help inform current debates about how to improve the financial aid system.

Three main lessons are clear from the numerous studies on financial aid. The first lesson is that information and program design are crucial in determining whether a policy is effective in improving access. Therefore, policies should balance the need to target limited resources at specific groups against the fact that making aid application and award processes too complicated is likely to deter students. Second, although recent years have witnessed the growth of merit-based aid, these programs often favor more affluent students who are likely to attend college regardless of whether they are given financial aid. Therefore, if the goal of the nation’s limited financial aid resources is to influence decisions, then there is a strong case for focusing on need-based awards. Obviously, grants have larger direct costs than loans, and so loans may be considered a less expensive way to help students. However, the third lesson from the research literature is that loans have their own indirect, long-term costs, which are hard to fully predict or put in monetary terms. Debt can affect educational decisions, as well as decisions long after leaving college, in ways that are suboptimal for both the individual and society.

The affordability problem

Although there are many barriers to college access and success, a major impediment is cost. As the Commission on the Future of Higher Education concluded, “There is no issue that worries the American public more about higher education than the soaring cost of attending college.” During the 2009-10 school year, the College Board found that average tuition and fees at public four-year colleges and universities were $7,020, with average total charges, including room and board, amounting to $15,213. Without any financial aid, that total cost amounts to 30% of the annual median family income. Concerns about affordability are even greater at private four-year colleges and universities, which charged an average tuition of $26,273, or $35,636 including room and board, more than half the annual income of the median family. The average low-income student attends a local community college, where average full-time tuition was $2,544 in 2009-10.

The current situation is the result of skyrocketing prices during the past several decades. From 1979-80 to 2009-10, the average cost of a public, four-year institution increased from $738 to $7,020, roughly a tripling after accounting for inflation. Meanwhile, the median family income has not nearly kept pace with growing tuition costs. Given the high cost of college relative to family incomes, at least some amount of financial aid is necessary for most families.
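
The rough arithmetic behind that real-terms comparison can be sketched as follows. The cumulative inflation factor of about 2.9 between 1979-80 and 2009-10 is an approximation based on the Consumer Price Index, not a figure from the College Board report, so the result should be read as illustrative only.

```python
# Rough sketch of the real-terms tuition comparison described above.
# The inflation factor is an approximate CPI multiplier for 1979-80 to 2009-10,
# used here only for illustration.
nominal_1980 = 738      # average public four-year tuition and fees, 1979-80
nominal_2010 = 7020     # average public four-year tuition and fees, 2009-10
inflation_factor = 2.9  # assumed cumulative inflation over the three decades

real_1980_in_2010_dollars = nominal_1980 * inflation_factor
real_growth_multiple = nominal_2010 / real_1980_in_2010_dollars

print(f"1979-80 tuition in 2009-10 dollars: ${real_1980_in_2010_dollars:,.0f}")
print(f"Real growth multiple: {real_growth_multiple:.1f}x")  # roughly a tripling
```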

Given the critical role higher education plays in both individual economic success and the public good, increasing college access should be a major goal of the government.

Of course, financial aid has been a staple of higher education for several decades. To understand the degree to which the current system meets the financial needs of students, one must calculate the price students pay for college after financial aid. After taking into account the multiple sources of financial assistance, the price paid by students is much lower than the list prices in college catalogues. According to the College Board, in 2009-10, the average net price was $9,810 at a public, four-year college and $21,240 at a private, four-year college. Although net tuition prices are significantly lower on average than list prices, it is important to keep in mind that these are only mean values, with a great deal of variation across and within institutions. Differences in net price at the same school may be based on differences in financial resources, family makeup, and student characteristics, such as academic ability. In a study of the practices of very selective private colleges and universities, which tend to focus on need-based financial aid, researchers found that the net price students face could vary from $7,495 for students from the lowest quintile of family income to $16,249 for students from families in the upper-middle quintile and $23,399 for students in the highest income quintile.

Although the costs faced by students are much less when grant aid is considered, the remaining costs that families must meet are often substantial. A study I did with Erin Riley in 2007 documented the significant amount of unmet financial need faced by many students, particularly students from low-income backgrounds and students of color. After accounting for the family’s expected contribution and the receipt of all grants, dependent students in 2003-04 faced an average unmet need of $7,195. Increasingly, students are turning to loans to make up this remaining difference. However, even after taking into account government and institutional loans, there is still significant unmet need. Researchers have found, for example, that dependent students faced $5,911 in unmet need ($4,503 for older, independent students) after grants and loans.

Although the nation spends billions of dollars each year on financial aid, the estimates of unmet need suggest the current amount of funding may not be enough. Therefore, many calls for reform have focused on increasing the level of financial aid awards. Recent legislation has targeted this problem. With the federal Health Care and Education Reconciliation Act, signed March 30, 2010, the budget for Pell Grants increased by more than $40 billion. Still, this is not enough to significantly reduce unmet need for most students, especially with the continually rising costs of higher education. Therefore, reviews of the research literature should keep in mind how inadequate funding levels may limit the effectiveness of current forms of aid. But rather than just asking for more, it is necessary to consider the best ways to alter the aid system, guided by what is known about the types of aid and particular policy designs that are most effective.

Does lowering costs increase enrollment?

Grants, or aid that does not need to be repaid, tend to be the focus of most research on financial aid. Although some programs have not demonstrated a large enrollment effect, others have spurred much greater responses. The nature of grants has also changed in recent years. Although the original intent of most grant programs was to increase college access for students who would not otherwise have been able to attend, governments during the 1990s began to introduce grant programs with a very different focus and design. It is useful, then, to identify the distinguishing characteristics of the most effective policies and consider how the change in the focus of grant programs has affected affordability for different income groups.

Because grants are not given randomly to students, but rather often favor students with need or merit or both, a straightforward comparison of students eligible for grants with those who are not eligible gives only a partial view of the role of financial aid. Such comparisons do not isolate the effects of aid from other differences between students, such as background or academic preparation. In recent years, the best studies have used experiments or “natural experiments” to discern the impact of financial aid. The introduction of a new program that affects some students but not others can provide a useful research opportunity, with the aid-eligible students being the “treatment group” and ineligible students being the “control group.” In several cases, researchers have compared the enrollment rates of the two groups before and after the creation of a new policy, as sketched below. This type of work has found that subsidies that reduce college prices increase attendance rates, attainment, and choice.
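
The logic of such before-and-after comparisons can be illustrated with a minimal difference-in-differences calculation. The enrollment rates below are hypothetical and are not drawn from any of the studies discussed here; the point is only to show how the general trend among ineligible students is subtracted out.

```python
# Minimal difference-in-differences sketch with hypothetical enrollment rates.
# "Eligible" students are the treatment group; "ineligible" students the control.
enrollment = {
    ("eligible", "before"): 0.40,
    ("eligible", "after"): 0.48,
    ("ineligible", "before"): 0.55,
    ("ineligible", "after"): 0.58,
}

change_treatment = enrollment[("eligible", "after")] - enrollment[("eligible", "before")]
change_control = enrollment[("ineligible", "after")] - enrollment[("ineligible", "before")]

# The estimated program effect is the extra change among eligible students
# beyond the general trend captured by the control group.
program_effect = change_treatment - change_control
print(f"Estimated effect: {program_effect * 100:.1f} percentage points")  # about 5 points
```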

The Pell Grant, which was introduced in 1972 as the Basic Educational Opportunity Grant, is the nation’s largest need-based grant program. Research on its effectiveness, however, has left more puzzles than answers. In one line of study, researchers have compared the enrollment rates of low-income students before and after 1972, with ineligible students serving as a control group. In a 1996 study, Thomas Kane found that contrary to program expectations, enrollment grew 2.6 percentage points more slowly for the lowest income quartile, the expected beneficiaries of the Pell Grant. Other research also found no disproportionate growth in college enrollment or completion of a bachelor’s degree by low-income students after the introduction of the Pell Grant. Only public two-year college enrollment seemed to grow more quickly for low-income youth. Interestingly, in other research, the impact of the Pell Grant was found to be large and positive for older students, suggesting that the effects of aid can vary by age.

There are several theories as to why the introduction of the Pell Grant did not result in an increase in the enrollment of traditional-age, low-income students. Some observers suggest that Pell might have had an impact only on college choice, rather than on attendance, as there may have been relative shifts in enrollment among different types of colleges. Others have instead suggested that Pell may have worked well enough to maintain the distribution of students during the 1970s and 1980s; without it, enrollment rates would have fallen much more. However, the most convincing explanations for the lack of a response among low-income students to the Pell Grant focus on problems with the program itself. Researchers suggest that low program visibility, the complexity of the application process, and intimidating audit procedures limited the aid program’s impact. It is important to note that the current Pell Grant program is somewhat different from what it was in the early 1970s. Therefore, it is unclear how well these studies reflect the present nature and effectiveness of the policy.

A wave of more recent research has done a much better job discerning the effectiveness (or lack thereof) of a variety of federal and state financial aid policies. Using a number of different research approaches, several studies have convincingly established causal estimates of financial aid programs and given much clearer answers about the potential effectiveness of grants.

Susan Dynarski, for example, examined the impact of eliminating the Social Security Student Benefit (SSSB) program, which gave monthly support to children (age 18 to 22) of deceased, disabled, or retired Social Security beneficiaries while they were enrolled full-time in college. At its peak, the program provided grants totaling $3.3 billion annually to one out of 10 students. In 1982, Congress decided to discontinue the program. According to one study, this step reduced college access and attainment, producing a gap of more than 25% between the treatment and control groups. This translates into $1,000 (in 1997 dollars) of grant aid increasing educational attainment by 0.20 years and the probability of attending college by 5 percentage points. In contrast to the Pell Grant, awareness of the SSSB program among potential beneficiaries was high because of notification from the government and the extremely simple application process. This gives early clues to the importance of policy and program design.

The Georgia HOPE Scholarship is another grant program that has been evaluated. Introduced in 1993, the program pays the in-state public tuition of Georgia residents with at least a B average in high school; residents choosing to attend in-state private colleges received $3,000 during the early years of the program. Similar to the SSSB, the HOPE Scholarship is simple in design, and much effort was made to publicize the program and to train high school guidance counselors on how to help their students access it. Dynarski compared enrollment rates in Georgia with rates in other southern states before and after the program, and the results showed that Georgia’s program had a surprisingly large impact on the college-attendance rate of middle- and high-income youth. The results suggest that each $1,000 in aid (in 1998 dollars) increased the college attendance rate in Georgia by 3.7 to 4.2 percentage points. There was also a much larger impact on college choice. Chris Cornwell and David Mustard also examined Georgia HOPE using a different data set, and they estimated that the scholarship increased the overall freshman enrollment rate by 6.9 percentage points, with the gains concentrated in four-year schools.

The Cal Grant is another large state grant program. Its eligibility criteria mix need and merit, as students must meet thresholds in income, assets, and high school GPA. A study by Kane suggested large effects (3 to 4 percentage points) of grant eligibility on college enrollment among financial aid applicants, with larger effects on the choice of private four-year colleges in California. Unlike with the SSSB and the Georgia HOPE Scholarship, the large response to the Cal Grant seems to have come in spite of its design. Some suspect that the impact of the program could have been even larger; reports indicate that many eligible students, as many as 19,000, failed to apply.

Implications for aid policy

In summary, the research suggests aid programs can be successful, as price and financial aid have been found to influence students’ decisions about college. The programs that have been the most effective are those that are relatively easy to understand and apply for and that include efforts to ensure that potential beneficiaries are aware of them. This observation raises the question: What do students and their families know about financial aid? In order to have an impact on behavior, students and their families must be aware of the policies designed to help them. Unfortunately, awareness appears to be a major barrier to college access, as many students lack accurate information about higher education costs and financial aid. Researchers have continually found a significant lack of information among prospective college students. Most studies have suggested that students and their parents greatly overestimate the costs of college. There is also a great deal of misinformation about financial aid among parents and students. A Harris Poll commissioned by the Sallie Mae Fund found that two-thirds of parents and young adults planning to go to college did not name grants as a possible source of funds when asked about types of financial aid. Awareness of aid and college costs appears to be especially limited among low-income students.

The low levels of awareness about aid and the misinformation of many families have serious implications for the effectiveness of any policy or program. In a world with many misinformed or unaware families, unless a program is highly publicized and simple to access, it is unlikely to have a major impact on college enrollment. Implicit in policy design are tradeoffs between making a program simple to understand and the need to limit eligibility to only a subset of students because of finite resources. On the one hand, in order to have an impact on behavior, students and their families must be aware of the policies designed to help them and understand how to access them. On the other hand, given the focus on helping a particular type of student (e.g., financially needy students), some type of means testing must be in place to ensure that only students with actual need (or meeting some other criterion) are eligible to receive the aid. For these reasons of efficiency, many arguments have been made for elaborate application procedures for need-based programs such as the Pell Grant. However, introducing complexity into how aid is awarded can also create informational barriers.

Critiques of the Free Application for Federal Student Aid (FAFSA) and the general aid application process highlight the tradeoffs between simplicity and means testing that must be balanced in policy design. At its most basic level, the FAFSA attempts to discern how financially needy students are in order to determine how to distribute limited government financial aid. It collects a wealth of information about a family’s situation in the hope of equitably treating families with similar situations. However, many critics surmise that the lack of information about financial aid is linked to this process. A major critique is that the FAFSA is long and cumbersome. Until recently, to determine eligibility, students and their families had to fill out an eight-page, detailed form containing over 100 questions. To answer three of these, students had to complete three additional worksheets with nearly 40 additional questions. Even the lowest income students, who had already established their eligibility for other federal means-tested programs and were known to be eligible for federal student aid, had to go through this arduous process. Not surprisingly, research suggests that students and their families are often confused and even deterred by the form. In a 2004 study, Jacqueline King found that half of the 8 million undergraduates enrolled in 1999-2000 at institutions that participate in the federal student aid program did not complete the FAFSA. Yet 850,000 of them—more than 20%—would have been eligible for a Pell Grant. Furthermore, of those who did file, more than half missed the application deadline to be eligible for additional state and institutional aid programs.

Given this and other critiques of the FAFSA, many people suggest that the application process leans too far toward complexity without balancing the need to make the process clear and reasonable for students. Recently, calls to simplify the financial aid process have spurred the Department of Education to implement several changes. The online FAFSA now uses “skip logic” to eliminate questions that do not apply to a particular student and to give students instant estimates of their Pell Grant and student loan eligibility. The department also is piloting ways to transfer information directly from the IRS to the online FAFSA. These efforts still require families to be aware of the FAFSA and to be able to complete it online, preferably with high-speed internet, but they are steps in the right direction. Moreover, the Department of Education currently is revising the FAFSA4caster tool to more easily give families early estimates of their financial aid eligibility.
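
The idea behind skip logic is simply to branch the form based on earlier answers so that a filer never sees questions that cannot affect the outcome. The sketch below is purely illustrative; the questions, the income threshold, and the branching rules are invented for the example and do not reflect the actual FAFSA criteria.

```python
# Illustrative skip-logic sketch for an online aid form.
# The threshold and questions are hypothetical, not the actual FAFSA rules.
def questions_to_ask(answers: dict) -> list:
    """Return the follow-up questions a filer must answer, given earlier answers."""
    questions = ["household_size", "number_in_college"]

    # Skip detailed asset questions for low-income filers (hypothetical rule).
    if answers.get("adjusted_gross_income", 0) >= 30_000:
        questions += ["cash_savings", "investment_value"]

    # Only dependent students are asked about parental finances.
    if answers.get("dependent"):
        questions += ["parent_income", "parent_assets"]

    return questions

# Example: a low-income, independent filer sees only two follow-up questions.
print(questions_to_ask({"adjusted_gross_income": 18_000, "dependent": False}))
```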

Current research projects are also exploring interventions that might address concerns about the financial aid application process. For example, working with Eric Bettinger, Philip Oreopoulos, and Lisa Sanbonmatsu, I developed a project in which tax preparers help low-income families complete their FAFSAs. The intervention streamlined the aid application process and gave students access to accurate, personalized information about higher education. Using a random-assignment research design, H&R Block tax professionals helped a group of eligible low- to middle-income families complete the FAFSA. Then, families were immediately given an estimate of their eligibility for federal and state financial aid as well as information about local postsecondary options. Early project results confirm suspicions that a lack of information and the complexity of the aid process are hindering low- and moderate-income students’ ability to apply for aid and enroll in college. We found that individuals who received assistance with the FAFSA and information about aid were substantially more likely to submit the aid application. More important, the program also increased college enrollment for the dependent students and for young adults with no prior college experience. Although it will take time to determine the full benefits and costs of simplification, these results suggest that streamlining the application process and providing better information could be effective ways to improve college access. The results also lend additional support to the idea that the most effective aid policies are those with high levels of awareness and relatively simple applications.

Need-based versus merit-based aid

While research demonstrates that grants are effective in encouraging college access, it is also worth considering which types of grants have the largest impact on enrollment rates. To answer this question, it is necessary to ask, who needs support in order to attend college? In other words, what kinds of students might be encouraged to attend college with price subsidies? Although affordability, or the comfort level of paying for the expense, is a concern of all students, most middle- and upper-income students will attend college regardless of whether they receive financial aid. In contrast, the problem of college access, defined as whether to attend college at all, is substantial for low-income students, as illustrated by the gaps in college attendance by income and substantial levels of unmet need for this group. Therefore, if the goal is to maximize the impact of a dollar on college enrollment rates, funds should be directed toward this group. Not surprisingly, price and financial aid have often been found to have larger effects on the enrollment decisions of lower- rather than higher-income students.

Based on the above reasoning, it is important to note that the research literature documents that different types of grants vary in whom they affect and how they affect college decisions. For instance, the merit-based Georgia HOPE Scholarship had large effects on college access overall, but the benefits of the program were not evenly distributed. Researchers found that the program widened the gap in college attendance between those from low- and high-income families and between black and white students. In sum, the program disproportionately helped upper-income students. Moreover, the major impact of the policy was on college choice rather than enrollment; that is, Georgia HOPE influenced the enrollment choices of students who would have otherwise attended a different college or university. Although choice is an issue worth considering, whether a student attends college at all is a more important concern.

Grants have been shown to be effective in influencing student decisions if designed properly, whereas research suggests that loans are less effective in increasing enrollment.

Georgia HOPE marked the beginning of a larger trend toward shifting state aid from a need-based to a merit-based focus, as many other state merit-based aid programs have followed. Although states still allocate more money to need-based programs, according to the National Association of State Student Grant and Aid Programs (NASSGAP), spending on non-need-based grant aid grew 203% during the past decade after accounting for inflation, compared with only 60% growth in need-based grant aid. These other state policies have differed in how they define merit, in their funding sources, and in the impact they have had on student outcomes. Dynarski found that the degree to which more affluent students are favored in these state aid programs appears to be related to how stringent the merit criteria are. In other words, the degree to which merit is used in aid criteria has profound effects on whether the policy influences college access among low-income students rather than choice or affordability for upper-income students. Given that the opportunity to perform well on some of the merit-based criteria is related to income, either directly or indirectly through school quality, even high-achieving, low-income students can be at a disadvantage in qualifying for merit-based awards. Some researchers have concluded that, even among students of equal academic merit, increased emphasis on merit in financial aid may exacerbate the trend toward greater income inequality.

Recent federal aid policies also have moved away from focusing on increasing the basic access of low-income students. In 1992, federal financial need calculations began to exclude home equity, thereby allowing many more middle-class families to qualify for federal need-based support. That year, the Stafford Unsubsidized Loan Program was also created, which made student loans available to all families regardless of income. Then, in 1997, the federal government introduced the higher education tax credits, which were available to families with incomes up to $100,000, far above the national family income average. Most recently, the creation in 2006 of the Academic Competitiveness Grants introduced merit criteria into federal aid for undergraduates. The program gives Pell Grant recipients additional funds for completing certain courses and maintaining a 3.0 GPA in college.

The shifting of aid priorities from need to other criteria becomes clear when these trends are juxtaposed with what has happened to need-based aid. Whereas other forms of aid have grown, need-based grants have not kept pace. Since its inception, the Pell Grant has declined substantially in value compared with tuition prices. According to the College Board, in 2008 dollars, the maximum Pell Grant in 1976-77 was $5,393; it was only $5,800 by 2008-09, even though tuition grew far faster during the same period. Despite the recent action to increase the Pell Grant maximum, with so much lost ground, many low-income students still have significant unmet need.

There is no question that addressing issues of affordability and rewarding performance with merit-based aid are justifiable goals. However, as research demonstrates, shifting aid priorities to other goals has negative repercussions for the important goal of increasing access. Careful attention must be paid to the exact criteria used when awarding aid, to avoid duplicating the sometimes unfavorable effects that have been found with other types of grants, such as merit-based aid. Again, the question worth asking is: What is the best use of limited funds to increase participation?

The role of loans

As documented by unmet need calculations, students face additional costs beyond their means even after accessing all of the grants available to them. Loans have become the most prominent form of student funding for postsecondary education during the past 15 years. This is especially true for full-time, full-year students. In my study with Riley, we found that from 1989-90 to 2003-04, the proportion of full-time, full-year students with loans rose from 36% to 50%. Moreover, average annual loan amounts during this period grew 38% in constant 2003 dollars, from $4,486 to $6,200. Although 79% of loan volume is awarded by federal programs (Stafford, Perkins, and PLUS), private loan volume has risen substantially. From 1998-99 to 2007-08, the amount given in private loans grew sixfold after adjusting for inflation.

Need-based aid is more effective in increasing access for low-income students than other forms of aid.

Naturally, cumulative debt, or the amount students borrow during the course of their educations, has also grown substantially over time. In one study, my colleague and I found that between 1992-93 and 2003-04, cumulative debt accrued by second-year undergraduates at public two-year institutions increased an average of 169%, from $3,087 to $8,296, after accounting for inflation. Fourth-year undergraduates at public colleges faced cumulative debt amounts 76% higher during this period, accumulating an average of $17,507 in loans during four years by 2003-04. Fourth-year undergraduates in 2003-04 at private colleges borrowed an average cumulative amount of $21,946, a 57% increase during the 10 years. Recent trends in student financing and loan policy suggest cumulative debt amounts will continue to grow at a rapid rate.

Has access to loans affected college decisions? Certainly the increasing use of loans by students suggests that they have grown in importance. However, growing reliance on loans as a policy option has important implications for college access and persistence. Research on the role of loans in college decisions is scant relative to that about grants, but there are clues to how this form of aid might affect higher education outcomes.

One issue centers on identifying the effect of loans on enrollment decisions. This question is empirically challenging, because eligibility for federal loans is correlated with observed and unobserved determinants of schooling, thereby biasing any simple comparison of students with and without loan eligibility. The effects of loans are also unclear, as the studies that have been completed give mixed results. Dynarski focused on variation in loan eligibility after the Higher Education Amendments of 1992, which removed home equity from the set of assets included in the federal financial aid formula. The study concluded that loan eligibility had a positive effect on college attendance. Loans also appeared to influence choice by shifting students toward four-year private colleges. On the other hand, another study examined whether the shift in the composition of aid away from grants and toward loans adversely affected college enrollments in the 1970s and 1980s. The results suggested that the probability of attending college falls when loans replace grants, dollar for dollar, in the financial aid package.

Although there is little research that examines the effectiveness of loans, there are reasons to believe that they do not have as large an impact on college attendance as do grants. Several studies suggest that some students are reluctant to take out loans because of the complexity they introduce and fear of the repayment conditions. Financial aid administrators report anecdotally that students from traditionally disadvantaged backgrounds often are unwilling to incur substantial debt to attend college. Some research on cultural barriers to debt suggests that this may be related to socioeconomic differences. Other studies also have found that the prospect of substantial borrowing discourages enrollment among some students, especially those from low-income and underrepresented groups. Although socioeconomic differences may play a role in student borrowing, more research is needed to understand how students and their families consider whether to take on debt. It is unclear how many students are kept out of college because of debt aversion, and given the shift in aid policy toward loans, such differences have important implications for college access and success.

Although loans appear to be less effective than grants in increasing college attendance, they may be less expensive for the government to provide than grants, because loans must be repaid by the student. However, any cost-benefit comparison should include more than just the direct costs and initial impact on enrollment. When considering the cost side of loans, it is first necessary to take into account the subsidy incurred by the government in the form of interest paid while in college (for subsidized loans) and the fact that the interest rate charged is below the market rate (for all Stafford loans). Additionally, the government shoulders the costs of guaranteeing the loans and giving incentives to private banks to provide them.

The potential costs of loans do not end there, however. Because they must be repaid, loans are a much more complicated form of aid, and unlike grants, they may have many long-term effects.

Debt burden, defined as the percentage of monthly income a student must dedicate to loan payments, is a particular concern with student loans. In 2004, the American Council on Education concluded that the median debt burden of 7% was manageable and stable for students graduating with bachelor’s degrees in the 1990s. But Sandy Baum found that one-third of borrowers face debt burdens above 8%, a level considered unmanageable. Another study found that half of the college graduates surveyed reported feeling burdened by their debt payments. Although debt levels may have been manageable for most students a decade ago, the situation has probably changed for current students. Higher cumulative debts, combined with recent changes in federal loan programs, including increased loan limits, suggest that today’s college students face even higher debt burdens, which will continue to grow for future cohorts.
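
To make the debt-burden measure concrete, the sketch below computes it for a hypothetical borrower using the standard fixed-rate loan amortization formula. The debt amount, interest rate, repayment term, and starting salary are assumptions chosen for illustration, not figures from the studies cited above.

```python
# Debt burden for a hypothetical borrower: monthly loan payment / monthly gross income.
# All inputs are illustrative assumptions, not figures from the studies cited above.
principal = 20_000        # total student loan debt at graduation
annual_rate = 0.068       # fixed interest rate
years = 10                # standard repayment term
annual_salary = 35_000    # starting salary

monthly_rate = annual_rate / 12
n_payments = years * 12

# Standard amortization formula for a fixed-rate loan.
monthly_payment = principal * monthly_rate / (1 - (1 + monthly_rate) ** -n_payments)
debt_burden = monthly_payment / (annual_salary / 12)

print(f"Monthly payment: ${monthly_payment:.0f}")   # about $230
print(f"Debt burden: {debt_burden:.1%}")            # about 7.9%, near the 8% threshold
```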

Debt burden is especially troublesome for students who do not complete a college degree. In a 2005 study, Lawrence Gladieux and Laura Perna found that for students who began college in 1995 and borrowed money but later dropped out, the median debt was $7,000. Students who dropped out of four-year programs accumulated a median debt of $10,000, while dropouts from two-year programs accumulated a median of $6,000 of debt. These amounts of debt are particularly difficult because the dropouts are unable to reap the full economic benefits of a degree. In one study, 22% of borrowers who dropped out of their degree programs defaulted on at least one loan within six years of originally enrolling in college, compared with 2% of college graduates. Such a stark difference in default rates underscores the importance of degree completion and suggests that persistence is important in determining if a student is able to manage his or her debt.

Another set of concerns about student loans is that they could have unintended negative consequences for student decisions. It has been suggested that debt affects students’ choice of major, deterring students from public service fields, such as teaching and social work. According to the State Public Interest Research Groups’ Higher Education Project, 23% of graduates from public institutions would face unmanageable debt burdens if they entered teaching, based on average starting salaries. For graduates from private colleges and universities, 38% would encounter unmanageable debt as starting teachers. Loans could also affect life decisions after college, such as buying a house, getting married, or having children. The evidence is mixed, but research by Nellie Mae during the past 15 years suggests that attitudes toward education debt are becoming more negative. A survey by Baum and O’Malley, conducted in 2002, found that home ownership rates declined by 0.2 percentage points for every additional $1,000 in student loans.

The picture, then, is that loans and the resulting debt burden could influence students’ decisions long after college enrollment, perhaps in negative ways. Unfortunately, little is known about the totality of these longer-term effects or how to monetize them. Therefore, although grants mainly have only upfront costs, the full costs of loans are potentially much larger than they appear on the surface.

Making financial aid policy more effective

Given the critical role higher education plays in both individual economic success and the public good, increasing college access should be a major government goal. However, despite substantial increases in access to higher education during the past several decades, postsecondary attendance nationwide continues to be stratified by family income, and students, particularly those with lower incomes, have significant unmet need. In consideration of this problem, it is important to review the evidence on which aid programs have been more effective and why. Aid can work to increase college enrollment, but some programs and formats have been more successful meeting this goal than others. Three lessons can be taken from the extensive research literature on financial aid.

First, when designing an aid program, information and simplicity are important. What is clear from the literature is that the mere existence of an aid program is not enough to encourage enrollment, because the visibility and design of the program also clearly matter. In several cases, researchers have not observed large, general responses to the introduction of financial aid programs (e.g., the Pell Grant). On the other hand, research on examples of highly publicized financial aid programs characterized as being simpler in design and application has found large enrollment responses (e.g., the Social Security Student Benefit Program and the Georgia HOPE Scholarship). In summary, the research suggests aid programs are most successful when they are well publicized and relatively easy to understand and apply for. This conclusion has strong implications for the FAFSA, which needs to be substantially simplified. Moreover, there are calls to enhance the visibility of aid programs, as my research with Bettinger, Oreopoulos, and Sanbonmatsu has shown that such efforts can have dramatic effects on college enrollment rates.

Second, need-based aid is more effective in increasing access for low-income students than other forms of aid. One of the original and most prominent goals of financial aid policy was to enable the college attendance of students who would not otherwise be able to attend. Given gaps in enrollment by income, much of policy has focused on low-income students. However, with the movement from need-based to merit-based and other forms of aid, this aim is being lost. Merit-based aid programs favor more affluent students, and similar results have been found in terms of the federal Higher Education Tax Credits and college savings programs. Given these facts, along with the recognition that the government has limited resources, more attention should be paid to targeting students whose decisions might actually be altered by financial aid, as opposed to helping students who would attend regardless. For low-income students, this means focusing on need-based grants.

Third, all aid is not equal. Grants have been shown to be effective in influencing student decisions if designed properly, whereas loans are less effective in increasing enrollment. Moreover, the increased complexity of loans and their potential negative impact on longer-term outcomes should also be taken into account. Debt burden can have negative effects on a range of outcomes, and it is unclear if recent efforts to reduce the interest rate on government loans and to extend the federal loan forgiveness program will do much to mitigate these indirect effects. Therefore, the government should be cautious in its recent trend toward using loans as the primary form of student financial aid.

Archives – Summer 2010

Penwald 2: 8 Circles

A Performance by Tony Orrico

Performance artist Tony Orrico presented his latest work, Penwald 2: 8 Circles, in the Atrium of the Keck Center on May 20, 2010. Lying face down on a large canvas, Orrico used his body as a drafting tool to create a large-scale graphite drawing.

The performance makes reference to da Vinci’s Vitruvian Man and the idea of the body as a machine by which the drawing is created. Orrico inscribed eight large circles, each a revisualization of the geometric form known as an epicycloid. Orrico’s choice of gestures for each of the eight circles pays homage to Hildreth Meiere’s portrayal of the eight sciences on the ceiling of the National Academy of Sciences’ Great Hall.

Orrico has performed with the Trisha Brown Dance Company and Shen Wei Dance Arts. Recently Orrico has been working with performance artist Marina Abramovic at the Museum of Modern Art in a retrospective of her career. (Photograph by Michael Hart.)

A Cell’s Life

Here’s a simple statement most Americans would agree with. If tissue removed during an operation is about to be thrown out with the garbage and has no identifying information, it should be permissible to use it for research without the patient’s consent.

This isn’t just a sensible idea; it’s the law of the land. In fact, the federal regulations don’t even consider this to be research involving human subjects. It is exempt from the federal rules, requiring neither review by an institutional review board (IRB) nor consent from the patient.

If this all seems straightforward and uncontroversial, then you have not read Rebecca Skloot’s remarkable book, The Immortal Life of Henrietta Lacks. Skloot spent the past decade exploring one such research study, which mushroomed into one of the most important discoveries in the history of science: the development of an immortal line of cells known as HeLa cells that have played a central role in tens of thousands of research projects, with incalculable benefits to humankind. Their popularity is related to their ability to reproduce rapidly and easily, their immortality, and their familiarity, because thousands of studies have allowed their biologic characteristics to be known in great detail.

That’s the good news. But while scientists, entrepreneurs, and patients have been enjoying the many benefits of HeLa cells, the extended family of Henrietta Lacks, the woman who was the source of the cells, has endured nearly 60 years of anguish directly related to the success of the HeLa cell line. They remain mostly impoverished and without access to health insurance, unable to share in many of the benefits made possible by Lacks’s contribution.

Skloot’s book tells two parallel stories, reminiscent of Anne Fadiman’s influential The Spirit Catches You and You Fall Down, the celebrated account of one Hmong family’s encounter with the U.S. health care system, interwoven with the history of Hmong civilization. In Skloot’s book, researchers at the Johns Hopkins Hospital discovered one of the holy grails of in vitro cell biology: a human cell line that could propagate itself forever, defying Hayflick’s dictum that every cell has a limited number of times it can reproduce itself. In the other story, a poor black woman dies in a segregated ward with excruciating pain from untreatable cervical cancer, and her family grapples for decades with the consequences of being excluded from any meaningful information about her death and her immortal progeny. It is only in an afterword that Skloot begins to explore a third story: the problem with the “simple” concept that research with unidentifiable residual human tissue should be outside the regulatory structure of human subjects research.

The story of the HeLa cells alone is sufficient for a book and has indeed been the subject of a BBC documentary and numerous other accounts. The numbers alone are mind-boggling: An estimated 50 million metric tons of cells; 60,000 scientific studies; the generation of immeasurable profits; and the saving of untold numbers of lives.

But it is the other two stories that demand our attention. Because informed consent was not required for the development of the HeLa cells, neither Henrietta Lacks nor her impoverished family, including her husband and five children, knew the cells even existed until 20 years after they had become “celebrities” in the world of science. The family was contacted at that time by researchers at Hopkins who wanted blood samples to help them better define the genetic characteristics of the HeLa cells, in part because the cells were so prolific that they were contaminating and dominating cell cultures all over the world, making it difficult for scientists to determine whether they were studying HeLa cells or those from another source. But the inadequacies of the consent process for a simple venipuncture planted the seeds of confusion, leading some of the family to believe they were being tested for cancer.

This was the first of several fantasies and nightmares that came to dominate the thinking of the Lacks family. Words such as “immortal cell lines” that were mundane in the world of science were interpreted as a suggestion that Henrietta herself was still alive, decades after her reported death, held captive at Hopkins as a research subject. The word “cell” supported this fear, as its sole meaning for an impoverished, largely illiterate black family evoked images of a prison.

Skloot pulls off a remarkable tour de force in reporting by maintaining a dispassionate perspective in writing about these events, despite becoming a character in the story. Over the course of 10 years, she developed a personal relationship with several key family members, particularly Henrietta’s daughter Deborah, as she built the trust required for them to begin talking with her and over time to share critical archival material and reveal their deepest feelings.

You won’t often see a passage like this in a book of this quality. It follows an angry accusation by Deborah that Skloot is a spy for Hopkins, part of the conspiracy that has been exploiting Henrietta and the family for 50 years:

“Then, for the first time since we met, I lost my patience with Deborah. I jerked free of her grip and told her to get the fuck off me and chill the fuck out. She stood inches from me, staring wild-eyed again for what felt like minutes. Then suddenly, she grinned and reached up to smooth my hair, saying, ‘I never seen you mad before. I was starting to wonder if you was even human cause you never cuss in front of me.’”

Despite the page-turning force of both stories, it is the third story, the ethics and regulation of research involving residual samples, that makes the book required reading for those involved with policies and practices in this area. I have been arguing for 15 years that human research is wildly over-regulated in the United States, dominated by inflexible interpretations and the application of rules with little or no relevance to the protection of human subjects. There is a growing consensus that the bureaucracy is discouraging innovation and deterring students from entering the field. All that is true, but the Lacks family story forces us to ask whether the least risky category of research, on unidentifiable residual tissue, is under-regulated.

One part of the problem is the vanishing relevance of the concept of identifiability. Henrietta Lacks’ immortal cells had a code name so meaningless that they were long believed to have come from an unknown source given a made-up name such as Helen Larson. But 20 years after Henrietta’s death, when the eminent geneticist Victor McKusick had a scientific reason to track down the real source in order to obtain more samples from Henrietta’s descendants, he had little trouble finding them.

The ability to do genome-wide analyses on microscopic samples, coupled with ever-growing databases and repositories (such as the results of genetic research that the National Institutes of Health require to be made public and the collection of blood spots from newborns) has reduced confidence that any sample can be truly “unidentifiable.” Breakdowns in physical security measures, whether thefts of laptop computers or hacking into “protected” databases, have further eroded confidence in the confidentiality of stored samples or data derived from them.

A second problem is the limited value or meaning of consent when many of the most important projects cannot even be imagined at the time when the samples are obtained. Henrietta Lacks might have gladly consented to George Gey’s request to use part of her discarded tumor for studies in his lab. But no one could have anticipated at the time that this small experiment would lead to 50 million metric tons of cells that would transform the practice of medicine or that many of the medical benefits would be beyond the reach of Henrietta’s children. Their mother is buried in an unmarked grave in a town that no longer exists. It is not at all clear that she would have agreed to contribute to that kind of system.

At the least, the Lacks family experience should lead to a reexamination of the current policy of exempting such research from the federal regulations. The DHHS Secretary’s Advisory Committee on Human Research Protections is the appropriate group to review this issue. In my view, technology has made the concept of “unidentifiable” almost obsolete when human cells are involved. Better protecting individuals and families such as the Lackses will require higher standards of consent or stricter rules for sharing access to such materials. The recent settlement with the Havasupai tribe over unconsented use of donated tissue tells us that this issue is not unique to the HeLa story.

If Skloot’s book does not cause you to rethink your position on these issues, then you have missed its most important point. But it is still a page-turner of the first order.

Tracking E-health

When I recently moved to a new city and had to identify a primary care provider and health system, I decided to rule out any physician or provider organization that lacked the infrastructure or philosophy that would allow me to communicate with my doctor and the office staff by email, to have secure online access to my lab results and other aspects of my medical record, or to make appointments electronically. My recent experience in another city had left me frustrated: my previous physician had actually changed practices in order to avoid being forced to adopt an electronic health record being implemented by his colleagues, and every call to his office was met with voicemail and a promise to return my call, usually at a time that was inconvenient for me. I swore that I would never again subject myself to a healthcare environment or physician that had not adopted modern electronic means of communication, data management, and information dissemination. I recognized that I am a technophile and early adopter by nature, but as I looked at the plethora of smart phones, Facebook pages, and laptops that surround me every day, even in airport security lines, I suspected that I was not alone in using such “digital literacy” criteria to guide my choice of physician and healthcare system. I have subsequently been pleased to find a suitably robust, electronically sophisticated physician and healthcare environment in my new city, and I realize that I personally associate such capabilities with quality of care, safety, and cost containment.

But just how typical am I, and what are the trends that will increasingly determine how patients, physicians, other health professionals, and provider organizations will embrace and adopt such technologies in the years ahead? West and Miller provide some answers to such questions in a fascinating monograph that appeared in mid-2009, much of it summarizing and interpreting the results of their national e-health public opinion survey, which polled 1,428 adults from across the United States by telephone in November 2005. West is a political scientist from the Brookings Institution who collaborated with Miller, a faculty member at Brown University with expertise in public policy, political science, and community health. Thus, the book is written not from the perspective of the technology per se but as an exploration of the current trends in adoption, acceptance, and pursuit of e-health solutions in the United States and abroad. Taken together, the eight chapters survey a variety of key issues that have determined the rate and degree of e-health penetration, attitudes toward the technology and toward medicine in general, and the differences among individuals based on variables such as education, economic status, access, ethnic background, age, and gender. By assessing other data from studies throughout the past decade, the authors are also able to give us a sense of the trends in many areas—trends that generally show a significant change in the rate of adoption of e-health solutions both by providers and by the public. Thus the data summarized in the volume, while often useful and sometimes surprising, raise the question of how much the situation may have changed in the several years since they were collected.

The authors begin by nicely summarizing the state of the art and the barriers to e-health innovation and adoption, providing background for their extensive study of trends in digital medicine. Their motivating argument is that “in order to achieve the promise of health information technology, digital medicine must overcome the barriers created by political divisions, fragmented jurisdiction, the digital divide, the cost of technology, ethical conflicts, and privacy concerns.” They then devote the rest of the volume to analyses of the key questions in this arena that arose from their own studies or are highlighted by other data available in the literature.

They start by analyzing available health web sites, seeking insights into the differences among public, nonprofit, and commercial sites in the healthcare space. Issues addressed include the presence and role of advertising, potential conflicts of interest, the quality and accuracy of the information on the sites, and barriers to utility such as narrative text that assumes reading levels above those of the general population, language barriers, cultural incompatibilities, or a failure to address the needs of individuals with disabilities (especially visual impairments).

The authors acknowledge that barriers to effective use of online information are determined not only by the nature of the information sites themselves but also by the extent to which individuals who may need such information are familiar with computers and Internet use and have regular access to the technology. Thus, one of their analyses focuses on three particular indicators of e-health use and sophistication by individuals: email communication with providers, access to web sites for information, and use of the Internet to purchase medications or other medical materials. A variety of socioeconomic and demographic barriers to increased use of health information technology are documented, among both patients and providers. As my own experience has demonstrated, the failure to use email to communicate with a physician or with office staff may not reflect reluctance on the part of the patient but refusal to provide such services on the part of the clinician. Similarly, the online purchase of medications often depends on the type of health insurance that covers a patient and the ease of submitting prescriptions to a central online resource, rather than on the patient’s desire for such access.

West and Miller also explore the relationship between e-health participation and attitudes toward the healthcare system. It is fascinating that their analysis of this topic begins with discussion of a 2002 article in The Milbank Quarterly by David Blumenthal, appointed by President Obama to be the National Coordinator for Health Information Technology (HIT) shortly after this volume had gone to press. Blumenthal is quoted as expressing concern that patient satisfaction with the quality of medical care will decline in a wired world. One wonders how his impressions may have evolved since his immersion in the HIT world. The data summarized by West and Miller, however, suggest that there are no guarantees that a wired world is going to produce positive attitudes toward the healthcare system. The mere use of electronic media does not enhance an individual’s perceptions of the health system unless it is accompanied by other indicators of quality that provide reassurance and a sense of personal empowerment and individualized attention.

Another key issue in the dissemination and adoption of e-health resources is the role of demography. It is not surprising that socioeconomic status would affect one’s ability to use, and interest in using, online health information, email, and prescription purchases, but the authors’ data provide useful insights about other sociodemographic determinants as well. For example, women are more likely than men to use e-health resources, and younger people are more likely to turn to commercial web sites than public ones, perhaps reflecting privacy fears and distrust of government in the younger age groups. Racial and ethnic determinants are also striking: Asians in the United States are the most sophisticated users of e-health resources, with substantially lower (but improving) numbers for African Americans and Hispanics. Language issues are seen as particularly important elements in the digital disparities affecting the Hispanic community. Such issues emphasize the challenges for policymakers who are attempting to close the digital divide, both broadly and in the healthcare setting.

West and Miller also provide sobering data on how the United States compares with other parts of the world in the adoption of e-health capabilities by patients, providers, and governments. Some challenges relate to differences in basic broadband infrastructure: the United States ranked fourth in 2001 but fifteenth by 2007, a reflection of its limited public investment and its assumption that the private sector will build out the infrastructure needed to support e-health and other social goods. The United States has invested heavily in high-quality, government-sponsored health-related web sites, and it compares favorably with other parts of the world in this regard. In the use of such resources by the public and the overall adoption of HIT by providers and health systems, however, the United States is woefully behind most of the developed world.

The authors close the volume by proposing policies and activities that are necessary to advance the role and acceptance of digital medicine in the United States. Their suggestions follow logically from the earlier analyses, are persuasive, and resonate with recommendations from a variety of professional and scientific groups that bemoan the country’s relatively slow progress in advancing this field and realizing its full potential. Improved education, both of the public and of current and future health professionals, is viewed as a key element in any solution, and has been too often overlooked when others have assessed approaches to making better use of information technology in health care. The data in this volume show that we also have a significant need to boost computer literacy, despite the ubiquitous cell phones and Facebook pages to which I referred earlier. Given the economic determinants of e-health use and the digital divide, low-cost technologies and improved access through publicly available means continue to be key requirements. We also need improved public investment to create and maintain the broadband infrastructure necessary to serve the populace and maintain economic competitiveness. This is a fundamentally political issue, and one that has lacked significant government support despite the data showing that the United States is falling behind other countries by relying on the private sector for such solutions. Thus, overcoming the stultifying effects of the modern political process, while taking ethical and privacy considerations seriously, is also an important element in any long-term solution.

In summary, this volume is thought provoking and insightful, although it begs for a current update so that we can understand how the situation has changed over the past few years in this rapidly evolving area. The text is a bit dry, with copious tables and discussion of statistical results, but the summary statements and closing chapter provide an excellent sense of the issues and investments that will determine the success, acceptance, and impact of e-health as digital medicine continues to evolve in the decade ahead.

Pursuing Geoengineering for Atmospheric Restoration

A few decades ago, the notion of actively controlling Earth’s climate resided primarily in the writings of science fiction authors such as Frank Herbert, Isaac Asimov, and Arthur C. Clarke. Today, planetary engineering is being discussed openly by scientists and policymakers in Congress, the UK House of Commons, and many other settings. Clarke’s advice apparently struck a chord: “Politicians should read science fiction, not westerns and detective stories.”

Geoengineering can be thought of as intentionally manipulating Earth’s climate to offset the warming from greenhouse gas emissions. Its activities can be divided into two loose groups. One set of options cools Earth by removing carbon dioxide (CO2) and other greenhouse gases from air, essentially reversing the process of fossil fuel emissions. The other cools the planet by blocking or reflecting sunlight, offsetting the consequences of increased greenhouse gases for temperature but leaving the buildup of greenhouse gas concentrations unchecked.

Several developments have fueled the rise of geoengineering from fiction to possible reality in a remarkably short period of time. The first is our inability to reduce greenhouse gas emissions in any substantive way. A wealth of scientific evidence shows that Earth’s climate is already changing because of such gases, posing a threat to people, other animals, and plants. A second factor is the concern that some planetary engineering may already be needed to reduce the harmful effects of climate change, even if emissions fall in the future. A third is the hope that geoengineering could be cheaper than cutting emissions, even if it treats only a symptom of climate change, not the root cause.

The promise and peril of geoengineering raise a host of unanswered questions. Will such approaches actually work? If they do work, who will control Earth’s thermostat? What other environmental consequences might arise? Where would effects be the greatest, keeping in mind that the environmental consequences should be compared not just against our world today but against a future world with rapid climate change?

There are many risks and uncertainties in the geoengineering approaches being considered. In addition, there will be appropriate public resistance to at least some of them. But given that our climate is already changing and that pressure to use geoengineering may increase, what would be the most practical and sensible ways of proceeding? Our approach involves extending the concept of ecological restoration to the atmosphere, with the goal of returning the atmosphere to a less degraded or damaged state and ultimately to its preindustrial condition. Based on this idea, which we call atmospheric restoration, we recommend three types of geoengineering for fast-track research support. We believe that these approaches could provide the greatest climate benefits with the smallest chance of unintentional harm to the environment.

The basic approaches

The first category of geoengineering removes or “scrubs” CO2 from the atmosphere. Carbon removal can be biological, including planting trees or fertilizing the oceans to stimulate phytoplankton growth. It can also be industrial. Industrial options include using chemicals to capture CO2 from the air, with renewable energy regenerating the chemicals, or mining silicates or other geologic materials that react naturally with CO2, reburying the deposits after they have absorbed carbon. Whether biological or industrial, the goal of the activities is to reduce greenhouse gas concentrations in the air.

The second type of geoengineering reflects or blocks sunlight to cool Earth without reducing CO2 concentrations. Commonly proposed “sunshades” include placing dust into the stratosphere with rockets and airplanes, placing space mirrors between Earth and the Sun, and increasing the extent and brightness of ocean clouds. Sunshade approaches are conceivable because reducing sunlight by a couple of percentage points is all that is needed to offset the warming from a doubling of atmospheric CO2. There is a natural analog for this approach: volcanic eruptions, such as that of Mt. Pinatubo in 1991, which blasted sulfur dust into the stratosphere and cooled Earth by 1° Fahrenheit for more than a year. A concise description of both types of approaches can be found in the Royal Society report Geoengineering the Climate, published in 2009.
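
For readers who want to see where the “couple of percentage points” figure comes from, the sketch below is a back-of-envelope energy-balance check, not a calculation from the article. It assumes standard textbook values for the solar constant (about 1,361 watts per square meter), Earth’s albedo (about 0.3), and the radiative forcing of a CO2 doubling (about 3.7 watts per square meter).

```python
# Back-of-envelope check of the claim that dimming sunlight by "a couple of
# percentage points" offsets the warming from doubled CO2. The input values
# are standard textbook figures, not numbers from the article.

SOLAR_CONSTANT = 1361.0   # W/m^2, sunlight arriving at the top of the atmosphere
ALBEDO = 0.30             # fraction of sunlight Earth reflects back to space
FORCING_2X_CO2 = 3.7      # W/m^2, approximate radiative forcing of doubled CO2

# Sunlight absorbed per square meter, averaged over the whole planet
# (divide by 4 because Earth intercepts a disk but has a sphere's surface).
absorbed_solar = SOLAR_CONSTANT * (1 - ALBEDO) / 4   # about 238 W/m^2

# Fractional cut in incoming sunlight needed to cancel the CO2 forcing.
fraction_needed = FORCING_2X_CO2 / absorbed_solar

print(f"Absorbed solar radiation: {absorbed_solar:.0f} W/m^2")
print(f"Sunlight reduction needed: {fraction_needed:.1%}")  # roughly 1.5 to 2%
```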

Sunshade and carbon removal approaches differ in how fast they can be applied and what they will cost. Sunshade technologies could be applied quickly and fairly cheaply to reduce Earth’s temperature, at a price of perhaps several billion dollars per year and within months of a policy mandate for stratospheric dust seeding. This combination of speed and cost is the main reason why sunshade approaches are being discussed. No other technology allows us to alter the effects of global warming so quickly if Earth’s climate begins to spin out of control.

In contrast, carbon removal technologies would take decades to scale up, at significantly higher cost. For instance, at a price or tax of $100 per metric ton of CO2—roughly five times the European CO2 price in May 2010 but cheaper than industry can scrub CO2 from air today—removing a billion tons of CO2 using industrial approaches would cost $100 billion. Removing all of the fossil fuel emissions of the United States would take about $600 billion annually, and $3 trillion would pay for removing the 30 billion tons of CO2 emitted globally each year. These numbers dwarf the cost of sunshade approaches, even if cheaper biological options such as tree planting can help bring the price down.
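
The cost figures above are simple multiplication; the short sketch below reproduces them at the illustrative price of $100 per ton. Note that the 6-billion-ton figure for U.S. emissions is inferred from the $600 billion number in the text rather than stated there directly, so treat it as an assumption.

```python
# Reproduces the cost arithmetic above at the article's illustrative price
# of $100 per metric ton of CO2. The 6-billion-ton U.S. figure is inferred
# from the $600 billion number in the text, not stated there directly.

PRICE_PER_TON = 100.0  # dollars per metric ton of CO2 removed

def annual_removal_cost(tons_of_co2: float) -> float:
    """Cost, in dollars, of removing a given mass of CO2 each year."""
    return tons_of_co2 * PRICE_PER_TON

print(f"1 billion tons: ${annual_removal_cost(1e9) / 1e9:.0f} billion")                 # $100 billion
print(f"U.S. emissions (~6 billion tons): ${annual_removal_cost(6e9) / 1e9:.0f} billion")   # ~$600 billion
print(f"Global emissions (30 billion tons): ${annual_removal_cost(30e9) / 1e12:.0f} trillion")  # ~$3 trillion
```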

Is geoengineering dangerous?

Tinkering with our life-support system may at first glance seem like a crazy idea. (To many, it still seems crazy at second and third glances.) What makes it less so is that we are already changing Earth in ways that will last for thousands of years. Why might intentional climate change be worse than unintentional change? Put differently, is geoengineering more dangerous than climate change?

The same things that make some geoengineering solutions quicker and cheaper also make them potentially more dangerous if something goes wrong. Separating the risks and effects of different technologies is crucial for informed debate about geoengineering.

Because of their global nature, sunshade technologies could cause global harm at the same time as they help to cool Earth. For instance, evaporation is roughly twice as sensitive to sunlight as temperature is, so mirrors or stratospheric dust that block the sun are almost certain to reduce rainfall globally. In fact, the same Pinatubo eruption that cooled Earth by 1° also caused a global drought and substantially reduced river flows, as described by Kevin Trenberth and Aiguo Dai of the National Center for Atmospheric Research (NCAR) in a 2007 study.

Less certainly, stratospheric dust seeding could cause ozone depletion elsewhere or prolong the ozone hole over Antarctica, if the wrong chemicals are used or if surprises occur with the “right” chemicals. Simone Tilmes of NCAR and colleagues in 2008 concluded that an injection of sulfate aerosols large enough to compensate for the warming from a CO2 doubling could both delay the recovery of the Antarctic ozone hole by 30 to 70 years and increase Arctic ozone depletion throughout this century because of interactions with stratospheric chlorine.

The history of ozone depletion and chlorofluorocarbons (CFCs) should give us pause here. Paul Crutzen, Nobel laureate for his work on atmospheric chemistry and ozone, once noted that the plausible use of bromofluorocarbons instead of CFCs would have led to catastrophic global ozone depletion, a circumstance we avoided by luck. Loading the stratosphere with chemicals for centuries is risky and could prove downright dangerous. The spring 2010 oil spill in the Gulf of Mexico reminds us that, given enough time, some very unlikely events will eventually happen.

Sunshades also have some fundamental weaknesses compared with carbon removal approaches. They reduce neither greenhouse gas concentrations nor any of the environmental effects of those gases other than warming. For instance, acidification from a buildup of CO2 threatens the marine food chain and ocean biodiversity, including the ability of phytoplankton, coral reef species, and other marine organisms to grow and maintain their skeletons. Sunshade approaches might cool Earth but would do nothing to fix this insidious problem.

In contrast, the risks and environmental effects of most carbon-scrubbing technologies are likely to be smaller than for sunshade techniques. Industrial carbon removal is not fundamentally different in risk or scope from current industrial operations we live with today. The primary barrier remains cost.

Two large-scale processes for carbon removal that do raise environmental concerns are enhanced weathering of minerals and ocean fertilization. Mining, using, and reburying billions of tons of silicate minerals to remove CO2 from the atmosphere would be both expensive and immense in scale, probably larger than current coal-mining efforts. The reason is that it takes at least a ton or two of such minerals to absorb one ton of CO2. Ocean fertilization remains a potentially useful but scientifically unproven approach for carbon removal. It isn’t clear that ocean fertilization works to store carbon, and in places it might release other greenhouse gases such as methane and nitrous oxide and would probably produce hypoxia, low-oxygen zones similar to those produced in polluted water. The notion of fertilizing large regions of the oceans to create phytoplankton blooms has also been strongly opposed by the public. Private companies have had to cancel research plans because of such opposition.

Is geoengineering socially acceptable?

Despite the technical challenges and uncertainties of geoengineering, sociopolitical barriers rather than scientific or economic ones will ultimately determine geoengineering’s fate. In fact, barriers to public acceptance will probably keep all global sunshade approaches and some carbon removal ones from ever being applied. These barriers include concerns over risk, ethics, governance, laws, geopolitics, and the perception of geoengineering as a tool for global control.

The divisiveness that accompanies nuclear energy, biotechnology, and other hot-button issues bears on the potential fate of geoengineering. The science behind nuclear fission and genetic engineering surely matters in these debates, but few people would argue that scientific uncertainty is the only cause of the controversies surrounding them. Without public support, or at least limited opposition, approval for implementing many kinds of geoengineering will be hard to obtain.

Public support turns on a series of shifting perceptions. These include the magnitude of the danger or opportunity faced, the risks posed by the ameliorative technology, public confidence in the people behind the technology, the cost of the technological fix, and issues of social equity. To our knowledge, there has been no thorough public assessment of these issues for geoengineering, particularly for individual geoengineering activities. Such an assessment is sorely needed.

Two concerns about geoengineering arise consistently among the many that are voiced. One is the fear that researching or even discussing geoengineering will make it more likely to happen, undercutting efforts to reduce greenhouse gas emissions. Another concern is that geoengineering at a scale large enough to influence climate could have large-scale unintended consequences. This fear of surprises has proven a contentious issue for genetically modified organisms locally. Globally, such concerns will probably be far greater. They surely must be considered and discussed before implementing geoengineering.

Beyond public acceptance, political leadership will be needed to implement geoengineering. The political calculus will be influenced by the same interest groups that influence every political process. For instance, the private sector will emerge as a major player by investing in particular technologies and promoting their use. Numerous companies are investing in industrial carbon-scrubbing technologies, and others have actively promoted ocean fertilization experiments. Diverse advocacy efforts from environmental and scientific organizations are also likely.

What should politicians do? They first need to think carefully about governance. Large-scale geoengineering could change almost every aspect of our planet, from Earth’s albedo to its temperature to its rainfall. It’s not hard to see how these changes may, in turn, influence water availability, patterns of human settlement, agricultural productivity, and other critical factors. In light of these interactions, mechanisms to manage rules for notice, environmental impact assessments, compensation for transboundary impacts, and other aspects of implementation will all be needed. Importantly, however, not all geoengineering technologies will require huge capital investments. Ocean fertilization or sulfate aerosol seeding may be feasible not only for single governments but even for single companies or wealthy individuals. If so, this raises the very real possibility of unilateral implementation.

Some forms of geoengineering, in other words, may be closer to a backyard project than to creating an international space station. The combination of unilateral implementation and disparate effects among nations suggests that consensus will be hard to reach. As David Victor of Stanford University and colleagues noted in a 2009 article, “One nation’s emergency can be another’s opportunity, and it is unlikely that all countries will have similar assessments of how to balance the ills of unchecked climate change with the risk that geoengineering could do more harm than good.” The 2009 study by Britain’s Royal Society made a similar point, noting that “Geoengineers keen to alter their own country’s climate might not assess or even care about the dangers their actions could create for climates, ecosystems, and economies elsewhere. A unilateral geoengineering project could impose costs on other countries, such as changes in precipitation patterns and river flows or adverse impacts on agriculture, marine fishing, and tourism.”

A key question, therefore, is the extent to which governance mechanisms can be created beforehand to address these potential conflicts. Should the precautionary principle be employed? If so, how would it operate? As one example, a moratorium on the practice of coastal iron fertilization has been called for within the context of the Convention on Biological Diversity, but a moratorium could simply drive R&D to nations that do not comply with the treaty or that make a reservation to this restriction.

Indeed, there are no regulatory mechanisms in place, domestically or internationally, that explicitly address geoengineering. As a result, legal and governance frameworks will probably borrow initially from existing structures. Existing treaty language calls for states engaging in transboundary activities to, among other things, conduct environmental impact assessments, avoid doing harm across boundaries, cooperate in mitigating risks and harms created by their activities, and apply the precautionary principle. These guidelines were not adopted with geoengineering activities in mind, however, and their potential influence remains weak.

Calls for large-scale geoengineering to combat climate change have increased significantly in the past year or two and are likely to grow louder. So what is needed right now? For starters, the government and leading authorities in the private sector and academia need to initiate broad-based discussions with stakeholders about the nature of geoengineering. What is it, what can it do to address the threats of climate change, and what potential concerns does it raise? Increasing R&D on its own will do nothing to ensure the successful implementation of geoengineering and may, in fact, prove counterproductive if it is not matched by comparable investment in strengthening the social and political understanding of geoengineering.

Principles for guiding action

How should we think about the geoengineering option? One promising model resides in the principle of restoration. In a well-cited primer from 2004, the Society for Ecological Restoration defined ecological restoration as “the process of assisting the recovery of an ecosystem that has been degraded, damaged, or destroyed.”

We propose to extend the concept of restoration to the atmosphere, suggesting the term “atmospheric restoration” as a guiding principle for prioritizing geoengineering efforts. The goal is to return the atmosphere to a less degraded or damaged state and ultimately to its preindustrial condition.

Given an umbrella of atmospheric restoration, we prioritize geoengineering efforts based on three principles. The first is to treat the cause of the disease itself, through CO2 removal, instead of a symptom of the disease, through the use of sunshades. Because carbon-scrubbing technologies will take far longer to deploy than sunshades, policy incentives for research on them are needed now. Without such incentives, we will face unnaturally high greenhouse gas concentrations in our air (as compared to the past 100 million years of Earth history) or a world where sunshade approaches must be maintained for centuries once we start using them. For instance, large-scale stratospheric dust seeding, if stopped abruptly, would cause Earth’s temperature to shoot up rapidly. (Consider the analogy of a dim cloud passing, exposing the Earth to full sunlight.) This rapid increase would likely be far more damaging environmentally than a gradual increase to the same temperature would have been. Is global governance likely to keep sunshades in place for 500 or 1,000 years?

A second guiding principle is to reduce the chance of harm. The greater the scale of a manipulation, the more probable it is that the manipulation will cause unforeseen changes or even dangerous surprises. Sunshades, in particular, will have to be regional to global in nature to be globally effective, suggesting that unintended harms may be regional or global as well. We believe that the policy priority should remain on reducing greenhouse gas emissions, coupled with restoring the atmosphere through carbon removal, which would obviate the need for riskier sunshade approaches.

A third and final principle is to prioritize activities with the greatest chance of public acceptance. We remain skeptical that the public will ever broadly accept sunshades, particularly stratospheric dust seeding, or some carbon removal strategies such as ocean fertilization. Recognizing this barrier provides another filter for prioritizing research, one that points to a few good geoengineering choices within the broader set of less direct and potentially dangerous activities.

Based on these principles and the Hippocratic spirit of first do no harm in medicine, we propose three forms of geoengineering that could provide the greatest climate benefits with the smallest chance of unintentional harm to the environment. All three are forms of atmospheric restoration, will probably have fewer unintended consequences than other forms of geoengineering, and are more likely to be accepted by the public than many other forms of geoengineering.

The first geoengineering activity, forest protection and restoration, is an opportunity available now. The other two, industrial carbon removal and bioenergy linked to carbon capture and storage, need extensive research to make them effective and to reduce their costs. Unlike forest protection, these will take decades to scale up to a level that lowers atmospheric CO2 concentrations substantially, because they require a distributed network of facilities.

The most immediate opportunity is forest preservation and restoration. Plants and other photosynthetic organisms provide one of the oldest and most efficient ways to remove CO2 from air. Efforts to regrow forests or keep forests from being cut both provide greenhouse gas benefits. If a policy incentive keeps a rainforest in Amazonia or Alaska from being harvested, carbon that would have moved to the atmosphere is “removed” from the atmosphere.

An important policy incentive in this area is Reducing Emissions from Deforestation and Forest Degradation (REDD), featured prominently at the 2009 Copenhagen climate change negotiations. Tropical deforestation contributes roughly 5 billion tons of CO2 to the atmosphere each year, approximately one-sixth of fossil fuel emissions. Providing financial incentives to stem this tide of carbon loss and restore degraded forests is an immediate opportunity, although accounting and monitoring protocols still need work. Activities such as REDD help the environment by storing carbon, slowing erosion, improving water quality and flow, and preserving biodiversity. Their benefits reach far beyond climate.

Let’s contrast the benefits of REDD with a decidedly bad idea for land-based geoengineering. In a 2009 article in Climatic Change, Leonard Ornstein of the Mount Sinai School of Medicine and co-workers proposed turning the Sahara and Australian deserts into lush forests by irrigating them with desalinated seawater. (Frank Herbert’s characters in the Dune series might approve.) They estimated that we could offset all of today’s CO2 emissions from fossil fuels, roughly 30 billion metric tons a year, by greening the Sahara, an area comparable in size to the continental United States. Although it is an interesting thought exercise, such a proposal would cost trillions of dollars (the authors’ estimate), would require massive amounts of energy to make, transport, and distribute fresh water and perhaps fertilizers for each tree, and would create an unsustainable forest vulnerable to die-off from pests, storms, irrigation loss, and many other “surprises.” The picture of the Sahara as a wasteland to be improved is also, in our view, ecologically, culturally, and anthropocentrically myopic and doomed to fail.

A second geoengineering opportunity that should be encouraged with research incentives is industrial carbon removal, specifically facilities that use renewable chemicals rather than continuously mined ones such as silicates. Imagine a series of power plants run in reverse. The facilities use renewable energy to drive a chemical reaction that removes CO2 from the atmosphere and regenerates the chemical used in the reaction. It’s as simple as that.

What isn’t simple about the process is its cost. Current amine-based technologies and next-generation chilled-ammonia chemistry for capturing CO2 from power plant smokestacks are too expensive to be used widely today. Moreover, CO2 in air is far more dilute than in the exhaust of a coal- or gas-fired power plant, making the job even more difficult and costly. We need immediate research incentives to reduce the costs of industrial CO2 capture.

Another cost consideration for industrial carbon removal is where the carbon-free power comes from. For starters, are you really gaining anything if the carbon-free energy the process consumes could instead be fed into the grid to offset emissions from a coal- or gas-fired plant somewhere else? One advantage is that a carbon-removal facility could be set up anywhere on Earth where energy is plentiful; you don’t have to be near a power grid in choosing locations. Renewable energy for the process could also be used at times and in places where it isn’t needed for normal uses, such as off-peak hours.

Finally, for industrial carbon removal, you have to do something with the billions of tons of CO2 removed from air. One option is to generate carbon-based fuels from it. But this use of the carbon does not really remove CO2 from the air unless the CO2 is captured again after the fuel is burned; it is more analogous to generating corn ethanol and other biologically based renewable fuels. To be truly carbon-negative, you have to store the carbon permanently away from the atmosphere, most likely thousands of feet underground or under the oceans. This, too, is expensive and needs research to guarantee its safety and effectiveness.

Overall, the combination of generating the power needed, capturing the CO2 chemically, and storing it underground is likely to cost at least $100 to $200 per metric ton of CO2 removed with next-generation technologies. We need research to cut these costs, ideally by at least two-thirds. Ultimately, we will have to use at least some technologies that can remove hundreds of billions of tons of CO2 if we are ever to restore the atmosphere to its preindustrial state. That is the scale of the problem we face.
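
To convey what restoring the atmosphere might cost at these prices, the sketch below multiplies the per-ton cost range quoted above by a few illustrative cumulative removal totals. The totals themselves are assumptions chosen only to illustrate “hundreds of billions of tons”; they are not figures from the article.

```python
# Illustrative scale of cumulative atmospheric restoration. Per-ton costs
# come from the paragraph above ($100 to $200 with next-generation
# technology, or roughly $33 if the low-end cost falls by two-thirds);
# the cumulative removal totals are assumptions for illustration only.

costs_per_ton = {
    "high estimate": 200.0,
    "low estimate": 100.0,
    "two-thirds cheaper": 100.0 / 3,
}

for cumulative_billion_tons in (100, 200, 300):  # "hundreds of billions of tons"
    for label, price in costs_per_ton.items():
        total_dollars = cumulative_billion_tons * 1e9 * price
        print(f"{cumulative_billion_tons} billion tons at ${price:.0f}/ton "
              f"({label}): ${total_dollars / 1e12:.1f} trillion")
```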

The third technology that we believe needs immediate but perhaps more cautious research support combines bioenergy with carbon capture and storage. This technology fuses aspects of the previous two, including its focus on trees and other plants as a cheap way to capture CO2 biologically instead of chemically, and its reliance on carbon capture and storage to move CO2 from the atmosphere back underground. Unlike the previous option, it has the benefit of supplying its own energy generated from the biomass instead of requiring large energy inputs.

Bioenergy with carbon capture and storage also has some important differences, however. Although bioenergy provides energy from biomass rather than consuming it, harvesting the needed biomass will affect millions of acres of land if applied broadly. In that sense, bioenergy may in some places be at odds with the forest restoration and avoided deforestation efforts highlighted earlier. We acknowledge this contradiction, invoking a 19th-century adage for household management: “A place for everything, and everything in its place.” There are places on Earth where habitat preservation and restoration are particularly important right now, including the tropics, whereas other places have lands that could be managed productively for fast-rotation biomass.

The places for bioenergy generation will take careful consideration to maximize benefits and minimize environmental harm. Municipal garden wastes, crop and forest residues, and trees that have been damaged by insects or are at risk of burning naturally are good places to start. The scale of the problem, though, will require millions of acres of land to be managed and harvested differently if we are to make a difference. The enormous potential footprint of bioenergy is what makes our third recommended option the riskiest in terms of environmental effects. The need for carbon capture and storage technologies is also the component of these activities that will probably face the greatest public opposition, as has already occurred in parts of Holland and Germany.

What bioenergy with carbon capture and storage provides is an extensive, cheaper complement to industrial carbon removal. Neither approach is perfect. Both will eventually be needed to draw down the concentration of CO2 in the atmosphere, because energy efficiency and renewables alone can’t get us to a carbon-negative economy.

In conclusion, to discuss even the possibility of engineering Earth’s climate is to acknowledge that we have failed to slow greenhouse gas emissions and climate change. Emitting less CO2 through increased energy efficiency and renewables should remain a top policy priority. These options will be cheaper than most forms of geoengineering and will provide many additional benefits, including improved air and water quality, national security, balance of trade, and human health.

Our climate is already changing, and we need to explore at least some kinds of carbon-removal technologies, because energy efficiency and renewables cannot take CO2 out of the air once it’s there. Some scientists increasingly argue that we need to do research on sunshade technologies as a backup plan if climate change starts to accelerate dangerously. This argument has merit. However, the sooner we invest in and make progress on reducing greenhouse gas emissions today and promote ways to restore the atmosphere through carbon-scrubbing technologies in the future, the less likely we are ever to need global sunshades. The principle of atmospheric restoration should guide us in curing climate change outright, not in treating a few of its symptoms.

From the Hill – Summer 2010

House votes to reauthorize America COMPETES Act

Despite strong Republican opposition, the House on May 28 approved the America COMPETES Reauthorization Act of 2010 by a 262 to 150 vote. The legislation is designed to make investments in science, innovation, and education at three agencies: the National Science Foundation (NSF), National Institute of Standards and Technology (NIST), and the Department of Energy’s (DOE’s) Office of Science. The bill puts basic research programs at the three agencies on a path to doubling authorized funding levels over 10 years.

The major issue of contention over the bill was the proposed increase in spending during a time of large and growing federal budget deficits. The Congressional Budget Office estimates that the bill will cost $86 billion and that it authorizes nearly $23 billion more in spending than current appropriations. Republicans proposed eliminating funding authorizations beyond 2013, freezing funding for existing programs at current levels for 2011-2013 unless the federal budget is balanced, and eliminating six new programs from the bill. Republicans also argued that the bill would put too much emphasis on technology commercialization as opposed to basic research.

According to Rep. Ralph Hall (R-TX), the ranking member of the House Science and Technology Committee, the Republican changes “would have saved over $40 billion and restored the original COMPETES priority of basic research. While I am glad we were finally able to reauthorize many of the important research and education programs in this bill, the bill that passed today spends too much money, authorizes duplicative programs, and shifts focus away from the bill’s original intent.”

Rep. Bart Gordon (D-TN), chair of the House Science and Technology Committee, acknowledged that the federal budget deficits are serious, but argued that investment in the country’s future is essential. “If we are to reverse the trend of the last 20 years, where our country’s technology edge in the world has diminished, we must make the investments necessary today.”

Gordon pointed out that more than 750 organizations have endorsed the legislation, including the U.S. Chamber of Commerce, the National Association of Manufacturers, the Business Roundtable, the Council on Competitiveness, the Association of American Universities, the Association of Public and Land-grant Universities, the National Venture Capital Association, TechAmerica, the Biotechnology Industry Organization, and the American Chemical Society, as well as nearly 100 universities and colleges.

The legislation would authorize $7.48 billion for NSF in fiscal year (FY) 2011, with the authorization level rising to $10.16 billion in FY 2015. It authorizes $991 million for NIST for FY 2011, rising to $1.2 billion in FY 2015. DOE’s Office of Science is authorized for $5.2 billion for FY 2011, increasing to $6.9 billion in 2015.
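
As a quick arithmetic check, not an analysis from the article, the sketch below uses the NSF figures quoted above to ask whether the authorization levels are consistent with the bill’s stated path toward doubling funding over roughly 10 years.

```python
# Arithmetic check: do the NSF authorization levels quoted above imply a
# growth rate consistent with doubling over roughly 10 years?
import math

fy2011 = 7.48    # billions of dollars authorized for NSF in FY 2011
fy2015 = 10.16   # billions of dollars authorized for NSF in FY 2015
years = 4        # FY 2011 to FY 2015

annual_growth = (fy2015 / fy2011) ** (1 / years) - 1
doubling_time = math.log(2) / math.log(1 + annual_growth)

print(f"Implied annual growth: {annual_growth:.1%}")        # about 8% per year
print(f"Implied doubling time: {doubling_time:.1f} years")  # about 9 years
```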

Key provisions for NSF include a requirement for the agency to set aside 5% of its Research and Related Activities funding for “high-risk, high-reward research.” The legislation would create an “Innovation Inducement Prize” program totaling $12 million for five prizes. The bill would change the current cost-sharing ratio for the Robert Noyce Scholarship program from an even split between the federal government and a research institution to a 70% federal share. It would establish workshops to help eliminate gender disparities in all federal science agencies and at undergraduate universities, and it would extend funding for researchers who take an extended leave of absence for caregiving responsibilities.

For DOE, the legislation contains a comprehensive five-year reauthorization of the Office of Science, a reauthorization of the Advanced Research Projects Agency-Energy, and the authorization of Energy Innovation Hubs, which are multidisciplinary collaborations designed to overcome a specific technological barrier to achieving national energy innovation goals.

The legislation would elevate the director of NIST to Under Secretary of Commerce for Standards and Technology. The ten NIST laboratories would be consolidated into six in order to meet the needs of the current high-tech sector, a plan endorsed by NIST leadership. With congressional approval, the Under Secretary of Commerce for Standards and Technology would be able to realign labs to meet the new needs of the future high-tech industry.

The bill would change the cost-share ratio of the Manufacturing Extension Partnership (MEP) from 30:70 (federal government versus second party) to 50:50. The MEP, which is partially funded by state governments, universities, or nonprofit organizations, helps increase the competitiveness and technological expertise of small and mid-size manufacturers.

The COMPETES Act reauthorization would also require NIST to give “consideration to the goal of promoting the participation of underrepresented minorities in research areas supported by the Institute” when evaluating fellowships and postdoctoral applicants.

The legislation calls for guidance across agencies on key topics. The bill directs the Office of Science and Technology Policy to work with agencies to develop a consistent policy regarding the management of scientific collections and establishes a working group to coordinate federal science agency policies related to the dissemination of the results of federally supported research. In addition, the bill calls for coordination of federal STEM education activities and the creation of an advisory committee on STEM education. It also establishes an Office of Innovation and Entrepreneurship at the Department of Commerce and provides for the creation of regional innovation clusters.

Congress reconsiders rules on toxic substances

Congress is considering revamping the Toxic Substances Control Act (TSCA), passed in 1976 to regulate the use of harmful compounds. Sen. Frank Lautenberg (D-NJ) has proposed a bill, the Safe Chemicals Act of 2010 (S. 3209), that would provide the Environmental Protection Agency (EPA) with greatly enhanced capabilities to collect information and regulate substances believed to be dangerous. Reps. Henry Waxman (D-CA) and Bobby Rush (D-IL) have introduced a similar draft bill in the House.

Lautenberg’s proposed overhaul seeks to fix key problems with TSCA that were identified in a 2009 review by the Government Accountability Office (GAO). Under current law, the EPA must demonstrate that there is an unreasonable risk before it can require the testing of a compound. According to the GAO report, difficulties in accessing information, combined with the high burden of proof that TSCA requires in order to ban a chemical, render regulatory action extraordinarily difficult. Problems remain with the system for introducing new chemicals, too. There is no requirement to submit hazard data for a new compound, and the publication of information collected is hampered by extensive confidentiality claims. These concerns led the EPA to issue a document titled Essential Principles for Reform of Chemicals Management Legislation, which spurred a series of congressional hearings on TSCA reform.

The new legislation seeks to address the problems identified by the GAO by improving the EPA’s ability to collect data and take decisive action. Whereas the EPA must now prove that a compound presents an unreasonable risk in order to implement a ban, Lautenberg’s proposal would shift to a model in which companies must demonstrate a “reasonable certainty of no harm” before introducing a new compound. To accomplish this goal, companies would be required to submit a dataset containing hazard, use, and exposure information. The bill would attempt to ensure the accessibility of this information by requiring the EPA to review Confidential Business Information claims and allowing state governments to access confidential data. Furthermore, the EPA would be given strong authority to require testing of existing compounds. Resources would be focused on compounds prioritized as high risk based on their use, toxicity, and other characteristics.

One catalyst for TSCA reform has been a shift in attitudes among key industry players. Leading businesses and industry groups have become concerned about the growing public perception of the risks to human health posed by chemicals in consumer products. Industry supports improved regulatory authority, but it also wants to head off a potential major antichemical backlash that might result in burdensome regulations similar to those implemented in the European Union. Consensus on such a plan does not exist yet. Although groups such as the American Chemistry Council agree with elements of Lautenberg’s plan, they remain wary of some of its provisions. In particular, the stringent “reasonable certainty of no harm” standard and the lack of provisions to preempt the implementation of stricter rules at the state level are likely to be sticking points.

Bill proposes incentives for carbon capture and storage

In late March 2010, Sens. Jay Rockefeller (D-WV) and George Voinovich (R-OH) introduced a draft of legislation that would provide extensive incentives for the development and deployment of carbon capture and sequestration (CCS) technology. Capturing carbon dioxide (CO2) from coal combustion and then storing it has the potential to reduce greenhouse gas emissions while allowing continued use of fossil fuels for energy production, but the technology is not yet available on the needed scale.

The bill would authorize $850 million over 15 years for R&D involving partnerships between the Department of Energy and the private sector. To stimulate technology implementation at the commercial scale, the bill calls for a “Pioneer” program to subsidize the construction of CCS facilities. The bill provides tax credits starting at $67 per ton of carbon sequestered. Finally, the bill would mandate the adoption of specific technology standards after the first 10 gigawatts of capacity is built, or in 2030. Funding would come from a tax levied on electrical power generation tailored to raise about $2 billion per year.
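
For a rough sense of scale, the sketch below simply divides the two figures in the paragraph above: the roughly $2 billion per year the fee would raise and the $67-per-ton starting credit. Treating all of the revenue as credit payments is an assumption made only for illustration; the draft does not specify how the revenue would actually be allocated.

```python
# Rough order-of-magnitude sketch: tons of sequestration per year that ~$2
# billion could support if it were all paid out as credits at the starting
# rate of $67 per ton. This allocation is an assumption for illustration.

annual_revenue = 2e9      # dollars per year raised by the proposed fee
credit_per_ton = 67.0     # dollars per ton sequestered (starting credit)

tons_supported = annual_revenue / credit_per_ton
print(f"Roughly {tons_supported / 1e6:.0f} million tons per year")  # ~30 million tons
```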

Although Rockefeller and Voinovich had indicated their intent to take on the thorny issue of liability in the final legislation, the version released doesn’t contain any details. Liability concerns are a major issue in the CCS field, because of potential leakage of injected CO2 as well as possible trespassing claims if sequestered CO2 seeps into adjacent properties. Without clear guidelines, corporations will have difficulty assessing the risks of engaging in CCS projects.

At a Senate Energy and Natural Resources Committee hearing on April 20, numerous electrical industry representatives and CCS entrepreneurs expressed strong support for the Rockefeller-Voinovich proposal. Ben Yamagata of the Coal Utilization Research Council said, “This proposal in its entirety is the most comprehensive and far-reaching initiative yet proposed to address the variety of issues related to the successful widespread introduction of CCS technology.”

The Obama administration is also examining this issue. On February 3, 2010, the administration announced the formation of an interagency task force dedicated to tackling the challenge of implementing large-scale CCS within 10 years, with a short-term goal of having 5 to 10 commercial demonstration projects running by 2016.

Federal science and technology in brief

  • The EPA announced that it is formally listing Bisphenol A (BPA), a chemical widely used in consumer goods, as a “chemical of concern” and will require additional research on it. EPA joins the Food and Drug Administration (FDA) in studying BPA. FDA announced in January 2010 that it had concerns about BPA and would study the potential health effects and ways to reduce exposure to BPA in food packaging. The listing does not trigger new regulations, but EPA officials said they would consider possible regulatory actions to address health effects, if necessary.
  • On April 28, 2010, the federal government approved the first offshore wind farm in the United States. The Cape Wind project, off the coast of Cape Cod in Massachusetts, will have 130 wind turbines in Nantucket Sound and begin producing energy by the end of 2012. Average expected production will be roughly 170 megawatts, or almost 75% of the demand for Cape Cod and the islands of Martha’s Vineyard and Nantucket. The decade-long debate over the project may not yet be over, because opponents have vowed to challenge the decision in court.
  • The Obama administration finalized rules that impose the first greenhouse gas emissions regulations on vehicles. Crafted jointly by EPA and the Department of Transportation, the rules require automakers to have an average fleetwide fuel economy of 34.1 miles per gallon by 2016, which is four years earlier than required by a 2007 law, and to meet certain greenhouse gas emissions reductions.
  • Under the authority of the Clean Water Act, the EPA on April 1 announced tough new rules on mountaintop removal coal mining in Appalachia. The new regulations would sharply curtail the practice of dumping rubble from mountaintop mining, which can fill valleys and streams, leach toxins into watersheds, compromise water quality, and destroy ecosystems. Under current practices, the new requirements would all but eliminate mountaintop mining.
  • Committees in the House and Senate are reviewing the proposed Federal Research Public Access Act, which would require agencies with research budgets of $100 million or more to provide online access to research manuscripts stemming from federal funding within six months of publication in a peer-reviewed journal. The bill gives individual agencies flexibility in choosing the location of the digital repository for this content, as long as the repositories meet conditions for interoperability and public accessibility and have provisions for long-term archiving.
  • On April 15, the House Committee on Homeland Security reported out favorably a bill to reauthorize the Department of Homeland Security’s Science and Technology Directorate. The bill would increase authorized funding to $1.12 billion in 2011 and $1.16 billion in 2012, while also requiring administrative measures intended to make the department more effective and transparent. The bill authorizes funding of key research areas, especially cybersecurity. The directorate would be asked to develop more inherently secure Internet protocols, mitigate the effects of digital attacks, and support standardized testing of cybersecurity-related technology. The directorate would also be required to commission a National Research Council study on cybersecurity incentives. The study would tackle a number of provocative questions, including whether or not companies should be held liable for digital security failures. Chemical and biological security would continue to be a major focus of the S&T directorate’s research. The bill reasserts the importance of developing assays for biological and chemical agents, planning strategies for first responders, and developing advanced bioforensics techniques.
  • On Earth Day, April 22, the Senate Committee on Commerce, Science, and Transportation Subcommittee on Oceans, Atmosphere, Fisheries, and Coast Guard held a hearing on the economic and environmental effects of ocean acidification, which occurs when CO2 dissolves into sea water. The majority of witnesses agreed that ocean acidification will greatly affect the ocean environment and could cause species extinctions and the collapse of food chains. Industry witnesses stated that damage to coral and shelled organisms will harm the fishing and diving industries. Also on April 22, the National Research Council released a report detailing needed ocean acidification research.
  • The FDA released draft guidelines aimed at enhancing transparency in the work of its advisory committees. Currently, members of advisory committees can seek conflict-of-interest waivers. They must disclose work with a sponsor or competitor of a drug or device under FDA review. Under the new guidelines, they would have to go further, revealing publicly the names of the relevant companies and how much money is involved.
  • The Engineering Education for Innovation Act, intended to strengthen engineering education in K-12 schools, was introduced in the House (H.R. 4709) and Senate (S. 3043). The bill would implement many of the recommendations of the National Academy of Engineering report, Engineering in K-12 Education: Understanding the Status and Improving the Prospects.
  • On March 24, the Senate Commerce, Science, and Transportation Committee approved a far-reaching cybersecurity bill (S.773). The measure, sponsored by Sen. John Rockefeller (D-WV), the committee’s chair, would authorize cybersecurity R&D and workforce development through NIST and the NSF. It would also seek to improve coordination between the government and industry on cybersecurity issues and increase government oversight of companies designated as “critical infrastructure.”
  • The NIH and FDA announced an initiative designed to boost translational research; that is, to speed up the process of turning scientific breakthroughs into improved medical therapies. The agencies will establish a Joint NIH-FDA Leadership Council and will put nearly $7 million over three years toward regulatory science, which would focus on better approaches to evaluating the safety and effectiveness of medical products.
  • The NIH plans to launch a registry of genetic tests next year. NIH will collect and make publicly available information on genetic tests—there are currently more than 1,600—submitted voluntarily by test providers. The registry could, for example, allow doctors, researchers, and patients to locate labs that offer particular genetic tests.

Forum – Summer 2010

Standards for synthetic biology

Jonathan B. Tucker has helped track the emerging field of synthetic biology, both in “Double-Edged DNA: Preventing the Misuse of Gene Synthesis” (Issues, Spring 2010) and in his 2006 article coauthored with Raymond Zilinskas, “The Promise and Perils of Synthetic Biology” (The New Atlantis). Although the current article makes a strong case that a voluntary approach to managing synthetic genomics can be effective, several statements require clarification, consideration, or correction.

As early as 2006, Tucker identified and assessed the two main options for screening synthetic nucleic acid orders to address the potential risk posed by the commercial availability of synthetic genetic sequences of concern: (1) requiring U.S. suppliers, under penalty of law, to implement mandatory screening, or (2) relying on the due diligence of U.S. providers to prevent malicious uses of their products by voluntarily implementing screening for pathogenic sequences. These options have also been considered by the U.S. government, and Tucker revisits the issue in the current article.

Upon extensive consultation with industry and on behalf of a broad interagency working group, on November 27, 2009, the U.S. Department of Health and Human Services (HHS) released the draft report Screening Framework Guidance for Synthetic Double-Stranded DNA Providers. The primary goal in developing guidance for synthetic double-stranded DNA providers is to minimize the risk that unauthorized individuals or individuals with malicious intent will gain access to toxins and organisms that are of concern because they were created or modified with nucleic acid synthesis technologies, while at the same time minimizing any negative effects on the conduct of research and business operations. In the document, sequences of concern are identified as those unique to Select Agents and Toxins (and, for international orders, sequences unique to pathogens and toxins on the Commerce Control List). The draft document offers guidance to providers of synthetic genomic products regarding the screening of orders so that these orders are filled in compliance with current U.S. regulations and to encourage best practices in addressing potential biosecurity concerns. Providers also may implement additional safeguards if they deem necessary (“The U.S. Government acknowledges that there are synthetic nucleic acid sequences from non-Select Agents or Toxins that may pose a biosecurity concern. Synthetic nucleic acid providers may choose to investigate such sequences as part of their best practices”).

To be effective, risk mitigation should be international in scope and will depend in part on the willingness of the research community to perform its own due diligence and to patronize only companies that have screening procedures in place. In developing the draft guidance, the U.S. government took into consideration not only the best practices in the industry but also the need to promote universal screening by making integration easy, at minimal cost and within existing protocols, for U.S. companies, U.S.-based firms operating abroad, and international companies.

Tucker indicates that “one currently unresolved issue is whether gene-synthesis companies should supply synthetic DNA to researchers who lack an institutional affiliation.” The draft guidance released by HHS indicates that the “lack of affiliation with an institution or firm does not automatically indicate that a customer’s order should be denied. In such cases, the U.S. government recommends conducting follow-up screening.”

With regard to recommended sequence-screening methodologies, Tucker notes that critics of the Best Match approach state that Best Match is “weaker than either industry standard because it cannot detect genetic sequences of pathogens and toxins of biosecurity concern that are not on the Select Agent List.” We believe that the quoted language illustrates a common misunderstanding about Best Match: specifically, the conflation of the screening methodology (the how) with what one hopes to identify (the what).

In the Best Match approach, sequence orders are screened against a comprehensive and publicly available database of all known nucleotide sequences, such as GenBank, and the identity of the top match is determined. For example, the Best Match might be to a sequence from a Select Agent, such as Bacillus anthracis, or to a non-Select Agent, such as Escherichia coli. In the draft guidance, it is recommended that providers use this approach to flag sequences unique to Select Agents and Toxins. However, Best Match could be used to identify sequences from any number of lists or curated pathogen databases. Therefore, the statement that Best Match cannot be used to identify non-Select Agent and Toxin sequences is inaccurate. In fact, the Best Match–plus method described by the author could be achieved by simply using the Best Match approach to screen for any sequence unique to a Select Agent or Toxin or to a sequence contained in the curated pathogen database suggested by the author, without using the two-step process proposed. Thus, identical results could be achieved by simply using an (automated) Best Match approach that is set up to identify sequences from Select Agents and sequences in a curated database. When any hits are identified, further follow-up screening by human experts would occur. As such, Best Match could easily be adapted by providers to identify sequences from Select Agents or non-Select Agents, if desired.
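
To make concrete the distinction between the screening method (the how) and the list being screened against (the what), the following short Python sketch illustrates the single-pass logic described above. It is purely illustrative, not the HHS draft guidance or any provider’s actual pipeline: the sequence-comparison step is abstracted away (in practice a provider would obtain the top match from a BLAST-style search of the order against GenBank), and the list contents and function names are hypothetical placeholders.

# Illustrative sketch only: one automated pass checks the Best Match against
# every watch list of interest; any hit is routed to human expert review.
SELECT_AGENT_ORGANISMS = {"Bacillus anthracis"}      # placeholder for sequences unique to Select Agents and Toxins
CURATED_PATHOGEN_DB = {"hypothetical pathogen X"}    # placeholder for non-Select Agent sequences of concern

def screen_order(best_match: str) -> str:
    """Given the organism identified as an order's Best Match, decide whether the
    order clears automatically or is routed to follow-up review by human experts."""
    if best_match in SELECT_AGENT_ORGANISMS or best_match in CURATED_PATHOGEN_DB:
        return "flag for follow-up screening by human experts (best match: " + best_match + ")"
    return "clear for automated fulfillment"

# Example: an Escherichia coli best match clears; a Bacillus anthracis match is flagged.
print(screen_order("Escherichia coli"))
print(screen_order("Bacillus anthracis"))

Adding a curated pathogen database to this check changes only the what, not the how, which is the point of the argument above.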

Tucker generally focuses on the critiques of the Best Match approach and does not mention that key benefits of Best Match were noted at the January 11, 2010, American Association for the Advancement of Science (AAAS) meeting. As noted by one attendee and reiterated in public comments, Best Match “automatically adapts to new sequences entered in GenBank … and is adaptable to completely synthetic genes.”

Tucker also presents the views of an attendee at the AAAS meeting who indicated that endorsing the Best Match approach would downgrade screening standards in the industry. In fact, industry providers such as Integrated DNA Technologies and Blue Heron Biotechnology, which helped lead industry efforts to implement voluntary screening methodologies in the absence of any specific government guidance, indicated (as summarized in Nature, December 4, 2009) that it is inaccurate to assert that “the government guidelines are less rigorous and far-reaching than industry’s already-adopted standards.” Additionally, this view implies that companies that have led the way in voluntarily developing high screening standards are now eager to reduce those standards.

Tucker notes that “the draft U.S. government guidelines are widely considered inadequate.” In fact, a number of media sources have noted broad support for the U.S. government guidance. Specifically, in Microbe, vol. 5, no. 4, it is noted that “Although [AAAS meeting] participants identified several specific concerns and offered HHS several ways to address them, in general members of this industry niche and their counterparts at universities support the HHS set of voluntary recommendations,” based on information from Gerald Epstein of AAAS, who helped to organize the January AAAS workshop. Additionally, in the December 2009 Nature article referenced above, Damon Terrill of Integrated DNA Technologies stated that “The approach recommended in the draft government guidelines in fact produce[s] exactly the information we need to ensure safety and security in the real world of gene synthesis.”

Deliberations within the U.S. government to consider potential changes to the draft guidance are ongoing, but no final decisions have been made. In addition to the comments presented at the AAAS meeting, there are a number of other considerations, including formal public comments submitted in response to the Federal Register Notice and existing federal regulations. Finally, the U.S. government is also considering (as stated by Tucker and Zilinskas in their 2006 article) “how best to guide synthetic biology in a safe and socially useful direction without smothering it in the cradle.”

JESSICA TUCKER

Coordinator of the Interagency Working Group to Develop Federal Screening Guidance

Office of the Assistant Secretary for Preparedness and Response

DANA PERKINS

Member of Sub-Working Group to Implement and Evaluate Guidance

Senior Science Advisor

Office of the Assistant Secretary for Preparedness and Response

U.S. Department of Health and Human Services

Washington, DC


Jonathan Tucker paints a vivid picture of how U.S. officials write biosecurity regulations. But as Tucker explains, that’s only half the story. The real news is how a small group of European companies [the International Association Synthetic Biology (IASB)] wrote their own standards and then pushed 80% of the industry to follow their lead. Security experts have long urged industry to practice self-governance. Here, finally, they got their wish. What have we learned?

The first and most surprising lesson is that private standards can be tough. The IASB’s Code requires human screeners to investigate every gene shipped to customers. Satisfying this requirement is far more costly—but also more effective—than the government’s Best Match algorithm. As Tucker points out, the United States may yet upgrade Best Match to include human screening. If it does, industry will have led the way.

The second lesson is that private standard-setting is remarkably swift and decisive. Here, we should recall that the IASB’s Code had a competitor. In August 2009, two of the industry’s largest companies (DNA 2.0 and Geneart) announced what they called a fast and cheap alternative. Within three months, however, they had dropped the idea and replaced it with a Harmonized Protocol that echoes the IASB Code. Why the quick turnaround? Economics. Gene synthesis is a volume business where companies survive by keeping their customers happy. And big customers hate controversy. When “fast and cheap” became too controversial, even its authors decided to dump it. Compare this with conventional regulation, where debates typically drag on for years.

The third lesson is that private standards have teeth. Tucker is right to ask whether rogue companies could earn a profit by defying industry standards. But what is the market? No sensible company would risk losing its customer base for one or two terrorist orders, particularly when the “terrorists” may actually be “red teams” from a business rival or the media. Moreover, the synthetic gene market, unlike national law and most treaties, is inherently global. This means that the business case for adopting standards in Iowa is more or less identical in Shanghai. The fact that two Chinese companies have already signed the IASB’s Code surely confirms this. It’s also a safe bet that still more Chinese companies would have joined the Code but were confused by the U.S. government’s failure to write a Best Match requirement that includes human screening. This is still another reason to fix Best Match.

The final and most troubling lesson is that the government has so far refused to say which private standards it prefers. But standards wars are rough-and-tumble affairs, and their outcomes are unpredictable. If officials prefer one proposed standard over another (and they usually do), then they have an obligation to say so publicly. Nor is there any legal or ethical obstacle to prevent this. Officials routinely criticize companies for raising prices and overpaying CEOs. Why should inadequate biosecurity practices be any different?

The private standards story is just beginning. Indeed, the IASB has said that it wants to extend the concept by launching a shared threats database [the Virulence Factor Information Repository (VIREP)] and other industry-wide collaborations. Shrewd government engagement can strengthen and shape these ventures.

STEPHEN M. MAURER

University of California

Berkeley, California


Girl power

It’s not as if we have not been investing in girls who call the developing world home. Reams of annual reports attest to well-meaning efforts on the part of many. And that includes investments in health and education, the focus of the pungent clarion call offered by Miriam Temin, Ruth Levine, and Sandy Stonesifer in the spring 2010 Issues. What sets “Start with a Girl: A New Agenda for Global Health” apart is the contextual filter through which girls are viewed. No longer should girls (and especially adolescent girls) be viewed only as children deserving of all we can offer. Girls of today are and must be thought of as the women of tomorrow.

It is the women of the developing world, separate and distinct from their male counterparts, who are broadly and rightly viewed as holding the key to progress. Simply put, without investing in women, and hence in girls, little meaningful change for the better can be anticipated in arenas as diverse as education, economic growth and productivity, climate change, and even national security. It is no accident that the 2009 rollout of an expanded version of “Start with a Girl” at the Washington, DC–based Center for Global Development was headlined by Melanne S. Verveer, the first-ever U.S. Ambassador-at-Large for Global Women’s Issues.

Investing in women is investing in today. Investing in girls is investing in tomorrow. We need to hear that at a time when global attention is directed mostly at the women of the developing world, a focus galvanized by the ever-intractable tragedy of maternal mortality and morbidity affecting low- and middle-income countries. We need to do both: invest in tomorrow as well as today. In this context, “Start with a Girl” is doing us a great service by reminding us of the seemingly obvious but all too often forgotten importance of prevention when girls are still girls.

Investing in girls is an unassailable imperative. However, the global health system has not yet fully grappled with translating advocacy into action. Making the case is one thing. Implementation is quite another. As the authors acknowledge, “the commonly identified roots of the problem are factors that are difficult to change.” Alas, vaccination alone will not reverse deeply ingrained gender norms rooted in a long-standing Gordian knot replete with religious, social, and cultural overtones. Progress is inevitable. It is just that overnight progress is unlikely. Recall the seminal importance of the 20th century to the well-being of women in the developed world. Let’s hope that the 21st century is as kind to the women and girls residing in less fortunate places on this planet of ours.

ELI Y. ADASHI

Professor of Medical Science

Outgoing Dean of Medicine and Biological Sciences and Frank L. Day Professor of Biology

Brown University

Providence, Rhode Island


Miriam Temin, Ruth Levine, and Sandy Stonesifer offer a valuable analysis of the ways in which the systematic marginalization of adolescent girls can have a detrimental impact on society as a whole. As the authors note, this is most strongly experienced in the developing world, where there are some 600 million girls between the ages of 10 and 19 who disproportionately lack adequate access to education, health care, and decent employment.

The authors point out that the international community has historically failed to match its rhetoric with the kinds of actions that would actually improve the lives of adolescent girls worldwide. Sadly, they are right. Adolescent girls’ issues have been largely neglected; reproductive health education, or even something as basic as providing girls-only toilets in schools, has remained at the margins of national economic development strategies, if included at all. For their part, donors have also not been particularly attentive to the needs of adolescent girls.

But this is rapidly changing. The international community is now on the verge of making adolescent girls the center point of development strategies. For example, at the recent Commission on the Status of Women meeting at the United Nations (UN), the heads of six key UN agencies (the International Labor Organization, UNESCO, UNFPA, UNICEF, UNIFEM, and the World Health Organization) unveiled a strategy document that, for the first time, places the interests of women and girls at the heart of the development agenda. The plan was issued under the auspices of the newly formed UN Adolescent Girls Task Force, of which these six agency heads are members. It commits them to support efforts to educate adolescent girls, improve their health (including their reproductive health), keep them free from violence, promote their economic and social development, and better monitor their progress, so policies can be developed “to advance their well-being and realize their human rights.”

Part of the reason why adolescent girl issues are moving to the forefront of the international community’s discussion on development can be attributed to the upcoming 2010 Summit to review progress on achieving the UN Millennium Development Goals. These goals were adopted by 191 world leaders in 2000 as targets for global development; they include reducing child mortality, cutting poverty, stopping the spread of HIV/AIDS, and achieving universal primary education by 2015.

For many of the reasons outlined by Temin, Levine, and Stonesifer, leaders are now realizing that progress on the Millennium Development Goals cannot be sustained unless the needs of girls are addressed directly. The connection between fulfilling our obligations to girls and achieving many of the Millennium Development Goals is becoming firmly entrenched in UN circles.

As always, civil society has an important role to play. Aligned with the UN’s renewed focus on adolescent girls, the UN Foundation has launched a campaign called Girl Up to harness the enthusiasm and entrepreneurial spirit of U.S. girls as advocates for their counterparts in the developing world.

The authors deserve praise for so thoroughly articulating the linkages between the lives of girls and the overall welfare of their communities. The rest of the world, finally, is beginning to catch on.

KATHY BUSKIN CALVIN

Chief Executive Officer

UN Foundation

Washington, DC


Miriam Temin, Ruth Levine, and Sandy Stonesifer issue a call to action for the international community to finally “walk the talk” of its numerous reports on the importance of helping adolescent girls experience healthy transitions to adulthood. Rather than focusing on the individual behavioral interventions that target girls, the authors direct our attention to the underlying social determinants of girls’ poorer health and education outcomes in societies around the world. They highlight the need for attention to the social factors affecting girls’ lives and well-being, such as societal gender norms that make girls vulnerable to infection with HIV and gender-based violence. They also critique the education systems that fail to retain girls through secondary school and thereby hinder the positive impact that girls’ education can have on population health.

The authors provide helpful examples of projects in Latin America and sub-Saharan Africa that have successfully worked to change local gender inequalities between young men and women (in Brazil) or that have enhanced girls’ social networks and successful school participation (in Ethiopia). Although the authors emphasize the importance of including boys and men in the effort to improve girls’ health and well-being, additional social science research is needed on the effectiveness of intervening with younger children (including boys). For example, interventions with younger children could prevent the negative impacts of gender inequalities on older adolescent girls’ and women’s health and economic futures; this may be more effective than trying to undo social norms already formed. Using an interdisciplinary perspective, policymakers, researchers, and practitioners need to recognize the gendered dynamics of girls’ everyday experiences and to implement structural and environmental changes in girls’ school and community environments.

The authors highlight the need to support girls in becoming their own advocates; they emphasize how girls who speak up for their own health and well-being are the most effective agents of social change. The importance of this final point cannot be overstated, and it could be reinforced by identifying successful programs around the world where girls’ voices and recommendations are already being incorporated into interventions and government policy. In January 2005, when UNICEF and the International Water and Sanitation Center brought together Ministers of Education and schoolgirls from across low-income countries to speak about how the onset of menses was disruptive to school participation, the girls’ articulate explanations convinced the predominantly male ministers that structural gender discrimination in schools was hindering the closing of the gender gap in education (Carol Watson, UNICEF, personal communication). As a 12-year-old representative from Nigeria noted at the meeting, “It is no longer a cliché that any long-term development goal without the involvement of children and young people, who are the future-oriented generation, is not a positive step” (Oxford Roundtable, 2005, p. 28). This holds true for meeting all of the girls’ health goals called for in this article.

Promoting the health of young women means supporting the agency of young women and working with young men and women to reduce gender inequalities. “Start with a Girl” is a valuable roadmap for this effort.

MARNI SOMMER

JOHN SANTELLI

Mailman School of Public Health

Columbia University

New York, New York


To innovate, educate

In June 2009, the Pew Research Center conducted a survey of more than 1,000 randomly sampled American adults. Each participant took a brief quiz testing their science knowledge, including one question that asked whether lasers work by focusing sound waves (they do not). More than half of those surveyed got the question wrong, and no single demographic group scored better than 66%.

Many approaches to stoking innovation in the United States focus on R&D funding or other advanced options. Within education, most attention is paid to increasing the number of Americans graduating from college with a STEM-related degree. Yet this ignores a fundamental truth about the innovation agenda: an economy built on innovation is only as strong as the nation’s capacity to implement what R&D yields. Enhancing the ability to understand math and science across the entire spectrum of our society is therefore necessary to drive the nation into an era of global competitiveness.

The truth is that modern manufacturing facilities require workers who understand innovations such as lasers or the complex computers and space-age fuel cells used in today’s automotive industry. Math and science education is crucial to instilling the ability to analyze critically and solve problems, two of the vital skills that employers increasingly demand and decreasingly find in their workers. Although many people point to the growing “green economy” as the way to reduce unemployment, many U.S. companies are forced to outsource production of products such as solar panels because such a large proportion of the nation’s workforce has only a high school diploma or less. Skilled workers are becoming increasingly difficult to find.

In their article “United States: A Strategy for Innovation” (Issues, Spring 2010) Diana Farrell and Thomas Kalil describe the U.S. export control systems as being “rooted in the Cold War era of more than 50 years ago.” If this line were applied to public education, it would vastly understate just how archaic the system has become. We cannot expect to foster a love of math and science in our children when our classrooms continue to exclude higher levels of learning from the curriculum. We cannot expect students to pursue STEM-skilled careers—at any level—when we consider career technical education and work-based learning opportunities to be a lesser educational experience.

The Carnegie-IAS Commission on Mathematics and Science Education recently outlined many necessary changes to the education system in its report The Opportunity Equation. These foundational reforms include establishing clear, common math and science standards; improving math and science teaching; and redesigning schools to deliver excellent and equitable math and science education. In support of these reforms, the U.S. Chamber of Commerce’s Institute for a Competitive Workforce is mobilizing its grassroots network to support policies that develop a STEM-capable workforce.

Ultimately, we must foster a culture that makes math and science desirable. That may eventually be achieved through the nation’s classrooms, but the work begins with modern public policy that recognizes education as the cornerstone of innovation and economic development.

ARTHUR J. ROTHKOPF

Senior Vice President

U.S. Chamber of Commerce

Washington, DC


Bringing U.S. Roads into the 21st Century

Information technology (IT) has transformed many industries, from education to health care to government, and is now in the early stages of transforming countries’ transportation systems. Although many think that improving a country’s transportation network means solely building new roads or repairing aging infrastructure, the future of transportation lies not only in concrete and steel, but in the implementation of IT. IT enables assets throughout the transportation system—vehicles, buses, roads, traffic lights, message signs, and so forth—to become intelligent by embedding them with microchips or sensors and empowering them to communicate with each other through wireless technologies.

Doing so empowers actors in the transportation system—from commuters, to highway and transit network operators, to the actual devices themselves—with information (that is, intelligence) to make better-informed decisions, whether it is choosing which route to take, when to travel, whether to take a different type of transportation (for example, mass transit instead of driving), how to use traffic signals to allow the most efficient flow of vehicles possible, where to build new roads, or how to hold providers of transportation services accountable for results. This information can be used to maximize the performance of the transportation network and to move toward performance-based funding of transportation systems.

The power of intelligent transportation systems (ITS) to revolutionize transportation makes them an important factor contributing to countries’ economic competitiveness. The many advanced countries that have taken moderate to significant steps to deploy ITS include Australia, France, Germany, Japan, the Netherlands, New Zealand, Sweden, Singapore, South Korea, the United Kingdom, and the United States, with China rapidly catching up. Several of these countries have particular strengths in ITS, notably the real-time provision of traffic information in Japan and South Korea; congestion pricing in Sweden, the United Kingdom, and Singapore; vehicle-miles traveled systems in the Netherlands and Germany; electronic toll collection in Japan, Australia, and South Korea; and ITS in public transit in South Korea, Singapore, and France. Overall, Japan, South Korea, and Singapore stand out as world leaders in ITS deployment.

Although the United States has pockets of strength in ITS, at a national level it clearly lags the world leaders, just as it does in health IT, contactless mobile payments, digital signatures, and other emerging IT application areas. In all of these areas, the United States trails for a variety of reasons, including that it has lacked a comprehensive strategy, has underinvested, and has assumed that the private sector could develop and deploy these technologies by itself. In contrast, leading ITS countries have benefitted from strong government leadership, greater funding, and an ability to successfully forge public-private partnerships.

If the United States is to achieve even a minimal ITS system, the federal government will need to assume a far greater leadership role in not just ITS R&D, but also ITS deployment. It is time for the United States to view ITS as the 21st century digital equivalent of the interstate highway system of the 1950s and 1960s, in which the federal government took the lead in setting a vision, developing standards, laying out routes, and funding construction.

A wide range of benefits

ITS encompass a wide variety of technologies and applications that can be grouped into five categories:

  • advanced traveler information systems, which provide drivers with real-time travel and traffic information, such as transit routes and schedules, navigation directions, and information about delays due to congestion, accidents, weather, or road repairs;
  • advanced public transportation systems, which apply ITS to public transit, so buses or trains can report their position to inform passengers of their status in real time;
  • advanced transportation management systems, which include traffic control devices, such as traffic signals, ramp meters, dynamic (variable) message signs, and traffic operations centers;
  • ITS-enabled pricing systems, which help finance transportation through means such as electronic toll collection, congestion pricing, high-occupancy toll lanes, and vehicle miles traveled usage-based fee systems; and
  • cooperative vehicle-highway systems, such as vehicle-to-vehicle or vehicle-to-infrastructure integration, which enable vehicles to connect and communicate with infrastructure, such as roadside sensors or traffic lights, and with other vehicles.

ITS deliver five classes of benefits by increasing driver and pedestrian safety; improving the operational performance of the transportation network, particularly by reducing congestion; enhancing personal mobility and convenience; delivering environmental benefits; and boosting productivity and expanding economic and employment growth.

ITS can help reduce the toll of the world’s roads, where 1.2 million people die each year; in the United States alone there were 5.8 million crashes and 37,621 fatalities in 2008. Widespread use of electronic toll collection could largely eliminate the 30% of highway crashes that occur in the vicinity of toll collection booths. Highway ramp metering has been shown to reduce total crashes by at least 15%. If IntelliDrive, the U.S. vision for a nationwide cooperative vehicle-highway system, were deployed, it could address 82% of vehicle crash scenarios involving unimpaired drivers. Deploying ITS could thus help mitigate the $230-billion annual economic impact, equivalent to nearly 2.3% of U.S. gross domestic product, of traffic accidents and associated injuries or loss of life.

ITS improve the performance of transportation networks by maximizing the capacity of existing infrastructure, reducing the need to build additional highway capacity. For example, applying real-time traffic data to U.S. traffic signal lights can improve traffic flow significantly, reducing stops by as much as 40%, reducing travel time by 25%, cutting gas consumption by 10%, and cutting emissions by 22%. ITS can contribute significantly to reducing congestion, which in the United States costs commuters 4.2 billion hours (a full work week per driver) and 2.8 billion gallons of fuel each year and costs the U.S. economy up to $200 billion annually. One study found that traffic jams could be reduced by as much as 20% by 2011 in areas that use ITS. Other studies have found that region-wide congestion pricing could reduce peak travel volume by 8 to 20%, and that if congestion pricing were implemented on the nation’s interstates and other freeways, vehicle miles traveled would be reduced by 11 to 19%.

By improving the performance of the transportation network, ITS deliver environmental benefits, enhance driver mobility and convenience, and even boost productivity and economic growth. For Japan, ITS have been crucial as it strives to meet its goal of reducing CO2 emissions by 2010 to 31 million tons below 2001 levels, with 9 million tons of reduction coming from more fuel efficient vehicles, 11 million tons from improved traffic flow, and 11 million tons from more effective use of vehicles, the latter two a direct benefit of the country’s ITS investments. Not only does reduced congestion enhance driver mobility, it ensures that businesses can rely on the transportation network to support just-in-time supply chains. For many countries, ITS is a rapidly expanding, export-led growth sector that contributes to national competitiveness and employment growth. The U.S. Department of Transportation (DOT) has estimated that the ITS field could create almost 600,000 new jobs over the next 20 years, and a study of ITS in the United Kingdom found that a $7.2-billion investment would create or retain 188,500 jobs for one year.

ITS deliver superior benefit-cost returns when compared to traditional investments in highway capacity. Overall, the benefit-cost ratio of systems-operations measures enabled by ITS has been estimated at about 9 to 1, far above the addition of conventional highway capacity, which has a benefit-cost ratio of 2.7 to 1. The benefits of traffic signal optimization alone outweigh costs by 38 to 1. A 2005 study of a model ITS deployment in Tucson, Arizona, consisting of 35 technologies that would cost $72 million to implement, estimated that the average annual benefits to mobility, the environment, and safety totaled $455 million annually, a 6.3 to 1 benefit-cost ratio. According to the U.S. Government Accountability Office (GAO), the present value cost of establishing and operating a national real-time traffic information system in the United States would be $1.2 billion but would deliver present value benefits of $30.2 billion, a 25 to 1 benefit-cost ratio.
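
As a quick check on the arithmetic, the ratios cited above follow directly from the underlying figures. The short Python fragment below simply recomputes them from the numbers given in the text (values in millions of dollars; the Tucson entry compares estimated annual benefits with the one-time implementation cost, as the study is summarized above).

# Benefit-cost ratios recomputed from the figures cited in the text (millions of dollars).
examples = {
    "Tucson model ITS deployment": (455, 72),
    "National real-time traffic information system (GAO, present value)": (30_200, 1_200),
}
for name, (benefit, cost) in examples.items():
    print(f"{name}: {benefit / cost:.1f} to 1")
# Prints roughly 6.3 to 1 and 25.2 to 1, consistent with the ratios cited above.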

Challenges to deploying ITS

Despite their technical feasibility and significant benefit-cost returns, many nations underinvest in ITS because developing and deploying them involves a significant number of challenges, including system interdependency, network effects, and scale, as well as funding, political, and institutional hurdles. Whereas some ITS, such as ramp meters or adaptive traffic signals, can be effectively deployed locally, the vast majority of ITS applications, and certainly the ones positioned to deliver the most extensive benefits to the transportation network, must operate at scale, often at a national level, and must involve adoption by the overall system and by individual users simultaneously, raising complex system coordination challenges. For example, systems like IntelliDrive in the United States must work on a national basis to be effective. It does a driver little good to purchase an IntelliDrive-equipped vehicle in one state if it doesn’t work in another. Likewise, drivers are not likely to demand on-board telematics units capable of processing real-time traffic information if that information is unavailable from government or private sector providers. Nor does it make sense for states to independently develop a vehicle miles traveled usage-fee system: in addition to requiring a device on each vehicle, a VMT system requires satellite and back-end payment infrastructure that states should not have to replicate independently.

Apart from general underfunding, ITS projects often must compete for funding with conventional transportation projects that may be more immediately pressing but do not deliver as great a long-term return. Unfortunately, as one GAO study found, in many cases, “information on ITS benefits does not have a decisive impact on the final investment decisions made by state and local officials.” And although many state transportation departments have in-depth expertise in conventional transportation technology, such as pavement and bridges, they often lack knowledge of or interest in ITS; centralizing that knowledge in one location at the federal level may therefore be more effective.

Although deploying ITS raises a number of challenges, none of them are insurmountable, and a number of countries have overcome them. Japan, South Korea, and Singapore, in particular, stand out.

Japan leads the world in ITS based on the importance ascribed to ITS at the highest levels of government, the number of citizens benefitting from use of an impressive range of deployed ITS applications, and the maturity of those applications. Japan clearly leads the world in traveler information systems. Its Vehicle Information and Communication Systems (VICS) program delivers up-to-the-minute, in-vehicle traffic information to drivers through an on-board telematics unit, and has been available nationwide since 2003. Japan’s Smartway system is capable of marrying knowledge of a vehicle’s precise position with location-specific, real-time traffic information, enabling it, for example, to warn a driver via voice instruction, “You are coming up to a blind curve with congestion backed up behind it, slow down immediately.” Smartway also provides visual information about road conditions ahead, via live camera images of tunnels, bridges, or other frequently congested areas. Impressively, Smartway evolved extremely fast, from a concept in 2004, to limited deployment in 2007, to national deployment in 2010. At least 34 million vehicles have access to real-time, in-vehicle traffic information in Japan.

Japanese citizens can use the Internet or their mobile phones to access comprehensive real-time traffic information about almost all highways in the country through an integrated road traffic information provision system. The Web site features maps that display a broad range of traffic information, including warnings about traffic restrictions, congestion data, weather conditions on roads, and repair activity. Japan has also focused on providing real-time traffic information during natural disasters and has designed mechanisms to automatically feed data about such events into Smartway and VICS.

Japan is also a world leader in electronic toll collection, with 25 million vehicles (about 68% of all vehicles regularly using Japan’s toll expressways) equipped with on-board toll-collection units. Japan operates a single national standard for electronic toll collection, thus ensuring nationwide system compatibility. In short, drivers can go anywhere in the country with only one tag, unlike in the United States, where multiple tags are needed. In designing its electronic toll collection technical architecture, Japan adopted an active method for two-way communication based on the 5.8GHz-band system, which enables roadside and on-board units to interact with each other, instead of a passive method, in which the electronic tag on the vehicle reacts only when “pinged” with a signal from a roadside toll collector device. This design decision has been crucial in expanding electronic toll collection so that private companies, such as parking garages or gas stations, can offer electronic payment options. Japan also regularly uses variable pricing, easy to implement electronically, so that prices can be changed to reflect traffic conditions and thus manage congestion.

South Korea’s strengths in real-time traffic information provision, advanced public transportation systems, and electronic toll collection make it a world leader in ITS. Busan, South Korea, will host the 2010 ITS World Congress, which will showcase Busan as one of 29 South Korean cities with comprehensively deployed ITS infrastructure.

South Korea’s Expressway Traffic Management System collects real-time traffic information and transmits it to the country’s National Transport Information Center (NTIC) via a high-speed optical telecommunication network deployed specifically to support the country’s ITS infrastructure. Collected and processed traffic information is provided to South Korean citizens free of charge through various channels, including variable messaging signs, the Internet, and broadcasting. The NTIC Web site offers an interactive graphic map that citizens can access to see a consolidated view of traffic status on the country’s roads. Real-time traffic information is available not only on expressways but also on national and urban district roads. Almost one-third of all vehicles in South Korea use onboard vehicle navigation systems.

Public transportation information systems, particularly for buses, are widely deployed in South Korea. Seoul alone has 9,300 on-bus units equipped with wireless modems and GPS position detectors. About 300 bus stops communicate with Seoul’s central traffic operations management center via wireless communications to provide an integrated, up-to-the-second view of Seoul’s bus transportation network. The service includes bus arrival time, current bus location, and system statistics. Bus stop terminals are equipped with liquid crystal display message screens to alert riders to bus status and schedules. South Koreans regularly use the location-based tracking feature in their GPS-enabled phones to access a Web site that presents a list of available public transportation options; the system recognizes where the passenger is located and provides walking directions to the nearest option.

South Korea’s Hi-Pass electronic toll collection system covers 260 toll plazas and more than 3,200 kilometers of highway. Five million South Korean vehicles use Hi-Pass, which has a highway utilization rate of more than 30%. Hi-Pass covered 50% of highways in 2009, will cover 70% by 2013, and will be available nationwide thereafter. South Koreans can also use their Hi-Pass cards to pay for parking and to buy gas and other products.

Singapore collects real-time traffic information from a fleet of 5,000 taxis acting as vehicle probes. Data including speed and location are fed back to an operations management center, where an accurate picture of traffic flow and road congestion is generated. Singapore disseminates traffic information via its Expressway Monitoring and Advisory System, composed in part of variable message signs placed strategically along expressways. In April 2008, Singapore launched a nationwide system of roadside variable messaging signs that alert drivers to the availability of parking spaces at various locations.

Singapore, which has had some form of congestion pricing in place in its city center since 1975, is a world leader in electronic road pricing. In 1998, Singapore implemented a fully automated system with an in-car unit that accepts a prepaid smartcard called the Cashcard. The system has since been expanded beyond Singapore’s downtown restricted zone to its highways and arterial roads. Singapore’s scheme uses traffic speeds as a proxy for congestion. Rates are raised or lowered to achieve traffic optimization along a speed-flow curve, 45 to 65 kmph for expressways and 20 to 30 kmph for arterial roads. In effect, the system uses market signals to manage supply and demand on Singapore’s roads. Singapore is currently evaluating moving to a next generation system that would use satellite-based GPS technology to make distance-based congestion charging possible. The government estimates that the economic benefit of time savings due to shorter delays on expressways, largely achieved through use of congestion charging, amounts to more than $40 million annually.
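
The rate-setting idea described above can be sketched in a few lines. The Python fragment below is only an illustration of the general principle (raise the charge when measured speeds fall below the target band, lower it when they rise above it); the S$0.50 step, the function names, and the review logic are invented for the example and do not describe Singapore’s actual procedure.

# Illustrative sketch: adjust the congestion charge to keep measured speeds
# within the target band cited above for expressways (45 to 65 kmph).
EXPRESSWAY_BAND = (45.0, 65.0)   # target speed range, kmph
RATE_STEP = 0.50                 # invented adjustment increment per review period

def adjust_rate(current_rate: float, avg_speed_kmph: float,
                band: tuple[float, float] = EXPRESSWAY_BAND) -> float:
    low, high = band
    if avg_speed_kmph < low:                     # congested: raise the charge to damp demand
        return current_rate + RATE_STEP
    if avg_speed_kmph > high:                    # free-flowing: lower the charge
        return max(0.0, current_rate - RATE_STEP)
    return current_rate                          # within the optimal band: leave unchanged

# Example review: 40 kmph triggers an increase; 70 kmph triggers a decrease.
print(adjust_rate(2.00, 40.0))   # 2.5
print(adjust_rate(2.00, 70.0))   # 1.5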

Singapore made public transportation a more attractive option for commuters by installing real-time bus arrival panels in January 2008 at almost all bus stops throughout the country. Another service, launched in July 2008, advises commuters on optimal public transport travel routes from origin to destination.

The United States lags behind

The United States lags world leaders in aggregate ITS deployment, particularly in the provision of real-time traffic information by transportation agencies, progress on cooperative vehicle-highway integration, adoption of computerized traffic signals, and maximizing the effectiveness of already fielded ITS systems. Although the United States certainly has pockets of strength in ITS in particular regions or applications (including use of variable rate highway tolling, electronic toll collection, certain advanced traffic management systems such as ramp metering, and an active private sector market in telematics and travel information provision), overall the implementation of ITS in the United States varies immensely by state and region, thus tending to be sporadic, isolated, and not connected into a nationally integrated system.

Regarding the collection of real-time traffic information, a 2009 GAO report found that the technologies used by state and local agencies to do so covered only 39% of the combined freeway miles in 64 metropolitan areas providing information. This is a significant gap, given that urban freeways account for the majority of the nation’s traffic, congestion, and travel time variability. The picture was not much better for the dissemination of real-time travel information. In 2007, according to the GAO, 36% of the 94 data-providing U.S. metropolitan areas provided real-time highway travel time, and 32% provided travel speed information. For arterial roadways, only 16% of the 102 data-providing metropolitan areas disseminated real-time travel speed information and only 19% distributed travel time data. The United States does better at distributing incident information in real time, with 87% of metropolitan areas providing real-time information about incidents on freeways and 68% sharing incident information on arterial roads.

The United States has room for improvement in maximizing the value of already deployed ITS systems and taking advantage of readily available and implementable ITS applications, such as adaptive traffic signal lights. For example, in 2007, the National Transportation Operations Coalition, an alliance of national associations, practitioners, and private sector groups representing the interests of the transportation industry at the state, regional, and local levels, gave the United States a “D” grade because the vast majority of signalized intersections were using static, outdated timing plans based on data collected years or decades before. The San Francisco Bay area had 4,700 traffic sensing detectors along its 2,800 freeway miles in 2003, with 29% of the roads incorporating sensors spaced every mile, and 40% with sensors spaced every two miles. However, about 45% of the devices were out of service, significantly reducing the system’s ability to produce reliable traffic data. The GAO’s 2009 report reiterated that many of the problems pertaining to inadequate funding for the operation of already fielded ITS have not improved appreciably since 2003.

For the most part, these problems have resulted from continued inadequate funding for ITS and the lack of the right organizational system to drive ITS in the United States, especially the weak federal role. The U.S. ITS effort focuses on research, is funded at $110 million annually, and operates out of the DOT’s Research and Innovative Technology Administration’s (RITA’s) ITS Joint Program Office. To reorganize and reanimate the U.S. ITS effort, on January 8, 2010, RITA unveiled a five-year ITS Strategic Research Plan, 2010-2014, that will assess the feasibility, viability, and value of deploying IntelliDrive. Although the strategic plan represents an important step forward, the United States needs to make a fundamental transition from a mostly research-oriented approach to a focus on deployment and to accelerate the speed at which ITS technologies reach the traveling public. But the pace of progress is slow. Whereas Japan took just six years to move from conceptualization to deployment of its cooperative vehicle-highway system, Smartway, it will take the United States five years simply to research and make determinations about the feasibility and value of IntelliDrive.

The United States has every bit the technological capability that Japan, South Korea, and Singapore possess in ITS, and actually held an early lead in ITS in the 1990s, with the advent of GPS technology and first-generation telematics systems. In fact, many ITS technologies were initially developed in the United States but found quicker and greater deployment elsewhere. But institutional, organizational, policy, and political hurdles in the United States have allowed other countries to take the lead.

The need for a national commitment

Policy factors are centrally important in explaining international ITS leadership. Overall, the countries leading the world, first, demonstrate a national-level commitment and vision that gives government a strong leadership role and, second, make substantial investments in ITS deployment.

A major reason why Japan, South Korea, and Singapore lead is because they view ITS as one of a suite of IT applications or infrastructures that will transform their societies and drive economic growth. As such, they have focused on establishing policies for digital transformation generally and ITS transformation specifically, and made both national priorities. As an ABI Research report noted, “Japan and South Korea lead the world in ITS, and national government agendas are among the most significant drivers for the development of ITS [there].” In contrast, there has been no national vision for IT transformation in the United States. To the extent that it gets attention and funding, ITS has been seen simply as an adjunct tool that might make transportation marginally better.

Japan’s 2001 eJapan Strategy, which sought to transform the country into one of the most advanced IT nations within five years, explicitly recognized the importance of public transport systems that rely on advanced information communications technologies. In June 2007, Japan announced a long-term strategic vision, called Innovation 25, which set the following goal: “By 2025, ITS will have been constructed that integrate vehicles, pedestrians, roads, and communities; and that have made traffic smoother, eliminated traffic congestion, and almost entirely eliminated all traffic accident fatalities. Smoother traffic will mean lower CO2 emissions and logistics costs.” Japan wants to reduce traffic fatalities below 5,000 by 2012 and eliminate them altogether by 2025.

South Korea’s government has also acknowledged the power of IT to drive economic growth and improve citizens’ quality of life, and has recognized the impact IT can have in improving the country’s transportation system. In 2004, South Korea announced a strategy that identified key IT areas, including ITS, where the country would seek world leadership. Beyond its strategic IT plan, South Korea also created a national ITS master plan. Likewise, Singapore has both a national IT strategy and an ITS master plan.

The leading countries invest heavily in ITS. South Korea has pledged a $3.2-billion investment, an average of about $230 million annually, from 2007 to 2020. Japan invested approximately $645 million in ITS from April 2007 to March 2008 and $664 million in ITS from April 2008 to March 2009. Aggregate ITS spending at all government levels in the United States in 2006 was approximately $1 billion, most of it spent at the state level. As a percentage of GDP, South Korea and Japan each invest more than twice as much in ITS as does the United States.

Accelerating deployment

Compared with countries that recognize the key role of government in guiding IT-enabled economic transformation, the United States has largely believed, incorrectly, that this is something the private sector can do on its own. To the extent that the United States has developed an ITS plan, it is not connected to a national IT strategy, is relatively late in coming, is cautious in its goals, and is not yet a plan for national ITS deployment.

Since the interstate highway system was for the most part completed, the surface transportation policy community has collectively struggled with defining the appropriate role of the federal government in the nation’s surface transportation system. In the 21st century digital economy, one key role for the federal government should be to take responsibility for the development and implementation of a world-class ITS system across the United States. Just as building the interstate system did not mean an abandonment of the state role, neither does this new role. But just as the building of the interstate system required strong and sustained federal leadership, so too does the transformation of our nation’s surface transportation system through ITS.

Policy action is urgently needed if the United States is to reposition itself as one of the world’s ITS leaders. To get there, Congress should take these steps:

  • Significantly increase funding for ITS at the federal level, by $2.5 to $3 billion annually, including funding for large-scale demonstration projects, deployment, and the operations and maintenance of already deployed ITS systems. Specifically, the next surface transportation authorization bill should include at least $1.5 billion annually in funding for the deployment of large-scale ITS demonstration projects. The authorization should also provide dedicated, performance-based funding of $1 billion for states to implement current ITS systems and provide for operations, maintenance, and training for already deployed ITS systems at the state and regional levels.

    The call for at least $1 billion in federal funding to support operations of already deployed or readily deployable ITS comes closer to matching the amount ITS leaders such as Japan spend on a per-capita basis and would go a long way towards alleviating the problems documented here of citizens not realizing the full benefits of already deployed ITS due to insufficient funding. Moreover, the recommendation is in line with recommendations from both the American Association of State Highway and Transportation Officials and ITS-America (a public-private organization promoting ITS) to spend at least $1 billion annually to support deployment of ITS technologies and intermodal integration.

  • Expand the mandate of the DOT’s ITS office to move beyond R&D to include deployment.
  • Tie federal surface transportation funding to states’ actual improvements in transportation system performance. Currently, the funding allocations for the major transportation programs are largely based on formulas reflecting factors such as state lane miles and vehicle miles traveled. As a result, although there is substantial process-based accountability for how federal funds are used, woefully little attention is paid to results. Performance measurement, evaluation, and benchmarking are notably absent from surface transportation funding. Transportation agencies at all levels of government face virtually no accountability for results. Holding states accountable for real results will allow federal and state transportation funds to go farther, achieving better results for the same amount of funding. It will also give states stronger incentives to adopt innovative approaches to managing highways, including implementing ITS.
  • Charge the DOT with developing, by 2014, a national real-time traffic (traveler) information system, particularly in the top 100 metropolitan areas, a vision that includes the significant use of probe vehicles. Moving quickly to ensure the availability of real-time traffic information to U.S. motorists is warranted because the technology is readily available and would benefit large numbers of drivers. Moreover, as the GAO reported, the $1.2-billion expenditure would yield total cost savings of $30.2 billion through benefits that include enhanced driver mobility, reduced environmental impact, and increased safety.
  • Authorize a comprehensive R&D agenda, including investments in basic research, technology development, and pilot programs, to begin moving the United States to a mileage-based user fee system by 2020. As documented in the National Surface Transportation Infrastructure Financing Commission’s 2009 report Paying Our Way, the current federal transportation funding structure, which relies primarily on taxes imposed on petroleum-derived vehicle fuels, is not sustainable over the long term. Moving toward a funding system based more directly on miles driven (and potentially other factors, such as time of day, type of road, and vehicle weight and fuel economy) rather than indirectly on fuel consumed is “the most viable approach” to ensuring the long-term sustainability and solvency of the Highway Trust Fund.
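
To make that last recommendation more concrete, here is a minimal sketch of how a mileage-based user fee might be computed for a single trip. The rate structure and all of the numbers are hypothetical illustrations; the commission's report does not prescribe specific rates, and any real system would involve many more design choices.

```python
# Hypothetical mileage-based user fee calculation. All rates and multipliers
# below are invented for illustration; they are not drawn from Paying Our Way.

def mileage_fee(miles, base_rate_per_mile=0.02, peak=False,
                road_type="highway", vehicle_weight_lbs=4000):
    """Return a per-trip charge based on miles driven and trip attributes."""
    rate = base_rate_per_mile
    if peak:                       # time-of-day pricing
        rate *= 1.5
    if road_type == "local":       # road-type adjustment
        rate *= 1.2
    if vehicle_weight_lbs > 6000:  # heavier vehicles impose more road wear
        rate *= 1.4
    return miles * rate

# Example: a 30-mile peak-period highway commute in a 4,000-pound car.
print(f"Trip charge: ${mileage_fee(30, peak=True):.2f}")  # Trip charge: $0.90
```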

Personal Health Records: Why Good Ideas Sometimes Languish

At first blush it would seem that maintaining a personal health record (PHR) has many merits. Almost everyone would want to have health information about themselves readily available in a digital format and completely under their control. They could then make it accessible to anyone else they choose; for example, emergency health personnel or a new specialist physician. Yet only a very small minority of Americans have a PHR, which should not be confused with the electronic health records (EHRs) maintained and controlled by doctors and hospitals. A number of explanations are offered for this surprising finding, but the most compelling one comes from Sigmund Freud. In public policy as in personal psychology, unconscious or subterranean forces exert a powerful but underappreciated influence. Hidden resistance to PHRs may be the most powerful explanation for why they have made so little progress in spite of their manifest virtues.

The basic definition of a PHR, as put forth by a 2008 U.S. Department of Health and Human Services (HHS) Office of the National Coordinator for Health Information Technology report, is “An electronic record of health-related information on an individual that conforms to nationally recognized interoperability standards and that can be drawn from multiple sources while being managed, shared, and controlled by the individual.” Furthermore, PHRs can be designed in ways that would allow individuals to decide which parts of the information included can be accessed by others, including various medical personnel.

A PHR may include information about an individual’s conditions and ailments, medication and dosages, test results, immunization history, and allergies. As information is added over time, a PHR can also serve as an evolving medical record of treatments provided and their effectiveness.

According to HHS and medical researchers, PHRs improve the care provided by health care personnel. Given that people are treated by a variety of specialists in addition to their primary care doctor, often travel, and move with relative frequency, a PHR gives new health care workers a much fuller and more reliable record of an individual’s health and medical history than they get when they have to rely on the patient’s memory or wait for records to be collected from previous sources of care. The health care system also benefits because a health and medical history need not be prepared repeatedly. This history can be prepared once and incorporated into the PHR. From then on, it can be made available to any subsequent health care professional the individual sees.

PHRs also increase patient safety. Because medical information is cataloged and easily accessible, PHRs “… increase patient safety through exposing diagnostic or drug errors, recording non-prescribed medicines or treatments, or increasing the accessibility of test results or drug alerts,” according to an analysis published in the British Medical Journal (BMJ).

In addition, PHRs encourage patients to become more involved with their own care. For instance, BMJ argues that the use of PHRs leads to “improvements in … confidence in self care [and] compliance in chronic disease.” Furthermore, “patients with long-term conditions, who have the most need to track their illness and treatments, and patients experiencing episodic periods of care or treatment that generate new needs for information or communication” stand to benefit. Moreover, in emergencies when the patient is unconscious, an authorized family member can provide access to the individual’s PHR, thus providing health care personnel with information they may not have otherwise been able to obtain in short order.

Experts expect that PHRs can advance communication and trust between health care personnel and patients. Paul C. Tang, vice president and chief medical information officer at the Palo Alto Medical Foundation, and Thomas H. Lee, the network president for Partners HealthCare System, Boston, and an associate editor of the New England Journal of Medicine, believe that “the more access provided, the stronger the partnership that will be cultivated between patients and clinicians.” Assuming that a patient’s physician has access to the patient’s PHR, the physician would be able to track a disease collaboratively with the patient, thus potentially improving communication between the two and making it easier for physicians and patients to “create a shared patient record and formulate a shared treatment plan.”

PHRs reduce health care costs. They make test results more accessible, which reduces the number of duplicate tests needed, leading to a decrease in costs. Studies show that at least 10% of tests ordered by medical professionals are redundant. In addition, by aiding patients with chronic conditions, PHRs lower chronic disease management costs. Further areas of potential cost reduction include medication costs and wellness program costs.

More than 90% of Americans believe that patients should have access to electronic medical records maintained by their own physicians, according to a 2007 poll. (Note that the survey deals with EHRs and not necessarily PHRs.) Furthermore, over 60% of the public would use at least one feature of an online medical record if available, according to a 2003 Markle Foundation survey. The same survey found that 71% believed that access to online medical records would help clarify their doctors’ instructions, 65% felt that online records would “give them a greater sense of empowerment regarding their health,” and 65% felt that online records would help prevent mistakes.

All in all, given the rather obvious benefits and wide public support, one would expect PHRs to be widely introduced and used.

Disappointing progress

Despite the many advantages of PHRs and solid public support, most patients do not have one. In fact, PHRs have not even been discussed extensively by policymakers. Over the years, discussion in the medical community and at the state and federal level has focused on EHRs, which are used and controlled by health care personnel, not by patients.

True, PHRs did receive some attention. A few early PHR-type systems focused on providing services to a small, mobile group. For example, in 1961 the U.S. Public Health Service produced a printed form that migrant workers could carry with them that contained their health information; each subsequent physician could add information to the form. In 2003, MiVIA, a similar service based online, was launched to cater to the needs of migrant workers in Sonoma Valley, California. This program has since expanded to other such populations. A PHR-type system is currently being offered by the U.S. Department of Veterans Affairs.

Some private PHR-like systems were created in the 1990s. However, many of these systems were difficult to use, charged fees, and disappeared as many dot-com companies collapsed. A study in the International Journal of Medical Informatics identified 27 limited PHR-type systems in 2000; only 7 of those systems were still available in 2003. Furthermore, the initial 27 identified were “beta releases” that were in early stages of development and had not achieved widespread use. Most would not qualify as PHR systems by the definition provided by HHS in 2008.

Since the mid-2000s, the interest in providing private PHR systems has increased, especially with the introduction of Google Health, Microsoft HealthVault, and other Web-based services. However, even these PHR systems, although more advanced than the Web-based systems of the 1990s, fall short of full-fledged PHRs, especially because of a lack of interoperability. As of May 2009, approximately 7.3 million adults in the United States used an online PHR, according to Marc Donner, director of engineering for Google Health, far fewer than those who have expressed interest in using one.

Manifest barriers

Observers have identified a number of apparent reasons for the very slow progress in the introduction and use of PHRs.

Missing building blocks. For PHRs to be developed beyond some very primitive forms, the information lodged in the offices of physicians, hospitals, and other health care personnel must be in a digital form. However, as of 2008, only 1.5% of hospitals had a comprehensive electronic records system in all major units, and only 7.6% had such a system in at least one clinical unit, according to a New England Journal of Medicine study. In addition, only 17% of physicians use basic or comprehensive electronic records, despite the great merits of adopting this technology (for purposes other than PHRs) and considerable public support.

Not reimbursed. In the same study, hospitals cited “inadequate capital for purchase” and “concerns about maintenance costs” as the two most common barriers to the adoption of electronic records (at 74% and 44%, respectively). One-third also said they were unclear about the return on the investment. According to Ashish Jha, associate professor of health policy and management at the Harvard School of Public Health, it can cost a single hospital, depending on its size, anywhere from $20 million to $200 million to implement an electronic record system over several years. For individual physicians, the cost is in the tens of thousands of dollars.

To help offset these costs, the Obama administration has made $45 billion available to doctors and hospitals as part of the stimulus plan. However, the stimulus bill requires that hospitals and doctors pay for the systems and then be reimbursed if they meet specific usage standards. Furthermore, the $45 billion may well not be sufficient; estimates of the total cost range from $75 billion to $150 billion. As these lines are written, it is not yet possible to establish the extent to which the $45 billion has actually been distributed and the uses to which it has been put. In any case, it is clear that most health care facilities have not yet computerized their records.

Fear of productivity loss. More than a third of hospitals studied found resistance to the adoption of electronic records on the part of physicians. This resistance is said to reflect concerns about reduced clinical productivity. A 2007 Journal of the American Medical Informatics Association study of electronic record adoption in Massachusetts found that 81% of respondents identified loss of productivity as a barrier to the adoption or expanded use of electronic records. A 2005 BMJ study of the user perception of an electronic record system in Kaiser Permanente Hawaii found that 17 of the 26 individuals interviewed reported reduced clinician productivity. Reasons cited varied from poor system design to a “lack of clinical capacity to absorb changes during implementation.” Fourteen clinicians reported that the additional time burden created by the system remained even after the learning period.

Missing interoperability. The electronic records that exist at present, for the most part, lack interoperability. By and large, the electronic records systems currently operational in doctors’ offices and hospitals cannot exchange detailed information with each other. Private PHR-like systems such as Google Health and Microsoft HealthVault are directly interoperable with only a few clinics, hospitals, insurers, and pharmacies. Although the proposed federal standards for meaningful use of EHRs under the Electronic Health Record Incentive Program call for these records to have the “capability to [electronically] exchange key clinical information … among patient-authorized entities” such as personal health record vendors, these rules have yet to be adopted. Currently, many digitized records obtained by a patient would still be difficult to upload into his or her own PHR. Furthermore, given the small number of health care facilities with electronic records, most patients would be forced to input each piece of information into their PHRs manually to keep their records up to date. This process is both laborious and error-prone.

The risks of incomplete data. Some health care personnel prefer EHRs over PHRs. The main reason is that with PHRs, individuals choose what to include and what to allow others, even emergency personnel, to access at any given time. Thus, health care personnel who rely on PHRs face the risk of making decisions based on incomplete, partial, or possibly patient-edited data.

Consider this example. Patients are urged, even when they do not keep PHRs, to have at least a list of their medications and dosages and to keep the list with them at all times. A physician reported that one of her patients kept such a list but did not update it, thus leaving the anticoagulant medication Coumadin on the list after he stopped taking it. When he was hospitalized for an irregular heartbeat, he was not given Lovenox, a quick-acting anticoagulant, because the emergency room physician assumed that the patient was protected against clotting. The physician added that if it was discovered that the patient’s blood was not clotting properly, he might well have assumed that this was due to the Coumadin and would not have looked for other causes.


Privacy concerns. More than 9 of 10 respondents to a 2003 Markle Foundation survey cited privacy and security as “very important” concerns when it came to online medical records. More than half of Americans felt that “the use of electronic medical records makes it more difficult to ensure patients’ privacy,” although many agreed that the benefits “outweigh privacy risks,” according to a 2007 Wall Street Journal/Harris Interactive poll. Moreover, one out of every four Americans stated that privacy concerns would prevent them from using such a tool, according to the Markle Foundation. Also, private providers such as Microsoft and Google are not subject to the legislation that protects the privacy of an individual’s health information and regulates its use and disclosure.

A Freudian approach

Policy analysis, whether we are dealing with the introduction of PHRs or some other policy, would greatly benefit from superimposing what I call “Freudian macroanalysis” (FMA), which entails an examination of the subterranean forces that may resist change and the ways these may be overcome. Freud assumed that there are no accidents in personal life; behavior that seems abnormal or irrational serves some underlying cause. If such behavior is to be changed, this cause must be addressed. FMA suggests that the same is true for societal problems. Poverty, drug abuse, violent crime, and discrimination all persist not because we are unaware of them or have not made efforts to tackle them. They persist because we often address the symptoms rather than the root causes, misdiagnose these subterranean causes, or lack the knowledge and resources needed to change them.

I digress to suggest that U.S. culture possesses what I consider a form of hyperoptimism; it assumes that progress can be made, that where there is a will there is a way. This attitude is derived from many sources and is deeply embedded. Although it has benefited the nation in many instances, over time this hyperoptimism backfires. When social problems prove intractable, as they often do, it leads to cynicism because the public loses trust in the statements and promises of the government and the leaders of various public institutions. People lose their faith in society’s ability to engender reform. Public officials are accused of waste and abuse when resources are expended but do not yield the promised results. One reason why policy remedies do not meet our high expectations is that the root causes of many problems are more deep-rooted and less visible than we realize. This is where FMA can help us to be more realistic and ultimately more effective.

I must acknowledge that when one first engages in FMA, it tends to lead to pessimism if not fatalism, because one often finds that the forces resisting change are formidable and the forces that promote change are relatively weak. It is only as one learns to identify ways to reduce resistance and amplify the forces of change, albeit often on a considerably narrower front than initially hoped for, that one finds more realistic ways to proceed.

Tackling PHRs

FMA can help us explore the latent factors that seem to hold up the wide use of PHRs. Freud distinguishes between manifest factors that are known by an individual and others and the latent factors that are lodged in the subconscious. Given that we are dealing here with social systems and not personalities, the latent factors of which policymakers and policy analysts may not be aware are lodged not in the subconscious but in economic, political, or cultural layers of society.

Informal interviews with health care personnel, most of whom are physicians, suggest a slew of reasons why PHRs have not been adopted on a wide scale, reasons that as a rule are not discussed in public or considered by mainstream policy analysts. Unless these are addressed, I contend, the use of PHRs is likely to continue to grow only slowly. Given the way these reasons were unveiled, via limited and informal interviews, they are best treated as hypothetical, as suggestions for systematic and quantitative research, rather than as established evidence. At the same time, readers familiar with the U.S. medical system and culture will be able to judge the face validity of these informal and preliminary findings.

Defensive medicine. U.S. health care professionals, especially physicians, are constantly mindful that they may be sued and—it is well established—draw on a variety of measures to protect themselves from such suits. These measures, collectively referred to as defensive medicine, include controlling access to records. In principle, the default position of health care personnel is to minimize access to records because disclosure may be used in legal action against them. True, the Health Insurance Portability and Accountability Act (HIPAA) requires physicians to release information requested by patients (with a few exceptions), but they are still often reluctant to proceed. Various costs are imposed for making copies, responses to requests are delayed, and the records released are often not complete.

From this view, PHRs are antagonistic to the basic interests of those who practice medicine because they make it much easier for lawyers to determine that some procedures that should have been ordered were not, that incorrect procedures were ordered, that contraindications to interventions were ignored, that proper follow-up was not undertaken, and so on. The fact that defensive medicine may be less of a problem in many other countries may well be one reason why accessible EHRs are making more progress in other nations, especially the United Kingdom.

Avoidance of oversight. In primary and secondary education, principals regularly visit classrooms to determine the quality of instruction and then use the information they gain to encourage better teaching and to promote and otherwise reward good teachers. When the information is seriously adverse, they may fire or refuse to extend the contracts of poor teachers, especially in private and charter schools. In many universities, such oversight is against the norms and almost never practiced. Hence, the visibility of teaching performance is low and the ability to affect it is rather limited.

A considerable part of medical practice is carried out as if it were university teaching. Many physicians, after they complete their training, work under conditions of low visibility to those higher in rank and to their colleagues. Some work is done in the isolation of individual offices and some in teams that tend to close ranks. True, surgeons engage in morbidity and mortality review conferences that are often fairly candid, but their findings are not released beyond the circle of those who participate. True, informal communications abound in which the quality of performance is discussed, but these are not as a rule available to patients. The same is true of the success and complication rates of individual surgeons and specialists.

In one case, a doctor noted, “Those PAs [physician assistants] do all kinds of horrible things, and I must sign the chart.” When asked for an example, he said that a physician assistant had given Prednisone, an immunosuppressant, to multiple patients, even those suffering from just a bad cold. The physician said he told the PA to desist, but the PA persisted. It was not “political,” he felt, to make the PA call the patients and cancel the prescriptions. “After all, it was not life-threatening.”

Several physicians who were interviewed said that among the reasons they did not want the records they kept to be circulated were “sometimes I am sloppy,” “sometimes I am not as thorough as I ought to be,” or “I hit the high points but did not flesh out my notes.”

If PHRs were widely used, it would lift much of the veil concealing the performance of health care personnel not only for lawyers, but also for other doctors and their patients. Although such a change may serve the common good, it is not one favored by those who fear that their colleagues and supervisors will readily and regularly be able to monitor and document their failings.

Patient aggravation and lost time. Many health care personnel seek to keep from antagonizing patients in order to retain them. They fear that if patients have access to their records, they may discover notes that will trouble them. Some physicians stated that they are sometimes indiscreet. In one case, a physician said that he noted in his records that a patient’s pain seemed to come from anxiety and not a physical cause. When the patient was so told, he became confrontational. A more discreet doctor would write that the cause of the pain is “supratentorial.” Also, some state that nurses are “less schooled” in what to write and write more openly. For instance, they might describe a patient as irrational or abusive. A patient who sees this is likely to become upset and demand that the record be changed. All the physicians interviewed expressed at least some level of concern about the “fuss” they would have to deal with if their records were incorporated into PHRs. They felt that either their notes would have to be left out or they would be forced to tone them down, which in turn would hinder effective care, because other physicians reading the record would have to read between the lines.

When HIPAA was passed in 1996 to allow patients to see their medical records and to request corrections to them, physicians were concerned that they would have to spend a considerable amount of time explaining their records to their patients, particularly if they had noted suspicions, for instance, that a patient was neurotic, alcoholic, or the subject of abuse. Others sought to keep their notations and interpretations out of the accessible record. All these concerns apply, only much more so, to PHRs, because they disclose more and make records heretofore kept in doctors’ offices much more accessible.

One physician explained that this problem arises most often these days when patients transfer from one doctor to another and bring their records with them. In several cases described, patients wrote notes or memos or argued that the records were inaccurate. “The pain did not last seven days but ten days; it did not start from the leg but from the foot,” the physician gave as an example. She then had to spend what she considered an inordinate amount of time dealing with the patient’s complaints, although none of the additional information had any medical significance.

Physicians use shorthand, some shared, some idiosyncratic, such as CHF for congestive heart failure, Hx for history, and Dx for diagnosis. If patients are to read their documents, doctors will have to explain all these or modify their notation habits to avoid the hassle.

Ambiguous instructions. Furthermore, the current policy regarding who can see patients’ records is unclear, at least to health care personnel. None of those interviewed were clear in their own minds about what the regulations said. One doctor said that the “charts belong to the hospital” and implied that they might not be available to the patients. Another said that his hospital had a policy that patients may see their charts but that someone from the staff must be present. None of the doctors could tell whether patients have a right to see the whole record (including notes) or only parts, whether they can get copies at will, or whether they can ask to make corrections or add their own comments.

Not computer-savvy. The older generation of health care personnel is not computer-savvy. This is one reason why the digitization of records in doctors’ offices is so slow. And without this digitization, PHRs are much more difficult to mass produce.

The slow introduction of PHRs, despite their obvious merits, can be explained only in part by manifest factors such as the costs involved. Other factors that militate against a wide introduction of PHRs seem to be latent. Identifying and treating them is required if more progress is to be made. The same holds for many other social problems the nation faces, and FMA could be a valuable tool in crafting more effective solutions.

Book Review: Futurama

It goes without saying that most Americans love their cars. Yet today there is a great deal of debate about the appropriate future role of the automobile. For decades, cars have been by far the most convenient and cost-effective way to travel to and from work, shopping centers, schools, and doctors’ offices. The downside of our car-based transportation system has also long been recognized: traffic congestion, injuries, and deaths from accidents, air pollution, reliance on oil imports, and more recently, climate change. In recent years, there has been a push for alternatives: transit, walking, and bicycling.

But cars are not going away. More than 1 in 8 workers in the United States depend on the car, directly or indirectly. And most Americans continue to strongly resist alternative modes of travel, even though many endure long and frustrating commutes.

In their stimulating Reinventing the Automobile: Personal Urban Mobility for the 21st Century, William J. Mitchell, Christopher E. Borroni-Bird, and Lawrence D. Burns consider these realities and envision a different future, one in which technology has transformed our transportation system, with a new, more intelligent car at its heart. They are eminently suited for their task. Mitchell, an architect and urban theorist, serves as director of the Smart Cities Program at the Massachusetts Institute of Technology’s (MIT’s) famous Media Lab. Borroni-Bird is director of Advanced Technical Vehicle Concepts at General Motors (GM). Burns served for a decade as a senior vice president for R&D of GM. This dream team of advanced thinkers was helped by a bevy of creative students at MIT who enrolled in GM-sponsored courses and design studios that were part of the Media Lab’s Smart Cities Program. The book is thus the result of the interaction over several years of many minds. It blends insights and images from people who like to dream with those from people who have had their earlier dreams harshly tested by market realities.

Indeed, people have long theorized about technological advances that will let us be whisked cleanly and efficiently from where we are to where we want to go with greater safety and comfort in far less time without getting lost and without long delays. A utopian fix always seems to be near, but just out of reach. At the 1939 New York World’s Fair, GM’s Futurama exhibit helped millions of visitors imagine that they would soon travel on automated highways, with people in their cars reading newspapers, talking to one another, and eating meals while being carried from coast to coast without looking at the road or touching the steering wheel. In 1997, illustrating the promise of what were then called intelligent transportation systems, a platoon of driverless cars traveled at close spacing at 60 miles per hour on a highway near San Diego, with Vice President Al Gore, and one of the authors of this book, riding in the procession. This demonstration by the Partners for Advanced Transit and Highways Program of the University of California presented a vision that has inspired continuing research, testing, and anticipation, but the vision remains mostly unfulfilled.

Reinventing the Automobile addresses the future of the automobile and the highways on which it travels in terms of four basic transformations. By successfully integrating these four concepts, the authors argue, we can transform urban passenger travel. First is the transformation of the underlying design principles of vehicles, which they call the DNA of vehicles. The new DNA is based on electric-drive vehicles and wireless communications. By shedding the internal combustion engine, manual control, and petroleum fuel, vehicles can be much lighter, far more flexible in form, and designed to avoid crashes while being physically attractive. The second idea is the “mobility Internet,” which will connect cars electronically with one another and with roadways and destinations, enabling vehicles to collect, process, and share information to help manage flows and make travel times more predictable and reliable. The third concept is the integration of electric-drive technology with smart electric grids, so that clean, renewable energy sources can supply the vehicles with power while the vehicles store energy and intermittently feed it back to the grid when it is needed. The fourth element is a set of real-time control capabilities that wirelessly link vehicles and drivers with their energy sources, providing information and varying prices to influence travel behavior and parking choices and balancing supply and demand to increase the efficiency with which streets, highways, parking facilities, and energy supplies are used.

Each of these four ingredients is developed carefully and in some detail: the authors recount the history of the current system, reflect on its strengths and weaknesses, explain the recent technology trends that make the proposed end state seem feasible, and argue that the proposed benefits are actually achievable. The book features images of cute lightweight vehicles that hold one or two passengers, have the intelligence to find their destinations and available parking spaces, and can be parked perpendicular to the curb or travel in “trains” when and where that is desirable.

Although I know a lot less than the authors do about engines, the technology of batteries and electric grids, and the workings of computer information systems, their careful articulation of their positions enabled me to follow their arguments and to find them plausible. The attraction of the book is the clarity with which it presents its promising technological utopia. I felt challenged by the possibilities and concluded that their vision might someday be attainable. Still, knowing that similar visions, from Futurama to the promise of intelligent transportation systems, have not been realized despite the passage of decades during which thousands of people labored to make them realities, I was less convinced than I would have liked to be.

In the final chapter, entitled “Realizing the Vision,” the authors conclude their imaginative and stimulating presentation by stating what we must do to bring about the system they would like to see in place within several decades. They say that “we must learn from the invention, development, and widespread acceptance of network-based systems with the scale and scope of personal urban mobility systems.” They assert that their vision will lead to the adoption of “appealing new services” that are not yet well defined, just as computer networks have given rise to new products such as Amazon’s Kindle. They urge us to be attentive to the need to develop broad, open protocols like those that allowed the development of the Internet. In the closing sections, they observe that we “must develop effective strategies for overcoming the enormous inertia in today’s automobile transportation system.”

It is difficult to disagree with these observations, but the devil is always in the details, and it is relatively easy to see that we have not achieved the vision of Futurama in large part because doing so requires not only new technology but also substantial institutional change to facilitate adoption of the technology on a large scale. The technological changes put forth in this book are enormously complex and challenging, and the authors deconstruct them very well and creatively put the pieces back together. But should the authors’ vision of the future fail to materialize, it will most likely be because of unmet challenges related to governance and social institutions. Those societal and institutional challenges are even larger than the technological ones, and the authors leave them to be dealt with later and by others.

But these are not mere details that can be worked out later. Our ability to address them is critical to attaining their vision. How will we deal with goods movement that today uses the same road systems as personal automobiles? How should we manage the complex processes of transition from where we are today to where they say we someday will be? There are, after all, different technologies and regimes of control in use on one system at the same time. How will we deal with liability issues should fledgling or experimental systems fail with disastrous consequences? What will be the economic costs of achieving the technological vision provided so powerfully by the authors, and how should we attempt to finance the transition that they advocate, with financial responsibilities surely falling on many different private industries, their customers and suppliers, and government bodies as well as financial institutions?

Reinventing the Automobile presents a fascinating and challenging model of technological possibilities. The authors go further than many others by articulating in depth the many interrelated components of a future system of personal mobility. I thoroughly enjoyed reading what might in the future be possible, and I encourage transportation specialists, urban planners, environmentalists, and policymakers to consider their thoughtful visions. Yet the future will look quite different once lawyers, insurance agents, state and federal legislators, and consumers have given these concepts careful consideration. That is, after all, why the world envisioned in Futurama is still not at hand.

Critical Minerals and Emerging Technologies

The periodic table is under siege. Or at least that is what one might imagine after hearing some of the cries of alarm that have begun echoing across the United States. We hear that the latest cell phones, electric vehicles, or critical weapons systems might no longer be feasible because some element that most people have never heard of is in short supply or being hoarded by another country.

Among the alarms issued in just the first few months of 2010, The New Yorker published an essay on lithium supplies (which may be essential for batteries in electric vehicles) and the potentially critical role of Bolivia as a supplier in the future. The Atlantic published an article on China’s activities in Africa to secure—even “lock up”—primary commodities needed by its growing manufacturing sector. Science published a special section in one issue describing new materials for electronics, and the report included commentary on possible scarcities of essential elements that could constrain expansion. Even the U.S. Government Accountability Office weighed in, publishing the findings of its investigation on the availability of rare-earth elements for essential military applications and vulnerability to shortages.

One factor giving rise to concerns is that modern mineral-based materials are becoming increasingly complex. Intel estimates that computer chips contained 11 mineral-derived elements in the 1980s and 15 elements in the 1990s, and that they could contain up to 60 elements in the coming years. General Electric estimates that it uses 70 of the first 83 elements in the periodic table in its products. New technologies and engineered materials create the prospect of rapid increases in demand for some minerals previously used in relatively small quantities. On the list are such elements as lithium in automotive batteries for electric vehicles; rare-earth elements in compact-fluorescent light bulbs and in permanent magnets for wind turbines; and cadmium, indium, and tellurium in photovoltaic solar cells.


On the supply side, meanwhile, some mineral markets are becoming increasingly fragile. The United States has become significantly more reliant on foreign sources for many minerals. Some exporting nations, most notably China, have imposed export restrictions on primary raw materials to encourage domestic processing and fabrication of mineral-based materials into final products. Some mineral markets have production that is concentrated in a small number of companies or countries—such as platinum-group metals in South Africa—creating vulnerability to geopolitical risks and to the possibility of opportunistic pricing. More broadly, supply chains are more fragmented because mining, processing, and manufacturing increasingly take place in different countries. Together, these factors create the specter of supply risks for essential mineral-based elements.

The United States can manage and reduce these risks, however, if government policymakers and industrial producers learn from the experience of previous supply scares, focus carefully on the most important concerns, and plan strategically.

Concerns familiar and new

The availability and adequacy of mineral resources have been perennial, if intermittent, national and world concerns. In the decade after World War II, concern focused on securing the resources necessary to replace reserves depleted during the war and to facilitate postwar reconstruction. In the 1970s, concern shifted to the security of foreign sources of oil and minerals (such as bauxite and cobalt) and to the long-term adequacy of supply of energy and mineral resources generally following two decades of significant economic growth worldwide. Many observers worried: Was the world running out of nonrenewable natural resources essential for modern society? In the 1980s and 1990s, concern shifted away from security of supply and long-term adequacy toward environmental and social issues. Observers then worried: Could adequate supplies of mineral resources be obtained in ways that minimized damage to the natural environment and disruptions to local communities?

Concerns today are similar to those of the past in that they are motivated, in part, by high commodity prices. It is no coincidence that past periods of concern coincided with periods of booming commodity prices. Such periods include the early 1950s, with postwar reconstruction and the Korean War; the early 1970s, with the Arab oil embargo, resource nationalism in many mineral-exporting nations, and continued strong economic growth; and much of the past decade, fueled by surprisingly strong economic growth in the developing world and, in retrospect, insufficient investment in new mines and production capacity. The recent two years of financial crisis, from which the United States and much of the rest of the world now seem to be emerging, merely slowed the price boom.

Today’s concerns, however, are different in several respects, starting with security of supply. Concerns about security of oil supplies in the 1970s and intermittently since then have focused primarily on the risks of higher prices and the resulting economic costs on the economy as a whole. The risks were attributed largely to a powerful supplier (the Organization of the Petroleum Exporting Countries, or OPEC) in a politically unstable part of the world (the Middle East).

In contrast, concerns about the security of mineral supplies now mostly center on the physical availability of essential inputs for a variety of products. There are concerns, for example, about supplies of rare-earth elements used in military hardware and compact-fluorescent light bulbs, lithium used in automotive batteries, and platinum-group metals used in pollution-control equipment. In most cases, the cost of these elements is but a small part of the overall manufacturing cost of the product, so a significant increase in price will likely have a relatively small effect on manufacturing costs. Supply risks are less about the prospect of higher prices and more about the possibility that a “no-build” situation might occur. The actions of several nations are helping to fuel such concerns. China, which currently produces almost all of the rare-earth elements used industrially, has enacted export restrictions. In addition, Bolivia, which has large and promising undeveloped resources of lithium, has signaled that it will not welcome foreign investment.

TABLE 1

Key Characteristics of Selected Elements of Current Concern

Cadmium
  Principal applications: batteries; solar panels
  2009 U.S. use in manufacturing (metric tons): 228
  Production characteristics: byproduct of zinc-concentrate processing; recycling of spent nickel-cadmium batteries
  2009 U.S. primary production (metric tons)¹: 700 (refined metal)
  Top primary-producing countries (in rank order): refined metal: China, Republic of Korea, Kazakhstan, Japan, Mexico
  U.S. net import dependence (% of consumption)²: net exporter

Indium
  Principal applications: solders; indium-tin oxides for flat-panel displays; solar panels
  2009 U.S. use in manufacturing (metric tons): 120
  Production characteristics: byproduct of zinc processing; recovery from manufacturing wastes; little recycling of post-consumer scrap
  2009 U.S. primary production (metric tons)¹: no production of refined metal; ore produced at one mine in Alaska, processed in Canada
  Top primary-producing countries (in rank order): refined metal: China, Republic of Korea, Japan, Canada, Belgium
  U.S. net import dependence (% of consumption)²: 100

Lithium
  Principal applications: ceramics and glass; batteries
  2009 U.S. use in manufacturing (metric tons): 1,200
  Production characteristics: most lithium recovered from subsurface liquid brines, also produced from mining of lithium-carbonate rocks; little recycling at present, although increasing (lithium batteries)
  2009 U.S. primary production (metric tons)¹: one brine operation in Nevada, production not reported
  Top primary-producing countries (in rank order): mine production: Chile, Australia, China, Argentina, Portugal
  U.S. net import dependence (% of consumption)²: > 50

Platinum-group metals
  Principal applications: catalysts, especially for pollution control in motor vehicles; jewelry
  2009 U.S. use in manufacturing (metric tons): estimates: platinum 120; palladium 80
  Production characteristics: most platinum-group metals produced jointly with one another at the same mineral deposits; significant recycling of spent automotive catalysts
  2009 U.S. primary production (metric tons)¹: platinum 3.8; palladium 12.5
  Top primary-producing countries (in rank order): mine production: platinum: South Africa, Russia, Zimbabwe, Canada, U.S.; palladium: Russia, South Africa, U.S., Canada, Zimbabwe
  U.S. net import dependence (% of consumption)²: platinum 89; palladium 47

Rare-earth elements
  Principal applications: permanent magnets; batteries; catalysts; phosphors
  2009 U.S. use in manufacturing (metric tons): 7,410 (2008)
  Production characteristics: most rare-earth elements produced jointly with one another at the same mineral deposits, although relative concentrations of the elements vary from deposit to deposit; small quantities recovered through recycling of spent permanent magnets
  2009 U.S. primary production (metric tons)¹: no mine production; processing of stockpiled ore at Mountain Pass, California
  Top primary-producing countries (in rank order): mine production: China (97%), India, Brazil, Malaysia
  U.S. net import dependence (% of consumption)²: 100

Tellurium
  Principal applications: alloying element in steels; cadmium-tellurium-based solar cells
  2009 U.S. use in manufacturing (metric tons): not reported; could be as much as 50 metric tons (author estimate)
  Production characteristics: byproduct of copper refining; essentially no recycling of post-consumer scrap
  2009 U.S. primary production (metric tons)¹: one refinery complex in Texas, production not reported
  Top primary-producing countries (in rank order): refined metal: Japan, Peru, Canada, U.S.
  U.S. net import dependence (% of consumption)²: not reported

Source: U.S. Geological Survey, Mineral Commodity Summaries, minerals.usgs.gov.

¹Primary production refers to mining and subsequent processing. It does not include recovery of materials from the recycling of post-consumer scrap.

²Net import dependence as a percent of consumption = (imports − exports + inventory adjustments) as a percent of consumption.
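
As a quick worked example of the formula in note 2, the sketch below computes net import dependence for a hypothetical mineral; the tonnage figures are invented solely to show the arithmetic.

```python
# Worked example of note 2: net import dependence as a percent of consumption.
# All tonnages are hypothetical.

imports_t = 120.0               # metric tons imported
exports_t = 20.0                # metric tons exported
inventory_adjustments_t = 10.0  # net drawdown of government and industry stocks
consumption_t = 110.0           # apparent domestic consumption

net_import_dependence = (imports_t - exports_t + inventory_adjustments_t) / consumption_t
print(f"Net import dependence: {net_import_dependence:.0%} of consumption")  # 100%
```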

Today’s concerns also differ in that they are no longer focused primarily on major metals—such as aluminum, copper, iron, lead, or zinc—but on rare or specialty metals that are produced primarily as byproducts. Most indium, for example, is recovered during the processing of zinc ores, and most tellurium is recovered from the processing of copper ores. (See Table 1.) In such cases, the availability of the byproduct is strongly influenced by the commercial attractiveness of the main product. In the short term, the availability of the byproduct is constrained by the amount of the byproduct in the main-product ore.

The markets for these rare or specialty metals are much smaller and typically more fragile than those of the major metals. A new use in an important technology has the potential to overwhelm the ability of existing producers to respond rapidly to the increase in demand, especially if the element is produced largely as a byproduct. Mineral demand can change significantly in less than five years, whereas it takes five to ten years for significant additions to production capacity to occur. Moreover, there often are only a small number of important producers of these rare metals, and as a result markets are not transparent.

Drivers of resource reserves

The United States and the world are not running out of nonrenewable resources, at least not any time soon. The world generally has been successful in replenishing mineral reserves in response to depletion of existing reserves and growing mineral demand. Reserves are the subset of all resources in the earth’s crust that are known to exist with a high degree of certainty and that can be extracted at a profit with existing technology. Reserves are a dynamic concept. They increase as a result of successful mineral exploration and technological advancements in all stages of production. They decline as a result of production at existing operations. Over time, reserve additions typically have at least offset depletion.

The United States Geological Survey (USGS) publishes annual estimates of worldwide reserves for many minerals. The estimates are presented as ratios of reserves to annual production (R/P ratios)—that is, the ratios indicate how many years today’s reserves would last at current rates of production. (See Table 2.) For essentially all nonrenewable resources, R/P ratios indicate reserve lifetimes of several decades or more. More striking, and illustrating the dynamic nature of reserves, the R/P ratios exhibit no systematic upward or downward trend over time. To be sure, R/P ratios for some minerals, such as copper, are now lower than they were in the 1970s, while for other minerals, such as rare-earth elements, they are higher. Although insufficient information is available (at least publicly) to estimate R/P ratios for several of the elements of particular current interest, such as indium, platinum-group minerals, and tellurium, historical experience suggests that more resource is available than current estimates of reserves would indicate.
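
The R/P calculation itself is simple; the sketch below shows it with hypothetical reserve and production figures (not USGS data). The point of the paragraph above is that the numerator keeps moving as exploration and technology add new reserves.

```python
# Reserve-to-production (R/P) ratio: years current reserves would last at the
# current production rate. The figures here are hypothetical, not USGS data.

def rp_ratio(reserves_tons, annual_production_tons):
    return reserves_tons / annual_production_tons

reserves = 540e6     # hypothetical reserves, metric tons
production = 16e6    # hypothetical annual production, metric tons
print(f"Reserve lifetime: {rp_ratio(reserves, production):.0f} years")  # about 34 years
```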

So rather than focusing on tons (or ounces or pounds) of reserves, it is useful to consider costs of production (as an indicator of the quality of reserves), the location of reserves and production, and the time frames over which there is concern about reliability and availability of mineral supplies.

Nations and industries tend to use lower-cost mineral resources first—those contained in deposits that are large, close to the earth’s surface, high in metal content, and easy to process metallurgically. Over time, users move to lower-quality deposits, resulting in higher costs unless improvements in extraction and processing are sufficient to offset these cost increases. So the limit to mineral-resource availability, in this sense, is what price users are willing to pay for a resource.

Geographic location can be critical in understanding supply risks. Mineral resources in concentrations sufficient to allow commercial extraction are not distributed evenly worldwide. Other things being equal, supply risks will be higher the more geographically concentrated production is in the hands of a small number of countries. But geographic location is not the sole determinant of supply risk. Rather, the concentration of production by a small number of companies or in a small number of mines also helps make users vulnerable to supply disruptions or high prices. Indeed, even domestic U.S. production can be risky if in the hands of a single producer, while foreign sources of a mineral can be quite safe if production comes from a diverse set of companies and countries. Moreover, import reliance can be good (cost effective) if foreign sources are available at lower costs than alternative domestic sources of a mineral.

Time frames also are important in understanding supply risks. Risks in the short to medium term (up to a decade or so) are much different from those in the long term (more than a decade). In the short to medium term, the issue is how adequate and reliable supplies are from existing sources, as well as from new facilities that are sufficiently far along in design and construction to be reasonably certain of coming into production within a few years. The important risk factors include whether supply is concentrated in a small number of mines, companies, or countries; the geopolitical risks; the prospect of rapid demand growth; the reliance on byproduct production; whether there is excess or idled capacity that could be restarted quickly; and whether low-grade material or scrap is available from which an element could be recovered in the short to medium term.

In the long term, the important issues relate to fundamental availability and are largely geologic, technical, and environmental. Does an element exist in the earth’s crust or in scrap or products that could be recycled? If so, do users have the technology to extract and use it? Can users extract, process, and use the element in ways that society considers environmentally acceptable?

TABLE 2

Reserve/Production Ratios for Selected Minerals (lifetimes of remaining reserves at then-current rate of annual production)¹

Values are shown for 1978, 1995, and 2009, in that order:

  Copper: 65 / 32 / 41
  Crude oil: 29 / 41 / 42²
  Iron ore: 183 / 150 / 70
  Cadmium: 38 / 29 / 31
  Lithium: insufficient information / 350 / 550
  Rare-earth minerals: 221 / 1,390 / 798
  Indium, platinum-group metals, tellurium: insufficient information for all three years

Sources: U.S. Geological Survey, Mineral Commodity Summaries, minerals.usgs.gov; BP Amoco, Statistical Review of World Energy 2009, www.bp.com; various U.S. Bureau of Mines publications for 1978 estimates.

¹Reserves represent those mineral resources that are known to exist and are capable of being extracted with existing technologies under current market conditions.

²2008 figure.

The power of markets

Markets are not panaceas. As the recent financial crisis illustrates, markets do not always work well and by themselves will not solve every problem. There is an important role for government. But market pressures can be quite effective in encouraging investment that invigorates supply (and reduces supply risk) and in encouraging users to obtain “insurance” against mineral supply risks.

On the supply side for rare-earth elements, for example, concerns about the reliability of Chinese supplies, combined with the prospect of significant demand growth, have fueled a boom in exploration for mineral deposits containing rare-earth elements. There are a significant number of advanced exploration projects in North America and elsewhere around the world. Most of these deposits will remain simply interesting geologic concentrations of rare earths. But over the next decade, a few will become mines if demand grows as anticipated and the Chinese restrict exports. In the United States, Molycorp hopes to re-open the Mountain Pass rare-earths mine, located in California, which used to be the world’s largest source of rare-earth elements until it shut down in the 1990s. The biggest impediment to the opening of rare-earth mines outside of China is the reality that China is and likely will remain the low-cost producer of rare earths worldwide and probably could supply most world demand at prices lower than those necessary to justify new mines. The spanner in the works, however, may be China’s restrictions on exports, which are leading some users to view China as a risky supplier and may provide incentives for users to adopt new strategies to ensure supplies.

Users of elements for which there are supply risks have a number of options. In the short to medium term, they can maintain stockpiles, diversify sources of supply, develop joint-sharing arrangements with other users, or develop tighter relations or strategic partnerships with producers. Over the longer term, they might invest in new mines in exchange for guaranteed supplies.

Over the longer term, users also have the incentive to substitute away from elements that are difficult to obtain or for which there are supply risks. Substitution comes in two forms. The first form involves replacing a “risky” element or material with another element or material that has similar properties but is in greater (or surer) supply. In the 1980s, concerns about cobalt availability led to substituting nickel and other alloying elements for cobalt in certain types of steel. The second type of substitution involves making more efficient use of an element (and thereby requiring less of the element) in the same application. Molybdenum prices increased six-fold in the late 1970s, and in the decade that followed steel makers learned how to reduce the amount of molybdenum needed in alloyed steels by about 25% through more heat treatment. Indium provides another example. Indium-tin-oxide (ITO) thin films are an essential part of flat-panel displays, such as television sets and smart phones. Demand for these products—and, in turn, for indium—exploded about a decade ago, leading to much higher indium prices. Over the following several years, manufacturers of ITO thin films responded by increasing the efficiency of their manufacturing processes by about 50% by recycling indium that previously was discarded in manufacturing waste. Even though the amount of indium in each flat-panel product remained essentially the same, manufacturers needed to purchase less indium per product.
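
The indium example turns on a simple piece of arithmetic: if a roughly constant amount of indium ends up in each display but manufacturing becomes about 50% more efficient at getting purchased indium into the product, then purchases per display fall by about a third. The sketch below walks through that calculation with assumed utilization figures; they are illustrative, not industry data.

```python
# Illustrative arithmetic for the second kind of substitution: using an
# element more efficiently. The utilization figures are assumptions.

indium_in_product_g = 0.3                     # grams ending up in one display (assumed)
utilization_before = 0.30                     # share of purchased indium reaching the product
utilization_after = utilization_before * 1.5  # about a 50% efficiency improvement

purchased_before = indium_in_product_g / utilization_before  # 1.00 g purchased per display
purchased_after = indium_in_product_g / utilization_after    # about 0.67 g per display

savings = 1 - purchased_after / purchased_before
print(f"Indium purchased per display falls from {purchased_before:.2f} g "
      f"to {purchased_after:.2f} g, a {savings:.0%} reduction")
```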

In early 2010, a senior engineer at Rolls-Royce Group, manufacturer of engines and power systems, was quoted in American Metal Market as saying that he would like to “design out” the following elements from Rolls-Royce products: cobalt, hafnium, molybdenum, nickel, rhenium, tantalum, tungsten, and yttrium. To be sure, this is easier said than done. Each element provides materials with specific properties, and some are inherently more difficult to substitute away from than others without sacrificing performance in the material. But the drive clearly is there.

Roles for government

Despite the powerful incentives provided by markets, the federal government has an important role to play in making sure that critical materials are available at affordable prices and are used efficiently. Government activities should focus on:

Encouraging undistorted international trade. One of the central tenets of modern economic theory is the benefit of free and open international trade in goods and services. Although economists disagree on many matters, on this they are essentially unanimous. The reduction and removal of barriers to international trade is one of the major international success stories of the past 50 years, increasing incomes and improving living standards around the world.

Export restrictions are analogous to import restrictions. Both isolate the domestic market from the world market. Import restrictions on a good or service create advantages for domestic producers of the restricted good or service while hurting domestic users and foreign exporters. Export restrictions create advantages for domestic users while hurting domestic producers and foreign users. When China restricts exports of a primary raw material, such as rare-earth elements, it presumably is doing so to create an advantage for those manufacturing industries that use rare earths domestically in goods that will be sold both domestically and internationally.

Thus, the U.S. government should fight policies of exporting nations that restrict raw-material exports to the detriment of U.S. users of these raw materials. Similarly, the government should fight import restrictions on processed and semi-processed minerals and metals that create an artificial barrier to downstream processing of mineral resources in mineral-exporting (usually developing) nations.

Improving regulatory approval for domestic resource development. Although foreign sources of supply are not necessarily more risky than domestic sources, it remains true that in some instances, at least, domestic production can offset the risks associated with unreliable foreign sources. Developing a new mine in the United States appropriately requires a pre-production approval process that allows for public participation and consideration of the potential effects of the mine on the natural environment and on local communities. Similar processes apply to all proposed industrial or public infrastructure projects. However, these processes are time consuming and expensive—arguably excessively so, not just for mining but for all sectors of the economy. No simple remedy is obvious. But it is clear that more attention should be paid to finding better ways to balance regulatory approval requirements with the benefits of augmenting domestic production of needed minerals.

Facilitating the provision of information and analysis. Within the private and public sectors alike, sound and rational decisions require good information. Government plays an important role in making sure that sufficient information exists. Consider, for example, the requirement that food producers include nutritional information on food packages. Perhaps an even better illustration is the macroeconomic data that the government collects (on gross domestic product, housing starts, investment spending, and retail sales) that users throughout the private and public sectors analyze in making a variety of investment decisions.

A 2008 National Research Council (NRC) report on critical minerals, Minerals, Critical Minerals, and the U.S. Economy, recommended that the U.S. government enhance the types of data and information it collects, disseminates, and analyzes on minerals and mineral products, especially as they relate to elements that are essential in use and subject to supply restrictions. The report identified gaps in existing public mineral information and recommended that special attention be given to those parts of the mineral life cycle that currently are underrepresented. This category includes reserves and subeconomic resources, byproduct production, stocks and flows of materials available for recycling, in-use stocks, material flows, and materials embodied in internationally traded goods. At present, the USGS Minerals Information Team is the focal point of federal activities on mineral data and information. The U.S. Energy Information Administration, which has more autonomy and authority than the USGS Minerals Information Team, provides a possible model for expanded federal activity in minerals information.

Facilitating research and development. Over the longer term, the keys to responding to concerns about the adequacy and reliability of mineral resources are scientific and technical. There is a need for better knowledge about the earth’s resource base; more-efficient techniques for mineral exploration, mining and processing, and material manufacturing; improved recycling; and better materials that provide improved performance using elements that are more available and abundant and subject to less supply risk. Government can play key roles in facilitating research and development, especially pre-commercial, basic research and development that is likely to be underfunded by the private sector because its benefits are diffuse, difficult to capture, risky, and far in the future. The NRC’s 2008 report on minerals and the U.S. economy recommended that federal agencies develop and fund activities, including basic science and policy research, to encourage innovation and to enhance the understanding of mineral resources and mineral-based materials. It called for, among other things, the development of cooperative programs involving academic organizations, industry, and government to enhance education and applied research. The report also said special importance should be given to the recycling of rare and specialty metals used in small quantities in emerging applications.

Ensuring materials for national defense. The U.S. Department of Defense (DoD) and its use of mineral-based materials comprise a special category of public policy. At one level, the DoD is simply another user of materials. As such, it should be in the best position to “buy its own insurance” against supply risks by stockpiling some materials, striking special purchase arrangements with suppliers, diversifying its sources of supply, and so on.

As the chief protector of national defense, however, the DoD is not simply another user. Since just before World War II, the federal government has relied largely on the National Defense Stockpile to deal with threats to the supply of materials essential for national defense. But an NRC report, Managing Materials for a Twenty-first Century Military, issued in 2008, concluded that the National Defense Stockpile is ineffective, that the DoD does not seem to fully understand its materials needs, and that the department lacks sufficient data and information with which to evaluate its materials needs and vulnerabilities. In its recommendations, the report said that the DoD should establish a new system for managing the supply of strategic materials and that the federal government should enhance its systems for gathering data and information on materials necessary for national defense. Since the NRC report was published, the DoD has begun to respond to and act on these recommendations.

Attention, not panic

The bottom line is that although the chorus of critics is right that the United States should be paying more attention to the supply of many important minerals, there is no need to panic. If the nation’s policymakers and industrial producers take a few sensible actions, there is no reason to expect that the nation will be in crisis anytime soon.

As they generally do, market forces will help to encourage producers and users of critical minerals to undertake activities that reduce supply risk or the need for the critical mineral. Where market forces alone are not sufficient, government can step in to help shape and determine how well markets work. The government has an essential role to play in facilitating international trade in critical minerals, ensuring that domestic mineral production can occur where appropriate, facilitating the collection and dissemination of mineral data and information, making sure that the military appropriately deals with supply risk, and—perhaps most important—seeding the basic research and development that over the longer term will be the key to understanding the nation’s mineral-resource base and designing better materials.

European Union: Measuring Success

In his keynote speech to the National Academy of Sciences on April 28, 2009, President Obama said that “science is more essential to our prosperity, our security, our health, our environment and our quality of life than it has ever been before.” He went on to announce massive increases in public support for a range of agencies and activities and to commit the United States to an annual investment in R&D of more than 3% of gross domestic product (GDP). His initiative has its parallels around the world. For the European Union, the 3% goal was set in the Lisbon agenda of 2000. Finland, Israel, Korea, Japan, and Sweden have already surpassed this target, and for the 30 countries of the Organization for Economic Cooperation and Development (OECD), this indicator has risen from 2.06% to 2.29% of GDP during the past decade. The commitment to science as a driver of economic growth and competitiveness has now become commonplace.

Things were not always so. Throughout human history, wealth was first achieved by the exploitation of natural resources, then by the husbanding and better management of natural resources, followed by conquest and the acquisition of the wealth and natural resources of others, and finally by the application of knowledge to the creation of wealth by increasing what the French call the technicité of society.

Adam Smith’s partitioning of wealth in society into land, labor, and capital was appropriate to its time and place. Two centuries of innovation have shown that knowledge is now the principal source and component of wealth. Unlike other resources, it has the advantage that it is inexhaustible, constantly expanding, and for the most part free.

This new partitioning of wealth began in the 1990s as an attempt to account for the value of a company’s intangible assets. It has since been applied at the level of national economies. The essential novelty of this approach is to distinguish among natural capital (such as oil in the ground), produced capital (such as buildings and railways), and intellectual capital (all the rest). Intellectual capital includes the knowledge and skills of individuals as well as the collective knowledge, competence, experience, and memory contained in our institutions.

A World Bank team calculated these three components of wealth for all the countries of the world. It estimated that high-income OECD countries have per capita wealth of $439,000, whereas the poorest countries have a per capita wealth of $7,216. The first notable feature of the analysis is the scale of the disparity between the rich and poor. The top 10 countries (Japan, the United States, and eight European countries), with a population of 959 million, had an average per capita wealth of $502,000. The 10 poorest countries (nine African countries plus Nepal), with 278 million people, had an average per capita wealth of $3,000.

The second notable feature is the sources of wealth in each case. Among high-income OECD countries, 2% of wealth is natural wealth, 17% is created wealth, and 80% is intellectual capital. Among low-income countries, the natural and created wealth are respectively 29% and 16% of the total, leaving the intellectual capital at 55%. Thus, whereas the wealth ratio of rich to poor in overall wealth is 62:1, for natural capital it is 5:1 and for intellectual capital 89:1. This comparison of the extremes is reflected also in comparisons among developed nations; the wealthier countries have a higher proportion of their total value in their intangible or intellectual capital.
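
These ratios can be checked directly from the figures quoted above; the small differences from the reported 62:1 and 5:1 reflect rounding in the published shares. As a back-of-the-envelope calculation:

\[
\frac{439{,}000}{7{,}216} \approx 61, \qquad
\frac{0.02 \times 439{,}000}{0.29 \times 7{,}216} = \frac{8{,}780}{2{,}093} \approx 4.2, \qquad
\frac{0.80 \times 439{,}000}{0.55 \times 7{,}216} = \frac{351{,}200}{3{,}969} \approx 89 .
\]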

It is clear that increases in wealth, and therefore in social progress and prosperity, now flow mainly from the creation and use of new knowledge. Should wealth creation therefore be the main objective of investment in knowledge creation? By what metrics should the investment and the returns be measured? With what models should the investment be planned?

Is GDP enough?

Shortly after he took office, President Sarkozy of France set up a Commission on the Measurement of Economic Performance and Social Progress. The motivation was the perception that “what we measure affects what we do.” The most widely used measure of the wealth produced by a society in any one year is its GDP. The aim of the commission was to identify the limits of GDP as an indicator of economic performance and social progress, and to look beyond it to alternative ways of setting and measuring the goals of society. The OECD has undertaken a similar project.

The limitations of GDP are clear: It ignores nonmarket outputs, does not attempt to capture preexisting wealth, counts some negatives (such as military expenditures and the destructive extraction of resources) as positive, ignores other negatives (such as crime and environmental deterioration), and takes no account of inequality in society. In addition, it is clear that increasing the average wealth in a society does not translate into progress for people on all fronts. High levels of economic growth have in many cases been accompanied by a decline in trust, in personal and family security, in leisure time, and in happiness or general satisfaction with life. Broad environmental challenges such as climate change must also be balanced against increased average incomes.

Among the many attempts to develop metrics for these additional measures of progress, perhaps the most widely acknowledged is the Human Development Index (HDI), used by the United Nations Development Programme since 1990. It has three components: life expectancy at birth, knowledge and education, and GDP per capita. All three of these indicators are highly correlated across countries, so it is no surprise that the ranking on GDP is not very different from the ranking on HDI.

Other indexes, such as the Gini Coefficient (emphasis on equality), Gross National Happiness (sustainability, cultural values, environment, and governance), and the Happy Planet Index (satisfaction, life-expectancy, and environmental footprint), do not align very well with GDP. For example, the highest-ranking OECD country on the Happy Planet Index is the Netherlands, which is ranked 43rd.

The first great challenge in measuring inputs to and outputs from the knowledge economy is therefore the reality that GDP growth alone will not deliver all the benefits that society desires. However, without the generation of wealth, it becomes more difficult to meet the rising expectations of people. Translating wealth into broad benefits is therefore a second phase of harvesting the fruits of the knowledge economy. How effectively this is done depends on the political, social, and cultural structures of the society, and these in turn could be expected to benefit from rising average incomes. GDP is therefore not the end point of this investment but the enabler of the multiple end points. In this sense, it is a broadly reasonable objective for investment, and particularly public investment, in the knowledge economy.

The second challenge in attempting to quantify benefits is that they vary greatly in their nature, timing, and duration. Investments in health technology, for example, might lift life expectancy or quality of life for the whole population for the indefinite future. Investments at the boundaries of science might not bring direct benefits for a generation and therefore require appropriate discounting. Investments in some technologies, for example in the military field, might produce no benefits at all as they are overtaken by competing technologies. Investment at the national level should therefore be seen in a portfolio context in which probabilities, timing, and the extent of impact are highly variable. Developing models to accommodate such variability is, to say the least, challenging.

Linking investment to returns

In an editorial in Science in 2005, U.S. presidential science advisor John Marburger drew attention to the fact that despite massive investments in science and technology by government and business and the almost universal acceptance that this was worthwhile, metrics and methodologies for evaluating investment and returns were poorly developed. He therefore suggested increased attention to what he called the science of science policy. Such a program has now been established jointly at the U.S. National Science Foundation and Department of Energy.

In the European Union (EU), some progress has been made on this front. Because the EU is a union of 27 independent countries, most public investment in R&D takes place at the level of the individual member states. In contrast to the United States, where 94% of public R&D funds are federal, just 7% of the EU’s public funds flow through central EU programs. The 27 EU countries therefore constitute a sort of rolling experiment in which useful comparisons can be made across countries and time.

The most widely acknowledged assembly of data in this respect is the European Innovation Scoreboard (EIS). It currently uses 29 indicator statistics, 19 of them tracking various measures of investment and the remaining 10 being measures of output or impact. These are then assembled into a composite index that broadly represents the state of development in science and technology in each country.

The metrics used are grouped according to function. The first group, called “enablers,” consists of five measures of education and four measures of financial and technical support. A group of “firm activities” consists of 11 measures of business expenditure and initiative, together with measures of intellectual property. The “outputs” include three measures of innovation at the level of firms and six measures of economic effects as reflected by employment in and sales from high-tech activities and firms. Making sense of these varied statistics is highly challenging and is still a work in progress.

Despite these qualifications, the EIS does show that countries that score high on the sub-indices of inputs see this reflected in their output measures. On both aggregate scores, the Scandinavian countries, together with Germany and the United Kingdom, occupy the highest places and are classified as the innovation leaders. A further group, including the Netherlands, France, Belgium, Ireland, and Austria, also scores above the EU-27 average and is classified as innovation followers. The lowest ranks on the 2008 EIS are held by countries that are still emerging into the modern world from two generations of the communist experiment.

Sufficient data on nine of these indicators are available from 48 countries, and this information has been used to develop the Global Innovation Scoreboard (GIS). In compiling the GIS, activities and outputs at the firm level are given 40% of the weighting, human resources and education 30%, and infrastructures and public investment 30%. Sweden, Switzerland, and Finland still top the list, followed by Israel, Japan, and the United States.
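
Mechanically, a composite of this kind is simply a weighted average of normalized sub-scores. The sketch below applies the GIS weights described above to hypothetical sub-scores for a single country; it illustrates only the aggregation step and is not a reproduction of the GIS methodology.

    # Hypothetical normalized sub-scores (0 to 1) for one country.
    sub_scores = {"firm_activities": 0.72, "human_resources": 0.65, "infrastructure": 0.58}
    # Weights as described for the Global Innovation Scoreboard: firm-level
    # activities 40%, human resources and education 30%, infrastructures and
    # public investment 30%.
    weights = {"firm_activities": 0.40, "human_resources": 0.30, "infrastructure": 0.30}

    composite = sum(weights[k] * sub_scores[k] for k in weights)
    print(round(composite, 3))  # 0.657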

These science and technology indicators are proving to be increasingly useful as countries use them to inform public investment with a view to improving competitiveness in the short term as well as growth and prosperity in the longer term. However, as a model for investment and return, they have two major deficiencies. First, they do not take account of the relationships between all of the indicators. Some may be highly and positively correlated and so be partially redundant. Others may be negatively related (as in competition for limited investment funds), so that tradeoffs are required. Second, they are all simultaneous metrics, snapshots of one year’s statistics. As such, they fail to take account of events that transpire over time. Some investments must be sustained over many years if they are to have their desired effect, the outputs can take many years to emerge, and the expected pace of emergence will vary among outputs.

Challenges of this kind have led to the development of efficient models in some more specific fields. One such field is quantitative genetics, particularly in planning genetic improvement in animal populations. The model that has evolved in this case is called the selection index. It begins with a definition of the desired outputs; for example, increased milk production, higher protein or fat content, better fertility, or disease resistance in dairy cows. For each of these traits, an appropriate economic weight is calculated. If appropriate, discount factors are applied to allow for delayed returns. A selection objective is thus defined (comparable in the wider context to GDP). Next, the inputs requiring investment are identified. These include expenditures on recording the performance of large groups for indicator measurements. Then the relationships between all of these input and output indicators are calculated from historical data. Finally, an index of the inputs is derived which maximizes the gain in the selection objective. Secondary statistics can be calculated that measure the contribution of the inputs, singly or in groups, to the gain achieved and also the contribution of gain in each component of the output to overall gain. These statistics help to refine the investment process.
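
For readers who want the machinery, the classical Smith-Hazel form of such an index can be written as follows (a standard textbook sketch; the notation is mine, not drawn from the breeding programs described above):

\[
H = \mathbf{a}'\mathbf{g}, \qquad I = \mathbf{b}'\mathbf{x}, \qquad \mathbf{b} = \mathbf{P}^{-1}\mathbf{G}\,\mathbf{a},
\]

where \(H\) is the selection objective (the output traits \(\mathbf{g}\) weighted by their economic values \(\mathbf{a}\), discounted where appropriate), \(\mathbf{x}\) is the vector of recorded input measurements, \(\mathbf{P}\) is the covariance matrix among those measurements, and \(\mathbf{G}\) holds their covariances with the objective traits. Choosing \(\mathbf{b}\) this way maximizes the expected gain in \(H\), and the same algebra yields the secondary statistics on how much each input and each output component contributes to that gain.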

If GDP growth is accepted as the first objective of public investment in science and technology, the immediate task is to define GDP in terms in which impact can be measured. Of the three standard definitions, the one that serves this purpose best defines GDP as the sum of gross value added (GVA) in different sectors of the economy. Because the objective is GDP per capita, this can be achieved by growth in GVA per capita in individual sectors through productivity gains or by an increase in the numbers employed in the sectors with higher GVA per capita, with a parallel reduction in those employed in the sectors with lower GVA per capita or among those not in employment. This makes it possible to build a quantifiable objective function that captures the expected benefits from investment in science and technology. The remaining task is to develop metrics for the various inputs and to quantify the network of relationships among the inputs and between these and the elements of the objective output function. With appropriate data, this model could help to guide investment in building the knowledge economy.
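
Written out, the objective function sketched here rests on an accounting identity (the notation is mine, not the author’s):

\[
\frac{\mathrm{GDP}}{N} \;=\; \sum_{s} \frac{\mathrm{GVA}_s}{N} \;=\; \sum_{s} \frac{\mathrm{GVA}_s}{E_s} \cdot \frac{E_s}{N},
\]

where \(N\) is population and \(E_s\) is employment in sector \(s\). Growth in GDP per capita therefore comes either from raising value added per worker within sectors (the first factor) or from shifting employment toward sectors with higher value added per worker (the second), which is exactly the decomposition an investment model of this kind needs to target.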

Public investment in R&D is on the order of 1% of GDP in many advanced countries. In the EU, it is approximately $100 billion per year, of which 87% is in the civilian sector; in the United States it is roughly $150 billion, of which 42% is civilian. These enormous funds are invested to create knowledge, and through it progress and prosperity for future generations. This 1% is our seed corn. Better metrics and models to guide its investment are required.

Book Review: Job prospects

In this peculiar little book, six prominent economists—the two listed authors plus four distinguished commentators, all writing separately—take stock of the possibility that expanding international trade in services, particularly service imports to the United States, will disrupt the U.S. labor market. They debate the potential scope of such trade, how fast it might grow, and who will gain and lose from it. To a lesser extent, they argue about what Americans, individually and collectively, ought to do about it.

One might think that these challenging topics would easily be enough to fill 144 pages, and indeed they certainly could have. But some of the authors also see fit to exchange insults, pursue diversions, and engage in speculation. The volume thus provides an interesting window into the internal workings of the discipline of economics as well as a few insights into its main subjects.

The centerpiece of the book is an essay by Princeton University’s Alan Blinder entitled “Offshoring: Big Deal or Business as Usual?” Blinder argues that “the confluence of rapid improvements in information and communications technology coupled with the entry of giants like China and India into the global economy is creating a situation that, while not theoretically novel, may be historically unprecedented.” Just as it has become possible for phone calls, medical images, insurance claims, and many other sorts of information to be conveyed thousands of miles at ever-lower costs, a huge new labor force has become available to respond to, analyze, process, and otherwise work with this information. Many occupations that were previously protected from lower-wage international competition, Blinder therefore claims, are now becoming vulnerable to it. He likens the process to “a new industrial revolution.”

Blinder estimates that about a quarter of all U.S. workers currently work in occupations that are potentially “offshorable.” His method for making this estimate is primitive, and his colleagues give the method a well-justified pounding. Yet his estimate of tens of millions of jobs is consistent with more sophisticated approaches, such as that of Lori Kletzer of the University of California, Santa Cruz, who provides a brief introduction to her work in one of the commentaries in this volume. The two largest occupations on Kletzer’s list, accountants and bookkeepers, employ almost 3 million Americans. Of course, not all of the jobs on the list will be taken by foreigners if the driving forces of offshoring are allowed to play out without government intervention. Employment in some occupations open to international competition will grow, because of the creativity and productivity of U.S. workers. Other occupations may be sustained in this country by lowering compensation toward overseas levels.

The authors agree on this much. They disagree about the pace of offshoring and its impact. Blinder argues that this transformation will happen more rapidly than workers will be able to adjust to it; so fast, in fact, that we should expect offshoring to be “one of the biggest political issues in economics over the next generation.” He points out that it’s harder to change occupations than to change jobs and suggests that this more difficult kind of adjustment will become increasingly common in the future. In contrast, commentator Robert Lawrence of Harvard University labels Blinder “Chicken Little.” “[C]hange will come slowly,” Lawrence assures the reader, a view echoed by his fellow commentator, Douglas Irwin of Dartmouth College.

Lawrence claims that his position is supported by “an overwhelming amount of empirical evidence,” but that is an exaggeration. International trade in services is in its infancy. Measurement of it is poor. Nevertheless, early data indicate that trade in services is not yet a serious problem. Kletzer (with J. Bradford Jensen) and others who are not represented in the book, such as Runjuan Liu and Daniel Trefler, have shown in recent papers that the effects on U.S. workers through the first half of the decade of the 2000s have been modest, even trivial. Moreover, their best estimates suggest that these effects have been positive, rather than negative, for domestic earnings and employment.

Whether an economist is willing to project these preliminary results into the future with confidence turns in large part on how he or she interprets the impact of the expansion of international trade in goods (as opposed to services) during the past several decades. Falling transportation costs, economic liberalization, and business-model innovation have driven a labor market adjustment in the U.S. manufacturing sector of roughly equal magnitude to that anticipated for the service sector, producing the much larger body of evidence to which Lawrence refers. How bad was it? “Nothing calamitous has happened,” Blinder concedes, reviewing the numbers.

But calamity is a subjective experience. What seems not so calamitous in the leafy precincts of Cambridge or Princeton may seem quite calamitous in Detroit or Cleveland. To be fair, Blinder goes on to say “I don’t think we … handled [the manufacturing adjustment] very well.” Kletzer takes a stronger position, labeling the costs “considerable.” Harvard’s Richard Freeman, another contributor to this volume, cites public opinion data that reveals Americans’ deep fear of offshoring. In a 2004 poll, 30% of workers reported that someone they knew had lost a job due to offshoring.

It is surprising, then, that this widespread concern has not coalesced into a serious political movement. The appearance of such a movement would be calamitous from the point of view of Jagdish Bhagwati of Columbia University, whose slight contribution is for some reason given the first slot in the volume, a sort of prebuttal to Blinder’s essay. Bhagwati is not especially interested in engaging the substantive arguments of his colleagues; his main concern is to head off the perception that anything less than unanimity on free trade reigns among economists. As Blinder says, “The thought police are on patrol.” And the volume shows how effective they are: None of the authors proposes any constraints on trade, no matter how big he or she believes the threat posed by offshoring to be.

What they do propose are policies that they would probably support under any circumstances. Freeman, who is furthest to the left, wants stronger unions, a more robust welfare state, and income redistribution. Irwin, manning the other end of the spectrum, is skeptical that government can make much of a difference. Lawrence and Kletzer can comfortably sign on to Blinder’s centrist proposals for a stronger safety net and for supporting creativity, innovation, and entrepreneurship, even though they disagree about the pace and impact of services offshoring. As Irwin says, “After getting us all excited about offshoring, [Blinder] offers us plain-vanilla remedies.”

Blinder’s only spicy offering is for U.S. educators to do more to prepare kids for jobs that he believes will be immune from foreign competition in services. “Nurses, carpenters, and plumbers,” he suggests, hold the kinds of jobs that will always have to be done in physical proximity to the customer, no matter how much technology improves.

This proposal, like Blinder’s data and analysis, is eviscerated by his colleagues, both left and right. The difficulties include predicting which jobs fall in this category, how well they might pay, and whether they might be subject to another of the modes of international competition in services, namely the physical migration of people to jobs and of jobs to people, such as in medical tourism.

The reader who shares Blinder’s intuitions about the scope and pace of the challenge, then, is left without much hope for addressing it. It is a testament to the power of the economic orthodoxy that the notion of a national strategy for competitiveness is never mentioned in the book.

Global governance to manage the tensions of economic globalization receives no attention either. U.S. economic hegemony is long gone. Yet whether the 21st century brings multipolar rivalry or new forms of cooperation, the U.S. government and its dominant intellectual paradigm seem woefully unprepared.

Much of this book was written in 2007. The financial crisis and “great recession” of the intervening years have cast some of the issues in a new light. U.S. politics have proven even more resistant to protectionist impulses than Bhagwati and company might have expected. Even in a period of high unemployment, not to mention rising uncertainty about social protections such as health insurance and pensions, no major-party candidate for president in 2008 advocated significant trade or immigration restrictions. Nor did a Ross Perot or Pat Buchanan give voice to nativist discontent through a third-party campaign, perhaps because this global crisis was so obviously made in America.

The prompt monetary and fiscal policy response to the crisis also deserves some credit for this outcome. In the 1930s, the length and depth of the depression did lasting damage to the idea and the reality of a liberal global economy. This time around, at least so far, the long-term forces that are propelling the expansion of international trade remain undisturbed. If anything, therefore, the issues highlighted in this volume, both explicitly and implicitly, are more important now than when they were written.

The symposium on which this book is based commemorates Alvin Hansen, a Harvard economist who helped bring Keynesianism to the United States in the 1930s and 1940s and is thus indirectly responsible for the relatively successful response to the current crisis. Hansen is also famous as a failed prophet, predicting that the country would fall back into depression at the end of World War II. Only Irwin among this volume’s authors invokes Hansen, and he suggests that Blinder, too, will prove to be a failed prophet. Perhaps so; the evidence points that way right now. But the evidence is limited and the underlying forces undeniably strong. This reader would feel a lot more comfortable if, along with prophecy and timeless verities on the virtues of free trade, economists offered fresh ways of thinking about the evolving global economy and how to govern it.

An -ology of technology

Postmillennial debate on the appropriate human uses of technology is still framed by the struggle between technophiles, who never met a technology they didn’t like, and Luddites, who manage to believe simultaneously that nature is on her last legs at the hands of technology and that she will have her revenge on us in the end.

Brian Arthur is a technophile’s technophile, an accomplished interdisciplinary scientist with degrees in, among other things, electrical engineering, operations research, math, and economics. He is well regarded for his work on the economic theory of increasing returns, which examines how small occurrences (such as the famous example of how the slightly longer capacity of VHS tapes enabled them to triumph over Betamax tapes in the dawn of the consumer videotape industry) can be magnified by positive feedback into large differences; and as one of the fathers of complexity theory, which argues for the desirability of studying complex systems in order to discover “emergent properties,” even if the discipline has come up with little in the way of tools for doing so. He has a strong relationship with the technophile world through his affiliations with Stanford University, the Palo Alto Research Center (PARC), and the mountain fortress of technophilia, the Santa Fe Institute. As John Seely Brown of PARC fame says on the cover of the book, “Hundreds of millions of dollars slosh around Silicon Valley every day based on Brian Arthur’s ideas.”

In The Nature of Technology, Arthur aims to develop a theory of technology, or as he puts it, an “-ology of technology.” A desire to create a correspondence with Darwin’s work on natural selection permeates this engaging book, and Arthur flirts with various links between his essay and Darwin’s. Is there an underlying unity to the apparently bewildering variety of technologies? How do existing technologies give rise to new ones? Is there a mechanism for explaining the origin of utterly novel technologies? Is there an analog in the universe of technologies to genes, mutation, and natural selection?

For Arthur, there are three basic principles of what he calls the “combinatorial evolution” of technologies:

  • Technologies inherit parts from the technologies that preceded them, so putting such parts together—combining them—must have a great deal to do with how technologies come into being. Technologies somehow must come into being as fresh combinations of what already exists.
  • Each component of technology is itself in miniature a technology.
  • All technologies harness and exploit some effect or phenomenon, usually several.

The first point provides a mechanism for deriving new technologies from existing ones. Starting from a mythical state of nature where humankind invented technologies such as sharp tools and fire from little but natural phenomena, a world increasingly dominated by technologies themselves has emerged, reflecting the generative power of using existing technologies to bootstrap new ones.

The last point, that technologies harness an effect or phenomenon, connects technologies to “nature,” since these phenomena are, for most technologies, natural. Jet-engine technology (an example used throughout the book) harnesses, among other things, Newton’s Third Law and the Bernoulli Effect.

The second point, although it fascinates Arthur and leads him to dwell on the “recursive,” “fractal,” and “autopoietic” properties of technology, adds little to the argument.

New technologies are produced from the field of existing technologies and effects in the same way that sentences in a language are produced from the grammar and vocabulary of the language, or in the same way that musical compositions are produced from pitches, rhythms, and timbres. And just as the body of existing music or language utterances influences new composition, the body of technology provides a field in which new technology combinations can be more easily derived.

Chapters 5 and 6 are my favorites. They take deep dives into what “standard engineering,” the everyday design problems that an engineer faces, has in common with profound innovation in technology and highlight a common thread: “A new project always poses a new problem. And the overall response—the finished project—is always a solution: the idea, a specific one, of an appropriate combination of assemblies that will carry out the given task … Because the overall solution must meet new conditions, its chosen assemblies at each level will need to be rethought—and redesigned to conform.”

A profound innovation works in exactly the same way but entails new phenomena or new uses of known phenomena: “At one end of the chain is the need or purpose to be fulfilled; at the other is the base effect that will be harnessed to meet it. Linking the two is the overall solution, the new principle, or concept of the effect used to accomplish that purpose. But getting the principle to work properly raises challenges, and these call for their own means of solution, usually in the form of the systems or assemblies that make the solution possible.”

There is much more. Arthur has some interesting thoughts about “domains” or systems of technologies, such as digital, electronic, or photonic. He has a nice phrase for domains as “worlds entered for what can be accomplished there.” He has some thoughts about the economy not simply as a container for its technology but as an evolving entity that reacts back with its technologies and changes itself over time. Stimulating ideas such as these are a good reason to read this book.

Ultimately, though, the whole edifice is not entirely satisfying. For one thing, the scholarship could have been stronger. Surely some economists besides Schumpeter have written about technology. Both Adam Smith and Karl Marx have had plenty to say about technology and the economy, but Arthur barely mentions them. Similarly, many relevant philosophers are curiously absent. Arthur discusses Heidegger a bit (and appropriately so), but Popper, for example, is nowhere to be found, or Foucault for that matter (although perhaps Arthur is to be forgiven for that). This wouldn’t be a shortcoming—after all, Arthur says “[t]his book is also not a review of the literature on technology”—if it didn’t have the effect of hiding “prior art” in these matters. Marx’s notion of the productive forces and his treatment of them prefigures Arthur’s by more than a century. There is a bit of old wine in new bottles here.

But lack of scholarship is a small blemish compared to the seeming unproductiveness of the results. The ultimate power and test of a theory are the predictions it affords and the countertheories it falsifies. Darwin’s theory opened up a whole new territory for biology by adducing a simple and profound mechanism whereby species could evolve from other species, thereby falsifying the theory that each species was unique. Did anyone before The Nature of Technology seriously doubt that technologies arise from other technologies and ultimately from the harnessing of natural phenomena?

Of course there is a virtue in saying it all in one place in the structure of an orderly argument, and Arthur is to be commended for this. But what predictive power becomes available to us as a result of this work?

Arthur does not ignore or shrink from this challenge and discusses a simulation he and Wolfgang Polak built to explore the “evolution” of more complex logic circuits from simpler ones. The simulation is a standard kind of genetic algorithm that embodies some of Arthur’s thinking about the evolution of new technologies, and Arthur says excitedly: “In the beginning there was only the NAND technology [a very simple form of logic gate]. But after a few tens and hundreds of combination steps, logic circuits that fulfilled simple needs started to appear … [A]fter sufficient time, the system evolved quite complicated circuits: an 8-way-XOR, 8-way AND, 4-bit Equals … In several of the runs the system evolved an 8-bit adder, the basis of a simple calculator.”
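
The combinatorial point itself is easy to illustrate by hand, outside the genetic-algorithm setting. The toy below builds richer logic functions from nothing but NAND; it is a hand-assembled sketch of the combination idea, not the Arthur-Polak simulator, which searches for such combinations automatically.

    # Building richer logic functions purely by combining NAND gates.
    def nand(a, b):
        return 1 - (a & b)

    def not_(a):
        return nand(a, a)

    def and_(a, b):
        return not_(nand(a, b))

    def or_(a, b):
        return nand(not_(a), not_(b))

    def xor(a, b):
        # (a OR b) AND NOT (a AND b), expressed entirely in NAND terms.
        return and_(or_(a, b), nand(a, b))

    # Check the derived XOR against its truth table.
    for a in (0, 1):
        for b in (0, 1):
            assert xor(a, b) == (a ^ b)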

This kind of “proof by simulation” has a modest history in “the -ology of technology.” Stanley Miller in the 1950s put together a few organic chemicals and passed a spark through them for days: Out came amino acids. John Conway in the 1970s put together a few “cellular-automaton” rules and a nifty visual display and ran his software for a few days: Out came “self-replicating patterns.” Doug Lenat in the 1980s put a few “general rules of inference” into Eurisko and ran it for a while: Out came the “discovery” of prime numbers. Sadly, these simulations prove little except that “if you put good precursors into a simulator you can get outputs that please you and sometimes things go well.” The Arthur-Polak simulator, like these others, gives intimations of higher structures emerging from lower, but in fact things do not progress beyond the first encouraging results. Lenat’s theorem prover “discovered” prime numbers but never got much further. Miller’s spark tubes produced amino acids but not much else. I would guess that Arthur’s and Polak’s program didn’t go much beyond 8-bit adders.

Why are some geographies or eras more favorable to technology innovation and some less? Are there inhibitory forces that retard the acceleration of technology, or are things just going to move faster and faster until we pass through something like Ray Kurzweil’s “singularity,” where we simultaneously discover immortality, download our bodies into arbitrary hardware, and cope with artificial intelligences much more capable than ourselves? Is automatic evolution of technology in fact possible? These are some questions that a theory of technology should answer, and although Arthur recognizes that they are appropriate questions, his theory does not help us much with predictions.

Perhaps Arthur is too much of a technophile to do what Marx did and look not just at the “forces of production” but also at the “relations of production”: the combination of class struggle, clashing of economic interests, politics, psychology, and pure human orneriness that, for Marx, defined how innovation would develop as much as or more than the autopoietic inner play of technology. Relations of production tell us that new anti-aging creams will undergo more frenzied evolution than cheap mosquito nets, that hems will rise and fall in order to keep selling skirts, and that food will rot in one corner of the globe while people go hungry in another. Surely it is in the intersection of these political, social, and cultural factors with the elegant structures of technology that a better theory of how we make what we make—and why we don’t sometimes—would be found.

Is Medical Technology the Villain?

Daniel Callahan’s Taming the Beloved Beast: How Medical Technology Costs Are Destroying Our Health Care System is both more and less than the title implies. More, in that it is a blunt, thought-provoking view of medical culture that raises difficult but essential questions about our values and public policy. Less, in that it lacks depth and nuance in its treatment of technology, limiting its utility in evaluating short-term policy issues. It is a particularly interesting read in this time of acrimonious health reform debate.

Callahan’s main focus is not technology per se but rather the evolution and prospects of the U.S. health care system as a whole. The challenge is formidable because the starting point is “a messy system, one ill-designed for reform because of the accretion of assorted interest groups with different agendas and vested interests, an ideologically divided public, and a steady stream of new and expensive technologies added to those already in place.”

At the heart of the book is the belief that health care is on an unsustainable trajectory of cost growth and resource consumption. When Callahan examines health care costs, he gravitates toward technology, which by various estimates accounts for up to half of cost growth. By extension, his policy imperative becomes how to manage technology costs.

Technology is the beloved beast: “It saves and improves our lives with its undoubted power to diagnose and treat but, in its unrestrained lumbering about in the house of medicine, increasingly wreaks financial havoc.” Callahan attributes that unrestrained lumbering in large part to a medical culture that assumes innovation to be a “central and untouchable value” so that “nothing is allowed to stand still in health care; it is always supposed to get better.” He characterizes much of the industry as focused on innovation and scientific advancement with little or no concern for escalating costs or resource consumption at a national level.

One of the most interesting discussions in the book is Callahan’s description of how innovation, research, business interests, and medical education meld into a self-reinforcing culture of continuous innovation and development. Although many would call this a treasured asset, he views it as a dynamic that creates untenable social risk. At the same time, he has a fine sense of realpolitik and pursues his case in the context of this question: “What if it is crucial for the long run that the golden-haired favorite not win, that the despised or dismissed contender—the one who looks a bit grim and hardly as pleasing as the crowned champion—is actually the most deserving?”

What is this despised or dismissed contender? Philosophically, it is a culture in which health care moves from an “infinity model” with open-ended medical progress and technological innovation as core values to one that is more limited in aspiration. In practical terms, this becomes a universal health system with global budgeting, conscious limits on the development and use of technology, and policies that distribute care resources based on an age-stratified, quality-adjusted life year (QALY)–mediated cost-effectiveness model. Callahan is particularly enamored of the European social health insurance (SHI) model as demonstrably superior in both health outcomes and cost management. Although he rightly notes the role of national policy, he fails to point out underlying characteristics of these systems—much smaller size, homogeneous demographics and culture, better-established social support systems—that contribute to their success but do not exist in the United States. The resulting sea change embodied in his proposals will require “fewer new technologies, especially those with marginal benefits, slower diffusion of expensive ones, a reduced dependence upon technology by physicians, and a willingness of patients to change their expectations and lower their demands.”

The upshot, and in many ways the most controversial element of Callahan’s policy framework, is the severe limiting of the use of expensive technologies and interventions to treat those who have lived a “full life,” nominally at or around age 80. Callahan summarizes the argument thus: “The baby boomers should know that if in their 80s they come to want the same level of technological care they received in their earlier years, they will simply not get it.” That view, writ large, becomes the driving force for a series of age-, condition-, and expense-based proposals at the heart of resource allocation in his proposed national health system. Callahan lays out preliminary ideas on how the logic of such a system might work; less time is spent on the process or politics of implementation.

The urgency behind the words is based in large part on the author’s philosophical bent and reflections. However, there are two other elements in play. The first is his belief that current approaches to cost containment and technology assessment are grossly insufficient, individually and collectively, to generate progress. Callahan devotes a very small fraction of the book to considering, and for the most part summarily dismissing, a wide range of initiatives. These include reduction of waste, inefficiency, and geographic variation; evidence-based medicine; reduction of medical errors; disease management; better management of chronically and critically ill patients; and information technology. In each case, the intervention is deemed too small, not politically grounded, not focused strongly enough on costs, or in some other way not up to the task.

However, as Callahan lays out his schema for public policy, in particular his “conciliatory” reform track, he allows that some of the above would be appropriate elements. More important, aside from a high-level description of cost-effectiveness evaluation utilizing QALYs (drawn in large part from the UK model employed by the National Institute for Clinical Excellence) and the constraints of a global budget, he proposes no other specific approaches, thus suggesting a decided preference for the saw over the scalpel.

The lack of depth in the discussion of technology is noticeable, and surprising given the title. With a few exceptions such as the discussion of left ventricular assist devices, Callahan spends little time defining the technology arena except in broad categories such as critical care, hospital care, high-cost pharmaceuticals, and biologics. Noticeably absent is any discussion of areas in which technology can truly be transformative, in which it can reduce costs and improve clinical quality on a scale that will have a major impact on U.S. health care. Callahan’s implicit conclusion appears to be that no such examples exist.

The combination of few or no perceived effective policy levers and no transformative technologies leads to a very simple and blunt technology policy: make it unavailable or permit it only when it passes a rigorous “sufficient evidence” test. In short, desperate times require painful measures.

One can, however, hold a fundamentally different and more positive view of both proposed initiatives and present and future technologies. With respect to current initiatives, Jack Wennberg, Donald Berwick, and many others have made a compelling case for the clinical and cost benefits of reducing practice variation and errors. Sean Tunis, identified by Callahan as a leading thinker in the area of comparative effectiveness, is making excellent progress on methodology and application with the Center for Medical Technology Policy. Although no single initiative will resolve the cost issue, it is equally true that their combined potential is powerful.

There are current and emerging technologies that also show early results and even greater promise for transformative performance. Technologies such as telemedicine, remote patient monitoring, medication optimization, computerized provider order entry, and electronic health records offer significant advantages in cost and quality of care. It is particularly important to note that the European SHI systems that Callahan most admires have been early and continuous investors in these very technologies as part of their own strategies. Likewise, the most aggressive use of, and compelling results from, these same strategies and technologies in the United States are coming from systems such as the Veterans Health Administration and Kaiser Permanente that are recognized as leaders in cost management and care of large populations.

The root causes of the nation’s long-term dilemma—an aging population, growth in chronic diseases, and persistent shortages of caregivers—that Callahan takes as his imperative to constrain technology may in fact be the most persuasive argument for technology. If one thing is clear, it is that we cannot expect to provide anything approximating reasonable capacity if we continue down a path of highly person-intensive, disease- and institution-based care models. Although it is unarguable that technology has contributed to cost escalation, it is also clear that technology can and will play an important role in redefining our care processes and resulting delivery system. Forcibly constraining R&D will make delivery system transformation more, not less, difficult.

This is a book well worth reading. It raises important and difficult questions and deals with them directly and unflinchingly. Its greatest strength is in identifying and expounding on important philosophical and social issues that underlie much of our national reform debate. Its greatest weakness is in underestimating the potential utility of current industry initiatives aimed at quality improvement and cost management, and the contribution of specific technologies to the transformation of health care. In total, Callahan performs a valuable service for all who think about and must shape the future of our delivery system.


Steven DeMello is the Director of Health Care at the Center for Information Technology Research in the Interest of Society (CITRIS) at the University of California, Berkeley.

Every Little Bit Counts

The world’s top search engine, a $175 billion corporation second only to Coca-Cola in name recognition worldwide, and a new active verb, Google knows what it’s about. But others aren’t so sure, and now that its critics, competitors, and fans—and, increasingly, the world’s courts—are keen to figure out what it is and where it’s headed, it may have some public explaining to do.

The big questions about Google’s future spring partly from the giant’s beginnings, and Ken Auletta’s Googled: The End of the World as We Know It is a searching and well-informed probe into both. For added measure, the book is also a platonic love story and a moral tale.

Auletta’s account of Google’s origins and early years shows a great notion—free delivery of content picked by datamining software and data users—taking shape in a graduate engineering school and taking flight in an atmosphere of absolute mutual trust and respect as remarkable as the big idea itself. Google founders Sergey Brin and Larry Page met at Stanford in 1995 just as the Internet was taking off. Soon, they were sparring non-stop, testing new ideas and prevailing theories, including the gospel on keyword search, and discovering that they both disdained authority and no-can-do thinking in general. Mentors from that time credit this pluck more than sheer brilliance—hardly scarce in Stanford’s engineering department—with Brin and Page’s success.

The technical breakthrough, and here Auletta quotes John Battelle’s The Search, was the pair’s “algorithm—dubbed PageRank after Larry—that manages to take into account the number of links into a particular site and the number of links into each of the linking sites.” From that moment on, keywords had to make room for the wisdom of crowds in search results, and the career road for the whiz kids forked, forcing on Brin and Page a hard choice between academe and full-bore entrepreneurship.
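To see the mechanics packed into that sentence, here is a minimal sketch of the idea in Python. It is an illustration of the published PageRank concept, not Google’s production code or anything drawn from Auletta’s book: each page’s score is divided among the pages it links to, and a damping factor models a searcher occasionally jumping to a random page.

    # Minimal PageRank sketch (illustrative only; not Google's actual implementation).
    def pagerank(links, damping=0.85, iterations=50):
        """links maps each page to the list of pages it links to."""
        pages = list(links)
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}
        for _ in range(iterations):
            # Every page gets a small baseline score from the "random jump".
            new_rank = {p: (1.0 - damping) / n for p in pages}
            for page, outlinks in links.items():
                if outlinks:
                    share = damping * rank[page] / len(outlinks)
                    for target in outlinks:
                        new_rank[target] += share
                else:
                    # A page with no outbound links spreads its score evenly.
                    for p in pages:
                        new_rank[p] += damping * rank[page] / n
            rank = new_rank
        return rank

    # Toy web: two pages point at B, so B ends up with the highest score.
    web = {"A": ["B"], "B": ["C"], "C": ["A", "B"]}
    print(pagerank(web))

In the toy example, page B ranks highest because two pages link to it; that is the sense in which the wisdom of crowds entered search ranking.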

Hyper-secretive about their new search engine (then called BackRub), Brin and Page left Stanford to make the rounds of potential funders, networking with Silicon Valley’s angel investors and first-generation search-engine companies. The fateful decision about whether to be an add-on or a start-up hung on their mutual distaste for rigging scientific searches by letting paid advertising pages percolate to the top. Above all, they would stay true to their algorithm.

And to each other. By 1998, Brin and Page had offices, incorporation papers, and one employee. Two years and one billion indexed webpages later, they had yet to turn a significant profit but had revenues of $19 million and a burning need—felt less by them than by their advisors—for a CEO to run Google’s daily business. A third party was also needed to challenge the founders’ like thinking lest engineering bravado trump everything else the young company required to thrive. In early 2001, capping a yo-yo courtship, Eric Schmidt signed on with Google after three years and a checkered record as Novell’s CEO. The adjustment to power-sharing and competing perspectives was huge and rocky at first—possible at all, Auletta contends, only because Schmidt kept his ego out of it—but with the troika in place conditions were ripe for a wave of flat-out brilliant money-making ideas.

The rest has made technological and business history. In early 2002, the beta-tested AdWords launched, allowing advertisers to display simple ads related to the text on the webpages viewers pulled up. A year later came AdSense, which pays websites to host AdWords ads related to their content, and then a string of innovations that extended Google’s reach far beyond the realm of search engines. The company’s policy of allowing its engineers to spend 20% of their paid time following their own hunches and inspirations was a catalyst for many of these new services: Gmail, Google Maps, Google Earth, Google Analytics, the Google Chrome web browser, Google Voice, and countless “mashups” using freely available Google data.

Besides the right team, Google long seemed to have an unbeatable philosophy. One tenet is the let-the-people-decide principle embedded in both Google’s basic computing formula and its ad scheme. As Auletta explains, “advertisers would rank higher on the search results page based not just on the price they bid per keyword, but on the number of clicks their ads received. The more clicks, the lower the price, and the higher they would rank.” Other features of Google’s business model also sound high-minded and simple, at least until applied. To quote Auletta, “you can trust your folks” (employees) and customers and “shoot for the moon, not the tops of trees.” And to Google, information is intrinsically good, helping people make better decisions about almost everything, not just about what to buy.
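To make that ranking rule concrete, here is a rough sketch in the same spirit. It is a simplified illustration of the principle as Auletta describes it, not Google’s actual auction, which also adjusts the price each winner pays: an ad’s position depends on its bid weighted by how often users click it, so a cheap, heavily clicked ad can beat an expensive one that users ignore.

    # Simplified sketch of click-weighted ad ranking (illustrative only).
    def rank_ads(ads):
        # Order ads by bid multiplied by click-through rate, so ads that users
        # click often can outrank higher-bidding ads that they ignore.
        return sorted(ads, key=lambda ad: ad["bid"] * ad["ctr"], reverse=True)

    ads = [
        {"name": "high bid, rarely clicked", "bid": 2.00, "ctr": 0.01},
        {"name": "low bid, often clicked", "bid": 0.50, "ctr": 0.08},
    ]
    for position, ad in enumerate(rank_ads(ads), start=1):
        print(position, ad["name"])

Here the lower bidder wins the top position because its expected clicks are worth more, which is the let-the-people-decide principle at work.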

Apparently, optimism and idealism sell. The company’s odd motto, “Don’t be evil,” has come to define how most consumers see Google, especially in relation to the purportedly more ruthless Microsoft (the subject of the author’s last book). Most, says Auletta, view Brin and Page’s brainchild as “an iconic brand, a force for good, a company that made search easy and fast and free; a company that retained its bold entrepreneurial spirit and was both a beneficent employer and a benefactor to shareholders.”

Some competitors take a different view, and the second half of Googled explains why. Auletta doesn’t have much new to say about the newspaper industry’s decline or about book publishing, but his bead on how the other media corporations missed the cues that radical change was afoot seems exactly right. Even though Marshall McLuhan told us decades ago that the medium is the message, not until Google bought YouTube did the traditional media begin to see just how blurred the line between content and distribution has become. And not until Google absorbed Android did the phone companies wake up. Especially interesting is Auletta’s account of how the pre-Google advertising industry is being pushed into a niche after owning the whole show.

Auletta’s riveting blow-by-blow exposes Google’s flaws and vulnerabilities along with its triumphs and the rigidity of its “frenemies.” Foremost is the hubris that comes from winning big before failing at anything. Related is utter reliance on academic credentials in hiring as bankable proxies for such essential but hard-to-quantify attributes as social grace, business instincts, and team spirit. A bit like the artificial intelligence Page and Brin found so fascinating at Stanford is a literal-mindedness at Google that seems to cut both ways—a brainy efficiency reigns, but the company can’t see itself as others do, which can be embarrassing and dangerous given what its marketing and advertising business is all about. And Google’s uneasy peace with China’s censors also looks to be a business mistake as well as an ethical lapse. (The company’s recent public criticism of Chinese practices suggests that company policy is being reconsidered.)

Auletta’s book has shortcomings too. As the hyperbolic subtitle makes clear, the author is awestruck by Google, and he’s no digital native. To those younger than Baby Boomers, Google—however admired—may represent the latest new thing or Microsoft’s heir as the coolest employer ever more than it does a transformative idea. Auletta acknowledges as much, but doesn’t dig deeply enough into what that possibility implies for Google’s future.

Following the money, Auletta also emphasizes Google’s business success at the expense of the engineering side of the story. True, Google guards its trade secrets ferociously—what happens at the Googleplex campus stays at the Googleplex—and the business story is irresistibly juicy. But many readers might want to know more about whether something Google-like was ripe to happen with or without Page and Brin. Although Auletta probably wrote the book he wanted to and should mostly be judged accordingly, more on the state of computational science before and during Google’s rise would have made this 270-degree surround truly definitive.

A final complaint is perhaps just a question. After Auletta invested years of research in this book, the finish line (Fall 2009) seems arbitrary when a few more months would have seen reportable action on the Google Book settlement and other historic privacy and copyright cases. Moving targets vex all in-the-moment journalism, but Googled already needs an update or a sequel.

Gaps aside, Auletta’s take on what makes Google unique and powerful, what puts it at risk now, and how these two factors mirror each other should spellbind B-school and engineering students and make the rest of us wonder where googling as a tool, a free good, and a business teetering between oligopoly and monopoly ends. (If you google Google, you get 2 billion citations and one ad—“Make Google your home page.”)

Archives – Spring 2010

Dome of the Great Hall, National Academy of Sciences

Hildreth Meière (1892-1961) was an influential decorative artist best known for her murals, wall sculptures, and other unique works in the Art Deco style. Meière was commissioned by architect Bertram Grosvenor Goodhue to create the emblematic figures that adorn the dome of the Great Hall of the National Academy of Sciences building in Washington, DC. These decorations illustrate the components of the physical world and celebrate the history of science as it was known in 1924, when the construction of the building was completed.

Recently there has been a revival of interest in Meière, prompted primarily by the first major exhibition of her work at St. Bonaventure University. Curated by Catherine C. Brawer, the exhibition Walls Speak: The Narrative Art of Hildreth Meière runs from September 4, 2009, through June 13, 2010, at the Beltz Gallery and Kenney Gallery, St. Bonaventure University, New York.

Expanding Innovation

Innovation is good. Everyone says so. It will increase worker productivity and thus the world’s wealth and the value of work. It will cure deadly diseases and ease the ailments that afflict us as we age. It will enable us to tap the unlimited resources of renewable energy and to use our finite resources of soil, water, plants, and minerals more sustainably. It will expand our access to knowledge and the arts and provide us with new tools with which to express our own creativity.

Of course, it is not that simple. Innovation has enabled us to devastate some fish populations and drain some aquifers, increase the time demands of many jobs and sap the creativity from others, provide tools that enable a small band of terrorists to do unimaginable damage, and create a global financial system of mind-boggling power and worrisome fragility. Nevertheless, virtually all the world’s nations and their leaders see innovation as a force that they want to nurture and direct to meet the needs and aspirations of their people.

Stimulating innovation and directing it to the proper ends is not easy. To begin with, innovation entails much more than inventing clever new gadgets. Even with gadgets, it involves design to make the gizmos appealing, safe, and easy to use; manufacturing processes that avoid waste and ensure quality; marketing systems for distribution; business management systems that coordinate all these activities; financial mechanisms to provide capital where it can be best used; government policies that protect workers, preserve intellectual property, and facilitate trade; and an education system that will provide the skills needed to keep all of these activities going in the future. And all of these activities will be necessary to accomplish similar goals in the proliferating and expanding services sector.

The articles in this tour of innovation policy around the world illustrate how a variety of countries that differ in size, history, political culture, and many other ways are finding their own way to pursue the shared goal of innovation. Although countries can learn from observing the experience of other countries, no single template for innovation policy exists. The United States, Brazil, Singapore, and Ireland differ in obvious and subtle ways, and those differences are reflected in the innovation strategies they have adopted. They share a concern with education, research, capital formation, global market position, and government regulation, but what they do in each of these areas varies considerably. Understanding these differences is the key to understanding why each of these countries is now successful at expanding its innovative capacity. And the differences are not only among countries but within each country over time.

Brazil is a country that has been reinventing itself during the past half century. In the 1960s, Brazil had no graduate education programs, and undergraduate teaching was not a full-time job. Students seeking graduate degrees had to go abroad. In 1968, the country began reforming the federal university system, and the country now has tens of thousands of people with graduate degrees and a productive domestic research system. The country also began developing technological strength with initiatives in a number of key areas, most notably Embrapa in tropical agriculture, Petrobras in deep-water oil drilling, and Embraer in commercial, defense, and executive aviation. A financial crisis in the 1990s devastated innovation programs, but the government responded by creating sectoral funds that were collected from specific activities such as extraction of natural resources and designated for programs to promote innovation. These funds grew rapidly during Brazil’s economic recovery and continue to fuel what has become a large and diverse research and innovation program.

Singapore’s road to progress also began in the 1960s with a strategy built on foreign direct investment and export-led growth based on low-tech manufacturing. It also invested heavily in educating its people, which made it possible to move up to more sophisticated manufacturing, such as electronics and pharmaceuticals, and into new areas such as supply management and research. A small country with limited natural resources, Singapore recognized that the skills of its people would be a key to its success and that it would have to focus on just a few areas of expertise. Beginning around 2000, Singapore placed a major emphasis on developing expertise and capacity in the biomedical sciences. It soon became a regional power in this field and now it is establishing itself as a significant global player and a site for investment for many of the world’s largest multinational medical companies.

Ireland shares some experiences with Singapore and Brazil. Roughly the same size as Singapore and similarly lacking in natural resources, Ireland emphasized education as a key ingredient of success and used the quality of its workforce as a magnet for foreign direct investment. It quickly moved up the ladder from simple manufacturing to more sophisticated products to a wide variety of business and financial services. It also began the process of developing world-class expertise in a number of key research areas such as biotechnology. With one of the world’s fastest-growing economies, Ireland was particularly vulnerable when the recent global financial crisis arrived. The government has had to reduce spending in many areas, but like Brazil it has not forgotten the importance of research and innovation for its long-term economic well-being. In spite of its current difficulties, Ireland is sticking with its program of building its economy on a strong foundation of education, research, and innovation.

Widely recognized as the world leader in innovation, the United States faces a different challenge. With dozens of countries eagerly trying to follow U.S. success in innovation-led growth, the country is under mounting pressure to increase its pace of progress. The Obama administration is pursuing a strategy that aims to avoid the extremes of laissez-faire government detachment and heavy-handed federal interference.

It seeks to use education, research, infrastructure investment, and regulation to create a market environment in which new products and services can emerge quickly and reach consumers at a competitive cost. And as it moves to shift the U.S. innovation system into a higher gear, it will also have to create the institutional structure that will be necessary to update policies and provide continuity.

The data presented by Andreas Schleicher in the “Real Numbers” feature in this issue should capture the attention of U.S. leaders. President Obama has expressed his desire to see the United States reclaim its position as the world leader in educating its entire population, but the data indicate that the country has been barely treading water while a number of countries have made stunning progress in just the past decade. Lip service to the importance of quality education is plentiful in the United States, but effective action to improve schooling is much more common in many countries in Europe and Asia. This appears to be a case where the United States needs to follow the lead of others.

Finally, in the midst of this pell-mell stampede to promote innovation and increase economic output, Patrick Cunningham reminds us that production alone is not an adequate measure of a nation’s well-being. Although wealth does contribute to national happiness and satisfaction with life, it is more an enabler than a real goal. When we talk about innovation policy, we should also be thinking about all that government can do to promote the well-being of its citizens. Education can enrich lives intellectually and culturally as well as make workers more productive; research can uncover the secrets of physical and psychological health as well as develop new medical technologies; businesses can be managed more effectively not just to lower costs but also to increase worker satisfaction. Social innovation deserves as much attention as manufacturing and commercial services.

United States: A Strategy for Innovation

On September 21, 2009, President Obama released his Strategy for American Innovation, unveiling it in a major policy address in Troy, New York. The goal of the strategy is to establish the foundation for sustainable growth and the creation of quality jobs. Although the private sector is responsible for job creation and for developing new products, services, and processes, the strategy identifies three critical roles for the federal government. First, the federal government must invest in the building blocks of innovation, such as fundamental research, human capital, and infrastructure. Second, the government must create the right environment for private-sector investment and competitive markets by, for example, promoting exports, reforming export controls, encouraging high-growth entrepreneurship, ensuring that financial markets work for consumers and investors, protecting intellectual property rights, and promoting regional innovation clusters. Third, the government should serve as a catalyst for breakthroughs related to national priorities such as clean energy, health care, and other grand challenges of the 21st century.

Despite the U.S. economy’s historic strength, its economic growth has rested for too long on an unstable foundation. Explosive growth in one sector of the economy has provided a short-term boost while masking long-term weaknesses. In the 1990s, the technology sector climbed to new heights, only to fall back to earth at the end of the decade. The tech-heavy NASDAQ composite index rose more than 650% between 1995 and 2000, but then lost two-thirds of its value in a single year.

After the tech bubble burst, a new one emerged in the housing and financial sectors. The formula for buying a house changed. Instead of saving to buy their dream house, many Americans found they could take out loans that by traditional standards their incomes could not support. The financial sector willingly propped up real estate prices, funneling money into real estate and finding innovative ways to spread the credit risk throughout the economy. Between 2000 and 2006, U.S. house prices doubled while the financial sector grew to account for fully 40% of all corporate profits.

This too proved to be unsustainable. House prices lost a quarter of their value in two and a half years. The housing decline and accompanying stock market collapse wiped out more than $13 trillion in wealth in 18 months. The bursting of the bubble based on inflated home prices, maxed-out credit cards, overleveraged banks, and overvalued assets wreaked havoc on the real U.S. economy, triggering what is expected to be the longest and deepest recession since World War II and driving the unemployment rate to its highest level in a quarter century.

This type of growth isn’t just problematic when the bubble bursts; it is not entirely healthy even while it lasts. Between 2000 and 2007, the typical working-age U.S. household saw its income decline by nearly $2,000. As middle-class incomes sank, the incomes of the top 1% skyrocketed. This phenomenon has a number of causes, but among them were rising asset prices and the outsized growth of financial-sector profits.

A short-term view of the economy masks underinvestments in essential drivers of sustainable, broadly shared growth. It promotes temporary fixes over lasting solutions. This is patently clear when looking at how U.S. education, infrastructure, health care, energy, and research—all pillars of lasting prosperity—were ignored during the last bubble.

Education. Too many children are not getting the world-class education they deserve and need to thrive in this new innovative economy. Despite research documenting that quality matters greatly in early childhood education settings and that investments in high-quality early learning have the highest potential rates of return, the federal government has not invested at the level needed to transform the quality of and enhance access to early education for the youngest children. Studies show that there is a school-readiness gap as early as kindergarten between children from the highest socioeconomic backgrounds and their less affluent peers.

Americans have neglected to provide their children with the rigorous curriculum and instruction needed to prepare them for college and careers. By the end of high school, African American and Latino students have math and reading skills equivalent to those of 8th grade white students. Across the nation, the students with the greatest need for a qualified and effective teacher are also exactly those students most likely to be taught by teachers who lack sufficient background in the subject they teach. The problems persist when students look toward continuing their education past high school. The average tuition and fees at public, four-year institutions rose 26% between the 2000-2001 school year and the 2008-2009 school year. As a result, whereas 94% of U.S. high school students in the top quintile of socioeconomic status continue on to postsecondary education, barely half of those in the bottom quintile do so. Because of the rising costs of four-year institutions, many Americans are turning to community colleges for quality higher education. Yet the federal government has historically underinvested in community colleges, giving them one-third the level of support per full-time equivalent student that it gives to public four-year colleges.

Infrastructure. The nation’s physical and technological infrastructure has been neglected, threatening the ability of U.S. businesses to compete with the rest of the world. The American Society of Civil Engineers assigns a “D” to the country’s physical infrastructure. In 2007, drivers on clogged U.S. highways and streets experienced more than 4.2 billion hours of delay and wasted 2.8 billion gallons of fuel. The United States once led the world in broadband deployment, but now that leadership is in question. Wireless networks in many countries abroad are faster and more advanced. The U.S. electrical grid is still based on the same model employed immediately after World War II. Power interruptions and outages cost individuals and businesses at least $80 billion each year.

Health care. U.S. health care costs have been allowed to spiral out of control, squeezing individuals and businesses at a time when they are feeling pressure on all sides. Since 2000, health insurance premiums have increased about 60%, 20 times faster than the average U.S. worker’s wage. At the same time, the number of uninsured Americans has jumped by 7 million to 46 million. Overall, health care is consuming an increasing amount of the nation’s resources. In 1970, health care expenditures were 7% of GDP; now they are 16% of GDP and at this rate will hit 20% of GDP by 2017.

Energy. The U.S. economy has remained dependent on fossil fuels, exposing consumers and businesses to harmful price shocks, threatening economic and national security, and resulting in a missed opportunity to lead the clean energy economy of the future. Between 1999 and 2004, the production tax credit for renewable energy was allowed to expire on three separate occasions. In each subsequent year (2000, 2003, and 2004), new wind capacity additions in the United States fell by more than 75% from the year before. Instead of focusing on finding ever more fossil fuels, other countries made aggressive investments in renewable energy, thus creating jobs and growing domestic energy sources.

R&D. The United States has compounded its long-term economic challenges by ignoring essential investments in high-technology research that will drive future growth. During the past four decades, federal funding for the physical, mathematical, and engineering sciences has declined by half as a percent of GDP (from 0.25% to 0.13%) while other countries have substantially increased their research budgets.

Despite this underinvestment in key drivers of growth, the U.S. economy remains the most dynamic, innovative, and resilient in the world. The United States still has world-class research universities, flexible labor markets, deep capital markets, and an energetic entrepreneurial culture. Americans are twice as likely as adults in Europe and Japan to start a business with the intention of growing it rapidly. The United States must redouble its efforts to give its world-leading innovators every chance to succeed. It cannot rest on its laurels while other countries catch up.

Amidst the worst recession since the Great Depression, the administration’s initial economic objective has been to rescue the economy. The nation has taken, and will continue to take, bold and aggressive steps to stabilize the financial system, jumpstart job growth, and get credit flowing again. But as the economy stabilizes, the United States is moving from rescue to recovery. Reflecting on the lessons of the past, the nation must build a new foundation for durable, sustainable expansion in employment and economic growth.

Innovation is at the core of that new foundation. Robert Solow won the Nobel Prize in economics in part by showing that factors other than capital intensification, in particular human knowledge and technology, accounted for almost 90% of the growth in U.S. output per hour in the first half of the 20th century. Economic growth research shows that human skill and innovation are the most powerful forces for improving prosperity over the long run, which is exactly what we need.
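For readers who want the arithmetic behind that claim, the standard growth-accounting decomposition on which Solow’s estimate rests (a textbook sketch, not part of the administration’s strategy document) is

    \frac{\Delta y}{y} \approx \frac{\Delta A}{A} + \alpha \, \frac{\Delta k}{k}

where y is output per hour, k is capital per hour, α is capital’s share of income, and A is total factor productivity, the catch-all term for knowledge and technology. In Solow’s original 1957 estimates covering 1909 to 1949, the ΔA/A term accounted for roughly seven-eighths of the growth in output per hour, which is the “almost 90%” cited above.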

Given its importance, the process of innovation cannot be taken for granted. It begins with the development of a new product, service, or process. But it does not end there. To create value, a new idea must be implemented. Thus successful innovations will diffuse throughout an economy and across the world, affecting various sectors and sometimes even creating new ones. A diffused innovation must then scale appropriately, reaching an efficient size at which it can have a maximal impact.

The full process, from development to diffusion to scaling, has many variables and many inputs. Ideas often fail before they make it through the full chain. But those that do can create value and jobs while improving people’s lives. It is essential for the nation’s long-run prosperity that innovations be allowed to flourish and progress along this chain. And here government has a fundamental role to play.

The appropriate role for government

Although it is clear that a new foundation for innovation and growth is needed, the appropriate framework for government involvement is still debated. Some claim that the laissez-faire policies of the past decade are the right strategy and that the recent crisis was the result of too much rather than too little government support. This view calls for cutting government regulation and gutting public programs, hoping the market will take care of the rest.

However, the recent crisis illustrates that the free market itself does not promote the long-term benefit of society and that certain fundamental investments and regulations are necessary to promote the social good. This is particularly true in the case of investments for R&D, where knowledge spillovers and other externalities ensure that the private sector will underinvest, particularly in basic research.

Another view is that the government must dominate certain sectors, protecting and insulating those areas thought to be drivers of future growth. This view calls for massive, sustained government investment supported by stringent oversight, dictating the type and direction of both public and private investments through mandates and bans.

But historical experience in this country and others clearly indicates that governments that try to pick winners and drive growth too often end up wasting resources and stifling innovation. This is in part due to the limited ability of the government to predict the future, but also because such exercises are distorted by lobbyists and rent seekers, who are more likely to favor backward-looking industries than forward-looking ones. In the United States, such failures at picking winners and losers most prominently include the Synthetic Fuels Corporation, a $20 billion project in the 1980s that failed to provide the promised alternative to oil.

Therefore, the Obama administration rejects both sides of this unproductive and anachronistic debate. The true choice in innovation is not between government and no government, but about the right type of government involvement in support of innovation. A modern, practical approach recognizes both the need for fundamental support and the hazards of overzealous government intervention. The government should make sure individuals and businesses have the tools and support to take risks and innovate, but should not dictate what risks they take.

The United States proposes to strike a balance by investing in the building blocks that only the government can provide, by setting an open and competitive environment in which businesses and individuals can experiment and grow, and by providing extra catalysts to jumpstart innovation in sectors of national importance. In this way, we will harness the inherent ingenuity of the American people and a dynamic private sector to generate innovations that help ensure that the next expansion is more solid, broad-based, and beneficial than previous ones.

A strategy for U.S. innovation

For local communities and the country at large to thrive in this new century, the nation must harness the spirit of innovation and discovery that has always moved the country forward. The United States must foster innovation that will lead to the technologies of the future, which will in turn lead to the industries and jobs of the future.

President Obama has already taken historic steps to lay the foundation for the innovation economy of the future. In the American Recovery and Reinvestment Act alone, the president committed more than $100 billion to support groundbreaking innovation with investments in energy, basic research, education and training, advanced vehicle technology, innovative programs, electronic medical records and health research, high-speed rail, the smart grid, and information technology. His commitment also includes broader support in the Recovery Act and in his fiscal year (FY) 2011 budget for initiatives ranging from education to infrastructure. The president’s commitment is not limited to government funding but extends to important regulatory and executive actions such as patent reform, coordinated fuel efficiency standards, net neutrality, permit policy for offshore wind farms, and the appointment of the government’s first chief technology officer. The Obama innovation strategy has three parts: investing in the building blocks of innovation, promoting competitive markets that spur productive entrepreneurship, and catalyzing breakthroughs for national priorities.

Investing in the building blocks of American innovation. President Obama is committed to making investments that will foster long-term economic growth and productivity, such as R&D, a skilled workforce, a leading physical infrastructure, and widely available broadband networks. This commitment is evident in the Recovery Act, which provided an $18.3 billion increase in R&D, the largest increase in our nation’s history. Recognizing the need for long-term and sustained investments in R&D, President Obama has pledged to complete a planned doubling of the funding of three key science agencies: the National Science Foundation (NSF), the National Institute of Standards and Technology, and the Department of Energy’s (DOE’s) Office of Science. In an address at the National Academy of Sciences, the president called for public and private investment in R&D to surpass 3% of GDP, which would exceed the level achieved at the height of the space race. As the president noted, “science is more essential for our prosperity, our security, our health, our environment and our quality of life than it has ever been before.” To encourage private-sector investment in R&D, the president has proposed making the research and experimentation tax credit permanent.

The president’s FY 2011 budget provides a 5.9% increase in civilian R&D, with significant increases for biomedical research supported by the National Institutes of Health, the physical sciences and engineering, and multiagency research initiatives such as the U.S. Global Change Research Program. The administration is also working to increase the impact of this investment by providing greater support for university commercialization efforts, for high-risk, high-return research, for multidisciplinary research, and for scientists and engineers at the beginning of their careers. For example, the NSF’s FY 2011 budget proposes to double the Partnerships for Innovation program, which will help universities move ideas from the lab to the marketplace. Under the leadership of Regina Dugan, the Defense Advanced Research Projects Agency has embraced its mission to sponsor revolutionary, high-payoff research. The National Aeronautics and Space Administration is pursuing a bold new approach for space exploration and discovery, and will dramatically increase its support for game-changing technologies such as advanced engines for launch and in-space travel, super-lightweight structures, robotic missions, new entry systems, space resource processing, and radiation protection for people and space systems.

To help ensure that the United States has a world-class workforce with 21st century skills, President Obama has launched a series of initiatives to reform the educational system. The Department of Education’s Race to the Top program is providing more than $4 billion in funding to support a national competition among states to improve schools. As part of his FY 2011 budget, the president is proposing an additional $1.3 billion for Race to the Top and will expand the program to include local school districts. In November 2009, the president launched the Educate to Innovate initiative to encourage more boys and girls to excel in science, technology, engineering, and mathematics (STEM) subjects. He committed to hold an annual science fair at the White House with the winners of national competitions in science and technology. As he noted, “If you win the NCAA championship, you come to the White House. Well, if you’re a young person and you’ve produced the best experiment or design, the best hardware or software, you ought to be recognized for that achievement, too.” Companies, foundations, and nonprofit organizations have already pledged $500 million in financial and in-kind support for this effort. Volunteers are signing up for grassroots initiatives such as National Lab Day, which will match teachers with scientists and engineers to bring hands-on science projects into the classroom.

Furthermore, President Obama has set a national goal of once again having the highest proportion of college graduates in the world. To reach that goal, he has proposed nearly doubling the amount of Pell Grant scholarships available to 9 million students. He signed into law the $2,500 American Opportunity Tax Credit and is now working to make it permanent, which would give families up to $10,000 over four years for college. He has proposed a $12 billion American Graduation Initiative to help community colleges improve their quality, work with businesses, improve transfer rates, and support working students.

To connect people and businesses, the president has made large investments in the nation’s roads, bridges, transit, and air networks. The Recovery Act invested heavily in the smart grid, high-speed rail, and the nation’s highways and mass transit systems. The president’s FY 2011 budget includes $4 billion for a National Infrastructure Innovation and Finance Fund to support projects of regional or national significance, an additional $1 billion for high-speed rail, and a more than 30% increase for the Next Generation Air Transportation System, which will improve the efficiency, safety, and capacity of the aviation system by moving toward a more accurate satellite-based surveillance system.

The Obama administration is committed to expanding access to broadband networks. This is essential for economic growth and job creation. It also has the potential to reduce energy consumption through telework, allow working adults to acquire new skills through online learning, improve communications networks for first responders, and foster rural economic development. The Recovery Act provided $7.2 billion for broadband grants and loans through the Department of Commerce and the Department of Agriculture, and the Federal Communications Commission is hard at work on its National Broadband Plan.

To foster the next wave of innovation in information and communications technologies, the administration is supporting research in areas such as cybersecurity, cyber-physical systems, efficient programming of parallel computing, quantum computing, and nanoelectronics that will extend the rapid rate of progress known as Moore’s Law for decades to come.

Promoting competitive markets that spur productive entrepreneurship. The Obama administration believes that it is imperative to create a national environment that is ripe for entrepreneurship and risk taking and that allows U.S. firms to compete and win in the global marketplace. The administration is pursuing policies that will promote U.S. exports, support open capital markets, encourage high-growth entrepreneurship, invest in regional innovation clusters, and improve the patent system. The administration also strongly supports public sector and social innovation.

In his 2010 State of the Union address, the president set a goal of doubling U.S. exports over the next five years, which will support 2 million jobs. The president’s FY 2011 budget provides a 20% increase for the Department of Commerce’s International Trade Administration (ITA). As part of the National Export Initiative, a broader federal strategy to increase U.S. exports, ITA will strengthen its efforts to promote exports from small businesses, help enforce free trade agreements with other nations, fight to eliminate barriers to the sales of U.S. products, and improve the competitiveness of U.S. firms. The president has also directed the National Economic Council and the National Security Council to review the overall U.S. export control system and to consider reforms that enhance U.S. national security, foreign policy, and economic security interests. Although the United States has one of the most robust export control systems in the world, it remains rooted in the Cold War era of more than 50 years ago. It must be updated to address today’s threats and the changing economic and technological landscape.

One lingering difficulty of the recession is that many small businesses cannot access the capital they need to operate, grow, and create new jobs. To encourage high-growth entrepreneurship, the president is proposing $17.5 billion in loan guarantees for small businesses, increased incentives for small businesses to invest in plant and equipment, and a permanent elimination of capital gains taxes for investors who make long-term investments in small businesses.

Competitive, high-performing regional economies are the building blocks for national growth, and the administration is stepping up its efforts to cultivate regional economic clusters across the country. The president’s budget supports growth strategies based on stronger regional clusters of innovation through funding in the Economic Development Administration, the Small Business Administration, the Department of Labor, the Department of Education, and the DOE. For example, in early 2010 the administration announced a $130 million competition for an Energy Regional Innovation Cluster. This pilot project is designed to spur regional economic growth while developing energy-efficient building technologies, designs, and systems. It will allow a region to develop a strategy that includes support for R&D, infrastructure, small and medium-sized enterprises, and workforce development.

The administration is committed to ensuring that the U.S. Patent and Trademark Office (PTO) has the resources, authority, and flexibility to administer the patent system effectively and issue high-quality patents on innovative intellectual property, while rejecting claims that do not merit patent protection. Currently, patent applicants wait almost three years on average to receive their patents. The president’s budget would improve processing times by providing the PTO with a 23% increase to hire additional staff and modernize IT infrastructure.

Innovation must occur within all levels of society, including the government and civil society. The administration is committed to increasing the ability of government to promote and harness innovation. The administration is encouraging departments and agencies to experiment with new technologies such as cloud computing that have the potential to increase efficiency and reduce expenditures. The federal government should take advantage of the expertise and insight of people inside and outside the government, use high-risk, high-reward policy tools such as prizes and challenges to solve tough problems, support the broad adoption of community solutions that work, and form high-impact collaborations with researchers, the private sector, and civil society.

The administration launched the White House Open Government Initiative to coordinate open government policy, support specific projects, and design technology platforms that foster transparency, participation, and collaboration across the executive branch. The initiative has achieved many important milestones, including publishing government data online to make it easy for anyone to remix and reuse; challenging thousands of federal employees to propose ideas for slashing the time required to process veterans’ disability benefits; releasing information on executive branch personnel and salaries; and making it easier to track the performance of the government’s IT spending.

Catalyzing breakthroughs for national priorities. President Obama is committed to harnessing science, technology, and innovation to unleash a clean energy revolution, improve the health care system, and address the grand challenges of the 21st century.

To support U.S. leadership in clean energy while tackling the threat posed by climate change, the administration is making major investments in energy efficiency, the smart grid, renewable energy, advanced vehicle technology, next-generation biofuels, and nuclear energy. For example, the president’s FY 2011 budget provides $5 billion in tax credits to spur manufacturing of clean energy technologies and $40 billion in loan guarantee authority for nuclear energy, energy efficiency, and renewable energy projects. These loan guarantees will help prove the technical viability of various promising technologies in the early stages of their commercial deployment so that they can later thrive in the marketplace without government support. It provides $300 million for the Advanced Research Projects Agency-Energy, which is charged with supporting projects that can transform the way we generate, store, and utilize energy. It also provides support for RE-ENERGYSE, a DOE/NSF educational effort to inspire tens of thousands of young Americans to pursue careers in clean energy.

Another important presidential priority is health care. Broad use of health IT has the potential to improve health care quality, prevent medical errors, increase the efficiency of care provision and reduce unnecessary health care costs, increase administrative efficiencies, expand access to affordable care, and improve population health. The Recovery Act provides more than $19 billion in investments to support the deployment of health IT, such as electronic health records. The Office of the National Coordinator for Health Information Technology and the Centers for Medicare and Medicaid Services are working to ensure that health information technology products and systems are secure, can maintain data confidentiality, can work with other systems to share information, and can perform a set of well-defined functions.

Finally, the administration believes that grand challenges should be an important organizing principle for science, technology, and innovation policy. They can address key national priorities, catalyze innovations that foster economic growth and quality jobs, spur the formation of multidisciplinary teams of researchers and multisector collaborators, bring new expertise to bear on important problems, strengthen the social contract between science and society, and inspire students to pursue STEM careers. The president’s innovation strategy sets forth a number of grand challenges, such as solar cells as cheap as paint, educational software that is as compelling as the best video game and as effective as a personal tutor, and early detection of diseases from a saliva sample. The National Economic Council and the Office of Science and Technology Policy are encouraging multisector collaborations to achieve these grand challenges that might involve companies, research universities, foundations, social enterprises, nonprofits, and other stakeholders.

The administration is working closely with the National Academy of Engineering (NAE) on this initiative. The NAE has identified 14 engineering grand challenges associated with sustainability, health, security, and human empowerment, such as providing access to clean water, engineering better medicines, securing cyberspace, and restoring and improving urban infrastructure. These grand challenges are already beginning to have an impact on undergraduate education. Twenty-five universities have decided to participate in the Grand Challenge Scholars Program. Undergraduate students at these campuses will be able to tackle these problems by integrating research, an interdisciplinary curriculum, entrepreneurship, international activities, and service learning.

The way forward

Thanks to President Obama’s leadership, the administration has made large strides in developing and implementing an ambitious innovation agenda. This commitment to investing in America’s future that was evident in the Recovery Act continues in the president’s most recent budget, with sustained support for research, entrepreneurial small businesses, education reform, college completion, and a 21st century infrastructure.

The administration is working with a wide range of stakeholders to identify the most promising ideas for implementing and further refining its innovation strategy. There are active interagency working groups on issues such as prizes and challenges, regional innovation clusters, research commercialization, spectrum reform, broadband, open government, and standards. The National Science and Technology Council is leading multiagency research initiatives in dozens of critical areas such as aeronautics, genomics, green buildings, nanotechnology, quantum information science, robotics, and information technology. Through the President’s Council of Advisors on Science and Technology, the administration is able to receive high-quality advice from the nation’s leading scientists, engineers, and innovators on issues such as health IT, advanced manufacturing, clean energy, and STEM education.

The United States has always been a nation built on hope, the hope that it can build a prosperous, healthy world for today and for posterity. These long-standing aspirations depend critically on farsighted investments in science, technology, and innovation that are the ultimate act of hope and will create the most important and lasting legacies.

The United States is still the land of the future. It has held that honor since this continent was discovered by a daring act of exploration more than 500 years ago. It has earned it anew with each passing generation because U.S. scientists, entrepreneurs, and public officials have understood the importance of applying the power of curiosity and ingenuity to the biggest economic and societal challenges.

Forum – Spring 2010

Better, more affordable health care

In “Better U.S. Health Care at Lower Cost” (Issues, Winter 2010), Arnold Milstein and Helen Darling make an excellent case for improving U.S. health care by applying techniques developed during numerous studies, some of which have been used successfully in clinical settings. What has prevented them from being applied to Medicare and Medicaid? The government is already paying almost half of U.S. health care costs, so it should have the leverage to insist that these techniques be applied, at least in selected test scenarios.

Perhaps more feedback from the recipients of Medicare would help. My wife and I have been covered by Medicare for 16 years. She is quadriplegic, survived breast cancer, and is fed through a gastrostomy tube. We watch with amazement the data flow from bill to Medicare to supplemental insurance carrier. There appear to be standard treatment codes, but no relation between the amounts billed and paid. We’re encouraged to report discrepancies, but hospital bills include a (very large) lump sum for supplies. It takes special effort to request an itemized bill and then it is difficult to interpret. Payment for the skilled surgeon was reduced sharply, but unnecessary nursing consulting services were paid in full.

Our most ridiculous case is plastic syringes for my wife’s feeding that are billed at $150 each, can be purchased at retail for less than $20, and are paid by Medicare at $5.93. Even then the vendor tries to persuade us to use more of them, once asserting that Medicare insisted on providing one per day. My wife’s nourishment, Jevity, is billed at $10.79 per can with Medicare paying $1.95. I’ve complained to both the vendor and Medicare. They both say it doesn’t matter what is billed, because they’re only paid the agreed amount. So why is it billed at a ridiculous amount? Such anomalies usually indicate a flaw in the system somewhere.

We’ve avoided most of the duplicate tests, because we use a set of physicians that usually cooperate. Nevertheless, rather than Celebrex, which insurance would not reimburse, my physician recently prescribed a generic for Voltaren. He didn’t know that four years ago another physician’s Voltaren prescription caused my hiatal hernia. I learned that it was a Voltaren generic only after a diagnosis of stomach bleeding.

When my late father was hospitalized in an emergency 12 years ago, the hospital refused to consult his primary physician because he was not on their staff. They were unaware that my father needed Aricept, a medicine to slow the progression of dementia, because he had forgotten why he was taking it!

These experiences support many of the authors’ recommendations. I recommend incorporating more feedback from Medicare and Medicaid recipients into planning improvements. While Congress debates expanding care to the presently uninsured and counts on Medicare savings to provide part of the funds, it must ask why such improvements have not already been made, or at least tried, in Medicare. Does the United States have to incur trillion-dollar deficits before we are motivated to fix the system?

VICTOR VAN LINT

La Jolla, California


Calming nuclear jitters

For more than a decade, foreign policymakers and international relations academics have lamented the growing gulf between their fields. The foreign-policy people have complained that the academy remains aloof, ignoring real world problems and instead focusing on increasingly abstruse theorizing, ornate formal models, and quantitative noodling.

The other problem, though, is that policymakers show little interest in investigating the cause-and-effect assumptions that underpin their policy decisions. Although spelling out and scrutinizing these theoretical assumptions is important for good social science, foreign-policy people pay little attention to the work of the few academics who focus on policy-relevant subjects. Compounding matters further is the fact that many policy-focused academics haven’t thought much of U.S. foreign policy in recent years. What this means is that policymakers might listen to academics’ advice on how to take a hill, but not on whether to invade the country in the first place.

Accordingly, reading work like John Mueller’s is at once refreshing and frustrating (“Calming Our Nuclear Jitters,” Issues, Winter 2010). A judicious academic who writes in clear English prose and focuses on policy problems, Mueller has much to offer the policy establishment, but it seems unlikely that the establishment will accept it. Building on his previous work highlighting the declining incidence of interstate violence and the inflation of the threat posed by terrorism, Mueller now aims to “calm our nuclear jitters.”

The great service Mueller does in his book is a sort of “naming and shaming” exercise, cataloging the many erroneous predictions of doom and disaster that have constituted the bulk of popular commentary on nuclear weapons. Although most analysts are smart enough to shroud their arguments in nonfalsifiable rhetoric, Mueller documents the range of frenzied projections and uses these as a jumping-off point for examining the arguments of today’s doomsayers. In particular, his analysis of the likelihood of an atomic terrorist threat, the focus of the Issues article, is a bright bulb in a dark room.

That said, Mueller’s analysis of the atomic obsession fits uneasily with some of his earlier work. For instance, the takeaway lesson for Mueller is that “whatever their impact on activist rhetoric, strategic theorizing, defense budgets, and political posturing,” nukes remain “unlikely to materially shape much of our future.”

In prior work, however, Mueller has argued forcefully that ideas—presumably including activist rhetoric, strategic theorizing, and political posturing—are primary causes of material outcomes. For example, in describing how and why the Cold War ended, Mueller challenged the realist view, wondering whether “domestic changes that lead to changes in political ideas may be far more important influences on international behavior than changes in the international distribution of military capabilities.”

If ideas are as important in influencing material outcomes as Mueller has suggested in the past, then it is curious to see him acknowledge that nukes have profoundly influenced our thoughts, only to suggest that this influence has contributed—and will contribute—only trivially to outcomes.

This puzzle aside, the country would be well served if the policy establishment deigned to take up Mueller’s contrarian arguments about our atomic obsession.

JUSTIN LOGAN

Cato Institute

Washington, DC


Perennial crops

I endorse the concerns about the sustainability and productivity of modern agriculture mentioned by Jerry D. Glover and John P. Reganold in “Perennial Grains: Food Security for the Future” (Issues, Winter 2010). High-input monocrop farms that embrace frequent tillage are undermining the natural resource base needed to sustainably double food production by 2050. The sowing of perennial grain crops, as suggested by the authors, can reduce some environmental impacts; however, most benefits will be felt on marginal farm lands, such as hillsides, drylands, and degraded soils. Although perennial crops can make an important contribution to these areas, Washington State University, CIMMYT, and other organizations have been studying and promoting conservation agriculture—minimum tillage, crop rotations, and retention of crop residues—which is compatible with and provides many of the benefits suggested for perennials, but can be adapted to almost all farming situations.

Topping the research challenges for perennial grain crops is the most basic requirement for success: high yields. Though perennial wheats can produce as much as 70% of the yield of elite annual wheats, on average their yields are lower, especially if one takes into account that the yields of perennial grains tend to progressively decline each year after sowing.

Another concern about perennial crops that must be addressed is their potential contribution to the development and spread of diseases, as they are an ideal green bridge to transfer diseases from one year to the next. Perennial wheats will need resistance to many biotic threats, including cereal rusts, viruses, and soil-borne diseases, the latter being a particular problem in wheat monoculture.

The concept of perennial grain crops is interesting and is one of many important agricultural research topics such as trait mining, enhancement of photosynthesis, conservation agriculture, and precision farming that deserve a massive increase in R&D investment.

I strongly support the visionary thinking espoused in the article. In this era of escalating demand, water and phosphorus resource depletion, productive land scarcity, global warming, and increasing weather extremes, food security and sustainability must be placed higher on the world’s agenda. The remaining years until peak food demand around 2050 will test the ability of our species to think beyond the next election cycle and support research for the future. Our complacency about food security is leading to a crisis far worse than today’s financial problems.

THOMAS A. LUMPKIN

Director General

Centro Internacional de Mejoramiento de Maiz y Trigo (CIMMYT)

(International Maize and Wheat Improvement Center)

El Batan, Texcoco, Mexico


Jerry D. Glover and John P. Reganold present a convincing case for public funding of perennial crops. Congress would do well to pay attention to their work. However, can Congress, a body that knows a decent amount about health care but cannot enact any meaningful health care legislation, actually respond to a call to fundamentally shift the direction of agriculture—a field that grows more foreign to members of Congress with every election cycle?

I believe Glover and Reganold should consider altering the nuance of their appeal. Their work on perennialization has been to protect and restore the health of our natural resources. The productivity of perennial grains has been increasing rapidly, and the two authors confidently demonstrate how yields can rise to the point where perennial grains can play a significant and viable role in a management sequence. However, as long as Congress is asked to determine public funding in agriculture based on the crops themselves, members have little incentive to move beyond questions of yield, which favor traditional annual methods.

At its roots, the raison d’être of perennial grain research is soil science. This is where Glover and Reganold can attract significant congressional attention. Commodity yield has the strong force of the free market behind it, making public funding less and less important. Soil science, however, needs the public body as an advocate and public financing as a catalyst. The realities of global population growth will ensure a market for agriculture; Congress needs to recognize that freedom and instead begin to craft farm bills that transition toward a focus on natural resources. As the article points out, our problem isn’t protecting a future market for cereal grains; it is protecting our natural resources so that we can continue to produce cereal grains widely.

The budget considerations proposed in the article are minimal within the entire federal appropriations process. Although Congress has not shown much interest in funding perennial research, it is clearly willing to wrestle with questions about our natural resources. Glover, Reganold, and other scientists with valuable knowledge in this field ought to point Congress in the direction of our soil and our water. Public funding for perennial grain research waits for them along that path.

JOSH SVATY

Secretary of Agriculture

State of Kansas

Topeka, Kansas


I am delighted to see this paper published. It should be on the front pages of the New York Times, Washington Post, and other influential papers. The message it carries is very important: our food system has major problems, and one of the main ways to address them is to develop perennial grain crops. The paper gives all the good reasons to do this, which I wholeheartedly support.

I also strongly support their call to fund Wes Jackson’s 50-year breeding program. Coming from corn-and-soybean Iowa and educated through the Land Grant System in Iowa and Wisconsin, I initially found it hard to see Jackson’s vision. The Land Grant vision is to make “Illinois safe for soybeans,” as so well put in an essay by Aldo Leopold. This vision will be hard to change, but it must be addressed.

The Land Grant–Agricultural Research Service system has largely evolved to be dependent on the current highly entrenched input-marketing-processing complex that dominates our food system. The annual grains support this complex. Fertilizers, biotech-generated seeds, planting and harvest equipment, intensive animal production, international grain marketers and processors, and the like have increasingly come to control the marketplace. The farmer becomes the slave to these masters. Although the problems of market dominance have long been known, the United States has failed to address them through existing antitrust legislation. What has this to do with perennials? Everything. Getting viable perennial grain crops widely adapted to the various biomes of the world would take away this power. It is not likely that the entrenched agribusiness moguls would let this happen without a fight. And they have the money and power.

When I first heard of Jackson and the Land Institute, I was as skeptical as the next agronomist. Then I got the chance to work in the perennial agriculture of New Zealand, and in 1988, to direct the premier Land Grant–based sustainable agriculture program, the Leopold Center. I connected the dots professionally and realized that the Land Institute was right. Now John Reganold is showing how perennial wheat can address the huge problem of erosion in the Palouse.

I hope society does the right thing: get behind the funding of perennial grains R&D. Another big drawback is the lack of trained plant breeders. Classical plant breeding has become almost a thing of the past, as genomics takes over. That is where the money is. So parallel to Jackson’s 50-year plan must be a major uptick in funding for graduate training, and of course, a guarantee of good jobs for the time spent.

While the grain breeders are at it, developing nutritious perennial grains should be a continuing emphasis. This would lessen the consumption of meat in the Western diet, a win-win if there ever was one. And win-wins are hard to find in this day and age.

DENNIS KEENEY

Emeritus Professor, Department of Agronomy

Founding Director, Leopold Center for Sustainable Agriculture

Iowa State University

Ames, Iowa


United States: The Need for Continuity

The time is ripe to reassess U.S. national innovation policies and programs and to consider new initiatives. A transition in political control of the White House inevitably produces a change in economic strategy, and the recession that gripped the country when Barack Obama took office required the new administration to come to terms not only with the immediate crisis but also with the nation’s long-term prospects for productivity growth and economic strength. President Obama made it clear from the outset that he viewed innovation as essential to the nation’s future well-being.

Businesses, governments, and institutions the world over embrace innovation. Innovation is hailed as a major factor driving economic growth. It is called on to address a myriad of societal problems, from the high cost of medical care to global climate change. Fighting and winning the war against extremism around the globe requires innovation in weapons, intelligence gathering, military tactics, medical treatment, and peacemaking strategies.

Innovation is more than new technology. Our intensely competitive global economy demands a relentless search for innovation-based advantage in services, marketing, and management. Information technology enables and accelerates innovation across the economy, even changing the way that knowledge is created, managed, communicated, and transformed. Innovation itself faces innovation. Many companies have adopted new models for accelerating the pace of innovation, including open innovation, multidisciplinary teams, user-driven innovation, and interinstitutional collaborations of unprecedented richness, intensity, and scale.

No wonder then that nearly every nation of the world has launched an array of innovation policies designed to spur innovative activity in its companies; to encourage the widespread adoption and use of innovative products, services, and approaches; and to create the environment and infrastructure that will enable successful innovation in every sector.

The United States has an outstanding record of producing world-renowned innovations. Its culture fosters innovation, it has excellent institutions of higher education and research, its marketplace is large and well-integrated, and it has diverse and efficient capital and equity markets. Yet there are indications that U.S. innovation is not keeping pace with the rest of the world and that other countries are redoubling their efforts to move ahead in the race to innovate. At the same time, the United States is looking to innovation to help resolve some of its most important challenges, such as combating climate change and making health care more efficient.

History

The U.S. federal government has long had an interest in innovation policies, although calling them “innovation policies” is new. In his first State of the Union address to Congress in 1790, George Washington said,

The advancement of agriculture, commerce, and manufactures by all proper means will not, I trust, need recommendation; but I can not forbear intimating to you the expediency of giving effectual encouragement as well to the introduction of new and useful inventions from abroad as to the exertions of skill and genius in producing them at home, and of facilitating the intercourse between the distant parts of our country by a due attention to the post-office and post-roads.

Clearly, President Washington thought it proper for the new federal government to support innovation at home and encourage imports of innovations from abroad. He also appreciated the importance of federal investment in infrastructure.

Throughout the 19th century and into the first several decades of the 20th, isolated efforts were made to enhance what we now understand to be innovation. In addition to implementing the patent system, government supported research in fields such as agriculture, mining, and geology and funded technologies such as Morse’s telegraph and the early railroads and aircraft.

Large-scale federal support of R&D during World War II changed forever the government’s engagement in supporting innovation and knowledge creation. Driven largely by Cold War priorities, federal investments in R&D increased dramatically in the decades after the war, including support for atomic energy, the space program, biomedicine, and the Internet.

In addition, after the Soviet Union’s test of a hydrogen bomb and its launch of Sputnik in the mid-1950s, new educational programs were adopted to improve the performance of U.S. students at all levels, from kindergarten through graduate school. Special emphasis was placed on widening and deepening students’ understanding of math and science to prepare them for careers in a technology-rich society.

The first attempt at articulating the foundations of a federal innovation policy took place during the Johnson administration. In 1967, an advisory committee to the Department of Commerce issued a report entitled Technological Innovation: Its Environment and Management, which laid out many of the principles that guide innovation policy discussions today.

Nearly every subsequent presidential administration has articulated a strategy or policy for innovation, technology, or competitiveness. President Nixon offered the short-lived New Technology Opportunities Program. President Carter oversaw a comprehensive “domestic policy review of industrial innovation,” which set forth a wide range of actions to encourage innovation.

During the “competitiveness crisis” inspired by Japan’s economic ascendance in the 1980s, Congress assumed the initiative, passing a number of new laws, including policies intended to strengthen innovation across the economy by encouraging cooperative R&D among and between firms, universities, and government laboratories. Tax incentives were adopted to encourage private-sector expenditures on R&D. The Bayh-Dole Act of 1980 and the Federal Technology Transfer Act of 1986 created new incentives and mechanisms to enable the commercialization of R&D results obtained with federal funds at universities and federal laboratories. Programs were set up to financially assist innovative activities in small firms and in consortia of large and small firms. These efforts culminated in the Technology Competitiveness Act of 1988, which created the office of the undersecretary for technology in the Department of Commerce.

President George H. W. Bush’s Office of Science and Technology Policy (OSTP) issued its U.S. Technology Policy in 1990. President Clinton and Vice President Gore wasted no time in issuing Technology for America’s Economic Growth under their own names a mere month after taking office in 1993. President George W. Bush released the American Competitiveness Initiative, which helped pave the way for the passage of the America COMPETES Act in 2007. And, in September 2009, with little fanfare, the Obama administration released A Strategy for American Innovation: Driving Towards Sustainable Growth and Quality Jobs, which outlines federal activities for encouraging and using innovation.

Political considerations

There is a certain consistency among all of these high-level reports and strategies. They characteristically call for new incentives to encourage private-sector innovation; new programs to strengthen higher education and research in fields such as math, science, and engineering; new modes of cooperation among industry, universities, and government agencies and laboratories; and intellectual property protection in pursuit of the constitutional purpose to “promote the progress of science and useful arts.”

However, although they share a deep commitment to funding basic research, Democratic and Republican administrations and Congresses favor different sorts of innovation strategies and policies. Democrats tend to focus on specific, identifiable national goals, such as safe and clean energy, exploring and learning about space, or wiring the nation. Democrats are also willing to create new programs that provide targeted resources to the private sector to, in effect, directly subsidize early-stage commercial innovation. Examples include the Small Business Innovation Research Program, which provides grants to new small firms, and the recently ended Advanced Technology Program (ATP), which provided cost-shared grants to consortia of firms to develop precompetitive technologies. Republicans, on the other hand, prefer to focus on the general conditions for, and incentives to encourage, innovation in many areas. They prefer low corporate taxes, tax incentives for R&D performance, and free trade regimes to encourage innovation, while eschewing subsidies for specific technologies and sectors.

Party lines are not strictly determinative, however. Republicans have sponsored their favored innovation projects [e.g., Nixon and the Supersonic Transport (SST), Reagan and SEMATECH, and G. W. Bush’s hydrogen-powered “Freedom Car”]. On the other hand, Democrats have been willing to support tax preferences for R&D performance and, with some exceptions, fair and open trade among nations.

Owing in part to these partisan differences of view about the proper government role in innovation, federal programs and policies to support innovation in the United States tend to be fragile, subject to the changing fortunes of political party control of the White House and Congress. It is, of course, fitting that in the risky business of innovation promotion, some projects come to an early end. The SST, the fast-breeder nuclear reactor, synthetic fuels, and the high-speed commercial transport aircraft initiative were all high-profile efforts that were abandoned before completion. Programs such as the Technology Reinvestment Project of the Clinton era, the ATP adopted in 1988, and the Experimental Technology Incentives Program put in place by President Nixon were too controversial from the outset to survive over long periods. Institutional vehicles such as the Technology Administration in the Department of Commerce, the National Critical Technologies Plans initiated in the early 1990s, and Congress’s Office of Technology Assessment appeared to offer substantial value but were eliminated.

The irony is that innovation policies, which seek to improve national performance over the long run, are often short-lived. Promoters of innovation programs often live in a state of heightened anxiety, concerned that their programs will be terminated, reorganized out of recognition, or “defunded” just as they reach their stride.

The complex of innovation policies and the multiplicity of constituencies concerned with their own pieces of the puzzle add to the challenge of building and sustaining sensible policies. Individual companies and trade associations, labor unions, professional and technical societies, academic institutions, and state and local government officials all have stakes in innovation policy discussions and do not always agree on what should be done or how. Think tanks and advocacy organizations contribute to the debates over innovation policy with competing studies, reports, and policy proposals. The Council on Competitiveness, whose members are chief executives of corporations, labor unions, and academic institutions, has long sought to find common ground for action and has issued important reports on innovation policy. The National Research Council (NRC) of the National Academies has played an important role in convening the interests and clarifying the debate. The NRC study Rising Above the Gathering Storm stimulated the enactment of the America COMPETES Act in 2007. Organizations such as the Council on Competitiveness and NRC help reinforce the relatively limited capacity of the federal agencies to address innovation policy needs in depth. Yet their ad hoc nature underscores the problems of ensuring agency continuity and building sustained capacity within the government.

The Obama strategy

The Obama administration’s A Strategy for American Innovation: Driving Towards Sustainable Growth and Quality Jobs is a rich matrix of policies, initiatives, and programs focused on innovation as a driver of productivity and economic growth and as a means for addressing key problems facing U.S. society. The strategy makes a strong case for an economy based on sustained innovation as the answer to the bubble economies of the recent past. It speaks less directly to the competitiveness issues that motivated earlier congressional action, such as the America COMPETES Act. And by focusing broadly on innovation rather than narrowly on technology, it implicitly embraces the panoply of Internet-enabled business, marketing, and social innovations.

The strategy presents itself as a middle-of-the-road approach that avoids both interventionist and laissez-faire extremes. It avoids suggesting new programs similar to the ATP that would be likely to draw strong opposition, and it argues against “picking winners.” The strategy’s elements are laid out in a three-part framework of building blocks, competitive markets, and national priorities. It begins with investments and markets needed for innovation generally and ends by addressing technological “Grand Challenges,” in which the role of the federal government is modestly described as “Catalyz[ing] Breakthroughs for National Priorities.”

The 35 bulleted items in the “building blocks” and “competitive markets” sections may seem overinclusive in the sense that many of the components are broad policy prescriptions, such as promoting open capital markets, that do much more than support innovation. Sound economic, trade, and education policies help provide an environment that is conducive to innovation, but including this broader array of policies diminishes somewhat the focus on innovation and sometimes misses the mark. For example, the section on physical infrastructure speaks of increased funding and accountability but doesn’t mention intelligent transportation systems. At the same time, the strategy is underinclusive, because key innovation-related institutions and initiatives are not mentioned.

To date, the most concrete embodiment of the administration’s innovation strategy is that it has directed more than $100 billion of the American Recovery and Reinvestment Act (ARRA) funding toward broad background activities, targeted R&D efforts, and education programs. Significantly, much of the innovation agenda supported by ARRA funds contributes little to short-term stimulus and its attendant political benefits, but it embodies the commitment to long-term sustainable economic growth articulated in the strategy. Nevertheless, the ARRA funding was only a one-time infusion that operated as an extraordinary exception to the annual budgeting and appropriations processes that had earlier constrained implementation of the America COMPETES Act.

Two new elements of the strategy have received considerable political attention and support: the regional clusters initiative and the open government initiative (OGI). Regional innovation clusters are not a new idea. They have been understood as drivers of innovation at the state level as well as in many countries around the world. Until recently, the federal government has been content to leave innovation cluster development to state and local authorities, public universities, and the private sector, inasmuch as clusters are inherently local. Now, however, it is widely believed that a small amount of federal funding can play a limited but catalytic role in encouraging collaboration and planning at the local level, leveraging both local and field-specific resources, and in building a community of practice that spans clusters.

The OGI breaks new ground by adding collaboration to the traditional principles of transparency and participation. Although in part a reaction to certain Bush administration policies limiting access and accountability, this initiative seeks to use information technology to enhance and redefine the engagement of public agencies with each other and with the public, academia, and industry so as to move beyond the limitations of advisory committees and conventional requests for information and comment. Collaboration might be suspect in some circumstances, but if it is openly disclosed and illuminated, and empowered by the Internet, standards, and well-designed software, it could prove extremely effective in enhancing information gathering, analysis, and decision-making. Unfortunately, the most cited example of such collaboration, the experimental peer review of patent applications, has been suspended despite the fact that substantial support was provided by foundations and the private sector. More pilot projects and generalizable guidelines are needed to show that collaboration technologies can work effectively to marshal knowledge needed for agency decisionmaking.

Although standards are not mentioned in the strategy, the White House OSTP is engaged in an important initiative on the development and adoption of standards for sharing data and information in complex environments such as health care, manufacturing, the “smart grid,” and government operations and services. In these areas, many differently positioned stakeholders, including government agencies, operate and often work together, voluntarily or by necessity, because they must share and reuse data that are subject to different constraints, purposes, and business processes. Work in some of these areas, such as electronic medical records that travel with the individual patient, has been under way at the National Institute of Standards and Technology (NIST) for some time. What is different here is the recognition that high coordination costs among many and potentially biased heterogeneous interests create the need for a stronger role for government engagement than is the case for conventional standards, where the stakeholders might be limited to perhaps a dozen similarly sophisticated technology companies.

Finishing the job

The strategy is the first effort by a U.S. administration to address innovation comprehensively rather than as simply the development of technology or the ability to compete in a global economy. Although it reflects some decisions already made, including funding provided under ARRA, it does not lay out an action plan for who will do what, nor does it provide a framework for how and where the elements of the strategy might be developed and implemented. It gives no line agency this responsibility. The strategy is the result of an interagency working group led by senior White House officials, but it is our understanding that the group responsible for the strategy no longer meets. In addition, the lack of legislated authority and budgets, as well as limits on staff size, mean that the White House faces real limits on its ability to transform strategy into sustained action.

For leaders in Europe and leading Asian nations who are accustomed to sustained public dialogue on innovation, the absence of a designated coordinating office, indeed of any sustained high-level public dialogue on innovation in the United States, is striking. This is all the more remarkable in light of the congressional call for a President’s Council on Innovation in the America COMPETES Act of 2007. Relegated by President Bush to a subcommittee of the technology committee of the National Science and Technology Council, the President’s Council on Innovation has never met and remains dormant even in an administration that has made a strong and explicit commitment to innovation.

It is ironic that major portions of the America COMPETES Act went unfunded and unimplemented until ARRA gave the White House the flexibility to direct funds to COMPETES initiatives such as the Advanced Research Projects Agency–Energy. It is also ironic that legislation intended to strengthen the nation’s ability to compete abolished the Commerce Department’s Technology Administration, the undersecretary for technology, and the Office of Technology Policy. Although these agencies had a mixed track record in recent years, rather than being eliminated they could have been redesigned and refocused to provide the institutionalized support and coordination for innovation policy that is now missing.

The Commerce Department remains the natural home for implementing a White House initiative on innovation, but it also remains a stove-piped agency with a limited political constituency and very limited resources to address innovation. It is promising that Secretary of Commerce Gary Locke has spoken frequently on innovation and has established an innovation policy team within his office. In a public/private partnership with the Kauffman Foundation, he has established within the Office of the Secretary an Office of Innovation and Entrepreneurship and a National Advisory Council on Innovation and Entrepreneurship. These new activities seem to be focused heavily on the important but relatively narrow topic of entrepreneurship rather than on innovation more broadly. At present, they have no congressionally authorized programmatic or operational authority and no funding.

Under the president’s fiscal year (FY) 2010 budget proposal, the Economic Development Administration (EDA) was to have launched a small program to fund incubators and regional clusters of innovation. However, Congress provided mixed signals on how those funds were to be made available within the EDA appropriation. This left the EDA to make the case for the new program against the expectations of its traditional constituency of local government officials and economic development organizations, and the new initiatives were not funded. It is encouraging that this program has reappeared in the president’s FY 2011 budget.

The absence of a line agency with responsibility to coordinate and implement U.S. innovation policy leaves the various agencies with some responsibilities in this area free to focus their innovation policy activities on the needs of their traditional constituencies. The result is likely to be more of the same in both practice and politics. Although more of the same can be a good thing, without the capacity for analysis, deliberation, and experimentation, it becomes reflexive and, ironically, favors inertia over innovation. In the worst case, it leads to capture, as reflected in the mission adopted by the U.S. Patent and Trademark Office (USPTO) in the 1990s, “to help customers get patents.”

The continued dominance of the linear model of innovation in framing the current innovation policy debate is symptomatic of the absence of mechanisms for integrating new thinking into policymaking. Numerous observers, including the present authors, have pointed out the limitations of this model as a description of how innovation actually occurs, yet it continues to be invoked often as the framework for policymaking.

Integrated thinking about innovation can be inhibited by the way Congress oversees some of the agencies. For example, within the Department of Commerce, two agencies with major responsibilities for innovation, NIST and the USPTO, fall under the jurisdiction of different House and Senate committees.

In short, current policy is biased toward enhancing the supplies of inputs to innovation (basic research, invention, and basic skills) and away from encouraging demand and building absorptive capacity for the new. Little attention is paid to the processes that shape increasingly complex value chains. Linear supply chains tied to jobs, money, and shipments may be easy to grasp, but the ebb and flow of knowledge and other intangibles in loosely defined, easily reconfigured networks based on social acquaintance, contract, or common interest are not. Although the importance of networking is widely cited in business and academic literature, along with phenomena such as “open innovation” and “markets for technology,” there is as yet no discernible impact of these new realities on federal innovation policy.

An innovation administration

The Obama strategy makes the case that innovation policy should be a key element of economic policy. However, in the present environment of 10% unemployment and slow growth, the short-term demand for jobs upstages the possibilities of acting on the strategy’s long-term vision. Other than discussions of the opportunity provided by the impending expiration of the program authorizations under the America COMPETES Act, there is little evidence that the administration or Congress is considering major actions to follow up on the strategy articulated in fall 2009.

A national innovation strategy naturally involves many agencies and many policy mechanisms, ranging from mission-driven funding programs to tax credits to patents to regulatory insights. Yet the strategy does not address the need for agency capacity-building, coordination, and sustained public policy development. A number of approaches to filling this need have been suggested. Some advocate a new freestanding National Innovation Foundation. Others have suggested a new innovation policy coordination office within the Office of Management and Budget to ensure that innovation is taken into account in regulatory decisionmaking.

The United States is unusual among leading industrial countries in not having a top-level administrative agency responsible for innovation programs and analysis. The most straightforward way to address this need would be to reestablish an Innovation Administration within the Department of Commerce with responsibility for developing, promoting, and coordinating national innovation policy. From 1988 through 2007, this role was played by the office of the undersecretary for technology and a subordinate assistant secretary for technology policy. Predecessors of this office existed as far back as 1962, when President Kennedy created the position of assistant secretary for science and technology and gave that office oversight over NIST’s predecessor, the National Bureau of Standards, the USPTO, the National Technical Information Service, and the Weather Bureau. The role of this office evolved over the next two and a half decades, but it always served as the focal point for discussion and promotion of technology and innovation policies. An important agenda item for the America COMPETES reauthorization or other legislative initiative would be to reinvent this office and make it the central agency for innovation programs and policy analysis.

Commerce already houses the two line federal agencies most directly responsible for promoting innovation today, NIST and the USPTO. Both agencies have considerable potential to play more aggressive and successful roles in encouraging innovation and should be key elements of the Innovation Administration. NIST is home to the Technology Innovation Program, the Manufacturing Extension Partnership, the Malcolm Baldrige Quality Award, and a program services unit that does studies and analysis for the director. The USPTO examines and grants patents, but presently pays little attention to how patents are used, or abused, in the marketplace or to how experience in the marketplace might suggest patent reforms to better promote innovation. To the agency’s credit, it recently hired its first chief economist to help with policy matters, albeit six years after the European Patent Office led the way.

In addition to NIST and the USPTO, an Innovation Administration within Commerce should house an office of innovation policy with the capacity to support evidence-based innovation policy analysis and development anywhere in the government. It would coordinate closely with the National Economic Council, OSTP, and the President’s Council on Innovation to ensure breadth of input and to inhibit capture by any narrow set of internal or external constituencies. It need not be large. A staff of 10 to 20 professionals could conduct in-depth studies and special analyses, develop metrics and methodologies, track business practices and policy initiatives around the world, and represent the United States in the increasing number of international forums, such as the Organization for Economic Cooperation and Development, in which innovation policies are addressed. It would be of particular value as a resource in support of interagency activities related to innovation, such as those led by the National Science and Technology Council and the National Economic Council. The office should have sufficient budget to be able to contract with outside researchers to do data- and model-intensive studies, as well as to collect the specialized data needed to conduct one-off analyses.

Two other Commerce Department candidates for the Innovation Administration are the National Telecommunications and Information Administration (NTIA) and the EDA. The NTIA develops policy and manages funding programs, including the $7.2 billion broadband component of the stimulus package, for a sector in which the government has traditionally had a broader set of interests than promoting innovation. The EDA has historically been focused on investments to help revitalize disadvantaged or distressed regions. However, during the Bush administration, its mission statement was revised to focus on innovation and competitiveness as needed for participating in the global economy. If this mission is effectively implemented in programs on clusters and incubators, the EDA may well belong within an Innovation Administration.

The United States has had elements of innovation policy in many forms and for quite some time, but under other labels. The importance of innovation to rebuilding the economy, increasing productivity, sustaining economic growth over the long term, and addressing national priorities now demands that innovation policy move to center stage. The approach must be comprehensive and balanced; treating innovation issues in isolation will be insufficient.

The Obama administration has embraced a culture of innovation and understands why innovation is important. Success, however, will require more than White House strategy and planning. The United States also needs a permanent institutional framework for innovation that can outlive the stimulus package and survive through election cycles, independent of the presence at the top of talented individuals with good ideas.

Singapore: Betting on Biomedical Sciences

In an April 2009 report, the Massachusetts Biotechnology Council noted that Singapore, an emerging biotech cluster, was “aiming to move up the value chain and position itself as a world-class center for R&D through significant government investment.” Singapore’s key strengths, the report said, are its educated and skilled workforce; its supportive government, business, and regulatory environment; and its government-supported research institutes.

About a decade ago, Singapore decided to put a major national focus on biomedical sciences. It did so in part because of changes in the global economic arena. But the decision also reflected the evolving ways of thinking and doing that corresponded with Singapore’s rapid development since gaining nationhood in 1965.

In the 1960s, Singapore was a labor-intensive economy. In response to a poorly educated work force, labor strife, high unemployment, and a rapidly growing population, Singapore embarked on export-led industrialization and promoted foreign direct investment. It sought to develop its labor force by emphasizing technical and industrial training. The focus was on low-tech manufacturing.

The strategy worked well until the mid-1970s, when changes in the global economic environment prompted a new assessment. Singapore faced increasing competition from other developing countries in low-tech industries. Meanwhile, developed countries were moving into high-tech manufacturing. Singapore decided to phase out its labor-intensive industry and focus on skills-intensive, high–value-added, technology-intensive industries such as electronics manufacturing, data storage, and petrochemicals. To prepare its workforce for this new challenge, the country expanded engineering education while providing funding for older workers to upgrade their skills.

Singapore’s first major recession in 1985 spurred the country to look for new areas of economic growth. Its new strategy marketed Singapore as a place to do business and encouraged multinational companies in Singapore to move beyond production and into areas such as supply chain management, R&D, and the like. Multinational companies were encouraged to establish operational headquarters in Singapore to support their regional operations.

As the global economy continued to evolve, Singapore by the early 1990s began to face greater competition in its traditional economic strengths, including electronics manufacturing and petrochemicals. The critical challenge was to find ways to differentiate the country from its economic competitors in the region and the world. As a country with a population of fewer than 4 million and few natural resources, Singapore had only one resource to draw on: its people. Singapore concluded that it must promote strong intellectual capital creation as a basis for developing knowledge-intensive companies and generating high–value-added jobs for Singaporeans. In short, government officials believed that if Singapore was to join the ranks of the world’s leading industrialized countries, it must build a knowledge-based, innovation-driven economy.

In the late 1990s, Singapore identified the biomedical sciences as an area with tremendous growth potential. Between 2000 and 2005, it put in place the key building blocks to establish core scientific biomedical research capabilities by focusing on building up its human, intellectual, and industrial capital. In the second phase of the initiative (2006–2010), it focused on strengthening its capabilities in translational and clinical research in order to bring scientific discoveries from the bench to the bedside, to improve human health and health care delivery, and ultimately to contribute to the economy and bring benefits to society.

Reasons for success

Singapore was able to implement its biomedical sciences initiative and reap its benefits because of five key strengths that the country has developed during recent decades: a government committed to R&D, an integrated and well-connected public sector, public-sector research institutes that engage in both basic and mission-oriented R&D to develop a spectrum of capabilities, an educated and skilled workforce, and a supportive business and regulatory environment.

Singapore has made a huge commitment to develop and advance biomedical R&D. The significant investment began in 2001 and has been steadily increasing since. For the period 2006–2010, the government committed $13.5 billion in Singapore dollars (SGD) to R&D, more than double the spending of the preceding five-year period. Of this, 25.3% was committed to the biomedical sector.

The government’s investment in biomedical sciences has been part of its overall commitment to achieving a gross expenditure on R&D (GERD) of 3% of the gross domestic product (GDP) by 2010. GERD grew rapidly at a compound annual rate of more than 11% from 2000 to 2008, an indication of the increasing intensity of R&D activities in Singapore. In 2008, GERD reached SGD $7.1 billion, or 2.8% of GDP. Recently, Singapore’s government accepted the recommendations by the Economic Strategies Committee to target GERD at 3.5% of GDP by 2015.
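To make the growth arithmetic concrete, the short Python sketch below shows how a compound annual growth rate of the kind cited above is computed. Only the SGD $7.1 billion figure for 2008 comes from the text; the 2000 starting value is an illustrative assumption back-calculated from the reported growth rate of more than 11%.

# Minimal sketch: compound annual growth rate (CAGR) of Singapore's GERD.
# The 2008 value (SGD 7.1 billion) is reported in the text; the 2000 value
# is an illustrative assumption, roughly what an 11% annual rate implies.

def cagr(start_value: float, end_value: float, years: int) -> float:
    """Return the compound annual growth rate over the given number of years."""
    return (end_value / start_value) ** (1 / years) - 1

gerd_2000 = 3.0   # SGD billions, assumed for illustration only
gerd_2008 = 7.1   # SGD billions, reported figure

rate = cagr(gerd_2000, gerd_2008, years=8)
print(f"Implied CAGR, 2000-2008: {rate:.1%}")  # prints roughly 11%, consistent with the text

Applying the same function to any pair of reported endpoint values gives the implied annual rate, which is how figures of this kind are typically checked.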

Boosting funding for biomedical R&D was just one prerequisite for success. A coordinated, intensive, nationwide approach was also essential. This was facilitated by Singapore’s small size. Its ministries and public-sector agencies have worked well together and have nimbly made changes over time to boost momentum. In an ever-changing global economic landscape, this integration and coordination have been essential.

Strategic direction for Singapore’s R&D initiatives is set by the Research, Innovation & Enterprise Council, chaired by the prime minister, and the key work is done by public-sector research institutes. The lead public-sector R&D agency, the Agency for Science, Technology and Research (A*STAR), receives 40% of the total public-sector R&D funds to carry out various activities with its partners: institutes of higher learning, hospitals, other public-sector agencies, and industry. A*STAR has two councils, the Biomedical Research Council (BMRC) and the Science and Engineering Research Council (SERC), which steer and support R&D activities in the biomedical sciences and in the physical sciences and engineering, respectively.

The BMRC has seven research institutes and five research consortia under its umbrella. It has built up considerable strengths in six key research areas: biomedical engineering, cancer genetics, infectious disease and immunology, metabolic diseases, molecular cell and developmental biology, and stem cells and regenerative medicine. Because these research areas span the spectrum of biomedical sciences research, from basic to translational to clinical research, BMRC is well positioned to support industry activities at every step of the way. Indeed, the platform technologies being supported are aligned with four main industry sectors: biotechnology and biologics, health care services and delivery, medical engineering and technology, and pharmaceuticals. Singapore wants to leverage its investment in biomedical R&D to attract more industry to the country and create a sustainable biomedical sciences hub.

A*STAR’s other council, SERC, has seven research institutes and one center. It has built up strengths in eight key areas: biotechnology; chemistry; computational and device technologies; information, communications, and media; materials; manufacturing technology; mechatronics and automation; and metrology. Its research areas are also aligned with four main industry sectors: electronics, infocomm, chemicals, and engineering. Because the electronics and engineering industries have enjoyed a much longer history in Singapore, the main challenges are to continue to innovate in order to add value to current strengths in manufacturing processes and thus stay ahead of the crowd.

In 2009, A*STAR also set up the A*STAR Joint Council to facilitate interactions between BMRC and SERC in order to foster interdisciplinary and cross-council research. This is helped by the physical proximity of the two councils: BMRC’s research institutes are located in Biopolis, the biomedical sciences R&D hub, just 600 meters from SERC’s research institutes at Fusionopolis, the science and engineering powerhouse. Biopolis and Fusionopolis also house corporate laboratories and private-sector companies on their premises, which fosters ties between the public and private sectors.

Attracting talent

Singapore recognized early on that talent is the key to knowledge creation and value-generating R&D activities. With its small population, Singapore has had to devise a holistic talent strategy to attract and develop world-class scientists, both local and international, at all levels and in all areas of the R&D landscape.

Internationally renowned scientists who have moved to Singapore have helped to jump-start the country’s biomedical sciences efforts, providing leadership to the research institutes and mentoring young local scientists. They include Edward Holmes and Judith Swain from the University of California, San Diego; Edison Liu, Neal Copeland, and Nancy Jenkins from the National Cancer Institute; Jackie Ying from the Massachusetts Institute of Technology; and David Townsend, the co-inventor of the positron emission tomography/computed tomography scanner.

In addition, in 2008, A*STAR launched the A*STAR Investigatorship Programme to allow promising international postdocs to do research at Singapore’s research institutes. The program is modeled on the Howard Hughes Medical Institute Investigatorship award, with the objective of nurturing the next generation of international scientific leaders by providing funding for setup costs and research staff and access to state-of-the-art equipment and facilities.

A*STAR also provides scholarships for the most capable and committed young Singaporeans to pursue undergraduate and graduate scientific training at top universities locally and abroad. Many of them can be found in top U.S. universities. According to Stanford University President John Hennessy, Singapore has the highest per-capita number of Ph.D. students at Stanford, with almost all of them on A*STAR scholarships.

Because of its own excellent higher education system, Singapore has also been able to attract many foreign students to its shores. Singapore’s autonomous universities, the National University of Singapore (NUS) and Nanyang Technological University (NTU), have been ranked among the top universities in the world. In the Times Higher Education Supplement’s (THES’s) World Universities Ranking 2009, the schools were ranked 30th and 73rd, respectively, among the top 200 universities in the world. NUS was also ranked 14th and NTU 33rd among the top universities for engineering and information technology. THES ranked NUS 20th in the world for life sciences and biomedicine and 27th for natural sciences.

Finally, the emphasis on developing and seeking talent has led Singapore to launch programs aimed at filling specific gaps in skilled humanpower. These include the Bioprocess Internship Programme, run by A*STAR’s Bioprocessing Technology Institute, to prepare science and engineering graduates for careers in bioprocessing. These programs have produced top-notch researchers for industry, contributing to the development of a world-class R&D hub in Singapore.

Singapore has also worked hard to create a favorable environment for businesses and investors. The 2010 Index of Economic Freedom, published by The Wall Street Journal and the Heritage Foundation, ranked Singapore’s economy the second freest in the world. It noted that “flexibility and openness have been the foundation of Singapore’s transformation into one of the most competitive and prosperous economies in the world.” It said that Singapore has an “efficient regulatory environment [that] encourages vibrant entrepreneurial activity,” and that its “commercial operations are handled with transparency and speed, and corruption is perceived to be almost nonexistent.” The index said that Singapore’s “very competitive tax regime” and “highly flexible labor market” encourage investment, which enables it to attract global companies and enhance innovation. And it noted that “foreign and domestic investors are treated equally, and Singapore’s legal system is efficient and highly protective of private property.”

Singapore’s open and supportive environment ensures that businesses are able to implement their ventures and projects speedily and efficiently. Excellent laws are in place to ensure the protection of intellectual property. Doing business and research in Singapore is made easy by the fact that English is the lingua franca. All of these factors work to enhance Singapore’s attractiveness for R&D and business.

The results thus far

Singapore’s R&D efforts, especially in biomedical sciences, have attracted international attention. In a May 2007 Boston Globe article, Massachusetts Governor Deval Patrick, after introducing a $1 billion life sciences initiative, cited Singapore as one of the state’s major competitors, in large part because Singapore had developed coordinated strategies to attract researchers and companies. Martin Rees, president of the United Kingdom’s Royal Society, was quoted by the Press Association in December 2007 as comparing Biopolis favorably with the UK Centre for Medical Research and Innovation.

On a macro level, Singapore’s R&D efforts have had a significant impact and contributed much to the economy. One indicator is private/public R&D investment. In 2000, for every dollar invested by the public sector, the private sector invested SGD $1.70. By 2008, the private-sector investment had increased to SGD $2.30.

In biomedical sciences, Singapore has also fared well. Today, more than 100 global biomedical sciences companies are carrying out a variety of business operations in Singapore, including cutting-edge research and manufacturing. These companies include Abbott, Roche, Merck, Novartis, Pfizer, Schering-Plough, Wyeth, Siemens, and Becton Dickinson. The manufacturing output for biomedical sciences increased from SGD $6.3 billion in 2000 to SGD $19 billion in 2008. Biomedical sciences’ share of Singapore’s total manufacturing output also increased, from 3.9% in 2000 to 7.6% in 2008. The compound annual growth rate was 10%, a good indication of Singapore’s steady success in building up its biomedical R&D capabilities. The number of jobs more than doubled, to more than 12,000, between 2000 and 2008.

Biologics manufacturing is a prime example of how Singapore’s efforts have successfully attracted large-scale industry investments. The efforts of the Bioprocessing Technology Institute (BTI) and other research institutes to develop the country’s bioprocessing capabilities prompted five leading biologics manufacturing companies (GlaxoSmithKline, Baxter, Novartis, Genentech, and Lonza) to set up six commercial-scale biologics manufacturing plants in Singapore, which will potentially employ 1,300 staff and bring in more than SGD $2.5 billion in investments. Building on this success, A*STAR’s Singapore Stem Cell Consortium and BTI engaged Lonza in further discussions, culminating in the recent establishment of a Cell Therapy Manufacturing Facility in Singapore, the first one set up by Lonza outside the United States and Europe.

Between 2006 and 2009, the biomedical sciences sector engaged in twice as many industry collaborations as in the preceding five years, and industry funding increased by 26%. During that period, A*STAR was involved in 66 biomedical sciences projects, including many with leading multinational companies.

In terms of biomedical sciences research output, Singapore’s performance is creditable. A*STAR’s research institutes are a case in point. Between 2002, when A*STAR was established, and 2008, its institutes published 1,927 papers in the biomedical sciences. By 2008, it had also filed 216 primary patents. These will form a pipeline for commercialization and economic activities in the years to come. More significantly, some lab work has resulted in breakthroughs with important effects on society. In 2007, a team of researchers from the Institute of Bioengineering and Nanotechnology developed a microfluidic device and chemical kit capable of detecting the influenza A (H5N1) virus using a simple swab sample from the throat. It is now being adapted to detect the H1N1 virus within two hours. Some of A*STAR’s technologies have also been commercialized, resulting in spinoff companies such as VeriStem, Curiox Biosystems, and MerLion Pharmaceuticals.

The R&D push in the biomedical sciences is a long-term process. Indeed, Singapore’s investments are only beginning to show success. Singapore is now examining new ways to further bolster its capabilities. For example, it is taking steps to build its drug discovery and drug development capabilities, as well as its medical technology development. A*STAR’s Singapore Institute of Clinical Sciences and the National Neuroscience Institute are collaborating with Lilly Singapore to look for new drugs that could help treat brain tumors.

Singapore will continue to use its advantages of a supportive government committed to R&D, an integrated and well-coordinated public sector, a wide spectrum of public-sector R&D capabilities, an educated and skilled workforce, and a good business and regulatory environment to become a world-class global R&D hub.

The New Global Landscape of Educational Achievement

Some 10 years ago, we lived in a very different world where education systems tended to be inward-looking, where schools and education systems typically considered themselves to be unique, and where the perceived walls of language, culture, and political structure made it impossible for them to borrow policies and practices developed elsewhere.

Comparisons provide one way to break through some of these walls, and they have become a powerful instrument for policy reform and transformational change by allowing education systems to look at themselves in the light of the policies and performance of other systems. When education ministers meet at the Organization for Economic Cooperation and Development (OECD) these days, they begin almost every conversation with a comparative perspective. It seems that information is creating pressure to improve performance and that public accountability is now often more powerful than legislation, rules, and regulations.

U.S. labor market experts Frank Levy and Richard J. Murnane document how demand for various types of skills changed during the last three decades of the 20th century. Work involving routine manual input, the jobs of the typical factory worker, was down significantly. Non-routine manual work, jobs we do with our hands but in ways that are not so easily put into formal algorithms, was down too, albeit with much less change over recent years. We are not ready for machines to drive our buses or cut our hair.

This will not be news to most people, but many might be surprised to learn that the sharpest decline was for work requiring routine cognitive input, that is, cognitive work that can easily be put into the form of algorithms and scripts. It is middle-class white-collar jobs involving the application of routine knowledge that are most under threat today. And that is where schools still put much of their focus and what we value in multiple-choice accountability systems.

Levy and Murnane find that the skills that are easiest to teach and test are also the skills that are easiest to digitize, automate, and move offshore. The skills that will be more in demand in the future are expert thinking and complex communication, skills that require advanced, high-quality education. The yardstick for success is no longer improvement against national standards but the ability to prepare students to perform at the highest international standards.

Awareness of international standing has already motivated many countries to take dramatic action to reform their education systems, and the results are clearly visible in the data on educational achievement in the industrialized world during the past decade. Even more dramatic change is on the horizon as developing countries, particularly China and India, provide education to a larger share of their populations.

In recent years detailed data have become available about exactly how much students are learning in school, about the relative performance of advantaged and disadvantaged students, and about how schools in various countries allocate their resources. These data hold important lessons for all countries, but they should attract particular interest in the United States, which has long considered the quality of its education system to be a powerful asset but that now must face the reality that many countries are doing a better job of preparing future generations for the challenges to come.

All of the data presented in the following figures come from the OECD education database, which is available online. The test scores used in the figures are mean national scores for 15-year-olds on the science test of the 2006 Programme for International Student Assessment (PISA), a triennial survey that tested 400,000 students in 57 countries. Complete test results can be found in the OECD publication PISA 2006: Science Competencies for Tomorrow’s World.

College graduation rates

Each dot on this chart represents one country. The horizontal axis shows the college graduation rate, and the vertical axis shows how much it costs per year to educate a student. Data for Australia and the United Kingdom are not available for 1995.

The chart shows that in 1995 the United States had the highest proportion of young people earning college degrees and the highest level of spending per student.

By 2000 several countries had matched the U.S. level of college completion, and by 2005 the United States was no longer the world leader.

STEM degrees

In the fields of science, technology, engineering, and math, which are of particular importance for a country’s strength in a high-technology economy, the position of the United States is even more precarious.

Quality and equity

When one looks at the distribution of student performance within each country, there are some countries where students from socioeconomically disadvantaged backgrounds trail far behind more privileged students and others where background has little influence on achievement. The social equality index was calculated by looking at the effect of socioeconomic status on test performance. Countries to the left had the largest gap between advantaged and disadvantaged students, and those on the right had relatively small differences across social classes.
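As a rough illustration of how such an equity measure can be derived (a minimal sketch only; the simulated data, variable names, and simple least-squares fit are assumptions for illustration, not the OECD’s exact procedure), one can regress students’ test scores on a socioeconomic-status index within each country and compare the slope and the share of score variance that the index explains:

```python
# Minimal sketch: how strongly does socioeconomic status (SES) predict
# test performance within one country?  Hypothetical data; illustrative only.
import numpy as np

def ses_effect(ses, scores):
    """Return the least-squares slope of score on SES and the variance explained (R^2)."""
    ses = np.asarray(ses, dtype=float)
    scores = np.asarray(scores, dtype=float)
    slope, intercept = np.polyfit(ses, scores, 1)    # ordinary least-squares line
    residuals = scores - (slope * ses + intercept)
    r_squared = 1 - residuals.var() / scores.var()
    return slope, r_squared

# Simulated students: a steep slope and a large R^2 would place a country
# toward the "large gap" (left-hand) end of the equity scale described above.
rng = np.random.default_rng(0)
ses = rng.normal(0.0, 1.0, 500)                      # standardized SES index
scores = 500 + 35 * ses + rng.normal(0.0, 70.0, 500)
print(ses_effect(ses, scores))
```

A country in which the slope is shallow and the index explains little of the score variance is one where background has little influence on achievement.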

Every country aspires to be in the upper right quadrant, where performance and equity are both strong. And no country wants to be in the lower left, where performance is low and the underprivileged trail far behind their peers. Whether it is better to have high performance at the price of large disparities or to minimize disparities at the price of mediocrity is subject to debate, but a few countries are demonstrating that it is not necessary to choose.

Countries such as Korea, Finland, and Japan have been able to combine high performance levels with an exceptionally moderate impact of social background on student performance.

Some defenders of the U.S. system argue that the disappointing performance of U.S. students is the result of factors outside the education system, such as the challenges that immigrant inflows pose to schools. However, among the 41 countries that took part in the 2003 PISA survey, the United States ranks 10th in the proportion of 15-year-olds with an immigrant background, and all of the countries with larger immigrant shares outperformed the United States.

Future supply of high school and college graduates

What we have seen so far is just the beginning. The first and easy phase of globalization, when the industrialized world had to compete only against the Chinas and Indias that offered a low-skilled, low-wage workforce, is long gone. What we now see is that countries such as China and India are starting to deliver high skills at low cost at an ever-increasing pace, which will have powerful effects on the middle- and high-skills sectors in the industrialized countries. Although the quality of their education systems might not yet produce graduates with the most advanced skills, many jobs require only second-tier skills.

Money

Student performance cannot simply be tied to spending, at least not average spending. One must look beyond how much is spent to how it is spent. In international comparisons of primary-school children, the United States does relatively well, which, given the country’s wealth, is what you would expect. The problem is that as they get older, U.S. students make less progress each year than their contemporaries in the best-performing countries. This does not apply just to poor kids in poor neighborhoods but to most kids in most neighborhoods. Some U.S. advocates for education reform see a need for even greater spending, but that will be a hard sell. In these times, skeptical citizens will not continue to invest precious tax dollars in a system that does not seem to be working. Nor will talented people flock into a profession where individuals are not rewarded for outstanding performance.

It is noteworthy that spending patterns in many of the world’s most successful education systems are markedly different from those in the United States. These countries invest money where the challenges are greatest, and they put in place incentives and support systems that get the most talented teachers into the most difficult classrooms. Successful countries such as Finland, Japan, and Korea emphasize more classroom time and higher teacher salaries, whereas the United States invests more heavily in reducing class size while limiting salaries.

Brazil: Challenges and Achievements

Until World War II, Brazil had a small number of scientists and only an incipient institutional research base. Its industry was at an embryonic stage and concentrated in traditional areas. Full-time employment for university teaching staff and graduate programs did not exist until the 1960s. Not until the 1970s did an institutional base devoted to science and technology (S&T) begin to be effectively established. This situation, together with the business sector’s historical lack of appreciation for innovation, limited the possibilities for developing the potentially more dynamic sectors of the national economy.

The first actions of the federal government to build up the country’s scientific capacity were taken in 1951 with the creation of the National Research Council (CNPq) and the Commission for the Improvement of Personnel in Higher Education (CAPES). The CNPq and CAPES provided fellowships for Brazilians to pursue graduate studies abroad, mainly in the United States and Europe. Until recently, the main objective of this policy was training human resources for scientific research and expanding the academic S&T system. Now, innovation has become part of the agenda for federal and state policies and has attracted increased business interest.

The construction of a national system for science and technology began in the 1960s with the creation of a fund (FUNTEC) by the National Bank for Economic Development (BNDES) to support the establishment of graduate programs in engineering and hard sciences. In 1968, the Ministry of Education promoted a reform of the federal university system, introducing academic departments to replace the traditional chairs and creating full-time positions for faculty members holding postgraduate degrees. In 1967, a new funding agency was created, the Financing Agency for Studies and Projects (FINEP), which in 1969 became the managing agency of a new and robust fund, the National Fund for Scientific and Technological Development (FNDCT), which replaced the one established earlier by BNDES. This fund provided FINEP, the CNPq, and CAPES with ample financial resources to provide various forms of support to stimulate the large-scale expansion of the postgraduate programs and research activities in universities and research institutes that took place during the 1970s and most of the 1980s.

FINEP provided grants to academic institutes or departments and to research centers to cover all needs for institutional maintenance or expansion. The CNPq provided fellowships for undergraduate research and graduate studies, as well as research grants for individuals or groups, and also created new research centers or took charge of existing ones. CAPES, on the other hand, dedicated the majority of its efforts to supporting graduate programs, providing fellowships for students and establishing a national system for evaluating and accrediting graduate courses.

The Ministry of Science and Technology (MCT) was created in 1985, signaling the increased importance of S&T in the federal government. FINEP and the CNPq (as well as its research institutes) were absorbed into the structure of the new ministry, which consolidated two decades of federal initiatives that had made possible the establishment of a national system of S&T with several tens of thousands of researchers. The MCT managed to obtain substantial budget increases for the FNDCT and the CNPq. Because the system had developed in a spontaneous manner, its expansion had occurred in a very uneven way. Disciplines such as engineering, physics, mathematics, and some areas of biological and medical sciences, which had strong leadership, had attracted most of the students and financial support. This led the MCT to create the Support Program for Scientific and Technological Development (PADCT), partially financed by a loan from the World Bank, to develop strategic areas such as chemistry, biotechnology, advanced materials, and instrumentation.

The progress in the federal system for supporting S&T was followed by similar state initiatives, most notably the Foundation for the Support of Science in the State of São Paulo. One problem was that most graduate programs and research efforts were concentrated in the rich southeastern and southern regions. In addition, industry commitment to R&D was still weak, and there was a lack of interaction between S&T and industrial policies. As a result, research and innovation activities were strongly concentrated in universities and academic institutions and therefore had little impact on business practices.

There were, however, a few important exceptions to this scenario. The creation in 1972 of the Brazilian Agricultural Research Company (Embrapa), with experimental centers throughout the country, was decisive in making Brazil a world leader in tropical agriculture and the main producer of several crops. Another example of success in applying S&T to the conditions found in Brazil is the federal oil company Petrobras, which developed the technology for deep-water oil drilling that eventually led to self-sufficiency in fossil fuels. In the aeronautics industry, Embraer became one of the largest aircraft manufacturers in the world by focusing on specific market segments with high growth potential in commercial, defense, and executive aviation. Finally, another success story, and one of the most notable, is in the biofuel area. Research in this area dates back to the 1920s and was given new life in the 1970s when Brazil was hit by the oil crisis. The creation of the Proalcool ethanol program, which mandated that gasoline contain 25% ethanol and encouraged the automobile industry to manufacture vehicles that used pure ethanol, catalyzed rapid improvement in ethanol production technology and impressive growth in production. More recently, the development of flex-fuel engines, which can run on any admixture of gasoline and ethanol, and the improvements in the production of ethanol from sugar cane have boosted the ethanol market to equal the demand for gasoline.

Troubled times

In the late 1980s and early 1990s, Brazil suffered from political instability and uncertainties as well as economic difficulties, and the relatively new S&T system paid a price. The MCT was twice closed down and recreated. Rampant inflation corroded the budget. In spite of this and the irregular supply of funds, the essential elements of the financial instruments of FINEP and the CNPq were preserved.

The 1994 economic reform designed to control inflation was followed by a tight fiscal policy that led to budget constraints and modest economic growth. The S&T system suffered greatly from the lack of jobs for researchers and engineers as well as from budget cuts. The number of scholarships granted by the CNPq, which had increased steadily for four decades, began to decrease. In 1997, the CNPq program of grants for small groups was interrupted, and FINEP cancelled existing institutional grant agreements because of a drastic reduction in its funding. In 1999, the PADCT was phased out although it still possessed some resources from the World Bank loan. The net result was a serious crisis in the national S&T system (Figure 1).

Attempts to overcome the crisis

In the late 1990s, the government took several steps to deal with the crisis. The broad system of financial support for research projects spontaneously submitted to the CNPq was replaced by three programs designed to support a smaller number of more-targeted projects. One of these was the Support Program for Nuclei of Excellence (PRONEX), the aim of which was to give financial support to research groups considered to be highly competent and leaders in their fields. Administered initially by FINEP, the program was transferred to the CNPq in 2000 and was essentially superseded by the Millennium Institutes Program, which took the form of virtual networks of institutions coordinated by a lead institution.

The most significant advance in the S&T sector at the end of the 1990s was the implementation of the Sectoral Funds for Science and Technology. These were first created in 1999, after the Sectoral Fund for Oil and Natural Gas had been established by law the previous year. Congress approved several other draft laws proposed by the MCT stipulating that the new funds would be financed by taxes on several sectors of economic activity (such as natural resource exploitation, petroleum royalties, and specific industrial products), as well as by fees on licenses for the acquisition of technology from abroad. The sectoral funds provided a source of revenue for the FNDCT, making possible its resurgence (Figure 2). However, until 2003 most of these revenues were used to pay the federal debt rather than to support S&T programs. Nevertheless, the creation of the sectoral funds provided important legal instruments for implementing a new policy for science, technology, and innovation. The Second National Conference on Science, Technology, and Innovation, held in 2001, provided guidance for this new stage of S&T policy.

Despite all difficulties and the relatively short history of S&T policy, at the turn of the century Brazil had achieved significant successes in some areas and had built up a scientific community consisting of over 50,000 researchers with Ph.D.s, the largest and best-qualified such body in Latin America.

New policy and a plan for S&T

Science, technology, and innovation have enjoyed unprecedented support during President Luiz Inácio Lula da Silva’s administration. The respective budgets have increased severalfold in recent years (Figure 2), and the legal framework has been continuously improved.

The 2004 Innovation Law established several mechanisms to promote innovation in Brazil. It created the conditions for setting up strategic and cooperative partnerships among universities, public research institutes, and businesses aimed at increasing research, development, and innovation (RD&I). The 2005 Lei do Bem (Helpful Law) provided a set of fiscal incentives to promote RD&I activities in businesses. The law also authorizes S&T agencies to subsidize the salaries of research staff with master’s or doctoral degrees employed in technological innovation activities in companies based in Brazil. The 1991 Information Technology Law, modified in December 2004, is another important instrument of industrial and technological policy in the context of digital connectivity.

In 2007, the government created a set of plans and policies supported by an economic policy that has been successful in several respects and by a social policy that has helped to expand the domestic market. In January of that year, the government announced the Program for Accelerating Growth, organized around groups of infrastructure investments. Subsequently, several sectoral plans were announced, among them the Action Plan for Science and Technology for National Development (PACTI). The Policy for Productive Development was also announced as a means of broadening and enlarging the Industrial, Technological, and Foreign Trade Policy that had been launched in 2004.

PACTI 2007-2010, which is coordinated by the MCT, involves investments of more than R$41 billion (US$22.4 billion) during its period of activity. The Action Plan aims at training and mobilizing the country’s scientific and technological base with a view to encouraging innovation in line with the capacities and directives of the Industrial, Technological, and Foreign Trade Policy. It supports strategic programs to preserve the country’s sovereignty and promote social inclusion and development, especially in the most deprived areas. The plan has four primary objectives:

Expansion and consolidation of the national system of science, technology, and innovation. Its structure has been planned in conjunction with the business sector, states, and municipalities, taking into account the areas that are strategic for the development of Brazil and for the revitalization and consolidation of international cooperation. Other targets include increasing the number of scholarships for training and upgrading qualified human resources and improving the system to encourage the consolidation of the S&T research infrastructure in different areas of knowledge.

Promoting technological innovation in business. The key activities are encouraging technological innovation in production chains by means of actions carried out in conjunction with government organs and institutions and partner bodies in the public and private sectors; developing and publicizing technological solutions and innovations aimed at improving the competitiveness of the products and processes of national industries; and favoring increased Brazilian participation in the international market.

RD&I in strategic areas. Priority will be placed on studies and projects aimed at advancing Brazil’s participation in space research, either on its own or in partnership with other countries; in the peaceful use of nuclear energy; and in research on the complex interactions among the environment, climate, and society, encouraging the conservation and sustainable use of Brazilian biodiversity, with particular attention to the Amazon region and to activities involving international cooperation.

Science, technology, and innovation for social development. The goals are contributing to the proliferation and improvement of science teaching, providing universal access to the goods created by S&T, increasing economic competitiveness, and improving the quality of life of people in the most deprived areas of the country.

A crucial aspect of PACTI is that it incorporates the concept of innovation into the country’s scientific and technological policy, as reflected in initiatives such as the Brazilian Technology System (SIBRATEC). This effort was inspired by Embrapa’s successful agriculture policy as well as by foreign institutions such as the German Fraunhofer organization, which brings together 60 technological institutes working on specialized projects. SIBRATEC consists of existing institutions engaged in R&D activities aimed at developing innovation projects for products and processes according to industrial, technical, and foreign trade priorities. During the period from 2009 to 2010, the system will benefit from resources of about R$120 million from the FNDCT. These funds will come both from the government and from the productive sector. The recipient institution must provide at least 20% of the total funding. SIBRATEC’s activity will be decentralized, and individual states will have the task of interfacing with participating institutions.

The resources for financing PACTI activities are mainly those available within the MCT budget and include the budgets for the CNPq and the FNDCT/Sectoral Funds. The FNDCT is the main financial instrument for the MCT’s wider involvement in the National System for Science, Technology, and Innovation. Previously, the FNDCT supported only sectoral activities, but we have introduced significant changes in the management of the fund, emphasizing the possibility of using resources from various funds to support a wider range of initiatives rather than merely sectoral ones. The implementation of these actions has been possible thanks to the substantial increase of FNDCT funding. In recent years, mainly since 2004, public calls for the selection of projects to be financed have been regularly published. PACTI also receives significant funding from other ministries and institutions such as Petrobras and Embrapa.

The MCT carries out its activities through its 22 research centers and institutes. Among these, the CNPq and FINEP are especially important as agencies that encourage research. An increasingly important role is being played by the Center for Management and Strategic Studies, created in 2001, in planning and evaluating the work of the MCT and its agencies. Whereas the CNPq gives priority to supporting individuals by means of scholarships and other forms of aid, FINEP supports science, technology, and innovation in public and private institutions.

The National Council for Scientific and Technological Development (CNPq) operates a number of programs, the three most important of which are:

  • The Program for Qualifying Human Resources for Research, which has a fixed timetable and includes granting scholarships (for junior scientific initiation, scientific initiation, master’s, doctoral, and postdoctoral qualifications).
  • The Program for the Expansion and Consolidation of Knowledge, which is directed toward financing the projects of research groups in all areas (through general public calls) and of specialized networks (nanoscience and nanotechnology, among others); absorbing and stabilizing the supply of human resources (grants for productivity in research, grants for regional development, and grants for development in technology and innovation); encouraging the formation of nuclei of excellence, such as PRONEX and the National Institutes for Science and Technology; and issuing public calls for opportunities related to the sectoral funds.
  • The Program for International Cooperation, the main aim of which is to stimulate international exchange and encourage partnerships in the process of absorbing and disseminating knowledge and technology. This program supports bilateral and multilateral initiatives involving developed and developing countries.

In order to guarantee the presence of the Brazilian government in international work in science, technology, and innovation, the MCT is a signatory, through the CNPq, to several cooperation agreements, and it finances group research projects (scientific and technological exchanges) and scientific visits. One of the most successful examples of this cooperation is Prosul: the South American Program to Support Cooperative Activities in Science and Technology.

In this context, we must also mention the great step forward represented by the National Science and Technology Institutes Program. Begun in 2009, it has already enabled the creation of 123 institutes, with total resources of R$581 million (US$330 million) from various sources: the FNDCT, the CNPq, state foundations for supporting research, CAPES, the Ministry of Education, BNDES, the Ministry of Health, and Petrobras. It uses resources from many financial agencies in order to bring together the best research groups in the country that are working at the frontiers of science and in areas that are strategic for the country’s sustainable development. It is a very effective instrument for pushing forward basic scientific research and making it internationally competitive. One of its important features is its close cooperation with SIBRATEC.

FINEP promotes and encourages innovation and scientific and technological research in universities, institutes of technology, research centers, and other public or private institutions. More recently, FINEP has begun to offer economic support directly to business, the greatest innovation in the MCT’s range of instruments to encourage innovation. This support involves nonreturnable investment in companies, which was previously forbidden by law, and was made possible by regulations based on the Innovation Law and the Lei do Bem. The instrument works in three ways: developing products and processes related to strategic components of the National Productive Development Policy, aimed at businesses of any size; accrediting partners for the decentralized implementation of the instrument in various states of Brazil, with a view to increasing access by micro- and small businesses to funds for developing products and processes, carrying on the work done by the Support Program for Research in Business; and encouraging the hiring of qualified people (with master’s and doctoral degrees) by subsidizing part of their salaries.

The programs involve several lines of action:

Financial support for RD&I projects. Institutions in strategic sectors whose competence has been recognized may have their projects supported by means of special funding. This support comes mainly from the FNDCT but may also come from other ministries.

Finance for R&D projects in business. This type of support has become possible thanks to the Innovation Law. It allows nonreturnable public funding to be invested in companies, thus sharing with them the risks inherent in R&D activities.

Loans (credit) for R&D and innovation projects in business. These are low-interest loans with resources coming from the FNDCT and other federal funds.

Zero-interest loans. Rapid funding without bureaucracy and without requiring substantial guarantees. These are aimed at innovative and marketing activities of small businesses that are operating in areas that are priorities for the Productive Development Policy.

The Support Program for Research in Business. Operated in partnership with state-based foundations that support research, it encourages interaction between researchers and technology-based businesses to develop innovative projects.

Inovar (Innovating). This program consists of creating and sustaining an environment favorable for innovative companies using venture capital. Its activities include Innovating Funds and Innovating Seed Funds, both aimed at attracting investors, and organizing Seed Forums and Venture Forums for business training and attracting investors.

The National Program for Business Incubators and Technology Parks. This supports the planning, creation, and consolidation of incubator organizations for innovating businesses and technology parks.

In recent years, with the support of the National Congress, the federal government has set up new instruments that, after the crisis of the 1990s, have enabled it to again take up its crucial role in encouraging the expansion and improvement of the National System for Science, Technology, and Innovation. Just as important, or even more so, is the task of making Brazilian society aware of the strategic value of S&T.

We may state with conviction that for the first time in the history of this country, there exists in many areas of S&T a sufficient “density of competences” to make a decisive contribution to carrying out ambitious development projects using local knowledge. Over the past 40 years, Brazil has developed a complex system that today contains more than 200,000 researchers. Thanks to the granting of fellowships that began in the late 1980s, by 2008 Brazil had 46,700 people with graduate degrees, including 10,700 with doctorates.

Brazil is in an intermediate position in the world in terms of productive and academic capacity but has the critical mass necessary to gradually draw closer to the technological levels of developed economies. Between 1981 and 2008, the number of scientific papers by Brazilian authors published in international journals grew at an annual rate of 11.3%.
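To put that compound growth rate in perspective (simple arithmetic using only the figures just cited, not an additional data point), sustaining 11.3% annual growth over the 27 years from 1981 to 2008 implies

\[ (1.113)^{27} \approx 18, \]

that is, roughly an eighteen-fold increase in the annual number of internationally published papers over the period.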

The challenges are not simple. Total Brazilian investment in science, technology, and innovation is still only 1.3% of GDP, as compared with about 3% in industrialized countries. Business currently invests no more than 0.5% of GDP in R&D, and the aim is for this to rise to 0.65% by the end of this decade.

It is important to continue the expansion of programs to train personnel in all areas of knowledge, because to reach the same proportion of such individuals as is found in industrialized countries, Brazil should have about 500,000 researchers. However, it is also necessary to give greater emphasis to training personnel in areas that are strategic in terms of economic and social development.

The STI Action Plan unites the public policies that have been developed in various ministries, government bodies, and startup agencies. The aim is to make it possible for initiatives to move out from the academic world and the government to become vigorous change agents for development in the widest possible variety of productive areas, both private and public. By treating questions of science, technology, and innovation as questions of state, Brazil is taking a definitive step forward in its role as a player on the international stage.

Start with a Girl: A New Agenda for Global Health

Much of the frustration that permeates efforts to improve the lives of people in the developing world springs from the fact that the commonly identified roots of the problem are factors that are difficult to change. But one fundamental cause of the social and economic hardships in the developing world can be addressed: the poor health and limited education of adolescent girls.

As a partial glimpse into the hardships that girls face, consider the burdens of their older counterparts. Women’s challenges take different forms across regions, countries, and socioeconomic classes, although there are similarities, especially their exposure to discrimination, violence, and poverty. Women comprise two-thirds of the 759 million adults lacking basic literacy skills. Women’s job options are more limited and less well remunerated than those for men. More than half a million girls and women die in childbirth every year, often without the benefit of health services and skilled assistance. And all too often, women do not have a voice in the important decisions affecting their lives and those of their children, households, and communities.

To take their rightful place as drivers of economic growth and a healthier tomorrow, women need to be better prepared during their years preceding adulthood—the crossroads of adolescence. Adolescence is a period of risk and opportunity, with lifelong effects on the future of girls and their families. Indeed, the size and strength of tomorrow’s labor force will be shaped by the health of today’s girls, along with their education. It is during adolescence that girls establish patterns of sexual behavior, diet, exercise, tobacco use, and schooling that profoundly affect their lifelong health. And health problems experienced during adolescence, such as sexually transmitted infections, anemia, and gender-based violence, can have long-term consequences.

Most girls are still reachable during early adolescence, when they are 10 to 14 years old, through institutions such as families, schools, and for some, workplaces. When girls are reached during this phase, their behavioral patterns can be shaped to protect their health as well as the health of their future children.

But just as the opportunities are clear, the threats to girls’ health and well-being are numerous and are linked to fundamental injustices present in many societies. A general guide for how to overcome these injustices is emerging from evidence gathered from the actions of a variety of international organizations. National leaders now need to evaluate, support, and expand the lessons that have been learned in order to make a real difference for girls.

Litany of problems

Most girls enter adolescence healthy, but social and biological forces make them vulnerable to illness and disability. What happens during the eight or nine years following puberty (roughly between the ages of 10 and 19) can have long-term consequences.

In many communities across the developing world, most girls steer clear of health services from the age of their last immunization until their first pregnancy. The reasons are many. Girls often lack the financial means and autonomy to seek care when they need it and tend to be even more uninformed than boys about their bodies, ways to maintain their health, and the health services available. The health sector rarely orients services in the way adolescents need them to be: non-judgmental, confidential, and easily accessible. Surveys reveal that even youth-oriented programs, such as youth centers and peer-based programs, often fail to reach the most vulnerable girls. A risky adolescence is the result.

The conditions that girls face during adolescence and the decisions (often involuntary) that they make affect not only themselves but also the health of their future children. Unhealthy mothers pass on poor health to their children, and young mothers are even more likely than older mothers to pass on poor health to their babies. For example, one study summarizing demographic and health survey results from nearly 87,000 women in 76 countries has shown that age is a risk factor even when controlling for a number of other factors, including education level, socioeconomic status, and residence. Compared with mothers aged 24 to 26, young mothers were more likely to have babies who were too small (underweight and stunted), anemic, and less likely to survive to the age of 5. Delaying those births by just a few years would have improved the health status and life prospects of many tens of thousands of children.

Understanding girls’ health requires looking beyond epidemiology to the social forces making them vulnerable—the social determinants of health. Examples of the social determinants that have major health effects on girls include their need (often forced) to work, child marriage, limited education opportunities, and the inequitable gender norms that lay the foundation for all of these. For girls forced to work, employment options are usually limited to risky and exploitive jobs that carry health risks of their own. Domestic work is the main economic activity for girls under 16. It often takes the form of unregulated employment and exploitation, and sometimes servitude or slavery. Girls in this behind-closed-doors occupation are vulnerable to abuse, often receive little or no pay, do not have access to the education and skill training needed to compete for better jobs, and are isolated from friends and family.

All working girls are vulnerable to sexual exploitation, but it is a certainty for those working as prostitutes. It is estimated that as many as 10 million girls and boys ages 10 to 17 are exploited within the sex industry, with perhaps 1 million children entering prostitution annually. In emerging market countries, more girls and young women work in industries in which they are exposed to chemical and physical risks, including dirty drinking water, environmental toxins, unsafe conditions, and long hours. Girls often find themselves in such adverse situations because others—mothers, fathers, and husbands—control their mobility, life choices, finances, sexuality, and reproductive decisions.

Child marriage, a manifestation of girls’ powerlessness, makes girls vulnerable in multiple ways. Marriage before the age of 18 is not an isolated practice. Thirty-six percent of all women ages 20 to 24 in the developing world (excluding China) report that they were married as children; in 10 countries, more than half of all girls are married before 18, many much younger than that. In Niger, more than three-quarters of girls are married by age 18. Child marriage is a driver of a number of health risks. Research shows that in addition to being exposed to the risks of early childbearing, married girls in sub-Saharan Africa are 48 to 65% more likely to be HIV-infected than their unmarried peers. Their increased vulnerability is due to the typical age gap between young wives and their older husbands, as well as to the nature of sex within marriage, which is typically frequent and unprotected, with wives unlikely to insist on condom use. Child brides are also less likely to gain education and vocational skills, and they often face isolated lives of restricted mobility.

Lack of education poses significant health risks, and many studies have established that simply enrolling and keeping girls in school for at least six years is one of the most effective ways to benefit their health. Completing primary school is strongly associated with later age of marriage, later age of first birth, and lower lifetime fertility. Unfortunately, the impressive global progress in enrolling girls in primary school is not matched by efforts to enable them to continue into secondary school, and only 43% of girls of appropriate age are in secondary school in the developing world. School leaving is frequently followed by cohabitation, marriage, and early pregnancy, with all of their associated risks.

Girls and younger women may actually be at higher risk of gender-based violence than older ones. Studies from varied settings reveal that approximately one-third of girls’ sexual initiations are forced through physical violence by their partners. This age-old but often hidden problem causes injuries, HIV and other sexually transmitted infections, unwanted pregnancies, and in many cases, mental health disorders.

Many adolescent girls, as well as boys, also establish behaviors that put them at elevated risk of developing chronic diseases such as heart disease. These risky behaviors are in some cases reinforced by social conditions and pressures. Unhealthful practices include poor diets and exercise habits, unsafe sex, tobacco use, and substance abuse. Smoking rates are increasing rapidly among girls, and it is estimated that by 2030 tobacco use will be the single biggest cause of death globally. Obesity also is on the rise. Higher rates of overweight generally are associated with increasing urbanization and gross national income. However, a recent survey of women ages 20 to 49 in 36 low- and middle-income countries revealed that most countries had substantially more overweight women than underweight women. The pattern was reversed in other areas where high rates of malnutrition persist, such as in rural areas of India and Haiti, and in certain other areas of the least developed countries.

The health picture

With recent improvements in data availability, a picture is emerging of the direct causes of adolescent girls’ death and disability in different regions. In mortality terms, for girls ages 15 to 19, deaths are most frequently the result of childbearing, AIDS, depressive and panic disorders, and burns, with violence also being a significant problem. Issues related to pregnancy and childbirth, including hemorrhage, abortion, and hypertensive disorders, kill more girls ages 15 to 19 than any other cause.

In terms of regional and gender differences in adolescent mortality, total death rates for adolescent girls and boys are similar in nearly all developing regions, at just under 1 per 1,000 population. The exceptions are South Asia, where death rates are twice as high, and sub-Saharan Africa, where rates are nearly three times as high. For South Asia, deaths from infectious disease and injury account for the excess death rate. For sub-Saharan Africa, AIDS, tuberculosis, and maternal deaths account for most of the excess, with contributions from other infectious diseases, violence, and war. In the 10- to 24-year-old group, more males die than females in all regions except Africa and Southeast Asia. In Southeast Asia, the number of injury deaths among young women, particularly from fires and suicide, is pronounced. Although reports from India once attributed many of these deaths to suicide and accidents, violence from family members is now known to be an important factor in many cases.

Mortality statistics give some indication of health risks, but death is actually a relatively rare event during adolescence compared with infancy and older age. In fact, in regions without substantial maternal mortality, rates of female deaths are generally low throughout adolescence. To better understand the direct causes of adolescents’ ill health, it is much more informative to explore their burden of disease, a measure that combines death, illness, and disability. Worldwide, neuropsychiatric conditions, especially unipolar major depression, as well as schizophrenia and bipolar disorders, are the primary cause of girls’ burden of disease. The risk factors driving this mental health burden go beyond identity crises or peer pressure to include exposure to violence, restriction of girls’ opportunities, and poverty, especially where it affects girls’ ability to attend school. Road traffic accidents are the second most important cause of girls’ burden of disease, although girls are only half as likely as males to die in this manner. Road accidents are predicted to increase dramatically in India, China, and elsewhere as the use of cars becomes more widespread but infrastructure and driving culture fail to keep pace. Injuries, including injuries from gender-based violence, account for 4 of the 11 leading causes of burden of disease.
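For readers unfamiliar with how such a combined measure is constructed, the standard metric in this kind of analysis is the disability-adjusted life year (DALY); assuming that is the measure underlying the figures cited here, it is computed as

\[ \text{DALY} = \text{YLL} + \text{YLD}, \]

where YLL is years of life lost to premature death and YLD is years lived with disability, weighted by the severity of the condition, so that one DALY represents the loss of one year of healthy life.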

Reproductive health concerns deserve special mention because of the defining nature of girls’ reproductive capacity for their lives. Births to girls between the ages of 15 and 19 account for only 11% of all births worldwide, yet they account for 23% of the overall burden of disease from maternal conditions annually. (These figures vary considerably by region, but the overall problem is persistent throughout the developing world.) Adolescent mothers have high rates of complications from pregnancy, delivery, and abortion, yet in many cases their contraceptive needs are unmet. One of the most horrifying results of early childbearing is fistula, a devastating consequence of obstructed labor that causes lifelong leakage of urine or feces. This disorder affects approximately 2 million women, mostly in Africa and Asia. Although age itself does not appear to be the key risk factor for this or other poor pregnancy outcomes, adolescents are at higher risk because they are usually having their first baby (first births are riskier regardless of age), and they also are likely to be small in physical size, poorly nourished, already suffering from diseases such as malaria, and relatively uninformed about how to manage a pregnancy and birth.

Adolescent pregnancy and childbirth pose higher risks not only for young mothers but also for their children. In particular, stillbirth and death are 50% more likely for babies with mothers under age 20 than for babies with mothers 20 to 29 years old. As a result of their low birth weights, babies who are born to adolescents and who survive are significantly more likely to suffer from undernutrition, late physical and cognitive development, and adult chronic diseases such as coronary heart disease.

An achievable agenda

Most of what threatens adolescent girls’ health is preventable. But prevention will require a comprehensive response within and outside of the health sector. In some cases, a strong evidence base points to interventions that need expansion. In others, especially where communities are taking innovative action to tackle some of the most insidious causes of ill health, further evaluation and operational research is required to identify and make the case for expanding promising approaches. The recommendations that follow are global and apply to all girls to varying degrees. Their relative prioritization will depend on the specific threats that girls face in a given region or country.

Expanding health services. Adolescent girls encounter multiple barriers in obtaining appropriate preventative and curative health services. Youth-friendly health services are one way to attract more young people to essential care, especially when their introduction is accompanied by demand-creation activities in communities. The hallmark of youth-friendly services is not what services are offered, which will vary by location and burden of disease, but how they are offered. To be youth-friendly, providers and services must be sensitive to adolescent health and psychosocial needs, nonjudgmental, and confidential, and they must operate with convenient hours and locations.

South Africa’s National Adolescent Friendly Clinic Initiative (NAFCI), launched by Lovelife and the South African Department of Health in 1999, is an accreditation program designed to improve the quality of adolescent health services and strengthen the public sector’s ability to respond to their health needs. NAFCI works by making health services more accessible and acceptable to young people, establishing national standards and criteria for adolescent health care in clinics throughout the country, and building the capacity of health care workers to provide quality services. A study conducted at 32 NAFCI clinics from 2002 to 2004 found that clients ages 10 to 19 showed a statistically significant increase in average monthly clinic visits during this period, but the evaluation also showed room for improvement.

Making services more responsive. Girls need health system changes that make the entire sector more responsive to their needs. Many such changes will require only marginal adjustments that are likely to be low cost. Efforts to strengthen health systems should pay particular attention to improving community-based service delivery, especially delivery of youth-friendly services; training health workers to understand and treat the causes of adolescents’ ill health; and using adolescent-specific indicators to measure health-system functioning. Demand-side financing mechanisms such as micro-insurance and cash transfers can also be modified to ensure that family benefits reach adolescent girls. Collectively, these strategies can make health services more acceptable and accessible and ensure that girls benefit from health sector reforms. As an added gain, all users of the health systems will benefit from changes made in the name of adolescent girls.

A base of broader reforms

Truly altering the equation for girls also will require actions outside of the health sector to tackle the social determinants of girls’ ill health. This means working in families, communities, workplaces, and schools to chip away at the underlying factors that prevent girls from accessing the skills, information, and services they need to navigate adolescence successfully and grow into empowered, healthy women. The complementary actions that need increased support include changing social norms to promote healthy behavior, creating community resources that empower girls to manage risk, and increasing the health-related benefits of schooling and of investments in other sectors. But even though this vision is clear, the evidence on the best ways to achieve it is slim, although there are encouraging signs. Of the recommendations below, some are derived from solid evidence from a single site, and some are based on promising approaches that need more thorough evaluation as work progresses.

Changing social norms. Firmly entrenched cultural practices, especially those affecting girls with limited power, are hard to alter. Community education and mobilization has had some impact on combating harmful traditional practices such as female genital cutting and child marriage. Tostan, a nongovernmental group that works in a number of countries across West, Central, and East Africa, has carried out numerous community-based efforts that have proved effective in reducing harmful traditional practices. Tostan’s work in Senegal, for example, includes basic education for women on health, human rights, literacy, and problem solving. Among other efforts, the group organizes public declarations where men and women, including local opinion shapers such as religious leaders, speak out in front of their communities to oppose harmful practices. The program appears to have significantly affected community attitudes toward female genital cutting, leading to a dramatic decrease in the number of parents who intend to have their daughters cut. Evaluation results also show positive effects on child marriage.

In changing social norms, it is important to work with the boys and men surrounding girls. A small but growing number of evaluations, often from the HIV/AIDS world, demonstrate the potential of male engagement for girls’ and women’s health, although more rigorous analyses are needed, including on cost effectiveness. Addressing gender norms may prove especially important in preventing the spread of HIV. Currently, HIV prevention efforts too often fail to tackle the underlying causes of girls’ vulnerability: gender inequality, the typical age gap and power imbalances between girls and their sexual partners, and girls’ biological vulnerability. As a result, girls and young women have come to bear a disproportionate HIV/AIDS burden. In parts of sub-Saharan Africa, for example, young women 15 to 24 years old account for three of every four new HIV infections.

Working in Brazil, the nongovernmental group Promundo has applied a number of strategies to reduce gender inequity and thereby prevent violence and reduce HIV transmission. Interventions have included conducting group education and lifestyle social marketing that incorporate gender-equitable messages to promote condom use. Study results have demonstrated that improving gender attitudes was associated with improvements in at least one HIV risk outcome.

Changing the social norms around tobacco use, diet, and exercise will be critical in reducing the chronic disease threat emerging in the developing world. The most effective route to reducing smoking’s attractiveness to young people is to increase prices through taxation, which has beneficial population-wide effects because it also reduces smoking among adults. Although evidence from the developing world on effective approaches to reducing obesity is extremely limited, integrated campaigns that make healthy diets and regular exercise more appealing and feasible appear to be the most effective approach. Brazil’s Agita São Paulo program targets schoolchildren, older adults, and workers in an effort to encourage at least 30 minutes of moderate physical activity at least five times a week. The program uses special events, informational materials, mass media outreach, training for physical educators and physicians, worksite health promotion, and cooperative ventures with public agencies from several sectors. Through this multipronged approach, the Agita program has achieved measurable gains in reducing overweight and obesity among the target populations.

Creating community resources. Working in the social environments in which girls live is essential for effectively protecting their health by empowering them to manage risk. Community resources are particularly important for the most marginalized girls, who are at greatest risk of ill health but the least likely to attend school or to have access to health services or friendship networks. The creation of “safe spaces” for the most socially isolated girls is an important strategy for the poorest, most vulnerable girls, yet it remains critically underfunded. This approach is about creating spaces where girls can gather with a mentor on a regular basis to learn about their bodies and rights, learn skills, make friends, and discuss their lives.

In Ethiopia, the Berhane Hewan Safe Spaces project reached out to girls who were out of school, married, and working as domestic servants to link them to mentors, friendship networks, and services through clubs that met regularly. Researchers found that over a relatively short period of involvement, the girls, who had no other access to institutional support, showed improvements in all areas targeted by the project, including participation in friendship networks, school attendance, reproductive health knowledge and communication, and contraceptive use. Statistical analysis also revealed a considerable effect in raising the age at marriage for younger girls, an effect believed to be due to giving them a few extra years to pursue other socially acceptable activities, such as expanding their social networks, attending school, and learning more skills.

Increasing investments in education. Ensuring that girls complete secondary school is one of the most efficient actions governments can take to improve girls’ chances for good health. Governments should expand their focus beyond primary school to encompass access to and quality of lower secondary education programs through age 16. Governments and the private sector, with donor support, need to increase formal and informal schooling opportunities by extending primary school facilities, offering scholarships, expanding household cash transfer schemes to disadvantaged girls, and offering alternative learning programs. Ridding schools of violence and sexual harassment is another important strategy for making schools safe and accessible for girls. Schools need to offer comprehensive sexuality education, which should include gender equality and human rights education in order to be effective.

A sound investment

Important in its own right, improving adolescent girls’ health is also a feasible investment, although estimating costs is a challenge because cost information is available from only a small number of programs. In an exercise undertaken for the report Start With a Girl: A New Agenda for Global Health, an estimate built from the distinct components of a comprehensive set of interventions suggests that providing girls ages 10 to 19 living in low- and lower-middle-income countries with essential health services and comprehensive sexuality education (through a variety of efforts, including community-based services and mass media campaigns) would cost $360 per girl per year. For roughly a dollar a day, then, a generation of adolescent girls could be protected from factors beyond their control that limit their life chances and those of the next generation.

Without information on what is currently spent—and there is no source of such information—it is impossible to know how much of this funding would be additional, although it is reasonable to guess that much of it would have to be. But even so, the real bottom line is not the costs of the activities alone, but their net cost, taking into account the benefits of investments within and outside of the health sector. In broad terms, it is reasonable to expect that scaled-up programs for girls’ health would yield medium- and long-term reductions in maternal and infant mortality, HIV incidence, cervical cancer, and chronic disease, along with increases in girls’ education and women’s labor market productivity. These are high pay-offs for a relatively modest investment.

To date, the international community has issued many high-level statements and policy documents on the importance of helping girls navigate the critical juncture of adolescence. But the rhetoric has not been matched by specific, high-impact actions through government policies or donor support, and as a result girls’ needs remain overlooked. The international community must now make its walk match its talk.

Leadership at the national level is paramount. Girls’ health should be a high priority for ministers of health as well as ministers of finance and planning officials. International donors and technical organizations must encourage and support leadership at national levels by providing knowledge on effective approaches within and outside of the health sector, as well as a share of the financial resources to step up action for girls’ health. Civil society groups must marshal advocacy to solve girls’ health problems in ways tailored to local needs. Above all, adolescent girls need support to be their own advocates. Healthy, empowered girls who speak up for their own rights and those of their sisters will be most effective in bringing about sustained change.

Double-Edged DNA: Preventing the Misuse of Gene Synthesis

During the past decade, a global industry has emerged based on synthetic genomics: the use of automated machines to construct genes and other long strands of DNA by stringing together chemical building blocks called nucleotides in any desired sequence. Some 50 companies—concentrated primarily in the United States, Germany, and China—synthesize gene-length segments of double-stranded DNA to order. Scientists in government, university, and pharmaceutical laboratories worldwide use these products to study fundamental cellular processes and to develop new vaccines and medicines, among other beneficial applications. But synthetic genomics presents a dual-use dilemma in that outlaw states or terrorist groups could potentially exploit synthetic DNA for harmful purposes. Of the biotechnologies that entail dual-use risks, gene synthesis has elicited the greatest concern because of its maturity, availability, and potential consequences.

Already, the ability to synthesize long strands of DNA and stitch them together into a genome, the blueprint of an organism, has enabled scientists to recreate infectious viruses from scratch in the laboratory. This feat was accomplished for poliovirus in 2002, the Spanish influenza virus in 2005, and the SARS virus in 2008. Some analysts worry that it will soon become technically feasible to synthesize the smallpox virus, a deadly scourge that was eradicated from nature in the late 1970s and currently exists only in a few highly secure repositories.

It is critical, then, to devise effective governance measures for synthetic genomics that permit the beneficial use of this powerful technology while minimizing, if not eliminating, the risks. Some analysts contend that the best approach is to have governments impose top-down, legally binding controls. Yet formal government regulations have a number of drawbacks. Not only are regulations time-consuming and cumbersome to develop and promulgate, but they are static and hard to modify in response to rapid technological change.

A better approach is to adopt a form of “soft” governance based on voluntary guidelines or industry best practices. This type of self-regulation, involving suppliers and perhaps consumers of synthetic DNA, can be reinforced by government policies that encourage responsible behavior. Although biosecurity measures for the gene-synthesis industry are being implemented in the United States and elsewhere, these activities are not well coordinated, and continued efforts will be needed on a national and international basis to fashion an effective global regime.

A movement begins

The science behind commercial DNA synthesis may be cutting-edge, but ordering a synthetic gene over the Internet is quite straightforward. A customer—say, a university research scientist—goes to a supplier’s Website, enters the sequence of the desired gene, and provides payment information, such as a credit card number. The company then synthesizes the requested strand of DNA. After verifying that the genetic sequence is correct, the company inserts it into a loop of DNA (called an expression vector) that can be cloned in bacteria to produce a large number of copies. Finally, the order is shipped to the customer by express mail.

The worry, of course, centers on what the recipient will do with the synthetic gene. Early on, a few suppliers recognized the dual-use nature of their product and began to develop voluntary biosecurity measures to reduce the risk that criminals or terrorists could order dangerous DNA sequences over the Internet. Blue Heron Biotechnology, founded in 2001 in Bothell, Washington, was one of the first to implement such measures. Initially, the company relied exclusively on screening customers to verify their bona fides, but in the wake of 9/11 and the anthrax letter attacks, it deployed a second line of defense: screening DNA synthesis orders.

As part of this effort, Blue Heron agreed to serve as a testbed for a software package called Blackwatch, developed at Craic Computing in Seattle. Blackwatch uses a standard suite of algorithms to compare incoming synthesis orders against a database of DNA sequences of known pathogens: viruses and bacteria that cause infectious disease. If an order closely matches a genetic sequence in the database, the program flags it as a “hit.” When that happens, a human expert employed by the company assesses the security risk associated with the flagged sequence, checks the identity of the customer, verifies that the intended end-use is legitimate, and confirms that all biosafety and biosecurity concerns have been addressed.
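
To make the screening step concrete, the sketch below shows, in Python, one way such a comparison could work in principle. It is only a toy illustration: Blackwatch’s actual algorithms and databases are not public, real screening systems rely on sequence-alignment tools rather than the naive k-mer overlap used here, and the sequences, names, and threshold shown are invented.

```python
# Toy illustration only: Blackwatch's real algorithms and pathogen databases are not
# public, and production screening relies on sequence-alignment tools. Here a naive
# k-mer overlap stands in for alignment; sequences, names, and threshold are invented.

PATHOGEN_DB = {
    "hypothetical_pathogen_gene_A": "ATGGATCAAACTGTGGAAGAAGTTCGTCGT",
    "hypothetical_pathogen_gene_B": "ATGAAAAAACGAAAAGTGTTAATACCATTA",
}

def kmers(seq, k=12):
    """All overlapping substrings of length k."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def screen_order(order_seq, db=PATHOGEN_DB, k=12, threshold=0.5):
    """Return (name, overlap) pairs for database entries the order closely resembles."""
    hits = []
    order_kmers = kmers(order_seq.upper(), k)
    for name, ref in db.items():
        ref_kmers = kmers(ref.upper(), k)
        overlap = len(order_kmers & ref_kmers) / len(ref_kmers)
        if overlap >= threshold:
            hits.append((name, round(overlap, 2)))
    return hits  # any hit would be passed to a human expert for review

suspicious_order = "ATGGATCAAACTGTGGAAGAAGTTCGTCGTTTTGCA"
print(screen_order(suspicious_order) or "no hit: order proceeds automatically")
```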

To date Blue Heron’s screening system has detected a large number of pathogenic sequences, but follow-up review by human experts has yet to identify a customer with malicious intent. Instead, nearly all orders for pathogenic genes have involved the development of new vaccines or basic research into the molecular mechanisms of infectious disease. In July 2009, for example, the company received an order from a Japanese government laboratory for 30% of the genome of Lujo virus, a hemorrhagic fever virus that was discovered in Zambia in September 2008 and then sequenced in the United States. The lab in Japan requested two genes coding for proteins in the viral coat, presumably for vaccine development. After careful review, Blue Heron determined that the order was legitimate and proceeded to fill it.

Craic Computing is now developing an improved version of the Blackwatch software, under the working title Safeguard. This new version is expected to be better at spotting DNA sequences related to pathogenicity, such as genes coding for virulence factors (traits that increase the ability of a bacterium or virus to cause disease) or toxins (poisonous substances produced by bacteria and other living organisms). The goal of improved screening is to reduce the number of false-positive hits caused by the presence of common “housekeeping genes” in both pathogenic and nonpathogenic microbes.

By the mid-2000s, many suppliers of synthetic DNA in the United States and Europe had begun to screen sequence orders voluntarily, but the methodology varied from company to company, and a few firms resisted screening entirely. During the summer of 2006, in an effort to harmonize the inconsistent biosecurity practices in use across the industry, Blue Heron Biotechnology and six other leading gene-synthesis companies in the United States and Europe (GENEART, Codon Devices, Coda Genomics, BaseClear, Bioneer, and Integrated DNA Technologies) formed the International Consortium for Polynucleotide Synthesis to promote safety and security in the emerging field of synthetic biology. The participating firms worked closely with officials at the FBI on a pilot project called the Synthetic Biology Tripwire Initiative. In October 2007, this collaboration culminated in a mechanism by which companies could report suspicious gene-synthesis orders to the FBI. Because the industry consortium relied on volunteer labor, however, it gradually became inactive.

Meanwhile, in April 2007, a group of five German companies (ATG:biosynthetics, Biomax Informatics, Entelechon, febit Holding, and Sloning BioTechnology) formed another consortium called the International Association Synthetic Biology (IASB). A year later, the IASB held an industry workshop in Munich at which leading gene-synthesis experts from Europe and the United States discussed creating a uniform Code of Conduct for screening customers and gene-synthesis orders, based on best practices already in use by several companies. IASB members agreed that biosecurity was not an area of competition and pledged to share resources to develop a screening system that would benefit them all, while creating a level playing field. By mid-2008, they had prepared and circulated for comment a draft “Code of Conduct for Best Practices in Gene Synthesis.”

In mid-2009, however, a split emerged within the industry over the role of human experts in the screening process. The two largest suppliers of synthetic genes—DNA2.0, based in California, and GENEART, based in Germany—proposed replacing human experts with an automated system that would screen all gene-synthesis orders against a predetermined list of virulence-related sequences that would be frequently updated. Advocates of this approach argued that it would be fast and cheap to implement, with no need to pay human experts to determine the function of close matches. But the DNA2.0/GENEART proposal met with considerable resistance because it was less capable than existing screening methods, and by September 2009 its supporters had backed off. Even so, the two firms continued to pursue an alternative to the draft IASB Code of Conduct by holding a series of secret meetings with other large gene-synthesis providers.

Events continued apace. In November 2009, the IASB held a second industry workshop in Cambridge, Massachusetts, to put the finishing touches on its Code of Conduct. Companies at the workshop reached consensus on a basic set of guidelines for screening customers and gene-synthesis orders, but they delegated the details of the process to a Technical Expert Group on Biosecurity, to be established at a later date. All five members of the IASB immediately endorsed the Code of Conduct, and not long afterwards, the first non-IASB company, Generay Biotech of Shanghai, China, also adopted it. But other leading firms, including Blue Heron Biotechnology and GENEART, declined to sign on to the IASB code because they objected to putting key technical decisions in the hands of an expert group that was not directly accountable to the participating companies.

Under the IASB system, firms that adopt and comply with the Code of Conduct will receive a “seal of approval” that they can display on their Websites and use in promotional materials. The seal is designed to give these companies a competitive advantage by identifying them as reputable suppliers. Because large-volume customers such as major pharmaceutical firms are unlikely to accept the IASB seal at face value, the association plans to certify participating suppliers on an annual basis and to field complaints about alleged noncompliance. To this end, the IASB is considering a “red team” strategy in which it sends companies fake gene-synthesis orders containing pathogenic sequences in order to verify that they are screening effectively. Large scientific societies and major customers could also reinforce the standard by refusing to purchase synthetic DNA from companies that do not screen.

To complement its Code of Conduct, the IASB is preparing a Web-based, password-controlled database called the Virulence Factor Information Repository, or VIREP. This database will “annotate” the genes of viral and bacterial pathogens according to biological function, such as the proteins they encode, thereby helping human screeners to distinguish between harmless genes and those that pose a biosecurity risk. Screeners will also be able to deposit information about newly discovered virulence factors into VIREP, saving those with access to the database the trouble of repeatedly investigating the same sequences. As a result, biosecurity screening will become a dynamic process that is refined over time as additional genes associated with pathogenicity are identified.
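
The sketch below illustrates, under stated assumptions, the kind of annotated record and deposit/lookup workflow such a repository might support. The field names, functions, and the example entry are invented for illustration and are not VIREP’s actual schema or interface.

```python
# Hypothetical sketch of the kind of record a virulence-factor repository might hold.
# Field names, functions, and the example entry are invented, not VIREP's actual schema.
from dataclasses import dataclass
from typing import Optional

@dataclass
class GeneAnnotation:
    gene_id: str
    organism: str
    encoded_protein: str
    biosecurity_concern: bool   # True if the gene is associated with pathogenicity
    notes: str = ""

REPOSITORY: dict = {}

def deposit(annotation: GeneAnnotation) -> None:
    """A screener records an investigated sequence so others need not repeat the work."""
    REPOSITORY[annotation.gene_id] = annotation

def lookup(gene_id: str) -> Optional[GeneAnnotation]:
    return REPOSITORY.get(gene_id)

deposit(GeneAnnotation("example_gene_001", "example bacterium",
                       "putative toxin subunit", True, "flag orders for human review"))
print(lookup("example_gene_001"))
```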

Developments continue

Even after the IASB introduced its Code of Conduct, matters were far from settled. Two weeks after the Cambridge workshop, five leading gene-synthesis companies—GENEART, DNA2.0, Blue Heron Biotechnology, Integrated DNA Technologies, and GenScript—announced the formation of a separate industry group called the International Gene Synthesis Consortium (IGSC). The five members of IGSC, which together account for more than 80% of the global market for synthetic genes, also launched their own standard called the “Harmonized Screening Protocol for Gene Sequence and Customer Screening to Promote Biosecurity.”

The IGSC protocol provides that companies should “screen the complete DNA sequence of every synthetic gene order … against all entities found in one or more of the internationally coordinated sequence reference databanks.” Whenever this process identifies a sequence associated with pathogenicity, the order will receive further scrutiny from a human expert, including “enhanced” customer screening. IGSC member companies are currently developing a Regulated Pathogen Database that will include all gene sequences identified as potentially hazardous in several existing national lists, including the U.S. Select Agent List (which includes 82 pathogens and toxins of bioterrorism concern) and the Core Control List compiled by the Australia Group, an informal forum of 41 countries that harmonize their national export controls on dual-use materials and equipment suitable for chemical and biological weapons production.

Under the IGSC protocol, participating companies must reject all sequence orders that encode an infectious virus or a functional toxin on the Select Agent List. Whenever the screening software identifies a sequence associated with pathogenicity that does not encode an entire Select Agent, a human expert will review the order to ensure that the customer is legitimate and the material will be used only for peaceful purposes. If enhanced customer screening turns up grounds for suspicion, the IGSC protocol requires the supplier to deny the order and to notify the FBI or some other law enforcement authority. Consortium members agree to retain all customer, order, and screening records for at least eight years, the length of the statute of limitations for obtaining an indictment.
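
The decision flow described above might be summarized, very roughly, as in the following sketch. The inputs and return strings are simplified assumptions drawn from the description of the protocol, not the IGSC’s actual implementation.

```python
# Rough decision-flow sketch of the screening steps described above. The inputs and
# return strings are simplifications for illustration, not the IGSC's implementation.

RECORD_RETENTION_YEARS = 8  # per the protocol's record-keeping provision

def review_order(encodes_full_select_agent: bool,
                 partial_pathogenic_hit: bool,
                 customer_cleared_by_expert: bool) -> str:
    if encodes_full_select_agent:
        return "reject the order outright"
    if partial_pathogenic_hit and not customer_cleared_by_expert:
        return "deny the order and notify law enforcement"
    return f"fill the order; retain records for at least {RECORD_RETENTION_YEARS} years"

print(review_order(encodes_full_select_agent=False,
                   partial_pathogenic_hit=True,
                   customer_cleared_by_expert=True))
```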

Substantively, the IGSC Harmonized Screening Protocol and the IASB Code of Conduct are remarkably similar, differing only in a few minor details. The main difference between the two industry standards is the process by which they were developed. Whereas the IASB code was drafted in an open and transparent way by all firms that wished to participate, the IGSC protocol was written in secret by a self-selected group that was limited to the suppliers with the largest market share. Thus, although the IGSC has urged all gene-synthesis providers to adopt its standard, only the member companies will have a say in how the screening system evolves in the future. For this reason, smaller U.S. and European firms worry that their interests and concerns will not be taken into account during the IGSC decision-making process.

Enter the U.S. government

In parallel with the development of the two industry screening standards, the U.S. government prepared its own set of guidelines for commercial gene synthesis. A federal advisory committee called the National Science Advisory Board for Biosecurity (NSABB) recommended in December 2006 that the government “develop and promote standards and preferred practices for screening gene-synthesis orders and interpreting the results, and require that orders be screened by providers.” Responding to this recommendation, the White House convened an interagency working group in June 2007 to develop biosecurity guidelines for the U.S. gene-synthesis industry.

Despite the NSABB’s call for legally binding regulations, the interagency group decided to develop a set of voluntary guidelines and test them for a few years to assess their effectiveness. The government took this approach partly out of concern that binding regulations would impede legitimate scientific research and put U.S. suppliers at a disadvantage vis-à-vis their foreign competitors. Another reason was that formal regulations are best suited to situations that are relatively static, whereas synthetic genomics is an emerging technology that is currently in flux both technically and commercially. Recognizing that overly flexible guidelines could permit lax or variable implementation, U.S. government officials sought a reasonable balance between flexibility and consistency.

Although there was no formal coordination between the government and industry tracks, enough discussion occurred to ensure that the two efforts were largely compatible. Finally, on November 27, 2009, the U.S. government published a draft “Screening Framework Guidance for Synthetic Double-Stranded DNA Providers” in the Federal Register and opened it up for 60 days of public comment. The proposed federal guidelines called for the simultaneous screening of new customers and orders for double-stranded DNA greater than 200 nucleotide base-pairs long.

According to the draft guidelines, customer screening involves confirming the purchaser’s identity and institutional affiliation, ensuring that it is not in a database of “denied entities” involved in terrorism or biological weapons proliferation, and verifying the intended end use. Suppliers must also look for “red flags” suggestive of illicit activity, such as the use of a post office box instead of a street address. At times, customer screening demands considerable research. In India, for example, gene-synthesis customers rely on extensive networks of distributors and middlemen, making it difficult to identify the actual end-user. One currently unresolved issue is whether gene-synthesis companies should supply synthetic DNA to researchers who lack an institutional affiliation, such as hobbyists working in home laboratories.

The primary difference between the draft U.S. government guidelines and the two industry standards is the method for screening gene-synthesis orders. In the draft federal guidelines, companies are urged to use a “Best Match” algorithm that compares requested sequences against the National Institutes of Health’s comprehensive GenBank database and flags an order if it is “more closely related to a Select Agent or Toxin sequence than to a non-Select Agent or Toxin sequence.” The U.S. government chose this approach because its primary objective was to prevent would-be bioterrorists from circumventing security controls on access to Select Agents by ordering the corresponding genetic material over the Internet.
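
As a rough illustration of the Best Match idea, the sketch below assumes that an upstream alignment search has already reduced each database hit to a label and a similarity score; the scores, labels, and flagging rule shown are invented simplifications, not the government’s specified procedure.

```python
# Simplified illustration of the Best Match rule: flag an order if its single closest
# database match is a Select Agent or toxin sequence. Scores and labels are invented
# and would come from an upstream alignment search in any real system.

def best_match_flags(hits):
    """hits: list of (label, similarity_score); label is 'select_agent' or 'other'."""
    if not hits:
        return False
    best_label, _ = max(hits, key=lambda hit: hit[1])
    return best_label == "select_agent"

example_hits = [("select_agent", 0.97), ("other", 0.92)]
print(best_match_flags(example_hits))  # True: closest match is a Select Agent sequence
```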

In January 2010, the Center for Science, Technology and Security Policy at the American Association for the Advancement of Science (AAAS) held a scientific workshop in Washington, D.C., to discuss the draft federal guidelines. Participants called into question the proposed 200 base-pair cutoff for screening sequence orders on the grounds that it was arbitrary and hard to justify scientifically, and suggested instead that screening be performed on any piece of double-stranded DNA delivered in an expression vector, regardless of length. The proposed Best Match algorithm also provoked a great deal of discussion. Critics argued that although Best Match may be simple and easy for companies to implement, it is weaker than either industry standard because it cannot detect genetic sequences of pathogens and toxins of biosecurity concern that are not on the Select Agent List, such as the SARS virus or recently emerged hemorrhagic fever viruses. Indeed, although the draft federal guidelines admit that non-Select Agent sequences “may pose a biosecurity threat,” such sequences are exempted from routine screening “due to the complexity of determining pathogenicity and because research in this area is ongoing.”

The consensus of the AAAS workshop was that although automated screening using Best Match is the most practical approach for detecting genetic sequences that code for Select Agents, there is a clear need to capture a larger universe of sequences of concern. Not only are static defenses such as the Select Agent List easily circumvented, but the marginal cost of screening for pathogens and toxins outside the list is relatively low. One workshop participant warned that if the U.S. government endorses the Best Match algorithm, companies that have argued in the past for fast and cheap screening methods will almost certainly embrace this approach. In that case, other firms will follow suit to remain competitive, moving the industry toward a screening standard that is less capable than what is already practiced by most companies today.

In response to the concerns expressed at the AAAS workshop, it appears likely that the U.S. government will strengthen the draft guidelines. A possible solution to the shortcomings of the Best Match algorithm, for example, is a modified strategy that might be termed “Best Match-plus.” This approach would involve screening all gene-synthesis orders for their similarity to Select Agent sequences, flagging those of concern for further review. Then, in a follow-on process, a human screener would compare the requested sequence to a curated database of non-Select Agent sequences that pose potential security risks. Such a database might be either an annotated version of GenBank or a dedicated database of sequences known to be associated with pathogenicity. (Simply adding more pathogens to the Select Agent List would be undesirable because it could impede vital public health research. Indeed, the SARS virus was deliberately kept off the list for that reason.)
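
A correspondingly rough sketch of the Best Match-plus idea appears below; the curated entries and decision strings are invented placeholders, and a real system would work from alignment results rather than simple name lookups.

```python
# Simplified two-stage sketch of the Best Match-plus idea: an automated Select Agent
# check followed by a comparison against a curated list of non-Select-Agent sequences
# of concern. Entries and return strings are invented placeholders.

CURATED_CONCERNS = {"example_emerging_virus_gene", "example_virulence_factor"}

def best_match_plus(closest_match_label: str, closest_match_name: str) -> str:
    # Stage 1: automated Best Match screening against Select Agent sequences.
    if closest_match_label == "select_agent":
        return "flag for human review: Select Agent match"
    # Stage 2: human screener consults a curated database of other sequences of concern.
    if closest_match_name in CURATED_CONCERNS:
        return "flag for human review: curated non-Select-Agent concern"
    return "no flag: proceed with the order"

print(best_match_plus("other", "example_emerging_virus_gene"))
```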

Some workshop participants also argued that the screening software should be open source, meaning that the programming code would be freely available. The advantage of open-source software is that it can be updated and validated by the scientific community as knowledge grows, whereas proprietary software tends to be more static. Development of open-source screening software and a curated database could be supported by the gene-synthesis industry, the U.S. government, or both.

Addressing global governance

Because gene synthesis capabilities have spread worldwide, a voluntary biosecurity regime will be effective at preventing misuse only if it is adopted internationally. A harmonized method for customer and sequence screening is needed to ensure uniformity in biosecurity practices across the industry, reducing the risk that a problematic order denied by one company will be filled by another. Any framework that applies to the United States alone would yield few security benefits and could be counterproductive by driving illicit customers to the small minority of foreign suppliers that refuse to screen. At present, the fact that the IASB and IGSC rules do not cover all gene-synthesis firms is a serious gap in the global biosecurity regime. Although it is unclear how many rogue suppliers might seek to profit from the black market in synthetic DNA, the existence of even one non-adhering company reduces security everywhere.

Advocates of legally binding regulations note that past attempts at industry self-policing have failed, most notably in the environmental field. According to a skeptical editorial on the IASB Code of Conduct in the journal Nature, “Although such a code of conduct is useful and welcome, compliance and enforcement will be paramount. There have been, and will probably continue to be, companies that are not interested in cooperating with any industry group and that are happy to operate in the unregulated grey area. The ultimate hope is that customers will put economic pressure on those non-compliers to fall in line, or else lose all but the most disreputable business. But that is just a hope.” Despite this skepticism, a formal treaty is not a viable solution because it would take years to negotiate and, once enacted, would be difficult to modify in response to rapid technological change.

To harmonize voluntary biosecurity measures for commercial gene synthesis, it will be necessary for the industry groups and the U.S. government to engage companies worldwide. At present, such outreach is insufficient. Although China is home to several leading suppliers, most of them have yet to endorse either industry screening standard. In 2009, the U.S. government discussed gene synthesis with Germany and initiated exploratory contacts with the Chinese Foreign Ministry. Consultations are also under way with countries participating in the Australia Group.

One way to encourage international compliance with the voluntary industry guidelines is to publicize which suppliers are complying with the rules, for example through the IASB’s “seal of approval” program. By giving gene-synthesis companies that screen a competitive advantage, this approach may motivate the small number of holdout firms to change their practices.

A second way to promote compliance with the biosecurity standard is to take advantage of market forces. Today, economies of scale and intense price competition are driving the consolidation of the gene-synthesis industry. Firms with sufficient sales volume to automate their manufacturing operations have extremely low costs, enabling them to force less efficient competitors out of business. These trends point to a major shakeout in which companies with the best technology and the highest volume will prevail, leaving the gene-synthesis industry an integrated global market in which a limited number of major suppliers in several countries compete for the same pool of customers. In such a highly competitive environment, the biggest consumers of synthetic DNA—leading research institutions and major pharmaceutical companies—will have a great deal of leverage. The European pharmaceutical giant AstraZeneca, for example, already requires its suppliers of synthetic DNA to comply with the screening guidelines. Thus, as the industry consolidates, gene-synthesis firms that wish to maintain or expand their market share will have a strong incentive to screen.

Voluntary DNA screening differs from other areas of policy, such as limiting emissions of greenhouse gases, in that it does not follow the logic of the “prisoner’s dilemma,” a game-theoretical model in which each player obtains the biggest payoff by defecting while the others cooperate, but all end up worse off if everyone defects. In industries with that incentive structure, companies often find themselves in a race to the bottom unless binding government regulation ensures a level playing field. The economics of gene synthesis are fortunately quite different. Because sequence screening is cheap compared with other costs of doing business, gene-synthesis companies cannot gain a competitive price advantage by defecting. Moreover, the black or gray market for gene-synthesis products is not large enough to sustain rogue companies on illicit orders alone. Suppliers of synthetic DNA therefore have a strong incentive to screen as a matter of survival and few incentives to defect: the payoff from unilateral defection is lower than that from cooperation, which benefits all players. As Stephen Maurer of the University of California, Berkeley, has observed, if gene-synthesis companies comply with the screening standard because they cannot otherwise compete in the market, then there is really nothing voluntary about it.
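
A back-of-the-envelope comparison with invented numbers may help make the contrast concrete; none of the figures below come from the industry, and they serve only to illustrate why defection gains little when screening is cheap and the illicit market is small.

```python
# Back-of-the-envelope comparison with invented figures, only to make the point concrete:
# when screening is a small fraction of total costs and the illicit market is tiny, a firm
# that skips screening gains little and risks losing legitimate customers who demand it.

legitimate_sales = 10_000_000    # hypothetical annual sales to screened, legitimate customers
illicit_sales_ceiling = 50_000   # hypothetical upper bound on black/gray-market orders
screening_cost = 100_000         # hypothetical annual cost of sequence and customer screening
lost_customer_share = 0.10       # assumed share of major buyers who shun non-screening suppliers

payoff_if_screening = legitimate_sales - screening_cost
payoff_if_defecting = legitimate_sales * (1 - lost_customer_share) + illicit_sales_ceiling

print(payoff_if_screening > payoff_if_defecting)  # True under these assumptions
```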

A third way to encourage suppliers to comply with the voluntary screening guidelines is through the threat of legal liability. Under the “reasonable person” standard in tort law, if synthetic DNA supplied by a gene-synthesis company is used in a bioterrorist attack, the supplier could be sued for failing to take precautions that a reasonable person would have taken—for example, rejecting a suspicious order that used a post office box instead of a verifiable address. The risk of litigation would deter companies from flouting the screening standard. Of course, a major challenge in bringing a tort action against a supplier of synthetic DNA would be to prove that the genetic sequence used in a bioterrorist weapon had been supplied by a specific company. Doing so would require tracing the synthetic DNA back to its source and demonstrating a clear chain of custody from supplier to end-user. In any event, complying with government-approved guidelines gives companies a good deal of legal protection. Without such guidelines, what constitutes responsible corporate behavior is largely a judgment call. But if guidelines exist, complying with them is tantamount to performing due diligence, shielding a company from liability for damages in the event its product is misused.

Anticipating future developments

The current existence of three competing screening standards is an unstable situation, leading some observers to worry that the biosecurity regime will devolve to the lowest common denominator. Because the draft U.S. government guidelines are widely considered inadequate, however, they are likely to be revised, perhaps by incorporating strategies such as Best Match-plus. Whatever screening system for the commercial gene-synthesis industry is ultimately selected, it will probably remain in place for several years, while presumably adapting to changing circumstances.

Despite the tendency to view voluntary guidelines and legally binding regulations as opposite ends of a spectrum, they are not mutually exclusive. Once voluntary guidelines have been in place for some time, governments could take action at the national and global levels to reinforce industry compliance. For example, countries could mandate that government-funded researchers purchase synthetic DNA from suppliers that comply fully with the voluntary screening standard.

Another possible approach would be to require all companies that sell synthetic DNA to have their biosecurity policies reviewed by an objective third party. To perform this function, the industry associations could hire outside contractors to monitor companies’ screening activities and certify that they meet the agreed standard. This certification would be renewed periodically, perhaps every one or two years. Purchasers of pathogenic gene sequences might also be made subject to certification to verify their bona fides. Because scientists move frequently from one lab to another, such certification should probably take place at the institutional level. Ideally, the list of approved customers would be made public so that suppliers know to whom they are authorized to sell.

Finally, some mechanism could be established for notifying suppliers whenever a sequence order is denied, so that a request turned down by one company is not filled by another. Although antitrust concerns and companies’ desire to protect trade secrets make them reluctant to share customer information, the small fraction of the DNA synthesis market that involves pathogenic sequences (about 1%) is not particularly lucrative, making such information less competition-sensitive and hence easier to share. It is also possible that the U.S. government could identify firms that refuse to screen and keep them and their customers under surveillance.

The proposed industry and government guidelines represent an important first step toward a system of global risk management for the gene-synthesis industry. Of course, any biosecurity system will inevitably be imperfect and contain gaps. For example, advanced bench-top DNA synthesizers capable of producing gene-length sequences with high accuracy will probably reach the market within a decade. If in-house gene synthesis becomes widespread, it will reduce scientists’ reliance on commercial suppliers and thus diminish the security benefits of customer and sequence screening. For the next several years, however, an effective governance regime for the gene-synthesis industry will remain the low-hanging fruit for managing the risk of misuse.

From the Hill – Spring 2010

Obama budget includes bright spots for R&D

Although the Obama administration’s overall R&D budget proposal for the 2011 fiscal year is essentially flat as compared to that for 2010, it does contain bright spots for the nation’s science and technology (S&T) enterprise, the president’s science adviser John P. Holdren said during a briefing at the American Association for the Advancement of Science.

Overall, the budget proposal includes $147.7 billion for R&D, an increase of $343 million, or just 0.2%, above the 2010 level enacted by Congress. However, basic science research, along with energy, health, and climate programs, is among the areas that would receive expanded funding in the coming budget year. At the same time, the administration would abandon a controversial Moon landing program and cut the Department of Homeland Security’s R&D program by 9%, or $104 million.

Acknowledging that the plan required many tough decisions on R&D priorities, Holdren said Obama had managed to “preserve and expand” S&T programs that the administration considers essential to promoting economic growth, protecting the environment, and setting the stage for a clean energy future.

The proposed 2011 budget includes a 5.6% overall increase for basic and applied research, to $61.6 billion, while cutting the total development budget by 3.5%, to $81.5 billion. Nondefense R&D would receive a substantial boost, rising by $3.7 billion, or 5.9%, above the 2010 level. Defense Department R&D, meanwhile, would be reduced by 4.4% to $77.5 billion, primarily through cuts in low-priority weapons development programs and congressional projects.

Among the highlights of the Obama budget:

The plan maintains the path to a doubling by 2017 of budgets for three key science agencies: the National Science Foundation (NSF), the Department of Energy’s (DOE’s) Office of Science, and the National Institute of Standards and Technology’s (NIST’s) laboratories.

The proposed 8% increase in NSF funding, to $7.4 billion, would expand efforts in climate and energy research and education, networking and information technology research, and research on environmental and economic sustainability. The budget would also sustain the administration’s effort to triple the number of new NSF Graduate Research Fellowships to 3,000 by 2013.

The budget would end the National Aeronautics and Space Administration’s (NASA’s) Constellation program, which was begun under President George W. Bush as an effort to send U.S. astronauts back to the Moon by 2020. The administration proposes to spend $6 billion over the next five years to encourage private companies to build and operate their own spacecraft to carry NASA astronauts to the International Space Station.

The budget for the National Institutes of Health (NIH) would rise by 3.2% to $32.1 billion. The budget would focus on five strategic priorities: applying genomics and other high-throughput technologies, translating basic science discoveries into new and better treatments and diagnostics, using science to enable health care reform, advancing global health, and reinvigorating and empowering the biomedical research community. NIH also will continue to award and oversee the $10.4 billion provided in the American Recovery and Reinvestment Act of 2009.

The R&D budget for the National Oceanic and Atmospheric Administration (NOAA) would rise by 10%, or almost $1 billion. The budget for the multiagency U.S. Global Change Research Program would rise 21% to $2.6 billion. The funding reflects the administration’s concerns about climate change and the declining health of the world’s oceans. “This is the largest increase in NOAA’s science budget in over a decade,” said NOAA Administrator Jane Lubchenco at the AAAS briefing.

The budget for the new National Institute of Food and Agriculture’s key competitive research program, the Agriculture and Food Research Initiative, would rise 63% to $429 million.

The budget proposes to spend $3.7 billion on science, technology, engineering, and mathematics (STEM) education programs. About $1 billion, an increase of nearly 40%, would go to K-12 programs to encourage interest in those fields.

The budget also proposes making the Research and Experimentation Tax Credit permanent; provides $300 million for DOE’s new Advanced Research Projects Agency–Energy (ARPA-E); gives $3.1 billion to the Defense Advanced Research Projects Agency, a 3.7% increase; and provides $679 million for R&D at the U.S. Geological Survey, a 2.9% increase.

In its budget documents, the Obama administration says that NASA’s Constellation program, on which more than $9 billion has already been spent to develop a crew capsule called Orion and a rocket called Ares I, threatened other parts of NASA’s endeavors while “failing to achieve the trajectory of a program that was sustainable, executable, and ultimately successful.”

The 2011 NASA R&D budget would increase by $1.7 billion or 18.3%. The emphasis would be on technology development and testing to “reverse decades of under-investment in new aerospace ideas and re-engage our greatest minds,” the budget document says. A new heavy-lift and propulsion R&D program will be part of the administration’s effort to “re-baseline” the nation’s space exploration efforts.

“Simply put, we’re putting the science back into rocket science,” Holdren said. The NASA budget also calls for a steady stream of new robotic missions to scout locations for future human missions.

The proposed changes at NASA are expected to draw intense attention on Capitol Hill. “The space agency’s budget request represents a radical departure from the bipartisan consensus achieved by Congress in successive authorizations over the past five years,” said Rep. Bart Gordon (D-TN), the chairman of the House Committee on Science and Technology. “This requires deliberate scrutiny. We will need to hear the administration’s rationale for such a change and assess its impact on U.S. leadership in space before Congress renders its judgment on the proposals.”

Congress examines energy R&D

With climate change legislation on hold for now, Congress is focusing on the role of energy R&D. The Senate Energy and Natural Resources Committee held a January 21 hearing to examine initiatives that will help the United States address climate change through energy R&D.

At the hearing, Secretary of Energy Steven Chu said that federal investment in new energy technologies helps U.S. competitiveness, creates jobs, and combats climate change. He argued that although most R&D projects do not yield positive returns, this is more than offset by the extremely high returns from some investments, citing estimates that the net return on public investment in R&D ranges from 20 to 67%, with some projects yielding returns of more than 2,000%.

Chu highlighted agency priorities: increasing the production of biofuels; enhancing car batteries; improving photovoltaics; designing computers that will improve building efficiency; and creating large-scale energy storage systems that will enable renewable energy sources such as wind and solar to become base load sources. DOE wants to focus emissions reduction research in areas such as trucking, where reductions will be difficult to achieve.

Chu said that progress toward several energy R&D goals would be helped by several newer programs: Energy Frontier Research Centers (EFRCs), Energy Innovation Hubs, and ARPA-E. EFRCs support multiyear, multi-investigator scientific collaborations focused on overcoming hurdles in basic science that block transformational discoveries. Energy Innovation Hubs are collaborative efforts focused on a specific energy challenge, especially barriers to transforming energy technologies into commercially viable materials, devices, and systems. Three Energy Innovation Hubs have been created that focus on the production of fuels from sunlight, energy-efficient building systems design, and modeling and simulation of advanced nuclear reactors. ARPA-E funds high-risk, high-reward energy research. Among its projects are smaller and more efficient wind turbines, new carbon-capture technology, and liquid-metal batteries.

On January 27, the House Science and Technology Committee held a hearing to examine the progress of ARPA-E, which has been in existence for less than a year. Authorized by the America COMPETES Act in 2007, the program received its initial funding in the 2009 economic stimulus bill.

ARPA-E Director Arun Majumdar said prospective grantees had expressed a high level of interest in the program. ARPA-E received more than 3,700 concept papers in response to its first announcement, leading to 37 awards averaging $4 million each. Majumdar noted the importance of translating the upstream research funded by this program into jobs for U.S. workers. Nearly all members of Congress present at the hearing echoed the importance of ensuring that the United States is able to commercialize and manufacture the technologies developed through the grants.

Other witnesses testified in support of ARPA-E, with Charles Vest, president of the National Academy of Engineering, stating the program is “off to a great start.” John Denniston of Kleiner, Perkins, Caufield & Byers encouraged Congress to expand the program. The Obama administration appears to agree: The fiscal year 2011 budget request contains $300 million for ARPA-E. The request also contains $40 million in new funding to create several new EFRCs as well as support for the 46 existing centers. The budget proposes $107 million to support the three existing Energy Innovation Hubs and create a new hub to focus on batteries and energy storage.

House considers reauthorization of America COMPETES Act

On January 20, the House Science and Technology Committee held a hearing to discuss the America COMPETES Act and its role in supporting STEM education, the economy, and R&D. The committee is planning a vote to reauthorize the act before Memorial Day.

The America COMPETES Act was passed in 2007 and authorized $33.6 billion from 2008 to 2010 in new spending for a host of research and education programs at NSF, DOE, NIST, NOAA, NASA, and the Department of Education. If funding hits the targets authorized by the bill, the main beneficiaries—NSF, DOE’s Office of Science, and NIST—will double their budgets over seven years. However, appropriated funding has not matched the doubling path of authorized funding.

At the hearing, all witnesses agreed that the America COMPETES Act provides a necessary boost for education in STEM fields, which will be critical for students’ success in future jobs. John Castellani, president of the Business Roundtable, said that reauthorization of America COMPETES would increase the number of STEM students and good STEM teachers as well as the ability of the United States to create new jobs in both the near and long term. “Investments in research and education provide the tools for accelerated technological innovation, which drives productivity growth,” Castellani said. “Innovation leads to new products and processes, even whole new industries, thereby generating high-wage employment and a higher standard of living for all Americans.”

Tom Donohue, chief executive officer of the U.S. Chamber of Commerce, said more R&D investment is needed, noting that U.S. businesses are slipping from the upper echelons of math- and science-based fields: only 4 of the world’s top 10 businesses in those fields are from the United States, he said.

The America COMPETES Act also established ARPA-E. At the January 20 hearing, former Michigan Governor John Engler, who is now the president and chief executive officer of the National Association of Manufacturers, testified in support of ARPA-E. He stated that to improve manufacturing, “there needs to be fundamental transformation in how we produce, distribute, and consume energy. This transformation should start with a shift in how we view and approach energy research. This is the goal of ARPA-E and it presents a unique platform to integrate innovative industry, research, and development and yield results.”

Multiple witnesses testified that reauthorization of the America COMPETES Act could better coordinate federal investment in R&D as well as federal/state and federal/private R&D partnerships, and several called for more public/private R&D partnerships.

Committee considers reform of export controls

On January 15, the House Foreign Affairs Committee held a hearing in Stanford, California, on the impact of export controls on national security and U.S. leadership in S&T.

The hearing provided an opportunity to hear testimony from John L. Hennessy, president of Stanford University and co-chair of the National Research Council’s Committee on Science, Security, and Prosperity, which published the report Beyond Fortress America. That report criticized the existing export control regime as an antiquated artifact of the Cold War and proposed recommendations for reforming the system to better manage the sharing of dual-use technologies, which have both civilian and military applications.

Hennessy warned that U.S. leadership in science is slipping and that this Cold War approach to dual-use technologies does not accurately reflect 21st-century research, which relies on foreign students at U.S. universities, international collaboration, and teamwork across multiple campuses. He cited examples of research at Stanford that have been impeded by dual-use restrictions. For example, investigators working with a NASA research instrument aboard a satellite were limited in what information they could share with foreign students because satellite technologies are considered military munitions.

The other witnesses and members of Congress present at the hearing all agreed that export controls should be updated to better balance national security and international competitiveness, but there was no consensus on what constituted an appropriate balance.

Rep. Dana Rohrabacher (R-CA) argued that the United States should err on the side of caution when dealing with nations that infringe on human rights and have a record of attempting to steal high-tech information. He maintained that students from countries such as China have been sent to study in the United States for the express purpose of stealing technological know-how.

Hennessy, on the other hand, argued that if a university is conducting basic, fundamental research, then that research should be open and available to all students, even if the field of research exposes students to a potential problem.

Bits and pieces

  • Senate Environment and Public Works Subcommittee on Clean Air and Nuclear Safety Chairman Tom Carper (D-DE) and Sen. Lamar Alexander (R-TN) introduced The Clean Air Act Amendments of 2010, which call for substantial reductions in soot-forming sulfur dioxide, smog-forming nitrogen dioxide, and mercury. The Environmental Protection Agency (EPA) is also in the process of drafting regulations for these pollutants, after its previous efforts were voided by court rulings.
  • The Cybersecurity Enhancement Act of 2009 (H.R. 4061) passed the House on February 4. The bill would expand cybersecurity research programs at NSF and NIST. The bill requires federal agencies to create a strategic cybersecurity plan and makes the Office of Science and Technology Policy responsible for creating a university/industry task force to find areas to collaborate. The bill also provides NSF with grant and fellowship funding for computer and network security.
  • President Obama announced a $250 million public/private effort to boost STEM education. The initiative seeks to prepare more than 10,000 new teachers over five years and provide professional development opportunities to more than 100,000 current teachers. It would effectively double the size of the campaign launched by Obama in November 2009.
  • The EPA released a proposed rule that would reduce the allowable amount of ground-level ozone in the air from 75 to between 60 and 70 parts per billion for any eight-hour period. Ground-level ozone is a primary component of smog. The new proposal mirrors the unanimous recommendation of EPA’s Clean Air Scientific Advisory Committee in 2007, a recommendation that was not adopted by the Bush administration in 2008 when new rules were released. The EPA also announced that it will set a secondary, seasonal ozone limit to protect plants and trees. The proposal must undergo 60 days of public comment before becoming final.
  • The Food and Drug Administration (FDA) said that it has “some concern” about the potential effects of bisphenol-A, a chemical found in plastic bottles and food packaging, on the brain, behavior, and prostate glands of fetuses, infants, and children. This is a reversal of the FDA position during the Bush administration. The agency now plans further study of the compound’s effects on humans and animals.
  • Energy Secretary Steven Chu announced the creation of a 15-member Blue Ribbon Commission on America’s Nuclear Future to provide recommendations for developing a long-term national solution for managing used nuclear fuel and nuclear waste. The commission, co-chaired by former Rep. Lee Hamilton and former National Security Adviser Brent Scowcroft, will produce an interim report within 18 months and a final report within 24 months. DOE officials emphasized that the commission will not attempt to find a new site for permanently storing nuclear wastes but will instead focus on alternative ways for dealing with nuclear waste. DOE recently abandoned plans to locate a nuclear waste repository at Yucca Mountain in Nevada.
  • In a 3-2 party-line vote, the Securities and Exchange Commission (SEC) added risks and opportunities from climate change to the possible effects that public companies should disclose. In an “interpretive guidance” document, the SEC noted several areas in which climate change may trigger disclosure requirements, including the impact of legislation, regulation, and international accords, as well as the physical effects of climate change.
  • President Obama announced that the federal government, the nation’s largest energy consumer, will reduce its greenhouse gas emissions 28% below 2008 levels by 2020. The announcement builds on targets submitted by federal agencies in response to an October 5, 2009, Executive Order on Federal Sustainability.

“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science (www.aaas.org/spp) in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.