The Shifting Landscape of Science

Citations to U.S. science, in the aggregate, have been flat over the past 30 years, whereas citations of research papers from the rest of the world have been rising steadily. Although the relative decline in U.S. scientific prowess is perceived by many to be unalloyed bad news, the spectacular rise in scientific capacity around the world should be viewed as an opportunity. If the nation is willing to shift to a strategy of tapping global knowledge and integrating it into critical local know-how, it can continue to be a world research leader. Science is no longer a national race to the top of the heap; it is a collaborative venture into knowledge creation and diffusion.

We all know the story of the recent scientific past. Since the middle of the 20th century, the United States has led the world rankings in scientific research in both quantity and quality. U.S. output accounted for more than 20% of the world’s papers in 2009. U.S. research institutions have topped most lists of quality research institutions since 1950. The United States vastly outproduces most other countries or regions in patents filed. This privileged status was partly due to the historical anomaly at the end of World War II, when the United States had a newly developed and expanding scientific system, whereas most other industrialized nations had to rebuild their war-torn science systems. The United States then capitalized on its advantage by rapidly expanding government support for research.

Many governments around the world, responding to the perceived significance of science to economic growth, have increased R&D spending. In 1990, six countries were responsible for 90% of R&D spending; by 2008, that number had grown to 13. According to the United Nations Educational, Scientific, and Cultural Organization (UNESCO), since the beginning of the 21st century, global spending on R&D has nearly doubled to almost a trillion dollars, accounting for 2% of global gross domestic product. Developing countries have more than doubled their R&D spending during the same period.


The number of scientific papers worldwide has grown as R&D spending has increased. The number of scientific articles registered in the catalog services managed by Thomson-Reuters and Elsevier increased from 1.1 million in 2002 to 1.6 million in 2007 (see Figure 1). Moreover, Thomson-Reuters indexes only about 5% of all scientific and technical publications, so much more science is being done and shared than these numbers reveal. The UNESCO report documents that the global population of researchers increased from 5.7 million in 2002 to 7.1 million in 2007. Talent is now spread more widely, and the quality of contributions from new entrants has increased.

FIGURE 1
Number of papers, 1980-2009

The number of papers has been computed using fractional counting at the level of addresses. For example, a paper with authors from two Canadian institutions and three U.S. institutions would register as 0.4 papers for Canada and 0.6 papers for the United States.
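
To make the bookkeeping concrete, here is a minimal Python sketch of fractional counting at the address level; it is purely illustrative, and the function name and data layout are our own assumptions rather than Science-Metrix’s actual procedure.

```python
from collections import Counter

def fractional_counts(addresses):
    """Split one paper's credit across countries in proportion to its
    institutional addresses (fractional counting at the address level)."""
    counts = Counter(addresses)
    total = len(addresses)
    return {country: n / total for country, n in counts.items()}

# The example from the note above: two Canadian and three U.S. addresses.
print(fractional_counts(["CA", "CA", "US", "US", "US"]))
# {'CA': 0.4, 'US': 0.6}
```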

From 1980 to 2010, the growth in the output of scientific articles resulted in a shift in the relative positions of many countries. An explicit policy in the European Union countries plus Switzerland to close the quality gap with the United States has produced results as measured by citation counts. Switzerland surpassed the United States in citation quality measures in 1985, albeit based on a small number of publications, and Denmark, the Netherlands, Belgium, the United Kingdom, Germany, Sweden, and Austria have moved ahead of the United States in the past decade (see Figure 2).

FIGURE 2
Average of Relative Impact Factors (ARIF), 1981-2009


Asia lags behind the United States and Europe in quality indicators of scientific output, though some of the gap might be explained by the fact that Asian journals are not well represented in the indexing services. Nevertheless, Singapore is leading an Asian surge in the rankings. If current trends continue, Singapore will rank fifth in the world in quality by 2015.

One explanation offered for the slide in the U.S. quality ranking is that a growing number of U.S. papers now include non-U.S. coauthors who share the credit for high-quality research. A second explanation is that the United States is producing output at a maximum level of efficiency, so that adding additional resources would not improve quality. A third is that other countries and regions have made a concerted effort to enhance the quality of their R&D, and they have seen good results. All three of these explanations may be factors.

As other parts of the world have enhanced their science bases, the U.S. percentage shares of all aspects of the knowledge system are giving way to a broader representation of countries. China and South Korea, two countries that are exponentially increasing their investment as well as the quantity and quality of their output, are rapidly taking leadership positions in scientific output. Between 1996 and 2008, the U.S. share of global publications fell by 20% in relative terms as other nations increasingly placed quality scientific publications in journals cataloged by Thomson-Reuters and/or Elsevier.

The sustained rate of growth of China has caught the attention of many who track global science. Its rise may be due to the increasing availability of human capital at Chinese universities and research institutions. In addition, the Chinese Academy of Sciences is providing incentives for researchers to publish in cataloged journals. Chinese scientists who have been living abroad have been encouraged to return to China or to collaborate with their colleagues in China. These changes have increased the number of Chinese scientists who seek to publish in the cataloged journals, contributing to the growth in overall numbers in the Science Citation Index Expanded and the drop in percentage share of other leaders. At the same time that Asian countries have supported exponential growth in scientific publications, the United States and other scientifically advanced countries have maintained slow growth.

To view the relative position of national outputs in a way that normalizes for the size of the workforce and the disciplines’ propensity to publish and cite, Eric Archambault and Gregoire Coté of Science-Metrix calculated the average of relative citations (ARC) by paper and by address of each author across 30 years of publication data (see Figure 3). The Science-Metrix ARC index is obtained by counting the number of citations received by each paper during the year in which the paper is published and the two subsequent years. To account for different citation patterns across fields and subfields of science (for example, there are more citations in biomedical research than in mathematics), each paper’s citation count is divided by the average citation count of papers in the same field during the same time period. An ARC value above 1.0 means that a country’s publications are cited more than the world average, and a value below 1.0, less than average. Counts are aggregated from the paper level by field up to the country level.
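
As a rough illustration of the arithmetic, the sketch below computes a field-normalized, fractionally weighted country score along the lines described above; the data layout, column names, and sample values are assumptions made for the example, not Science-Metrix’s actual data or code.

```python
import pandas as pd

# Hypothetical per-paper records: field, publication year, citations received in the
# publication year plus the two following years, and each country's fractional share
# of the paper's institutional addresses.
papers = pd.DataFrame({
    "paper_id":  [1, 1, 2, 3],
    "field":     ["math", "math", "math", "biomed"],
    "year":      [2005, 2005, 2005, 2005],
    "citations": [4, 4, 10, 30],
    "country":   ["US", "CA", "US", "US"],
    "share":     [0.6, 0.4, 1.0, 1.0],   # fractional counting by address
})

# Expected citation rate per field and year, computed over unique papers.
field_mean = (papers.drop_duplicates("paper_id")
                    .groupby(["field", "year"])["citations"].mean()
                    .rename("field_mean"))

# Relative citation score: each paper's count divided by its field/year average.
papers = papers.join(field_mean, on=["field", "year"])
papers["rel_cit"] = papers["citations"] / papers["field_mean"]

# Country score: share-weighted average of relative citations (above 1.0 = above world average).
arc = ((papers["rel_cit"] * papers["share"]).groupby(papers["country"]).sum()
       / papers.groupby("country")["share"].sum())
print(arc)
```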

FIGURE 3
Average of Relative Citations (ARC), 1980-2008


FIGURE 4
International collaboration, 1980-2009

Note: The percentage of international collaboration is calculated by dividing the number of papers co-authored with at least one foreign institution by the total number of papers.
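
In code, the calculation behind Figure 4 reduces to a simple ratio; the sketch below uses an assumed, simplified data layout (one set of country codes per paper) purely for illustration.

```python
def collaboration_share(papers, home="US"):
    """Share of a country's papers co-authored with at least one foreign
    institution. `papers` is a list of sets of country codes, one set per
    paper's institutional addresses."""
    home_papers = [p for p in papers if home in p]
    international = sum(1 for p in home_papers if len(p) > 1)
    return international / len(home_papers)

# Three U.S. papers, one of them with a German co-author: 1/3 are international.
print(collaboration_share([{"US"}, {"US", "DE"}, {"US"}]))
```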

The policy challenge

More knowledge worldwide can be a net gain for the United States. Gathered from afar and reintegrated locally, knowledge developed elsewhere can be tapped to stoke U.S. innovation. Despite this fact, the shifts in the global science and technology (S&T) landscape have been viewed with alarm by many U.S. observers. A number of groups have expressed concern that the rise of foreign capacity could undermine U.S. economic competitiveness. The 2010 update of the National Academy of Sciences’ Rising Above the Gathering Storm report observed that “the unanimous view of the committee members . . . is that our nation’s outlook has worsened” since the first report was issued in 2005, due in part to the rising scientific profile of many other countries. This reflects a nation-centered view of science, one that overlooks the global dynamic of collaboration and knowledge exchange that now characterizes research.

As new researchers and new knowledge creators arise around the globe, those people, centers, and places that are in a position to access and absorb information will benefit. Unlike some economic resources, such as factories or commodities, knowledge is what economists call nonrival because its consumption or use does not reduce its availability or usefulness to others. In fact, scientific knowledge increases in value as it is used, just as some technologies become more valuable through the network effect as more people adopt them.

As centers of excellence emerge in new places, the United States can enhance efficiency through collaboration. This already takes place in some fields such as astrophysics and climate science, where the cost to any one nation of investing in S&T is too great. A more aggressive strategy of collaboration and networking can draw in knowledge in ways that free up U.S. scientists to produce more specialized or cutting-edge work. This could leverage investments at the national level to focus on more critical capacity-building, an approach called global knowledge sourcing. In business parlance, global knowledge sourcing means the integration and coordination of common materials, processes, designs, technologies, and suppliers across worldwide operating locations. Applying a similar vision to national-level investments could result in significant efficiencies for the United States at a time when budget cuts are squeezing R&D spending.

Although the U.S. research system remains the world’s largest and among the best, it is clear that a new era is rapidly emerging. With preparation and strategic policymaking, the United States can use these changes to its advantage. Because the U.S. research output is among the least internationalized in the world, it has enormous potential to expand its effectiveness and productivity through cooperation with scientists in other countries.

Only about 6% of U.S. federal R&D spending goes to international collaboration. This could be increased by pursuing a number of opportunities: from large planned and targeted research projects to small investigator-initiated efforts and from work in centralized locations such as the Large Hadron Collider in Geneva to virtual collaborations organized through the Internet. Most federal research support is aimed at work done by U.S. scientists at U.S. facilities under the assumption that this is the best way to ensure that the benefits of the research are reaped at home. But expanded participation in international efforts could make it possible for the United States to benefit from research funded and performed elsewhere.

U.S. policy currently lacks a strategy for encouraging and using global knowledge sourcing. Up until now, the size of the U.S. system has enabled it to thrive in relative isolation. Meanwhile, smaller scientifically advanced nations such as the Netherlands, Denmark, and Switzerland have been forced by budgetary realities to seek collaborative opportunities and to update policies. These nations have made strategic decisions to fund excellence in selected fields and to collaborate in others. This may account in part for the rise in their quality measures. An explicit U.S. strategy of global knowledge sourcing and collaboration would require restructuring of S&T policy to identify those areas where linking globally makes the most sense. The initial steps in that direction would include creating a government program to identify and track centers of research excellence around the globe, paying attention to science funding priorities in other countries so that U.S. spending avoids duplication and takes advantage of synergies, and supporting more research in which U.S. scientists work in collaboration with researchers in other countries.

One recent example of movement in the direction of global knowledge sourcing is the U.S. government’s participation, along with other governments, in the Interdisciplinary Program on Application Software toward Exascale Computing for Global Scale Issues. After the 2008 Group of 8 meeting of research directors in Kyoto, an agreement was reached to initiate a pilot collaboration in multilateral research. The participating agencies are the U.S. National Science Foundation, the Canadian Natural Sciences and Engineering Research Council, the French Agence Nationale de la Recherche, the German Deutsche Forschungsgemeinschaft, the Japan Society for the Promotion of Science, the Russian Foundation for Basic Research, and the United Kingdom Research Councils. These agencies will support competitive grants for collaborative research projects that are composed of researchers from at least three of the partner countries, a model similar to the one used by the European Commission. Proposals will be jointly reviewed by the participating funding organizations, and successful projects are required to demonstrate added value through multilateral collaboration. Support for U.S.-based researchers will be provided through awards made by the National Science Foundation. It would be useful to begin discussions about how to measure the success of such activities.

Tapping the best and brightest minds in S&T and gathering the most useful information anywhere in the world would greatly serve the economy and social welfare. Looking for the opportunity to collaborate with the best place in any field is prudent, since the expansion of research capacity around the globe seems likely to continue and it is extremely unlikely that the United States will dramatically increase its research funding and regain its dominance. Moreover, it may be that the marginal benefit of additional domestic research spending is not as great as the potential of tapping talent around the world. Thus, seeking and integrating knowledge from elsewhere is a very rational and efficient strategy, requiring global engagement and an accompanying shift in culture. Leadership at the policy level is needed to speed this cultural shift from a national to a global focus.

Asian Women in STEM Careers: An Invisible Minority in a Double Bind

In the effort to increase the participation of women and people of color in science, technology, engineering, and math (STEM) careers, a common assumption is that Asian men and women are doing fine, that they are well represented in STEM and have no difficulty excelling in STEM careers. This belief is supported by the easy visibility of Asian faces on campuses, in STEM workplaces, and in government laboratories. Indeed, Asians are generally considered to be overrepresented. Data from the 2009 Survey of Earned Doctorates from U.S. universities show that 22% of the 2009 doctoral recipients planning to work in the United States were individuals of Asian descent. With so many entering the workforce, it is easy to assume that Asian women are progressing nicely and that they can be found at the highest levels of STEM industry, academics, and government institutions. The data tell a different story.

The advancement of Asian female scientists and engineers in STEM careers lags behind not only that of men but also that of white women and women of other underrepresented groups. Very small numbers of Asian women scientists and engineers advance to become full professors, deans, or university presidents in academia; to serve on corporate boards of trustees or become managers in industry; or to reach managerial positions in government. Instead, in academia, 80% of this population is found in non-faculty positions, such as postdocs, researchers, and lab assistants, or in nontenured faculty positions; 95% of those employed in industry and over 70% of those employed in government are in nonmanagerial positions. In earning power, Asian women lag behind their male counterparts as well as women of other races/ethnicities in STEM careers.

The challenges faced by women of color in STEM fields were clearly articulated 35 years ago, when the term double bind was first used to describe the difficulties unique to the intersection of gender and race/ethnicity. These challenges were then, and still are, commonly thought to apply less to Asian women than to black, Latina, and Native American women.

The data presented here point to the existence of a double bind for Asian women, who face both a bamboo ceiling because of Asian stereotyping and a glass ceiling because of implicit gender bias. The scarcity of Asian women in upper management and leadership positions merits greater attention, more targeted programmatic efforts, and inclusion in the national discussion of the STEM workforce.

Academic faculty

The percentage of Asian women employed by colleges and universities who are tenured or who are full professors is the smallest of any race/ethnicity and gender.

Percentage of doctoral scientists and engineers employed in universities and 4-year colleges (S&E occupations) who are tenured, by race/ethnicity and gender (2008)

Source. National Science Foundation, Division of Science Resources Statistics, Survey of Doctorate Recipients: 2008. Table 9-26 “Employed doctoral scientists and engineers in 4-year educational institutions, by broad occupation, sex, race/ethnicity, and tenure status: 2008” Accessed July 16, 2011

Note: Data for American Indians/Alaska Natives and Native Hawaiians/Other Pacific Islanders are suppressed for data confidentiality reasons.

Percentage of doctoral scientists and engineers employed in universities and 4-year colleges (S&E occupations) who are full professors, by race/ethnicity and sex (2008)

Source. National Science Foundation, Division of Science Resources Statistics, Survey of Doctorate Recipients: 2008. Table 9-25 “Employed doctoral scientists and engineers in 4-year educational institutions, by broad occupation, sex, race/ethnicity, and faculty rank: 2008” Accessed July 16, 2011

Note: Data for American Indians/Alaska Natives and Native Hawaiians/Other Pacific Islanders are suppressed for data confidentiality reasons.

Academic leadership

A 2006-7 survey of 2,148 presidents of two-year and four-year public and private colleges published by the American Council on Education (“The Spectrum Initiative: Advancing Diversity in the College Presidency”) found that only 0.9% of all college presidents were Asian. By comparison, 5.8% were black and 4.6% were Hispanic.

Asians holding science and engineering (S&E) doctorates comprise 34% of postdocs but only 7% of deans and department chairs. A similar bamboo ceiling emerges when the data are disaggregated by academic rank (see the second table below): the higher the rank, the smaller the percentage of Asians in the position. The largest proportion of Asians falls in the “rank not available” group, which consists mostly of postdocs but also includes non-faculty researchers and staff or administrators who do not have a faculty rank.

Percentage S&E doctorate holders employed in universities and 4-year colleges who are Asian, by type of academic position (2008)

Academic position | Total employees | Non-Asians | Asians | Percentage Asian
Postdoc | 18,500 | 12,200 | 6,300 | 34.1%
Teaching faculty | 179,600 | 157,700 | 21,900 | 12.2%
Research faculty | 115,200 | 96,900 | 18,300 | 15.9%
Dean, department head, chair | 28,700 | 26,700 | 2,000 | 7.0%
President, provost, chancellor | 3,300 | Over 3,200** | D* | N/A

Source. National Science Foundation. Women, Minorities, and Persons with Disabilities in Science and Engineering. Table 9-22 “S&E doctorate holders employed in universities and 4-year colleges, by type of academic position, sex, race/ethnicity, and disability status: 2008” Accessed July 16, 2011.

Notes: * Refers to data suppressed for data confidentiality reasons. **Includes 2,900 White, 200 Black, and 100 Hispanic.

The same pattern is found among Asian females. For Asians in S&E occupations, the percentage of females steadily decreases from 35% of assistant professors to 28% of associate professors to 12% of full professors. Furthermore, at each of these professorial ranks, the percentage of females in the Asian population is consistently lower than the percentage of females in the non-Asian population. (This is true for all occupations and S&E occupations.)

S&E doctoral holders employed in universities and 4-year colleges, by broad occupation, sex, and rank, for Asians and non-Asians (2008)

Rank (S&E occupations) | Total | Asians: total | Asians: female/total | Non-Asians: total | Non-Asians: female/total
Total | 210,700 | 32,400 | 29.9% | 178,300 | 31.1%
Rank not available | 38,200 | 9,800 | 39.8% | 28,400 | 39.1%
Other faculty | 10,400 | 1,300 | 46.2% | 9,100 | 45.1%
Assistant professor | 44,000 | 8,100 | 34.6% | 35,900 | 40.4%
Associate professor | 46,200 | 5,800 | 27.6% | 40,400 | 34.4%
Professor | 71,800 | 7,500 | 12.0% | 64,300 | 18.4%

Source. National Science Foundation. Women, Minorities, and Persons with Disabilities in Science and Engineering. Table 9-25 “S&E doctorate holders employed in universities and 4-year colleges, by broad occupation, sex, race/ethnicity, and faculty rank: 2008”. Accessed July 16, 2011.

Government

Disaggregating NSF government workforce data by gender and race/ethnicity reveals the same pattern of under-representation of Asian women in management positions. American Indian/Alaska Native women are less well represented in management.

Percentage of scientists and engineers employed in government who are managers, by race/ethnicity and sex (2006)

Source. National Science Foundation. Women, Minorities, and Persons with Disabilities in Science and Engineering. Table H 32, with additional detailed data provided by Joan Burrelli, “Scientists and engineers employed in government, by managerial status, age, sex, race/ethnicity, and disability status: 2006.” Accessed December 5, 2009.

Percentage of scientists and engineers holding doctorate degrees employed in government who are managers, by race/ethnicity and sex (2006)

Source. National Science Foundation. Women, Minorities, and Persons with Disabilities in Science and Engineering. Table H 32, with additional detailed data provided by Joan Burrelli, “Scientists and engineers employed in government, by managerial status, age, sex, race/ethnicity, and disability status: 2006.” Accessed December 5, 2009.

Note: Data for Alaska Native/American Indian women are not available.

Industry

According to the 2003 report Advancing Asian Women in the Workplace by Catalyst, a nonprofit research and advisory organization working to advance women in business and the professions, Asian-American women in industry are most likely to have graduate education but least likely to hold a position within three levels of the CEO. Among the more than 10,000 corporate officers in Fortune 500 companies, there were about 1,600 women of whom 30 were Asian.

This trend has been borne out for scientists and engineers employed in industry and business as well. Disaggregating NSF industry workforce data by gender and race/ethnicity, we see that the percentage of Asian women scientists and engineers, including those with PhDs, who are S&E managers is the smallest of any race/ethnicity and gender.

Percentage of scientists and engineers employed in business or industry who are S&E managers, by race/ethnicity and gender (2006)

Source. National Science Foundation. Women, Minorities, and Persons with Disabilities in Science and Engineering. Table H 34. “Scientists and engineers employed in business or industry, by managerial occupation, sex, race/ethnicity, and disability status: 2006.” Accessed February 13, 2010. Note. Data for Alaska Native/American Indian women are not available.

Percentage of scientists and engineers doctorate degree holders employed in business or industry who are S&E managers, by race/ethnicity and sex (2006)

Source. National Science Foundation. Women, Minorities, and Persons with Disabilities in Science and Engineering. Table H 34, with additional detailed data provided by Joan Burrelli, “Scientists and engineers employed in business or industry, by managerial occupation, sex, race/ethnicity, and disability status: 2006.” Accessed February 13, 2010. Note. Data for Hispanic women and Alaska Native/American Indian women are not available.

Industry leadership

The Leadership Education for Asian Pacifics, Inc. (LEAP) reported that in 2010, among the Fortune 500 companies, only ten Asians, three of them women, held the position of chair, president, or CEO; of the 5,250 board members, only 2.08% were Asians or Pacific Islanders; and 80.4% of the companies had no Asian or Pacific Islander board members.

A review of NSF data on the science and engineering business and industry workforce reveals a surprising under-representation of Asians at the managerial level. Only 6% of Asian scientists and engineers are managers, and only 2% are S&E managers. Again, Asians are outpaced by all other racial/ethnic groups.

For the Asian scientists and engineers employed in industry, although women comprise 37% of the non-managers in this group, they are only 23% of the managers and 16% of the S&E managers. As in the other sectors, among all scientists and engineers who are employed in industry at the manager rank, the percentage of Asian females is consistently lower than the percentage of black and Hispanic females.

Scientists and engineers employed in business or industry, by managerial status, sex, and race/ethnicity (2006)

All scientists and engineers | Non-managers: total | Non-managers: female/total | Managers: total | Managers: female/total | S&E managers: total | S&E managers: female/total
Total | 9,024,000 | 35.6% | 954,000 | 19.0% | 241,000 | 21.6%
White | 6,780,000 | 34.5% | 790,000 | 17.6% | 191,000 | 20.9%
Asian | 1,179,000 | 36.6% | 77,000 | 23.4% | 25,000 | 16.0%
Black | 407,000 | 47.7% | 33,000 | 42.4% | 11,000 | 45.5%
Hispanic | 467,000 | 38.1% | 38,000 | 23.7% | 11,000 | 18.2%
American Indian/Alaska Native | 26,000 | 42.3% | 3,000 | N/A | 1,000 | N/A

Source. National Science Foundation. Women, Minorities, and Persons with Disabilities in Science and Engineering. Table H 34, with additional detailed data provided by Joan Burrelli, “Scientists and engineers employed in business or industry, by managerial occupation, sex, race/ethnicity, and disability status: 2006.” Accessed February 13, 2010.

Note. D = Suppressed for data confidentiality reasons; *= estimate less than 100

Science Policy Tools: Time for an Update

All of us involved in science and technology (S&T) policy are fond of commenting on the increasing pace of change, the upheavals caused by novel technologies and expanded scientific understanding, and the unprecedented challenges to Earth’s resources and natural systems. Yet we typically find ourselves responding to these developments within the conceptual framework established by Vannevar Bush more than 60 years ago. From time to time, we need to step back from the specific challenges we face to reflect on the effectiveness of the assumptions, the strategies, and the institutions that shape our responses. Does the policy framework that underlies the U.S. S&T enterprise need to be updated?

My short answer is that many things need to change if the United States is to continue to be a leader in S&T and ensure that the American people are the beneficiaries. Government agencies and many other institutions are of a different era and ill-equipped to function well in today’s world. But to that I would add several caveats: Positive change is very hard to bring about in this system and usually comes slowly; there are fundamental elements of the Vannevar Bush philosophy that should be protected; and in the U.S. political system, especially at this moment in time, we should be careful what we ask for! With those provisos in mind, I will offer a few suggestions for policy changes that are feasible and would help move the country forward.

First, a word about Vannevar Bush and his 1945 report Science—the Endless Frontier. So much has been written and said about Vannevar Bush and his report that I have to be reminded occasionally to actually read it again. It really is an amazing document, for its content and foresight as well as its brevity (about 33 pages plus appendices in the National Science Foundation’s 1990 reprinted version).

Bush argued that science and the federal R&D system that proved to be so successful during World War II would be important to the nation’s progress in peacetime, which turned out to be dominated by the Cold War and arms race with the Soviet Union. Bush made three main points:

“Scientific progress is essential.” It is needed to meet the nation’s needs: the war against disease, national security, and public welfare. To accomplish this, he recommended federal support of basic research in universities and medical schools; strengthening applied research in federal agencies, guided by a Science Advisory Board reporting to both the executive and legislative branches; and creating incentives for industry to fund research.

“We must renew our scientific talent.” He recommended a program of federal support for scholarships and research fellowships, with special immediate attention given to those returning from the war.

“The lid must be lifted.” He recommended the formation of a board of civilian scientists and military officials to review all secret government scientific information and release, as quickly as possible, everything that did not have to be kept secret.

And to implement these recommendations, Bush put forward a plan of action that included the creation of a new civilian federal agency, the National Research Foundation (NRF), to take on the task of funding (basic) research and education in universities in all fields, including health, medicine, and long-range military research.

These three points and plan of action made up Bush’s vision and strategy to ensure that the federal government would continue its investment in science in the postwar years.

One further comment should be made about Bush’s report. He has often been criticized for oversimplifying his arguments for a robust federal research investment by accepting a linear model of progress: Basic research should be carried out without an application in mind; basic and applied research are at opposite poles; all technological advances are the result of research; and the nation that does the research will reap most of the benefits. To some extent, these notions are as much a reflection of how the public and policymakers thought about the role of science in World War II as they were statements of fact by Bush. It can be argued that none of these is entirely correct. Indeed, Bush himself would have agreed. But I offer a word of caution. Although scientists are comfortable engaging in “nonlinear thinking,” the same is not true for the general public and most policymakers. So although it is useful to revisit Bush’s assumptions in an effort to craft the most effective means to argue for the importance (perhaps even unique importance) of S&T, we should proceed with caution, lest we find that the message that is received is not the message intended.

The Bush effect

Much of Vannevar Bush’s vision has come to pass, even if not entirely as he intended. But today’s world is very different from that at the end of World War II, and Bush could not have been expected to foresee developments such as globalization and the rise of multinational corporations, the collapse of the Soviet Union and the rise of terrorism, Moore’s Law and the information revolution, erratic swings in U.S. politics, and other factors that have placed S&T in a precarious place in 21st-century U.S. society.

Bush’s notion that “scientific progress is essential” to meet the nation’s health, national security, and public welfare needs has become accepted by policymakers of all political stripes and by the majority of voters. That said, the genuine need to address immediate national issues, such as unemployment, the lack of affordable health care, inadequate K-12 education for most Americans, and many others, tends to crowd out important long-range goals, including investments in basic research. But even for short-term objectives, the uncoordinated federal support structure for R&D is ineffective in aligning more-applied R&D with urgent national needs.

The public tends to support the investment of taxpayer dollars in scientific research but has little understanding of science, particularly the nature of research or how results are translated into things people need. Aside from medicine, it is not easy for the public to see the connections between research and the things that are most important in their lives. Moreover, deep partisan divides on almost all issues and a media focused on entertainment rather than news make it almost impossible for the public to actually know what is going on in the country, let alone the rest of the world, in S&T or anything else. This disconnect with the public is perhaps the greatest threat to the future of the country’s research system.

Bush’s advice “to renew our science, engineering and technical talent” remains a priority (at least it is a subject of much study and political rhetoric), but the nation’s efforts to attract homegrown boys and girls to these careers, as well as to improve science, technology, engineering, and mathematics education, have been disappointing. The United States has been fortunate in attracting many of the brightest young women and men from other parts of the world to study and establish their careers here. But current U.S. policies and practices on visas and export controls are making the country less attractive as a place to study and work. Increasingly, bright young people are finding attractive opportunities elsewhere.

Bush’s recommendation that the “lid be lifted” on classified information was influential, at least initially. But there remain issues of overclassification and ambiguous categories such as “sensitive but unclassified.” In spite of laws designed to shine light on government, some federal agencies are inclined to hold back information that might be inconvenient or embarrassing. In addition, the imperative to make all data resulting from federally supported research available to researchers who want to confirm or refute various scientific claims has become especially challenging, in part because of the enormous volume of data involved, the cost of making it available to others, the need for software to interpret the data, and other factors. Yet the integrity of the scientific process depends on this kind of openness. The National Academies have focused on this issue and made recommendations in the 2009 National Research Council report Ensuring the Integrity, Accessibility, and Stewardship of Research Data in the Digital Age.

Bush’s “plan of action” to establish the NRF did not happen, at least not as he proposed. The National Science Foundation (NSF), with its presidentially appointed director and National Science Board, was established in 1950, but it was a very different agency with a narrower mission and a much smaller budget than Bush envisioned. It is doubtful that Bush’s model of the NRF would have been successful. Had it been established, pieces would probably have spun off as fields evolved and generated their own interested constituencies.

Bush’s so-called linear arguments, I believe, still underpin political support for federal research funding, even if the processes leading to discovery, engineering design, innovation, and application are recognized to be more complex than Bush indicated. Finding the right language to explain to the public how scientific discoveries make their way to applications and markets remains a work in progress.

Finally, Bush’s government-university (GU) partnership, in which federal agencies provide funding for academic research and for the construction and operation of experimental facilities at national and international laboratories that are open to university researchers, remains in place, at least for now. This partnership is, perhaps, the most important outcome of Bush’s report. With the federal agencies doing their best to select the “best people with the best ideas” as determined by expert peer review, the standards have been kept high, and the political manipulation of the research through congressional earmarking has been minimized. The resulting integration of research and education in the classrooms and laboratories of universities across the country has enabled the United States to build the most highly respected system of higher education in the world.

Another strength of the GU partnership, on the government side, is the plurality and diversity of federal agencies, each with a different mission and structure, that support academic research. It’s not what Bush intended, but it has benefits. Academic researchers can propose their ideas to several agencies, looking for the best fit and timing as well as sympathetic reviewers. Also, the agencies can focus their support on areas of science and engineering that are most relevant to their missions.

In recent decades, especially after passage of the Bayh-Dole Act in 1980, universities have established partnerships with companies, both as a means of providing their students with better access to industry and future jobs, and as a possible generator of revenues from the intellectual property created by faculty researchers. Thus, the two-way GU partnership has evolved into a more complex government-university-industry (GUI) partnership, and that trend is likely to continue. Industry’s support has grown steadily but still constitutes only 6% of total academic research funding, compared with 60% from the federal government, 19% from university funds, 7% from state and local governments, and 8% from other sources.

Thus, although much has changed at home and around the world, Bush’s GU (now GUI) partnership remains in place and is perhaps the most important outcome of Bush’s report. But now all three sectors are under stress, pressed to meet rising expectations with limited resources, and in the case of industry, faced with intense competition from abroad. The partnership is in trouble.

A troubled partnership

Universities are going through a difficult period of transition as the costs of higher education and, consequently, tuition continue to rise, and governors and legislators demand greater accountability while cutting their states’ contributions to public universities. It is not clear that the term “public university” should be applied to institutions that receive only 20% or less of their budgets from the state government. Private universities are less affected by state politics, but all universities face the burden of complying with a growing body of uncoordinated federal regulations and other reporting requirements related to faculty research that add more to the institutions’ costs than the 26% maximum overhead reimbursement for administrative costs that comes with government grants.

Universities with major investments in biomedical research, especially those with medical schools, face special challenges in planning and operations. When National Institutes of Health (NIH) budgets are going up, institutions hire more researchers (non–tenure-track faculty and postdoctoral researchers) and borrow money to build new buildings, with the expectation that the overhead on future grants will support the salaries and pay off the loans. When NIH budgets fail to match expectations, the institutions are left to cover the costs. The model makes sense only if NIH budgets continue to rise at a predictable rate, indefinitely. Large swings in NIH funding since the 1990s have exacerbated the situation, and thousands of bright young biomedical researchers have ended up in long-term postdoctoral positions or have left the field. As former National Academy of Sciences President Bruce Alberts has observed, it appears that the GU partnership in this important field is not sustainable.

One more point about the university side of the GU partnership. The nation is producing more Ph.D.s in some fields than there are jobs, or at least jobs that graduates want and are being trained for, and the imbalance is likely to get worse. This may be especially true for biomedical research, but it is a problem for some other fields as well. The nation may need more scientists and engineers, even Ph.D.s, but not in all areas. There are policy options that could be employed, but they are not easy. Given the rapid pace of change in this age of technology, it is simply not possible to predict what the specific needs will be 20 years from now. The ability of the United States to continue to be a world leader in technological innovation will depend not only on having the necessary technical talent, homegrown and from abroad, but also being able to retrain and redirect that talent in response to new developments. At the very least, universities need to ensure that their graduate programs include curricula and mentoring that will adequately prepare their students for careers very different from those of their professors. Some of the professional master’s programs do a good job and should be expanded so that all students pursuing graduate study have the option to earn a professional master’s degree, even if the Ph.D. is their ultimate objective.

Federal agencies have their own problems. Because the agencies’ budgets have not grown at a rate that matches the expansion of research opportunities, NSF, NIH, NASA, the Department of Energy (DOE), and other agencies are unable to provide adequate support for even the most meritorious research proposals and the necessary experimental facilities. Most researchers must apply for and manage multiple research grants to support a viable program and thus end up spending much of their time dealing with administration rather than science. The multiple grant applications and reviews also add to the administrative costs of the agencies, cutting into the efficiency of their operations. The agencies must try to plan with little certainty about their budgets for the next fiscal year or even the current fiscal year.

Every president has coherent goals and priorities when the budget request is put together, but Congress often does not share the president’s priorities, yet has no plan to offer as an alternative. It becomes painfully clear in the subsequent budget sausage-making that there is no consensus on national priorities or goals, no process to decide on optimum funding allocations, and no mechanism to provide stability in funding. It’s up to each agency to fight for the resources it needs to do its job, as required by law, regardless of how politically unpopular that job may be for a particular administration or Congress or, more accurately, the House and Senate appropriation subcommittees. The agencies complain that their operations are micromanaged by the subcommittees, and their decisions are often attacked by members of Congress who object to a grant with “sex” in the title or to a whole field such as climate science because they are unhappy with what the research reveals.

Congress has no mechanism to have a serious discussion about S&T. The congressional committee structure, at least as regards S&T, makes little sense. Neither the House nor the Senate has an authorization or an appropriations committee that takes a broad overview of the entire federal S&T portfolio. The House Committee on Science, Space and Technology is the closest thing, but its jurisdiction does not include NIH. In addition, Congress does not have any S&T advisory committees, at least not any that are visible. The authorization legislation for the defunded Office of Technology Assessment is still in place, and it would be a step in the right direction for Congress to again appropriate funds for it.

One research funding agency that has enjoyed favorable treatment by Congress over several decades, at least through 2003, is NIH. As a result, the NIH budget is roughly half of all federal research funding. But NIH has had a history of boom-bust budget fluctuations. The budget doubled between 1998 and 2003, remained flat through 2008, received a $10 billion infusion as part of the 2009 stimulus package, and has remained flat since. NSF and the DOE also have had to manage the stimulus bump. Managing such rapid ups and downs is difficult. The impact on universities, as has already been noted, can be severe. NIH Director Francis Collins has been fairly candid with his observations and cautions.

There is also the larger question of balance. Should the nation be devoting 50% of its research funding to biomedical research? It was just above 30% in the early 1990s. That might be the proper share, and voters are not complaining. But progress has been slow in many areas, and medical costs have been taking a steadily rising share of the gross national product. The problem is that the U.S. political system lacks a mechanism to even discuss balance or priorities in research funding.

Several agencies have to deal with other issues of balance in the federal R&D portfolio and the GU partnership. I’ll use the DOE national laboratories as an example. Clearly, the DOE labs have important functions. Because universities must provide an open environment for study and research, they are not appropriate sites for the type of classified weapons work being conducted at Los Alamos, Lawrence Livermore, and Sandia. Nor can universities afford to build and maintain the large experimental facilities of Fermilab, Brookhaven, Argonne, Jefferson, and others. The national labs also have the capability, at least in principle, of responding quickly to national needs. But there are some troubling issues with regard to all national labs: The roles of the labs were clear during World War II and the early Cold War years, but that is no longer the case. The labs cope with mixed signals from Washington and the ever-shifting political winds as agency heads come and go and White House and congressional priorities change. The nation probably does not need so many national labs with overlapping missions competing with one another for resources. But closing a lab can cause great hardship for states and communities. The process would require a research lab closing commission, and it could get very ugly. Science could become even further politicized. What could add substantial value to the federal R&D investment would be a much stronger research collaboration between university researchers and federal laboratories, not only those that harbor large experimental facilities but the other general-purpose laboratories as well. Accomplishing that would require significant changes in how the agencies fund R&D and how they manage their national labs. It might be worth running a few pilot programs to explore the possibilities.

Beyond the matters of balance, a number of other issues are troublesome for the agencies, including trends toward short-term focus, demands for deliverables, and increased accountability (assessment, milestones, roadmaps, etc.); a conservative peer-review system that is risk-averse; contentious issues of cost-sharing and overhead; challenges of planning, cost, and management of large research facilities; and political barriers to international collaboration. Some of these matters have been discussed by the National Academies, the National Science Board, and more recently by the American Academy of Arts and Sciences.

One further comment about the federal agencies. Just as the academic researchers supported by the government are expected to hold to the highest standards of performance in carrying out research and disseminating the results for the public good, the federal government, in turn, should be expected to operate in a manner that is open, transparent, fair, and honest. In other words, it should manifest integrity. Abuses can occur in both the executive and legislative branches. We have seen, not too long ago, that any science that seems to violate someone’s special interests—religious, ideological, or financial—is fair game for attacks, including amendments offered on the floor of the House of Representatives to kill specific NIH grants that are judged by some members to be offensive or wasteful of money. The integrity guidelines laid out by President Obama will help ensure that federal agencies do their part. There is no corresponding commitment on the part of Congress. The integrity of the GU partnership requires responsible behavior on all sides. To the extent that the partnership lacks integrity, the American people are denied the benefits and can, in some cases, be harmed.

With both sides of the GU partnership having problems, it should be no surprise to find that the partnership is in trouble. The risk of doing nothing to address the problems is substantial. The National Academies report Rising Above the Gathering Storm and its recent update point out that clouds are gathering that threaten the nation’s S&T enterprise and its standing in the world. And although the Gathering Storm stresses the threat to the competitiveness of U.S. industry and the related matter of quality jobs for Americans, the arguments also apply to other national needs such as national security, health and safety, environmental protection, energy, and many others that also depend on the nation’s strength in S&T and its science and engineering workforce. It is likely that Vannevar Bush would see the need for another path-breaking report to address the question of whether science in the United States is still “the endless frontier.”

A way forward

I will pass on the option of trying to be the next Vannevar Bush by proposing a new government science policy structure, but I will suggest three areas of possible policy reform that do not require reorganizing the federal government or challenging congressional authority. None of these are fully developed proposals, but they could be useful as a stimulus to discussion.

First, the federal government is in dire need of an enhanced interagency mechanism to coordinate S&T-related activities, share information, and work with Congress to obtain more flexibility in funding interagency activities. The whole of the federal S&T effort should be significantly greater than the sum of its parts. The National Science and Technology Council (NSTC) and its coordinating committees have done good work, for example, in helping to organize the National Nanotechnology Initiative in the Clinton administration, but the NSTC needs more clout. The White House and Congress should consider authorizing the NSTC and providing a line of funding in the White House Office of Science and Technology Policy budget for NSTC staffing and activities such as reports, workshops, and seed funding for interagency cooperative R&D efforts.

Second, the federal R&D agencies should be encouraged to experiment with new structures modeled on the Department of Defense’s Advanced Research Projects Agency (DARPA or ARPA at different times) that can invest in higher-risk, potentially transformative R&D and respond quickly to new opportunities. The DOE is trying such an experiment with ARPA-Energy, which was launched with funds from the stimulus package and is included in the president’s fiscal year 2012 budget request. Examples of other new initiatives are DOE’s Energy Innovation Hubs and the National Oceanic and Atmospheric Administration’s Climate Service. Political inertia is difficult to overcome, so initiatives of this kind will gain traction only with leadership by the president and S&T champions in Congress.

The third is more of a stretch. The nation may have arrived at a time in its history when it needs a new kind of policy-oriented, nonpartisan organization: a GUI policy partnership among the federal government, universities, and industry, with funding from all three, that could address important areas of U.S. S&T policy such as the conduct of research and mechanisms for the translation of research into applications. This organization would be a place where knowledgeable individuals who have experience in the relevant GUI sectors and who have a stake in the health of the nation’s S&T enterprise would have access to the relevant data and policy analysis and could engage in serious discussions about a range of policy issues related to S&T.

Such a GUI policy organization would support policy research in areas of strategic importance, collect and analyze relevant information about the state of S&T in the nation and the world, and perhaps develop policy options for consideration by decisionmakers. This organization might be able to fill the pivotal role that Roger Pielke Jr. calls the “honest broker.” It would not go beyond defining policy options, leaving the final choice of direction to elected officials or whoever is responsible. Were it to make recommendations for a specific course of action, it would soon find its independence and integrity challenged as competing interests sought to influence its decisions. But even without advocating specific actions, an organization that is respected for the integrity of its data and analysis and the transparency of its operations would be of enormous value. Its credibility and political clout would derive from its grounding in the three critical sectors. There are many excellent nongovernment policy centers and other organizations that carry out policy research and issue reports, and their important work should continue. But there is no mechanism to follow through, to make sure someone is paying attention, to ask if any of the recommendations are being considered, and to explain to a largely uninformed public the implications of various policy options and report on subsequent decisions in a way that the public can understand.

The new GUI policy organization could take on many of the issues mentioned above, especially the problems the federal funding agencies are facing. Which of those are the most serious and might lend themselves to solutions short of reorganizing government? What are the most important policy barriers to cooperation between universities and industry and how might those be resolved? Are there ways to make a rational judgment about the various balance issues with regard to research funding? One task that such a new organization might take on is an analysis of trends in the respective roles of the federal government, research universities, and industry in the process of innovation. And here I mean innovation in both commercial products and processes, as well as how the federal government addresses various national needs.

Commercial innovation has been advanced as one of the prime rationales for increasing the federal investment in R&D and science education. Certainly, commercial innovation is vital to the nation’s future, but so is innovation in applying discoveries and inventions to national security, human health and food safety, energy security and environmental protection, transportation, and the many other societal needs that require new ideas and new technologies. In particular, the federal regulatory agencies have the task of complying with federal law by issuing rules that are consistent with the best scientific evidence, even when the evidence is not clear. Too often, the process, at least as portrayed by the media, looks more like a shootout between the affected industries and activists of various kinds than an evidence-based deliberative process. There are many policy issues relating to the regulatory process that could benefit from the attention of an unbiased organization that is respected by all interested parties.

The new GUI policy organization should not attempt to duplicate the important work of the National Academies and National Research Council or the American Academy of Arts and Sciences or any other organizations. Nor would it replace the many outstanding policy institutes and centers around the country. NSF is authorized by Congress to collect and disseminate information about S&T, and the National Science Board publishes updated summaries in its Science and Engineering Indicators. The American Association for the Advancement of Science (AAAS) also is an invaluable source of S&T policy information, particularly R&D funding data. These efforts should continue. Indeed, one could imagine an alliance of organizations, governmental and nongovernmental, with common goals in support of a more rational national S&T policy. The kind of organization I am suggesting is not a Department of Science and Technology or a reinvention of Bush’s NRF. The present federal R&D funding agencies will remain in place, hopefully making improvements in their structures and operations. The latter is more likely if the agencies have a source of sound analysis and advice and, at least as important, public support for the changes they need to make.

As for the option of reorganizing the federal government, we could each devise an “ideal” structure that would do all the things we think need to be done. The only problem is that we would have to ignore the political realities. In the U.S. system of governance, structural change is very difficult, and no matter how elegant the proposal, what emerges from the political process is likely to be a disappointing, if not disastrous, caricature of what was proposed. Those of us who have struggled in the labyrinth of S&T policymaking have often dreamed that some wise reorganization will come along, but waiting for that to happen is not a likely path to progress. However, the failure of that dream solution is not a reason to abandon hope for more targeted incremental reform. There are many paths that could move the country in the direction of a more rational and inclusive approach to S&T policymaking.

I see the potential to take a few initial steps by generating synergy among some existing efforts. The National Academies have the Government-University-Industry Research Roundtable that meets regularly to discuss issues at the GUI interface. The Council on Competitiveness is an important forum for discussions about S&T’s role in commercial innovation. The Association of American Universities also focuses on the partnership, including federal research funding. All the major disciplinary societies have policy committees that deal with matters relevant to their memberships. AAAS has enormous convening power. Many other organizations pay close attention to policy matters that affect the GUI partnership. Perhaps some of these would entertain discussions about such a GUI policy initiative. It might even be a good agenda item for the President’s Council of Advisors on Science and Technology. And although I recognize that Congress is locked in an ideological battle that makes coherent action on any topic seem unlikely, perhaps one or more of the relevant congressional committees might give the idea some thought and even discover some elusive common ground.

People Get Ready

In recent years, we have witnessed a dramatic increase in the economic cost and human impact from hurricanes, earthquakes, floods, and other natural disasters worldwide. Economic losses from these catastrophic events increased from $528 billion (1981–1990) to more than $1.2 trillion over the period 2001–2010.

Although 2011 is not yet over, an exceptional number of very severe natural catastrophes, notably the March 2011 Japan earthquake and tsunami, will make 2011 a record year for economic losses. In the United States, the southern and midwestern states were hit by an extremely severe series of tornadoes in April and May, and at about the same time, heavy snowmelt, saturated soils, and over 20 inches of rain in a month led to the worst flooding of the lower Mississippi River since 1927. Hurricane Irene in August caused significant flooding in the Northeast and was responsible for at least 46 deaths in the United States. Global reinsurance broker Aon Benfield reports that U.S. losses from Irene could reach as high as $6.6 billion; Caribbean losses from Irene are estimated at nearly $1.5 billion.

Given the increasing losses from natural disasters in recent years, it is surprising how few property owners in hazard-prone areas have purchased adequate disaster insurance. For example, although it is well known that California is highly exposed to seismic risk, 90% of Californians do not have earthquake insurance today. This is also true for floods. After the flood in August 1998 that damaged property in northern Vermont, the Federal Emergency Management Agency (FEMA) found that 84% of the homeowners in flood-prone areas did not have insurance, even though 45% of these individuals were required to purchase this coverage because they had a federally backed mortgage. In the Louisiana parishes affected by Hurricane Katrina in 2005, the percentage of homeowners with flood insurance ranged from 57.7% in St. Bernard Parish to 7.3% in Tangipahoa when the hurricane hit. Only 40% of the residents in Orleans Parish had flood insurance.

Similarly, relatively few homeowners invest in loss-reduction measures. Even after the series of devastating hurricanes that hit the Gulf Coast states in 2004 and 2005, a May 2006 survey of 1,100 adults living in areas subject to these storms revealed that 83% of the respondents had taken no steps to fortify their home and 68% had no hurricane survival kit.

For reasons we will explain in this article, many homeowners are reluctant to undertake mitigation measures for reducing losses from future disasters. This lack of resiliency has made the United States not only very vulnerable to future large-scale disasters but also highly exposed financially. Given the current level of government financial stress, it is natural to wonder who will pay to repair the damage caused by the next major hurricane, flood, or earthquake.

To alleviate this problem, we propose a comprehensive program that creates an incentive structure that will encourage property owners in high-risk areas to purchase insurance to protect themselves financially should they suffer losses from these events and to undertake measures to reduce property damage and the accompanying injuries and fatalities from future disasters.

Why are losses increasing?

Two principal socioeconomic factors directly influence the level of economic losses due to catastrophic events: exposed population and value at risk. The economic development of Florida highlights this point. Florida’s population has increased significantly over the past 50 years: from 2.8 million inhabitants in 1950 to 6.8 million in 1970, 13 million in 1990, and 18.8 million in 2010. A significant portion of that population lives in the high-hazard areas along the coast.

Increased population and development in Florida and other hurricane-prone regions means an increased likelihood of severe economic and insured losses unless cost-effective mitigation measures are implemented. Due to new construction, the damage from Hurricane Andrew, which hit Miami in 1992, would have been more than twice as great if it had occurred in 2005. The hurricane that hit Miami in 1926 would have been almost twice as costly as Hurricane Katrina had it occurred in 2005, and the Galveston hurricane of 1900 would have had total direct economic costs as high as those from Katrina. This means that independent of any possible change in weather patterns, we are very likely to see even more devastating disasters in the coming years because of the growth in property values in risk-prone areas. In addition, recent climate studies indicate that the United States should expect more extreme weather-related events in the future.
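
Comparisons like these rest on a simple normalization idea: scale a historical loss by how much prices, population, and wealth in the affected area have grown since the event. The sketch below illustrates one common form of that calculation; the function name and the example figures are illustrative assumptions, not the specific method or data behind the estimates cited above.

```python
def normalize_loss(historical_loss, cpi_then, cpi_now,
                   pop_then, pop_now, wealth_pc_then, wealth_pc_now):
    """Scale a historical disaster loss to today's exposure.

    Adjusts for inflation, population growth in the affected area, and growth in
    inflation-adjusted wealth per capita -- the three factors commonly used in
    loss-normalization studies. All inputs here are illustrative assumptions.
    """
    inflation = cpi_now / cpi_then
    population = pop_now / pop_then
    wealth = wealth_pc_now / wealth_pc_then
    return historical_loss * inflation * population * wealth

# Made-up example: a $1 billion loss in a county whose prices tripled, population
# quadrupled, and real wealth per capita doubled would correspond to roughly
# $24 billion in today's exposure.
print(normalize_loss(1e9, 1.0, 3.0, 1.0, 4.0, 1.0, 2.0))  # 2.4e10
```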

Table 1 depicts the 15 most costly catastrophes for the insurance industry between 1970 and 2010. Many of these truly devastating events occurred in recent years. Moreover, two-thirds of them affected the United States.

Increasing role of federal disaster assistance

Not surprisingly, the disasters that occurred in now much more populated areas of the United States have led to higher levels of insurance claim payments as well as a surge in the number of presidential disaster declarations. Wind coverage is typically included in U.S. homeowners’ insurance policies; protection from floods and earthquakes is not.

The questions that need to be addressed directly by Congress, the White House, and other interested parties are:

• Who will pay for these massive losses?

• What actions need to be taken now to make the country more resilient when these disasters occur, as they certainly will?

In an article published this summer in Science about reforming the federally run National Flood Insurance Program (NFIP), we showed that the number of major disaster declarations increased from 252 over the period 1981–1990, to 476 (1991–2000), to 597 (2001–2010). In 2010 alone there were 81 such major disaster declarations.

This more pronounced role of the federal government in assisting disaster victims can also be seen by examining several major disasters that occurred during the past 60 years as shown in Table 2. Each new massive government disaster relief program creates a precedent for the future. When a disaster strikes, there is an expectation by those in the affected area that government assistance is on the way. To gain politically from their actions, members of Congress are likely to support bills that authorize more aid than for past disasters. If residents of hazard-prone areas expect more federal relief after future disasters, they then have less economic incentive to reduce their own exposure and/or purchase insurance.

TABLE 1
15 most costly catastrophe insurance losses, 1970–2010 (in 2011 U.S. dollars)

Cost ($ billion) | Event | Victims (dead or missing) | Year | Area of primary damage
48.6 | Hurricane Katrina | 1,836 | 2005 | USA, Gulf of Mexico, et al.
37.0 | 9/11 Attacks | 3,025 | 2001 | USA
24.8 | Hurricane Andrew | 43 | 1992 | USA, Bahamas
20.6 | Northridge Earthquake | 61 | 1994 | USA
17.9 | Hurricane Ike | 348 | 2008 | USA, Caribbean, et al.
14.8 | Hurricane Ivan | 124 | 2004 | USA, Caribbean, et al.
14.0 | Hurricane Wilma | 35 | 2005 | USA, Gulf of Mexico, et al.
11.3 | Hurricane Rita | 34 | 2005 | USA, Gulf of Mexico, et al.
9.3 | Hurricane Charley | 24 | 2004 | USA, Caribbean, et al.
9.0 | Typhoon Mireille | 51 | 1991 | Japan
8.0 | Maule earthquake (Mw 8.8) | 562 | 2010 | Chile
8.0 | Hurricane Hugo | 71 | 1989 | Puerto Rico, USA, et al.
7.8 | Winter Storm Daria | 95 | 1990 | France, UK, et al.
7.6 | Winter Storm Lothar | 110 | 1999 | France, Switzerland, et al.
6.4 | Winter Storm Kyrill | 54 | 2007 | Germany, UK, Netherlands, France

Reducing exposure to losses from disasters

Today, thanks to developments in science and technology, we can more accurately estimate the risks that different communities and regions face from natural hazards. We can also identify mitigation measures that should be undertaken to reduce losses, injuries, and deaths from future disasters, and can specify regions where property should be insured. Yet many residents in hazard-prone areas are still unprotected against earthquakes, floods, hurricanes, and tornados.

We address the following question: How can we provide short-term incentives for those living in high-risk areas to invest in mitigation measures and purchase insurance?

We first focus on why many residents in hazard-prone areas do not protect themselves against disasters (a behavioral perspective). We then propose a course of action that overcomes these challenges (a policy perspective). Specifically, we believe that multiyear disaster insurance contracts tied to the property and combined with loans to encourage investment in risk-reduction measures will lead individuals in harm’s way to invest in protection and therefore be in a much better financial position to recover on their own after the next disaster. The proposed program should thus reduce the need for disaster assistance and be a win-win situation for all the relevant stakeholders as compared to the status quo.

Empirical evidence from psychology and behavioral economics reveals that many decisionmakers ignore the potential consequences of large-scale disasters for the following reasons:

Misperceptions of the risk. We often underestimate the likelihood of natural disasters by treating them as below our threshold level of concern. For many people, a 50-year or 25-year storm is simply not worth thinking about. Because they do not perceive a plausible risk, they have no interest in undertaking protective actions such as purchasing insurance or investing in loss-reduction measures.

Ambiguity of experts. Experts often differ in their estimates of the likelihood and consequences of low-probability events because of limited historical data, scientific uncertainty, changing environmental conditions, and/or the use of different risk models. The variance in risk estimates creates confusion among the general public, government entities, and businesses as to whether the risk warrants attention. Often, decisionmakers simply use estimates from their favorite experts that provide justifications for their proposed actions. We recently conducted an empirical study of 70 insurance companies and found that insurers are likely to charge higher premiums when faced with ambiguity than when the probability of a loss is well specified. Furthermore, they tend to charge more when there is conflict among experts than when experts agree on the uncertainty associated with the risk of flood and hurricane hazards.

Short horizons for valuing protective measures. Many households and small businesses project only a few years ahead (if not just months) when deciding whether to spend money on loss-reduction measures, such as well-anchored connections where the roof meets the walls and the walls meet the foundation to reduce hurricane damage. This myopic approach prevents homeowners from undertaking protective measures that can be justified from an economic perspective after 5 or 10 years. This short-sighted behavior can be partly explained by decisionmakers wanting to recoup their upfront costs in the next year or two even though they are aware that the benefits from investing in such measures will accrue over the life of the property.

Procrastination. If given an option to postpone an investment for a month or a year, there will be a tendency to delay the outlay of funds. Viewed from a long-term perspective, the investment always seems worthwhile, but as the designated date to undertake the work approaches, a slight delay always seems more attractive. Moreover, the less certain one is about a correct course of action, the more likely one is to choose inaction. There is a tendency to favor the status quo.

TABLE 2
Examples of federal aid as a percentage of total disaster losses

Disaster | Federal aid as % of total damage
Hurricane Ike (2008) | 69%
Hurricane Katrina (2005) | 50%
Hurricane Hugo (1989) | 23%
Hurricane Diane (1955) | 6%

Source: Michel-Kerjan and Volkman-Wise (2011)

Mistakenly treating insurance as an investment. Individuals often do not buy insurance until after a disaster occurs and then cancel their policies several years later because they have not collected on their policy. They perceive insurance to be a bad investment by not appreciating the adage that the “best return on an insurance policy is no return at all.”

Failure to learn from past disasters. There is a tendency to discount past unpleasant experiences. Emotions run high when experiencing a catastrophic event or even viewing it on TV or the Internet, but those feelings fade rapidly, making it difficult to sustain concern about the event as time passes.

Mimetic blindness. Decisionmakers often imitate the behavior of others without analyzing whether the action is appropriate for them. By looking at what other firms in their industry do, or following the example of their friends and neighbors, decisionmakers can avoid having to think independently.

In addition to these behavioral biases, there are economically rational reasons why individuals and firms in hazard-prone areas do not undertake risk-reduction measures voluntarily. Consider the hypothetical Safelee firm in an industry in which its competitors do not invest in loss-prevention measures. Safelee might understand that the investment can be justified when considering its ability to reduce the risks and consequences of a future disaster. But the firm might decide that it cannot now afford to be at a competitive disadvantage against others in the industry that do not invest in loss prevention. The behavior of many banks in the years preceding the financial crisis of 2008–2009 is illustrative of such a dynamic.

Families considering whether to invest in disaster prevention may also find the outlay to be unattractive financially if they plan on moving in a few years and believe that potential buyers will not take into account the lower risk of a disaster loss when deciding how much they are willing to offer for the property. More generally, homeowners might have other rational reasons for not purchasing disaster coverage or investing in risk-reduction measures when this expense competes with immediate needs and living expenses within their limited budget. This aspect has more significance today given the current economic situation the country faces and the high level of unemployment.

Reconciling the short and long term

The above examples demonstrate that individuals and businesses focus on short-term incentives. Their reluctance to invest in loss-prevention measures can largely be explained by the upfront costs far exceeding the short-run benefits, even though the investment can be justified in the long run. Only after a catastrophe occurs do the decisionmakers express their regret at not undertaking the appropriate safety or protective measures.

But it does not have to be that way. We need to reorient our thinking and actions so that future catastrophes are perceived as an issue that demands attention now.

Knowing that myopia is a human tendency, we believe that leaders concerned with managing extreme events need to recognize the importance of providing short-term economic incentives to encourage long-term planning. We offer the following two concepts that could change the above-mentioned attitudes.

Extend financial responsibility over a multiyear period. Decisionmakers need an economic incentive to undertake preventive measures today, knowing that their investments can be justified over the long term. The extended financial responsibility and reward could take the form of multiyear contracts, contingent or delayed bonuses, reduced taxes, or subsidies.

Develop well-enforced regulations and standards to create level playing fields. Government agencies and legislative bodies need to couple such regulations and standards with short-term economic incentives that encourage individuals and the private sector to adopt cost-effective risk-management strategies. All firms in a given industry will then have good reasons to adopt sound risk-management practices without becoming less competitive in the short run.

Insurance mechanisms can play a central role in encouraging more responsible behavior in three ways. First, if priced appropriately, insurance provides a signal of the risk that an individual or firm faces. Second, insurance can encourage property owners in hazard-prone areas to invest in mitigation measures by providing them with premium reductions to reflect the expected decrease in losses from future disasters. Third, insurance supports economic resiliency. After a disaster, insured individuals and firms can make a claim to obtain funds from their insurance company, rather than relying solely on federal relief, which comes at the expense of taxpayers.

A multiyear approach

We propose that insurance and other protective measures be tied to the property rather than the property owner as currently is the case. We recommend the following features of such a program:

Required insurance. Since individuals tend to treat insurance as an investment rather than a protective mechanism, and given the large number of individuals who do not have coverage today, insurance may have to be required for property located in hazard-prone areas.

Vouchers for those needing special treatment. We recommend a new disaster insurance voucher program that addresses issues of equity and affordability. This program would complement the strategy of risk-based premiums for all. Property owners currently residing in a risky area who require special treatment would receive a voucher from FEMA or the U.S. Department of Housing and Urban Development as part of its budget or through a special appropriation. This program would be similar to the Supplemental Nutrition Assistance Program (food stamps) and the Low Income Home Energy Assistance Program, which enable millions of low-income households in the United States to meet their food and energy needs every year. The size of the voucher would be determined through a means test in much the same way that the distribution of food stamps is determined today.

Multiyear insurance tied to property. Rather than the normal one-year insurance contract, individuals and business owners should have an opportunity to purchase a multiyear insurance contract (for example, five years) at a fixed annual premium that reflects the risk. At the end of the multiyear contract, the premium could be revised to reflect changes in the risk.

Multiyear loans for mitigation. To encourage adoption of loss-reduction measures, state or federal government or commercial banks could issue property improvement loans to spread the costs over time. For instance, a property owner may be reluctant to incur an upfront cost of $1,500 to make his home more disaster-resistant but would be willing to pay the $145 annual cost of a 20-year loan (calculated here at a high 10% annual interest rate); the underlying arithmetic is sketched after this list. In many cases, the reduction in the annual insurance premium due to reduced expected losses from future disasters for those property owners investing in mitigation measures will be greater than their annual loan costs, making this investment financially attractive.

Well-enforced building codes. Given the reluctance of property owners to invest in mitigation measures voluntarily, building codes should be designed to reduce future disaster losses and be well enforced through third-party inspections or audits.
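
As a rough illustration of the loan arithmetic in the mitigation item above, the following sketch applies the standard level-payment (annuity) formula. The principal and term follow the example; the interest rates and resulting payments shown are assumptions for illustration, since the exact figure depends on the lender’s rate and compounding terms.

```python
def annual_payment(principal, annual_rate, years):
    """Level annual payment on a fixed-rate loan (standard annuity formula)."""
    r = annual_rate
    return principal * r / (1 - (1 + r) ** -years)

# $1,500 of mitigation work financed over 20 years, as in the loan example above.
# The payment is sensitive to the interest rate assumed (rates here are illustrative):
print(round(annual_payment(1500, 0.10, 20), 2))  # ~176.19 per year at 10%
print(round(annual_payment(1500, 0.07, 20), 2))  # ~141.59 per year at 7%

# Mitigation financed this way is attractive whenever the annual insurance-premium
# reduction it earns exceeds the annual loan payment.
```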

Modifying the National Flood Insurance Program

The National Flood Insurance Program (NFIP) was established in 1968 and covers more than $1.2 trillion in assets today. The federally run program is set to expire at the end of September 2011, and options for reforms are being discussed. We believe that revising the program offers an opportunity to take a positive step in implementing our above-mentioned proposal.

We recently undertook an analysis of all new flood insurance policies issued by the NFIP over the period January 1, 2001, to December 31, 2009. We found that the median length of time before these new policies lapsed was three to four years. On average, only 74% of new policies were still in force one year after they were purchased; after five years, only 36% were still in force. The lapse rate is high even after correcting for migration and does not vary much across different flood zones. We thus propose replacing standard one-year insurance policies with multiyear insurance contracts of 5 or 10 years attached to the property itself, not the individual. If the property is sold, then the multiyear flood insurance contract would be transferred to the new owner.
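
For readers curious how in-force percentages of this kind are computed, here is a minimal sketch of the cohort calculation under simplified assumptions; the record layout and sample dates are hypothetical and are not the NFIP policy data analyzed above.

```python
from datetime import date

# Hypothetical policy records: (purchase date, lapse date or None if still in force).
policies = [
    (date(2001, 3, 1), date(2003, 3, 1)),
    (date(2002, 6, 1), None),
    (date(2004, 1, 1), date(2009, 1, 1)),
]

def in_force_share(records, years_after, study_end=date(2009, 12, 31)):
    """Share of new policies still in force `years_after` years from purchase.

    A policy is counted only if it could have been observed that long, i.e., it
    was purchased at least `years_after` years before the end of the study window.
    """
    eligible = kept = 0
    for bought, lapsed in records:
        horizon = bought.replace(year=bought.year + years_after)
        if horizon > study_end:
            continue  # not observed long enough to count
        eligible += 1
        if lapsed is None or lapsed >= horizon:
            kept += 1
    return kept / eligible if eligible else float("nan")

print(in_force_share(policies, 1))  # 1.0 for this toy sample
print(in_force_share(policies, 5))  # about 0.67 for this toy sample
```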

Premiums for such multiyear insurance policies should accurately reflect risk and be lower for properties that have loss-reduction features. This would encourage owners to invest in cost-effective risk-reduction measures, such as storm shutters to reduce hurricane damage. If financial institutions or the federal government provide home improvement loans to cover the upfront costs of these measures, the premium reduction earned by making the structure more resistant to damage is likely to exceed the annual payment on the loan.

A bank would have a financial incentive to make such a home improvement loan because it would have a lower risk of catastrophic loss to the property that could lead to a mortgage default. The NFIP would have lower claims payments due to the reduced damage from a major disaster. And the general public would be less likely to have large amounts of their tax dollars going for disaster relief, as was the case with the $89 billion paid in federal relief after the 2004 and 2005 hurricane seasons and resulting floods. A win-win-win-win situation for all!

A governmental program that has some similarities to our proposal is the Property Assessed Clean Energy (PACE) program, which has been adopted by 27 states to promote energy efficiency. PACE provides short-term rewards to encourage investments in technologies that will have long-term benefits. PACE provides long-term funding from private capital markets at low cost and needs no government subsidies or taxes. It increases property values by making heating and cooling less expensive, and it enjoys broad bipartisan support nationwide at state and local levels. Several features of the program, which encourages property owners to make their homes more energy-efficient, mirror what we propose for making homes more disaster-resistant:

Multiyear financing. Interested property owners opt in to receive financing for improvements that is repaid through an assessment on their property taxes for up to 20 years. PACE financing spreads the cost of energy improvements such as weather sealing, energy-efficient boilers and cooling systems, and solar installations over the expected life of these measures and allows for the repayment obligation to transfer automatically to the next property owner if the property is sold. PACE solves two key barriers to increased adoption of energy efficiency and small-scale renewable energy: high upfront costs and fear that project costs won’t be recovered before a future sale of the property.

Annual savings. Because basic energy-efficiency measures can cut energy costs by up to 35%, annual energy savings will typically exceed the cost of PACE assessments. The up-front cost barrier actually turns into improved cash flow for owners in much the same way that the reduction of annual insurance premiums could exceed the annual loan costs.

Transfer to new property owner. Like all property-based assessments, PACE assessments stay with a property after sale until they are fully repaid by future owners, who continue to benefit from the improvement measures. The multiyear insurance and mitigation contracts we propose would operate in the same way.

Now is the time

The nation has entered a new era of catastrophes. Exposure is growing, and the damage from disasters over the next few years is likely to exceed what we have experienced during this past decade. When the next catastrophe occurs, the federal government will very likely come to the rescue—again. If the public sector’s response to recent disasters is an indicator of its future behavior, new records will be set with respect to federal assistance.

In order to avoid this outcome, we recommend that the appropriate governmental bodies undertake an economic analysis of the benefits and costs of the proposed multiyear insurance and risk-reduction loan programs compared to the current system of private and public insurance and federal disaster assistance.

We need bold leadership for developing long-term strategies for dealing with low-probability, high-consequence events. If Congress authorizes a study that examines these and other proposals when the NFIP comes up for renewal in September, it will be a major step forward in setting a tone for addressing the challenges of managing catastrophic risks. The United States is at war against natural hazards and other extreme events. Winning this war will be possible only if public policy integrates behavioral factors much more systematically into efforts to find sustainable solutions. As we have indicated, taking these steps will be difficult because of human reluctance to change. But we know what steps need to be taken. All it takes is the courage to act and the initiative to do so now.

From the Hill – Fall 2011

Applied research facing deep cuts in FY 2012 budget

The funding picture for most R&D agencies for the fiscal year (FY) 2012 is relatively bleak, as it is for most government functions. In actions taken thus far, basic research has generally been supported, whereas applied research programs would see deep cuts, in some cases more than 30%.

Congress is not expected to complete work on the FY 2012 budget before the new fiscal year begins on October 1. Debates over spending for this budget will focus on its composition, not its size, because the Budget Control Act of 2011, passed on August 2 to allow the U.S. debt ceiling to rise, set total discretionary spending at $1.043 trillion, down 0.7% or $7 billion from FY 2011.

Although the budget situation for R&D in the FY 2012 budget looks bad, the situation for the following fiscal year could be even worse. Office of Management and Budget Director Jacob Lew sent a memo to department and agency heads dated August 17 providing guidance on the preparation of their FY 2013 budget requests. The memo directs agencies to submit requests totaling at least 5% below FY 2011 enacted discretionary appropriations and to identify additional reductions that would bring the total request to at least 10% below FY 2011 enacted discretionary appropriations.

The enactment of the Budget Control Act will not end the bitter controversies about the size and role of federal spending. The act requires $1.2 trillion in budget cuts during the next 10 years and calls for a 12-member congressional commission to find additional savings of up to $1.5 trillion by December 2011. The additional cuts can come from any combination of sources: reductions in discretionary spending, changes in entitlement programs, or revenue increases. However, if the commission can’t agree on a package of reductions, automatic cuts will occur. These cuts, which would have the greatest effect on discretionary spending, would occur on January 2, 2013.

In the meantime, work continues on the FY 2012 budget. As of August 31, the House had approved 9 of the 12 appropriations bills, the Senate only 1. Here are some highlights.

The House-passed Defense Appropriations Bill would increase funding for basic research by 4.3% and cut applied research by 3.2%.

In the House-passed bill, funding for Department of Energy (DOE) R&D spending is set at $10.4 billion, $166 million less than FY 2011 and $2.6 billion less than the president’s request. The Office of Science, which sponsors most of DOE’s basic research, is funded at $4.8 billion, a 0.9% cut from FY 2011 and $616 million or 11.4% less than the president’s request. Applied research programs face much larger cuts. The Energy Efficiency and Renewable Energy (EERE) program is funded at $1.3 billion, a $527 million or 40.6% cut and $1.9 billion or 59.4% less than the president’s request. The Fossil Energy R&D program is facing a cut of 22.5%.

In the House-passed bill funding the Department of Agriculture (USDA), R&D funding is $1.7 billion, $350 million or 17.3% less than the president’s request and a $334 million or 16.6% decrease from last year. The Agricultural Research Service, the USDA’s intramural funding program, would receive $988 million, down 12.8%, and the National Institute for Food and Agriculture, the extramural funding program, would receive $1.01 billion, down 16.7%.

In the House Appropriations Committee–approved Commerce, Justice, Science, and Related Agencies Appropriations Act, the National Oceanic and Atmospheric Administration (NOAA) and the National Aeronautics and Space Administration (NASA) face large cuts, whereas the National Science Foundation (NSF) and the National Institute of Standards and Technology (NIST) fare better. Because of a cut to NOAA’s Operations, Research, and Facilities account, NOAA’s R&D spending will be down 9.2%. NASA is funded at $16.8 billion, down $1.6 billion or 8.9% from last year, with the largest cuts occurring in Space Operations, because of the end of the Space Shuttle program, and the Science Directorate, because of the cancellation of the James Webb Space Telescope. The bill funds NIST at $701 million, a $49 million or 6.6% decrease. NSF is funded at $6.8 billion, the same as in FY 2011. Decreases in the Major Research Equipment and Facilities account and other R&D equipment and facilities investments will be the main contributors to a small decrease in NSF’s R&D investment of $5 million or 0.1%.

The House-passed bill for the Department of Homeland Security (DHS) funds R&D at $416 million, down $296 million or 41.5% from last year.

The House Appropriations Committee–approved bill for the Department of the Interior increases R&D spending overall by $36 million or 4.7%. However, R&D would decline by $36 million or 6.3% at the Environmental Protection Agency (EPA) and by $30 million or 2.8% at the U.S. Geological Survey. The EPA’s Science and Technology Programs would be cut by 7.2% to $755 million, while the entire agency faces a 17.7% or $1.5 billion cut to $7.15 billion. The bill also includes a number of policy riders, many of which would limit the regulatory authority of the EPA.

The House passed the Military Construction and Veterans Affairs and Related Agencies Appropriations Act, 2012 (H.R. 2055) on June 14, and the Senate passed its version on July 20. The total R&D investment in the Senate bill is estimated at $1.16 billion, $144 million or 14.2% more than the president’s request and $2 million or 0.2% more than FY 2011, whereas the House would spend $1.06 billion, $44 million or 4.3% more than the president’s request and $98 million or 8.4% less than in FY 2011. Veterans Affairs (VA) also performs R&D for other federal agencies and nonfederal organizations, which is estimated at $720 million for FY 2012. Adding this non–VA-funded R&D brings the total for VA-performed R&D to $1.87 billion in the Senate bill and $1.77 billion in the House bill.

House approves patent reform bill

On June 23, the House approved by a vote of 307 to 117 a patent reform bill that would give inventors a better chance of obtaining patents in a timely manner and bring the U.S. patent system into line with those of other industrialized countries. The bill would also provide greater funding for the U.S. Patent and Trademark Office (PTO) to allow it to hire more examiners to deal with a backlog of more than 700,000 applications.

The Senate passed its own reform bill in March by a 95 to 5 vote, and differences between the two bills must be reconciled, most importantly the provisions on funding the patent office.

The Senate bill would deal with the underfunding of the patent office by allowing the PTO to set its own user fees and keep the proceeds instead of returning some of them to the U.S. Treasury. The House bill originally contained this provision, but it was changed during floor debate after Appropriations Committee Chairman Hal Rogers (R-KY) and Budget Committee Chairman Paul Ryan (R-WI) argued that the provision would limit congressional oversight of the patent office by circumventing the appropriations process. The bill was changed so that excess user fees would be placed into a PTO-dedicated fund that appropriators would direct back to the PTO. Some members of the Senate have objected to this change, arguing that it would jeopardize more funding because in the past, appropriators often have not spent funds that were supposed to be dedicated for a specific purpose. Despite these concerns, the Senate is expected to approve the House change.

The House and Senate bills would align the United States with international practice by awarding patents to the first inventor to file an application. Currently, patents are awarded to the first to invent a product or idea.

The change in the application system was favored by large technology and pharmaceutical companies, which argued that it would put the United States in sync with other national patent offices around the world and make it easier to settle disputes about who has the right to a certain innovation.

Many smaller companies and inventors opposed the change, however, arguing that it favored companies that could hire legions of lawyers to quickly file applications for new permutations in manufacturing or product design.

The Obama administration said it supported the House-passed bill as long as the “final legislative action [ensures] that fee collections fully support the nation’s patent and trademark system.”

Climate adaptation programs under fire

Republicans in Congress, having already blocked any legislation to mitigate climate change, are now aiming at programs dealing with climate change adaptation. Several of the appropriation bills being considered in the House would bar the use of funds for climate programs. Meanwhile, members of the House Science, Space and Technology Committee are fighting an effort by NOAA to create a National Climate Service, which would consolidate the majority of climate programs into a single office to achieve efficiencies.

Rep. John Carter (R-TX) has sponsored an amendment to the DHS spending bill that would prohibit the department from participating in the administration’s Interagency Task Force on Climate Change Adaptation. Carter said participation is unnecessary because NOAA and the EPA already have climate programs.

Rep. Steve Scalise (R-LA) has proposed an amendment to the House Agriculture Committee appropriations bill that would prohibit funding for implementing the June 3, 2011, USDA regulation on climate change adaptation. Scalise’s staff said the congressman was concerned that the adaptation policy could lead the department to introduce greenhouse gas restrictions for farmers. The regulation calls for the USDA to “analyze how climate change may affect the ability of the agency or office to achieve its mission and its policy, program, and operational objectives by reviewing existing programs, operations, policies, and authorities.” It notes that “Through adaptation planning, USDA will develop, prioritize, implement, and evaluate actions to minimize climate risks and exploit new opportunities that climate change may bring. By integrating climate change adaptation strategies into USDA’s programs and operations, USDA better ensures that taxpayer resources are invested wisely and that USDA services and operations remain effective in current and future climate conditions.”

The spending bill for DOE would make a 10.6% cut in a program that includes climate research. In a statement, the House Energy Committee said that “The Climate and Environmental Sciences program devotes the majority of its funding to areas not directly related to the core mandate of science and technology research leading to energy innovations. Further, climate research at the Department of Energy is closely related to activities carried out in other federal agencies and may be better carried out by those organizations. The Department proposes to eliminate medical research focused on human applications in order to direct limited funds to on-mission purposes, and the Department should apply the same principles to climate and atmospheric research.”

At a June 22 hearing, members of the House Science, Space and Technology Committee criticized NOAA’s proposed National Climate Service. Chairman Ralph Hall (R-TX) expressed concern that NOAA was implementing the service without congressional approval and questioned the service’s impact on existing research.

NOAA Administrator Jane Lubchenco testified that the service has not yet been established and that, once it is, it would allow NOAA to meet increased demand for information needed to address drought, floods, and national security while strengthening science. She said, “This proposal does not grow government, it is not regulatory in nature, nor does it cost the American taxpayer any additional money. This is a proposal to do the job that Congress and the American public have asked us to do, only better.”

Robert Winokur, deputy oceanographer of the Navy, testified that although he could not comment on the structure of a climate service, the Navy needed actionable climate information focused on readiness and adaptation and that the current structure makes it difficult to obtain the needed information.

Several members, including Rep. Dana Rohrabacher (R-CA), reiterated Hall’s concern that NOAA was moving ahead with the climate service despite a provision in the FY 2011 appropriations bill that prohibits using funds for it. Rep. Paul Broun (R-GA) accused Lubchenco of “breaking the law” by still working to establish the climate service.

Role of government in social science research funding questioned

On June 2, the House Science, Space and Technology Subcommittee on Research and Science Education held a hearing to explore the government’s role in funding social, behavioral, and economic (SBE) science research. Chairman Mo Brooks (R-AL) said the goal of the hearing was not to question the merits of the SBE sciences, but to ask whether the government should support these “soft sciences.”

Ranking Member Daniel Lipinski (D-IL) said that support for NSF’s Directorate for Social, Behavioral, and Economic Sciences must continue, because the research funded is critical to programs such as disaster relief, benefits multiple government agencies and society, and is not funded elsewhere.

Myron Gutmann, assistant director of the SBE directorate, and Hillary Anger Elfenbein, associate professor at Washington University in St. Louis, supported Lipinski’s statement by touting the social and fiscal value of various directorate grants. Gutmann pointed to a study of auction mechanisms that was used by the Federal Communications Commission in developing auctions of spectrum, which he said ultimately netted the U.S. Treasury $54 billion. Gutmann cited another National Institutes of Health (NIH) study on economic matching theory that led to better matching of organ donors and recipients, resulting in an increase in the number of organs available for transplant and saving lives. He argued that if funding is cut for SBE research, society will be deprived of solutions to its problems.

Elfenbein stressed that the application of basic research within the purview of the NSF directorate is often unknown and can take years to be realized. She argued that grants should not be singled out for termination based solely on the title of the grant application. She said that in 2007, a member of Congress singled out her grant to be cut because of its title, at about the same time as the U.S. military contacted her about how the research could be applied in fighting the wars in Iraq and Afghanistan.

Peter Wood, president of the National Association of Scholars, supported the vast majority of SBE research, but said that a small portion of the research is politicized and should be eliminated. In response, Elfenbein noted that the peer-review process greatly diminishes the politicization of science.

Diana Furchtgott-Roth, a senior fellow at the Hudson Institute, stated that the majority of the NSF directorate’s research could be carried out by other organizations. She cited the economic research done by Adam Smith as proof that researchers can be successful without government funds and at low cost. Elfenbein and Gutmann responded that although other organizations fund SBE research, they typically fund only a small portion of the needed research, and each organization primarily targets applied research that fits mission needs. This means, they said, that some fields of science would remain unexplored.

When asked by Rep. Brooks how the government should cut funds if it were forced to do so, Elfenbein argued that the peer-review process should determine which projects get funded. She added that large cuts would turn away future Ph.D. candidates from the field. Gutmann said that even in a fiscally constrained period, it was important not to cut seed corn.

Science and technology policy in brief

• On July 27, U.S. District Judge Royce Lamberth ruled in favor of the Obama administration policy allowing the National Institutes of Health to conduct research on human embryonic stem cells. He dismissed a suit in which plaintiffs claimed that federal law forbids the use of government funds for the destruction of embryos. Meanwhile, Rep. Diana DeGette (D-CO) vowed to continue to push her bill that would codify into law the rules permitting ethical human embryonic stem cell research. DeGette reintroduced the Stem Cell Research Advancement Act (H.R. 2376) with a new Republican lead cosponsor, Rep. Charlie Dent of Pennsylvania. The bill would allow federal funding for research on stem cells obtained from donated embryos left over from fertility treatments, as long as the donations meet certain ethical criteria.

• On July 26, the House Natural Resources Subcommittee on Fisheries, Wildlife, Oceans and Insular Affairs held a hearing to examine how the National Oceanic and Atmospheric Administration’s (NOAA) fishery research affects the economies of coastal communities that rely on commercial or recreational fisheries. NOAA’s fishery research and management are performed under the Magnuson-Stevens Act, which requires the use of “best available science” in establishing catch limits. However, several representatives at the hearing shared the concern of constituents working in the fishing industry who do not believe that best available science is being used because many stock assessments are old, incomplete, or missing. Recommendations to improve NOAA’s regulatory decisionmaking included more partnerships with universities and other research institutions, greater transparency, and improved stakeholder involvement in data collection and standard setting.

• The Department of Commerce Economic and Statistics Administration issued a report on women in the science, technology, engineering, and math (STEM) workforce. The report, Women in STEM: A Gender Gap to Innovation, found that women continue to be “vastly underrepresented” and hold fewer than 25% of STEM jobs. On a brighter note, women in STEM jobs earned 33% more than comparable women in non-STEM jobs, the report said.


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science (www.aaas.org/spp) in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

Solving the Nation’s Security Affordability Problem

There is a clash coming in the next few years between the multiplicity and complexity of the security concerns facing the United States and the shrinking resources available to address them. Unfortunately, solving this growing mismatch between national security needs and declining budgets is being made far more difficult by cost trends within the Department of Defense (DOD). Almost across the board, including in equipment and personnel costs, the trends have been upward. At the same time, the trends for the federal budget are probably heading downward, driven by growing interest in controlling and even drastically cutting the nation’s spending.

To put this problem into a larger perspective, numerous historians and economists have highlighted the strong relationship between a nation’s security posture and its economic strength. Yet the escalating and huge projected costs of paying for retirement and health care for senior citizens are beginning to put enormous pressure on all other government spending, including spending for long-term investments in economic growth and national security. Every day, 10,000 more people become eligible for Social Security and Medicare. With this growth in nondiscretionary expenditures and the need for the nation to borrow in order to pay its tab, by 2017 the annual payment on the national debt will equal or exceed the defense budget. As Admiral Michael Mullen, chairman of the Joint Chiefs of Staff, stated in August 2010, “The single biggest threat to our national security is our debt.”

The only realistic answer to this security affordability problem will be to do more with less—to get more bang for the buck. Congress and the DOD must recognize this, and indeed there are some signs that this is happening. There also are a number of specific actions that the DOD and government can take to become more cost-effective while also increasing the nation’s capacity to ensure its security.

Problems on the rise

The record of security price tag creep is long. The costs of a number of weapons systems, already high, are rising rapidly. The next-generation fighter plane, the F-35, was to have cost $35 million each, but now is expected to cost more than $100 million each. The costs of supporting troops and equipment with inefficient legacy logistics systems, more than $270 billion in fiscal year 2009, are high compared with those of world-class commercial logistics operations. The costs for supplying energy to the defense establishment are high and rising; in 2009, DOD paid $20 billion for fuel.

Costs associated with supporting the military labor force are also increasing. Among various factors, medical costs are growing at twice the rate of the national average. Costs for providing health care for retirees and their families hit more than $50 billion in 2010, up from $19 billion in 2001.

Some of these costs, such as those for providing health care for an aging workforce, are matched in the overall economy. But the costs in weapons acquisition are opposite to trends in the commercial world. For example, each generation of computers routinely provides more performance at lower costs. Suppliers typically introduce next-generation systems in 18-month cycles, a far cry from the 15- to 20-year DOD weapons development cycles.

Beyond immediate costs, another problem is of growing concern: the nation’s international economic competitiveness. Fewer and fewer U.S. students are enrolling in science and engineering programs in the nation’s universities. Roughly 35% of graduate degrees granted at U.S. universities in these fields are going to foreign students on temporary visas. At the same time, other countries, including China and India, are encouraging and supporting students going into these fields. The United States already is seeing the results of this loss of competitiveness in the downward trend in its high-tech trade balances.

Clearly, the United States needs to encourage more of its students to go into science and technology. And why does the nation require foreign graduate students to sign a document agreeing to leave when they get their degree? Instead, the United States, after performing necessary security checks, should essentially staple green cards to their graduate degrees and encourage these students, along with their U.S. counterparts, to seek work in fields related to national security. Realizing that Enrico Fermi was not a U.S. citizen when he worked on the Manhattan Project and that many of the founders of high-technology startups in Silicon Valley were immigrants, it only makes sense to put these talented newcomers to work where they can best help the United States.

These education trends are almost certain to reduce innovation across a range of commercial and military areas. What makes this critical is that for the past half-century, the United States has based its strategic security on maintaining technological superiority. History has shown that the first area to be affected in a budget decline is longer-term research. In addition to the human capital challenge, the declining budgets will, of course, also have a huge impact on university science and technology innovation and on innovation in industry. Small firms would be particularly hard hit, because of the expected shrinkage of programs such as the Small Business Innovative Research program. The impact of this loss of innovation will be felt in future U.S. international economic competitiveness and national security challenges.

Yet even as economic pressures increase, the list of critical security concerns continues to grow. Instability worldwide is increasing, brought on by global economic turmoil, dissatisfaction with corrupt and incompetent dictators, the actions of fanatical religious groups, natural disasters perhaps exacerbated by climate change, and other factors. In 2010, the former Director of National Intelligence, Dennis Blair, stated that the number one external threat to U.S. security was this growing worldwide instability, which could easily draw the nation into conflicts for moral, humanitarian, or security reasons. As witnessed in the horrific events of September 11, 2001, these instabilities can create threats that cross the ocean barriers that previously had protected the continental United States.

Today, the nation faces terrorists, pirates, irrational dictators, and the like, and these adversaries have greater access than ever to increasingly lethal weapons. For example, about 100 countries now have ballistic missiles that can reach more and more distant targets. Cyber warfare is also becoming a concern, as potential adversaries develop sophisticated tools aimed at crippling military and civilian infrastructures and networks. There is also the growing threat of biological attacks by individuals, groups, or nations.

Responding to changing times

Clearly, today’s security environment is very different. Increasingly, civilian and military leaders, beginning with Defense Secretary Robert Gates, have begun to recognize that the nation must prepare for 21st-century security needs, even as fewer resources are available for the job. There are new technologies, including information technologies, biotechnologies, nanotechnologies, robotics, and others. There are new modes of warfare. There are new industrial structures that have resulted from the horizontal and vertical consolidations that marked the defense industry in the years after the Cold War and from the rapid advances in the high-tech commercial world. Critically important, there has been a globalization of technology, industry, labor, and coalition military operations.

In light of such globalization, it is apparent that the United States must use “soft power” (diplomacy, foreign aid, media campaigns, language and cultural education, and the like) along with the “hard power” of its military. The United States also must learn to work more closely with other nations in solving the new and emerging security concerns that cannot be addressed solely by any one nation. Signs of success are appearing, as multinational alliances have addressed conflicts in Iraq, Afghanistan, and Libya. Similarly, the United States will need China’s help with North Korea and Russia’s help with Iran.

Such international cooperation can have significant security and economic effects in a variety of ways. First and most obvious is the geopolitical benefit of a common interest in peace and stability in the world. There also are obvious potential benefits that come from the cofinancing and codevelopment of weapons, from the economies of scale that flow from the larger volume of common weapons produced, and from the sharing of technology, both economically and militarily. There is not a single U.S. weapon today that does not have foreign parts in it. This is because those parts are better, not because they are cheaper. Of course, the United States must remain mindful of the potential vulnerability of too much sourcing from abroad, but the presence of these parts gives the nation’s weapons higher military performance. Additionally, this sharing of technology among allies can and should result in the nations being able to operate together with each other’s weapons when fighting side by side against a common enemy, thereby greatly enhancing their combined military effectiveness.

There is obvious potential for cost saving as the United States and its allies share development, production, and support of weapon systems, and the savings could be of significant help in addressing each nation’s budget problems. But there are equally obvious concerns, with numerous historic cases to back them up, that such multinational efforts will be resisted by the United States as well as by its allies. For example, there is fear that weapons or technologies will leak to non-ally countries and be used by opponents in future conflicts, and fear that the technologies will be used by other countries in commercial applications and thus harm competitiveness. Within each country’s government there is political fear of losing jobs relative to a go-it-alone program. And there is fear that a partner will back out of a joint effort, as when Israel counted on France for its weapons, but France (for geopolitical reasons) stopped the supply, forcing Israel to develop its own defense industry.

There also are more practical concerns, such as institutional differences in procurement rules, budget cycles, and changing monetary exchange rates. In addition, designing weapons for common purposes sometimes can face conflicting goals. For example, one country may want aircraft that fly faster while another may want aircraft that fly higher. Producing aircraft that fly both faster and higher will add design complexity and cost.

Another area of potential cost savings is the increasing use of diplomacy and other forms of soft power in addressing U.S. security needs. As one illustration of such growth, the U.S. Southern Command and the new African Command have appointed civilian deputies from the State Department who are responsible for civil-military activities. To carry out soft-power activities, however, the State Department will need more money to support added civilian and military personnel overseas and significantly more money for foreign aid, language and culture education, and media investments. One way to address the current mismatch between available resources and staffing needs is to significantly reduce the large number of U.S. troops stationed in Europe (more than 79,000) and in Japan and South Korea (more than 62,000). Although maintaining some troops in these areas is necessary to show allies that they have U.S. support, the size of these standby forces could perhaps be reduced while still representing a credible deterrent.

Steps toward security

To address its growing security affordability problem—to get more national security capability for less money—the nation will have to take steps to change how the DOD operates. These steps fall in four main areas:

What the DOD buys. The government must make unit cost a design requirement. This is the common practice of the commercial world—meeting the price the market will pay—but it has not been the normal practice for the DOD. It has been accomplished in at least one case: development of the precision-guided air-to-ground JDAM munition. The Air Force Chief of Staff issued a three-part requirement, calling for a weapon that would work, would hit the target, and would cost less than $40,000 each. The munition was successfully developed for under $17,000 each, making it affordable in the quantities needed. The result parallels the commercial computing trend of higher performance at lower cost and shows that the DOD can clearly achieve the same.

Changing what the DOD buys also will require overcoming the cultural resistance of the military and the defense industry. With the support of Congress, these sectors continue to buy the ships, airplanes, tanks, and other weapons of the 20th century, rather than shifting to the weapons required for the 21st century. The new century will see more asymmetric warfare that incorporates features such as intelligence equipment; information systems; unmanned air, land, and sea systems; antimissile systems; and networks of land warriors outfitted with advanced weapons and other technological tools. Additionally, new equipment will need to be designed with the assumption that it will be used in what has been termed a net-centric system. Such an integrated system will include distributed sensors and shooters, rather than requiring every weapon to be self-sufficient and therefore extremely complex and expensive. In this way, secure information technology can be used as a force multiplier to achieve increased military effectiveness. At the same time, the lower cost of individual elements will enable far larger numbers to be acquired, thus lowering their costs still further through economies of scale. The bottom line is that the military will gain greater numbers of distributed sensors and shooters in the most affordable way possible.

How the DOD buys. Congress and the administration must change the way the government does its business. It must shift from a compliance mentality (that is, relying on thousands of rules on how to do something) to a results mentality in which flexibility and experimental judgment are encouraged in order to achieve desired outcomes in performance, cost, and schedule. To foster such a shift, the government should establish incentives and rewards for innovation in products and processes that result in continuous performance improvements, at lower and lower costs.

Continuous competition aimed at achieving best value, not just lowest cost, is the demonstrated model in the commercial world. But even today, the DOD is resisting continuous competition on two of its biggest procurements, both for the Air Force: the second engine for the F-35 advanced fighter plane and the KC-45 refueling tanker aircraft. The DOD still apparently clings to the belief that “this time the government will get it right” and will hold down costs in the face of program changes driven by new technology, new threats, new mission needs, and the like. But history has shown repeatedly that such cost control is unlikely to happen in a sole-source environment. On the other hand, it is very likely to happen in a continuously competitive environment in which industry is given incentives to reach goals in performance and cost.

In addition to lowering procurement costs, the DOD needs to shorten its development cycles. During the Cold War, it could take a decade or more to move new equipment from development and production to full deployment in the field. There was little security risk, because the Soviet Union moved just as slowly. But today, technology advances much faster. Moreover, U.S. military expeditionary operations frequently face the need to obtain new lifesaving or mission-saving response capabilities within a matter of days. Clearly, the government’s acquisition and budgeting systems must be revised to be far more responsive whenever the need arises.

Who does the buying. The government needs to build a workforce of experienced, smart buyers. In the years after the end of the Cold War, Congress and the DOD began to greatly undervalue the importance of the acquisition workforce and took steps to significantly shrink it. In 1996, Congress mandated a further 25% reduction. Even after the 9/11 attacks led government to greatly expand its defense and national security spending, the DOD continued to neglect the acquisition workforce. For example, in 1990 the Army had five officers who held the rank of general and had backgrounds in contracting; in 2008, it had none. During the same period, the Air Force cut in half its complement of acquisition officers and civilian members of the Senior Executive Service who followed acquisitions. The Defense Contract Management Agency went from having four general officers to none and from having 25,000 employees to 10,000. Only recently has this shortcoming been recognized, but it will take many years for it to be fully corrected. A short-term fix would be to bring in experienced people from industry, under the government’s allowable category of “highly qualified experts.”

Who the DOD buys from. Because of the consolidations within the defense industry after the Cold War, the number of major system contractors fell from 50 to 5. Additionally, because of the vertical integration that took place at the critical subsystem level, there often was a shift from a competitive-buy mode to a sole-source award to a captive division. Two other dramatic changes in the industrial world also occurred during this period: the explosion of high-tech commercial companies, particularly in information technology, and the globalization of technology and industry. As a result, the government’s laws, regulations, and practices, historically unique to the DOD and Congress, increasingly have served to isolate the military from the best available performance and lower cost of goods and services in the commercial and global markets. In order to correct this, the government needs to improve its laws, regulations, and practices in a number of key areas, including export and import controls, procurement practices, and specialized accounting. The new system should provide industry with incentives that will reward companies that achieve higher performance and lower cost results. International firms also can be encouraged to participate by removing the legal barriers they now face.

Cultural change ahead

Making changes in these areas will require an overall cultural change. Fortunately, the literature on culture change is clear. For it to happen, two things are required. First, there must be widespread recognition of the need for change. Second, there must be leadership that clearly articulates a vision, a strategy, and a set of actions, and backs up the commitment with appropriate assignment of responsibility, shifts in resources, and designations of milestones and metrics for assessing change.

These requirements are being met today. There is widespread recognition of the need for change, and Defense Secretary Gates has articulated this vision for the DOD: “the need to do more, without more.” Certainly, there will be resistance, and the job will be challenging. But it must be done, and can be done. The nation’s future national security depends on it.

Apprenticeships: Back to the Future

Concerns about the rising cost of college education, the growing need for remedial and developmental education among new college students, and the low persistence and graduation rates among at-risk students have prompted education officials and policymakers to look for ways to fix college. But maybe college is not broken. Perhaps the real problem is that too many students enroll in college not because they want to be there, or because they are enthusiastic about learning, or because they believe that college will provide the right kind of learning experience. Instead, they enroll because they lack other career preparation alternatives.

Pressure to go to college can be great. Students may have heard from parents, high-school staff, elected officials, and the media that only by going to college can they enjoy a financially secure future and social prestige. Sometimes, even against their better judgment, students decide to give college a try (or a second or third try) without fully understanding the personal commitment they must make to be successful in higher education.

Compounding matters, the college experience may not be a good fit for all students. Some have educational gaps from high school that make them ill-prepared for advanced academics. Some have families or must work to make ends meet, and these demands can leave precious little time for studies. For some, the classroom is never going to provide the optimal learning environment, because they learn best by engaging in activities that yield tangible products. Such kinesthetic learners learn best by doing, and many college programs provide little doing time relative to listening or reading time.

Rather than trying to fix college, or fix students, a more effective approach would be to expand the postsecondary options available so that each student can find the right path to success based on his or her personal and professional goals, life circumstances, learning style, and academic preparedness. Apprenticeship programs offer one such alternative. Put simply, apprenticeship programs can efficiently and effectively prepare students for jobs in a variety of fields.

Many other countries already have learned this lesson. In Germany and Switzerland, for example, apprenticeships are a critical part of the secondary education system, and most students complete an apprenticeship even if they plan to pursue postsecondary education in the future. It is not uncommon for German or Swiss postsecondary institutions to require students to complete an apprenticeship before enrolling in a tertiary education program. In this way, apprenticeships are an important part of the education continuum, including for engineers, nurses, teachers, finance workers, and myriad other professionals.

In the United States, however, apprenticeships generally have been considered to be labor programs for training students to work in the skilled trades or crafts. They are not viewed as education programs, so they have not become a conventional part of most secondary or postsecondary systems or programs. This leaves untapped a rich opportunity for the nation, as well as for the host of students who might find an apprenticeship to be an attractive route into the future.

Apprenticeship advantages

An apprenticeship is a formal, on-the-job training program through which a novice learns a marketable craft, trade, or vocation under the guidance of a master practitioner. Most apprenticeships include some degree of theoretical classroom instruction in addition to hands-on practical experience. Classroom instruction can take place at the work site, on a college campus, or through online instruction in partnership with public- or private-sector colleges.

Some apprenticeships are offered as one-year programs, though most span three to six years and require apprentices to spend at least 2,000 hours on the job. Apprentices are paid a wage for the time they spend learning in the workplace. Some apprenticeship sponsors also pay for time spent in class, whereas others do not. Some sponsors cover the costs associated with the classroom-based portion, whereas others require apprentices to pay tuition out of their wages. All of these details are part of the apprenticeship contract, which provides the apprentice with a clear understanding of the requirements of the program, the expectations of the apprentice, and the obligations of the sponsor, including wages and tuition support.

Unlike in the traditional college setting, where students may be forced to complete years of theoretical training before having the opportunity to apply that knowledge to a practical challenge or problem, apprentices participate in real work from the first day of their programs. This alignment between theoretical and practical learning is likely to improve student mastery, and the challenges that arise naturally in the workplace provide authentic opportunities to cultivate critical thinking skills in ways that the contrived classroom environment cannot. Additionally, master practitioners may be a more credible source of information and training to some students than are academics who may not have direct experience working in the field for which they are preparing students.

Apprenticeships also aid learners by surrounding them with ready-made role models and mentors who help the novices develop and refine their skills, while also introducing them to the culture of the work environment and the mores of the field for which they are training. Apprentices have an opportunity to see firsthand what is required for career advancement and to observe the personal characteristics common among those who have been successful in the field. Importantly, in the workplace apprentices also are likely to learn not just how to use a piece of equipment but also how to maintain and repair it. This rarely occurs in the traditional classroom setting.

On a practical level, apprenticeships provide students who must earn income while in school with the opportunity to engage in work that supports and reinforces learning rather than distracting from it. Students who attend traditional colleges but who must also work considerable hours outside of school often feel torn between the demands of work and the demands of school, which are generally unrelated. For apprentices, on the other hand, school is work and work is school, so learning and working occupy the same space and time, rather than competing for attention.

Current U.S. programs

Apprenticeship programs do exist in the United States, but they are vastly underused, poorly coordinated, nonstandardized, and undervalued by students, parents, educators, and policymakers. The first successful federal legislative effort to promote and coordinate apprenticeships was the National Apprenticeship Act of 1937, commonly known as the Fitzgerald Act. This act treated apprentices not as students but as laborers, and it authorized the Department of Labor (DOL) to establish minimum standards to protect the health, safety, and general welfare of apprentice workers. The DOL still retains oversight responsibility through its Office of Apprenticeships, but the office receives an anemic annual appropriation of around $28 million.

Among its activities, the Office of Apprenticeships administers the Registered Apprenticeship program. Sponsors of registered apprenticeships include, for the most part, employers or groups of employers, often in partnership with labor unions. Sponsors recruit and hire apprentices, determine the content for training, identify partners for classroom instruction, and develop formal agreements with apprentices regarding the skills to be taught and learned, wages to be paid, and requirements of classroom instruction. In each state, the DOL supports a state apprenticeship agency that certifies apprenticeship sponsors, issues certificates of completion to apprentices, monitors the safety and welfare of apprentices, and ensures that women and minorities are not victims of discriminatory practices.

In 2007, the latest year for which data are available, there were approximately 28,000 Registered Apprenticeship programs involving approximately 465,000 apprentices. Most of the programs were in a handful of fields and industries, including construction and building trades, building maintenance, automobile mechanics, steamfitting, machining, tool and die work, and child care.

For much of the past decade, the DOL has been working with various federal agencies and external constituencies to explore the feasibility of expanding the use of registered apprenticeships to train workers for health care fields. A recent department report indicates that since the beginning of this effort, apprenticeship programs have been developed in 40 health care occupations, with the total number of programs increasing to 350 from around 200. The DOL continues to work with several large industry partners to develop apprenticeship programs in clinical care, nursing management, and health care information technology. But the department acknowledges that many employers in the health care industry remain unaware of the Registered Apprenticeship program or do not understand the benefits that such training offers.

The DOL also sees opportunities for expanding the role of registered apprenticeships to train workers in green technologies and processes. However, stakeholders have expressed concern about the growing need to provide pre-apprenticeship training in order to increase the number and diversity of qualified applicants to apprenticeship programs. In particular, stakeholders have highlighted the need to provide early training in mathematics, science, writing, computer literacy, and customer service to many applicants, and in particular to women and to young people from impoverished communities, in order to help them qualify for apprenticeship programs.

Moving to larger scales

Some of the promise of registered apprenticeships can be seen in their track record. In a 2007 survey commissioned by the DOL, sponsors of registered apprenticeships were asked what they valued, disliked, or would like to see changed about the programs. In general, sponsors said they were largely pleased and would strongly recommend registered apprenticeships to others. In one common positive theme, sponsors reported that the programs had high completion rates on the part of their apprentices: 40% of sponsors reported completion rates of 90 to 100%, 21% reported completion rates of 70 to 89%, and 17% reported completion rates of 50 to 69%. Sponsors also said that the use of current employees to train new workers was not too costly or burdensome. Some sponsors were concerned about trained apprentices being poached by other employers, but not enough to see this as a deterrent to apprentice sponsorship.

Improvements appear to be needed, however, before registered apprenticeship programs can be moved to a larger scale. Critical issues to address include developing mechanisms to improve the rigor, quality, and consistency of the programs; elevating the status of the credentials granted; standardizing elements of the curriculum and assessment systems to ensure better transferability of credentials; developing pathways that will enable apprentices to seamlessly apply their credential toward an undergraduate or advanced degree; and perhaps most important, improving public perception of apprenticeship programs by refuting the longstanding myths that apprenticeships serve individuals with low abilities who are destined for dead-end jobs.

In looking to address such challenges, the United States might look to the Swiss model, one of the most successful in the world. In Switzerland, almost 70% of students between the ages of 16 and 19 participate in dual-enrollment vocational education and training (VET) programs, which require students to go to school for one to two days per week and spend the rest of their time in paid on-the-job training programs that last three to four years. Although the Swiss VET model is primarily a secondary school program, many of the principles on which it operates could be incorporated into the U.S. postsecondary apprenticeship system.

There are numerous advantages for students enrolled in VET programs, including the ability to earn a wage while learning, experience a career before making a lifetime commitment, and learn under the guidance of a master practitioner. Beyond that, VET programs may have added social benefits in that 16-year-olds might be influenced to make better decisions when surrounded by mature and experienced mentors and colleagues, as opposed to when they are cloistered among their peers.

The Swiss VET model does not ignore the importance of developing core theoretical knowledge in addition to applied vocational knowledge. On the contrary, students are required to enroll in general education and vocational education classes taught in local vocational schools and in industry learning centers, in addition to participating in the on-the-job training programs. Critical to the success of the VET system are the careful collaboration and coordination between workplace trainers and school-based educators, who work hard to ensure alignment between what the students are learning in school and at work.

Apprentices are subjected to regular assessments in the classroom and on the job, culminating in final exams associated with certification. In 2008, the completion rate for Swiss apprentices was 79%, and the exam pass rate among program completers was 91%. One of the main benefits of the Swiss apprenticeship system is that nearly 70% of all students participate in it, which means that students of all socioeconomic and ability levels are engaged in this form of learning. Such widespread involvement prevents the social stigmatization of apprenticeship programs, unlike in the United States, where social prestige is almost exclusively reserved for college-based education and training. Moreover, because students entering dual-track VET programs are frequently high performers, they are academically indistinguishable from the students who elect university education rather than vocational training or dual education. As a result, Swiss dual-track VET students are likely to enter the workplace well prepared, with strong academic skills.

Changing perceptions

With the Swiss model illustrating the promise, a number of needs stand out. Perhaps the greatest challenge in promoting wide-scale implementation of high-quality apprenticeship programs in the United States centers on reversing the public perception that only people with a college education will enjoy satisfying and financially rewarding employment. Public policy officials and education leaders are quick to tell students that a college degree practically guarantees higher lifetime earnings, when, in fact, there is no evidence that such is the case for a given individual.

Much of the rhetoric around lifetime earnings is based on the U.S. Census Bureau’s 2002 report The Big Payoff, which projected future worklife earnings based on wage data collected during 1998, 1999, and 2000. The study’s results, which suggested that those with a college degree could earn $1 million more during their work lifetime than those with just a high-school diploma, have been misconstrued by many observers to constitute a guarantee that a college degree will increase an individual’s earnings by $1 million.

People who tout the results of this survey generally neglect to disclose what the authors included in the fine print: There is a great deal of variability in earnings even among those who hold a bachelor’s degree. The majors that the students selected, their personal ambitions, the nature of the career paths they selected, their work status (full-time versus part-time and continual employment versus intermittent), and their individual efforts all have a significant impact on their actual wages. In other words, the average earning level of individuals with a bachelor’s degree in business or engineering might be well above that of a similarly educated teacher, social worker, journalist, or dancer if they remain in the field for which they trained. What the report actually shows is that the big payoff comes to those who earn a professional degree, such as a medical or law degree.

Unfortunately, the report aggregates all workers without a college degree into a single category, failing to distinguish between those in the skilled trades and craftspeople who have high earning potentials, versus unskilled laborers who tend to earn the lowest wages. A more reasoned approach would have been to disaggregate workers without college degrees by occupation and skill level, similar to the way the study authors disaggregated college-educated workers by level of degree attainment.

Another report commonly cited as justification for college completion is the Department of Labor’s Occupational Outlook Handbook. Those who emphasize the importance of a college degree generally refer to the report’s projections of the top 20 fastest-growing careers, of which 12 require an associate’s degree or higher, as the reason that everyone should earn a college credential. However, the percentage of growth is not the important number, because small occupations might experience a high growth rate while creating relatively few new jobs. Topping the list of fastest-growing occupations is biomedical engineering, for which growth of 72% is anticipated over the next 10 years. Unfortunately, that growth rate equates to only 11,600 more jobs over that period, or only 1,160 new jobs per year. This figure should not be used to encourage students to become biomedical engineers but instead should raise serious questions about why the nation is training so many biomedical engineers when so few will probably get jobs in the field.
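
The arithmetic behind this point is worth making explicit. The sketch below is illustrative only: the biomedical engineering figures (72% growth, roughly 11,600 new jobs over a decade) come from the text, the employment base is back-calculated from them, and the large comparison occupation is hypothetical.

    # Illustrative arithmetic: new jobs = current employment x projected growth rate.
    # The biomedical figures are taken from the text; the large occupation is hypothetical.
    def new_jobs(current_employment: float, growth_rate: float) -> float:
        """Absolute number of new jobs implied by a percentage growth projection."""
        return current_employment * growth_rate

    biomed_base = 11_600 / 0.72                # ~16,100 jobs today, implied by the cited figures
    print(round(new_jobs(biomed_base, 0.72)))  # ~11,600 new jobs over ten years (~1,160 per year)

    large_base = 1_000_000                     # hypothetical large occupation
    print(round(new_jobs(large_base, 0.20)))   # 200,000 new jobs despite a far lower growth rate

A growth rate answers a different question than a job count does, which is why the table of numerical growth, discussed next, is the more useful guide for workforce planning.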

The important data table included in the Handbook is the one that lists the top 20 occupations projected to have the largest numerical growth. One-third of all new jobs will be in these 20 occupations, so these are the jobs for which the large majority of workers should be prepared. Only three of the occupations with the largest numerical growth appear on the list of fastest-growing professions—home health aides, personal and health care aides, and computer software applications engineers—and of those professions, only one requires a college credential. In fact, of the top 20 occupations with the largest numerical growth, only 6 require an associate’s degree or higher. Therefore, certificate and apprenticeship programs may well prove to be a much more efficient and appropriate way to train workers for the majority of new jobs that will be created during the next 10 years.

If the nation wants to create a successful apprenticeship program, it also must eliminate the stigmatization of apprenticeship programs and those who work in skilled trades. To do so, steps will be needed to ensure that Registered Apprenticeship programs are attractive to all students and not just low-income students or those who are poor performers in high school. Following the model of the Swiss, apprenticeship programs should be considered as a step along an educational continuum rather than a dead end. Moreover, apprenticeships should be developed for occupations traditionally associated with a liberal arts education, such as engineering, communications, banking, and teacher professional development. It is short-sighted to assume that only economically disadvantaged or low-achieving students can benefit from apprenticeship training. However, without a concerted public awareness and information campaign, teachers and parents are unlikely to be supportive of the apprenticeship pathway, given the policy focus on college completion.

A significant barrier to the integration of Registered Apprenticeship programs into the postsecondary system is the lack of mechanisms for evaluating student achievement and assigning academic credit for the hands-on portion of an apprentice’s training program. For example, there has been no comprehensive effort to track apprentice outcomes beyond completion of the program, so it is not known if those who complete an apprenticeship, earning the title of journeyperson, enjoy improved job security, more rapid advancement, or higher earnings than those who enter their profession through some other means, including less formal on-the-job training, high-school vocational training, or having earned a college-based certificate. Following the Swiss model, the use of outside third-party evaluators to assess student competencies may add credibility to student assessments and allow for the development of standards for awarding academic credit for apprentice activities. The United States also could adopt the Swiss system of exam-based certificates that individuals can earn to demonstrate their professional competencies and their readiness for advanced academic work.

Steps to success

The bottom line is that a well-organized, well-publicized, and well-supported national system of Registered Apprenticeship programs would address a number of growing concerns regarding the shortcomings of the current U.S. system of higher education. Dual-track apprenticeship programs that actively engage both traditional educators and master practitioners in a coordinated training and education effort have tremendous benefits for all involved. Traditional educators learn from master practitioners about real-world applications of the topics they teach. Master practitioners learn from experienced teachers how best to mentor and teach young workers.

In order to raise awareness, ensure consistent quality, and enable long-term career mobility among those who are interested in apprenticeship training, the following changes are required:

  • The DOL’s Office of Apprenticeships should develop a system similar to that of national higher education accreditors to provide oversight of apprenticeship programs based on the field for which apprentices are being trained. These bodies should be involved in the development of curricula, performance standards, and assessments of learning to ensure that apprenticeship experiences do not differ from one employer or one region to another. These bodies can also provide third-party validation of quality, which will improve the value and transportability of the completion credential.
  • The Office of Apprenticeships should create a national database to improve the dissemination of information about Registered Apprenticeship programs, including the number of opportunities available in each geographic region, the requirements of each program, and the wages provided to each participant. Information about the application and selection process should also be publicly available, and students should be able to compare the various programs through a portal similar to College Navigator, the free online tool administered by the National Center for Education Statistics, part of the Department of Education, that enables students, parents, high-school counselors, and others to get information about more than 7,000 colleges nationwide.
  • The Department of Education should be required to include information about the Registered Apprenticeship program in all of its printed materials and Web sites, including College Navigator, that are intended to help students prepare for and select a college.
  • Public policymakers should be careful to include apprenticeship training in any policies they develop or statements they make to encourage participation in postsecondary education.
  • National and regional accrediting bodies should work collaboratively to develop standards by which apprenticeship experiences can be evaluated for academic credit toward a degree in a related area.
  • Workplace-based trainers and classroom-based instructors should be required to obtain certifications, based on significant professional development requirements, to address the unique aspects of vocational education. In addition, routine collaboration between classroom and workplace instructors should be required as a condition for participation in the Registered Apprenticeship program.
  • The federal government should conduct and support active public information campaigns to promote the benefits of apprenticeship training, in the same way that it has invested heavily in efforts to increase college access and completion.
  • The federal government should explore the use of tax incentives to encourage greater participation by private-sector firms in the sponsorship of dual-track apprenticeship programs, especially in light of the potential savings these programs would provide to taxpayers.

There is no doubt that today’s adults need far more education than did adults who completed their compulsory education just two generations ago. But the signs are clear that traditional postsecondary education is not the only way—or in some cases the best way—to prepare all individuals for the careers they are likely to pursue. Given the challenges the nation faces in the current system of higher education, it is indeed time to look back to the future in strengthening and revitalizing the age-old model of apprenticeship training in order to prepare individuals for the jobs they are likely to hold and the higher education they may wish to pursue in the future.

Promoting Research and Development: The Government’s Role

The Nobel Prize–winning economist Robert E. Lucas Jr. wrote that once one starts thinking about long-run growth and economic development, “it is hard to think about anything else.” Although I don’t think I would go quite that far, it is certainly true that relatively small differences in rates of economic growth, maintained over a sustained period, can have enormous implications for material living standards. A growth rate of output per person of 2.5% per year doubles average living standards in 28 years—about one generation—whereas output per person growing at what seems a modestly slower rate of 1.5% a year leads to a doubling in average living standards in about 47 years—roughly two generations. Compound interest is powerful! Of course, factors other than aggregate economic growth contribute to changes in living standards for different segments of the population, including shifts in relative wages and in rates of labor market participation. Nonetheless, if output per person increases more rapidly, the prospects for greater and more broad-based prosperity are significantly enhanced.
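
The doubling times cited here follow directly from compound growth: output doubles after t years when (1 + g)^t = 2, so t = ln 2 / ln(1 + g). A minimal check of that arithmetic, not part of the original remarks, is sketched below.

    # Doubling time under constant annual growth g: (1 + g)^t = 2  =>  t = ln(2) / ln(1 + g).
    import math

    def doubling_time(g: float) -> float:
        """Years for output per person to double at constant annual growth rate g."""
        return math.log(2) / math.log(1 + g)

    for g in (0.025, 0.015):
        print(f"{g:.1%} growth: doubles in about {doubling_time(g):.0f} years")
    # 2.5% growth: doubles in about 28 years
    # 1.5% growth: doubles in about 47 years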

Over long spans of time, economic growth and the associated improvements in living standards reflect a number of determinants, including increases in workers’ skills, rates of saving and capital accumulation, and institutional factors ranging from the flexibility of markets to the quality of the legal and regulatory frameworks. However, innovation and technological change are undoubtedly central to the growth process; over the past 200 years or so, innovation, technical advances, and investment in capital goods embodying new technologies have transformed economies around the world. In recent decades, as this audience well knows, advances in semiconductor technology have radically changed many aspects of our lives, from communication to health care. Technological developments further in the past, such as electrification or the internal combustion engine, were equally revolutionary, if not more so. In addition, recent research has highlighted the important role played by intangible capital, such as the knowledge embodied in the workforce, business plans and practices, and brand names. This research suggests that technological progress and the accumulation of intangible capital have together accounted for well over half of the increase in output per hour in the United States during the past several decades.

Innovation has not only led to new products and more-efficient production methods, but it has also induced dramatic changes in how businesses are organized and managed, highlighting the connections between new ideas and methods and the organizational structure needed to implement them. For example, in the 19th century, the development of the railroad and telegraph, along with a host of other technologies, was associated with the rise of large businesses with national reach. And as transportation and communication technologies developed further in the 20th century, multinational corporations became more feasible and prevalent.

Economic policy affects innovation and long-run economic growth in many ways. A stable macroeconomic environment; sound public finances; and well-functioning financial, labor, and product markets all support innovation, entrepreneurship, and growth, as do effective tax, trade, and regulatory policies. Policies directed at objectives such as the protection of intellectual property rights and the promotion of research and development, or R&D, promote innovation and technological change more directly.

I will focus on one important component of innovation policy: government support for R&D. As I have already suggested, the effective commercial application of new ideas involves much more than just pure research. Many other factors are relevant, including the extent of market competition, the intellectual property regime, and the availability of financing for innovative enterprises. That said, the tendency of the market to supply too little of certain types of R&D provides a rationale for government intervention; and no matter how good the policy environment, big new ideas are often ultimately rooted in well-executed R&D.

The rationale for a government role

Governments in many countries directly support scientific and technical research; for example, through grant-providing agencies (like the National Science Foundation in the United States) or through tax incentives (like the R&D tax credit). In addition, the governments of the United States and many other countries run their own research facilities, including facilities focused on nonmilitary applications such as health. The primary economic rationale for a government role in R&D is that, without such intervention, the private market would not adequately supply certain types of research. The argument, which applies particularly strongly to basic or fundamental research, is that the full economic value of a scientific advance is unlikely to accrue to its discoverer, especially if the new knowledge can be replicated or disseminated at low cost. For example, James Watson and Francis Crick received a minute fraction of the economic benefits that have flowed from their discovery of the structure of DNA. If many people are able to exploit, or otherwise benefit from, research done by others, then the total or social return to research may be higher on average than the private return to those who bear the costs and risks of innovation. As a result, market forces will lead to underinvestment in R&D from society’s perspective, providing a rationale for government intervention.
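
The spillover argument can be stated compactly. In the stylized sketch below (textbook notation assumed for illustration, not drawn from the text), a firm invests in R&D until the marginal private return equals marginal cost, whereas the social optimum equates the larger marginal social return with marginal cost; with diminishing returns, the market level of investment falls short.

    % Stylized underinvestment argument; notation assumed for illustration.
    % r_p(I): marginal private return to R&D investment I; r_s(I): marginal social return,
    % with r_s(I) > r_p(I) because of spillovers; C'(I): marginal cost, nondecreasing.
    r_p(I^{*}) = C'(I^{*}) \quad \text{(private choice)}, \qquad
    r_s(I^{**}) = C'(I^{**}) \quad \text{(social optimum)}.
    % Since r_s(I^{*}) > r_p(I^{*}) = C'(I^{*}) and returns diminish, I^{**} > I^{*}.
    % The gap I^{**} - I^{*} is the underinvestment that grants, tax credits,
    % and public laboratories are meant to narrow.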

One possible policy response to the market underprovision problem would be to substantially strengthen the intellectual property rights regime; for example, by granting the developers of new ideas strong and long-lasting claims to the economic benefits of their discoveries—perhaps by extending and expanding patent rights. This approach has significant drawbacks of its own, however, in that strict limitations on the free use of new ideas would inhibit both further research and the development of valuable commercial applications. Thus, although patent protections and similar rules remain an important part of innovation policy, governments have also turned to direct support of R&D activities.

Of course, the rationale for government support of R&D would be weakened if governments had consistently performed poorly in this sphere. Certainly there have been disappointments; for example, the surge in federal investment in energy technology research in the 1970s, a response to the energy crisis of that decade, achieved less than its initiators hoped. In the United States, however, we have seen many examples—in some cases extending back to the late 19th and early 20th centuries—of federal research initiatives and government support enabling the emergence of new technologies in areas that include agriculture, chemicals, health care, and information technology. A case that has been particularly well documented and closely studied is the development of hybrid seed corn in the United States during the first half of the 20th century. Two other examples of innovations that received critical federal support are gene splicing—federal R&D underwrote the techniques that opened up the field of genetic engineering—and the lithium-ion battery, which was developed by federally sponsored materials research in the 1980s. And recent research on the government’s so-called War on Cancer, initiated by President Nixon in 1971, finds that the effort has produced a very high social rate of return, notwithstanding its failure to achieve its original ambitious goal of eradicating the disease.

What about the present? Is government support of R&D today at the “right” level? This question is not easily answered; it involves not only difficult technical assessments but also a number of value judgments about public priorities. As background, however, a consideration of recent trends in expenditures on R&D in the United States and the rest of the world should be instructive. In the United States, total R&D spending (both public and private) has been relatively stable over the past three decades, at roughly 2.5% of gross domestic product (GDP). However, this apparent stability masks some important underlying trends. First, since the 1970s, R&D spending by the federal government has trended down as a share of GDP, while the share of R&D done by the private sector has correspondingly increased. Second, the share of R&D spending targeted to basic research, as opposed to more applied R&D activities, has also been declining. These two trends—the declines in the share of basic research and in the federal share of R&D spending—are related, as government R&D spending tends to be more heavily weighted toward basic research and science. The declining emphasis on basic research is somewhat concerning because fundamental research is ultimately the source of most innovation, albeit often with long lags. Indeed, some economists have argued that because of the potentially high social return to basic research, expanded government support for R&D could, over time, significantly boost economic growth. That said, in a time of fiscal stringency, Congress and the administration will clearly need to carefully weigh competing priorities in their budgetary decisions.

Another argument sometimes made for expanding government support for R&D is the need to keep pace with technological advances in other countries. R&D has become increasingly international, thanks to improved communication and dissemination of research results, the spread of scientific and engineering talent around the world, and the transfer of technologies through trade, foreign direct investment, and the activities of multinational corporations. To be sure, R&D spending remains concentrated in the most-developed countries, with the United States still the leader in overall R&D spending. However, in recent years, spending on R&D has increased sharply in some emerging market economies, most notably in China and India. In particular, spending for R&D by China has increased rapidly in absolute terms, although recent estimates still show its R&D spending to be smaller relative to GDP than in the United States. Reflecting the increased research activity in emerging market economies, the share of world R&D expenditures by member nations of the Organization for Economic Co-Operation and Development, which mostly comprises advanced economies, has fallen relative to nonmember nations, which tend to be less developed. A similar trend is evident, by the way, with respect to science and engineering workforces.

How should policymakers think about the increasing globalization of R&D spending? On the one hand, the diffusion of scientific and technological research throughout the world potentially benefits everyone by increasing the pace of innovation globally. For example, the development of the polio vaccine in the United States in the 1950s provided enormous benefits to people globally, not just Americans. Moreover, in a globalized economy, product and process innovations in one country can lead to employment opportunities and improved goods and services around the world.

On the other hand, in some circumstances the location of R&D activity can matter. For example, technological prowess may help a country reap the financial and employment benefits of leadership in a strategic industry. A cutting-edge scientific or technological center can create a variety of spillovers that promote innovation, quality, skills acquisition, and productivity in industries located nearby; such spillovers are the reason that high-tech firms often locate in clusters or near leading universities. To the extent that countries gain from leadership in technologically vibrant industries or from local spillovers arising from inventive activity, the case for government support of R&D within a given country is stronger.

How should governments provide support?

The economic arguments for government support of innovation generally imply that governments should focus particularly on fostering basic, or foundational, research. The most applied and commercially relevant research is likely to be done in any case by the private sector, as private firms have strong incentives to determine what the market demands and to meet those needs.

If the government decides to foster R&D, what policy instruments should it use? A number of potential tools exist, including direct funding of government research facilities, grants to university or private-sector researchers, contracts for specific projects, and tax incentives. Moreover, within each of these categories, many choices must be made about how to structure specific programs. Unfortunately, economists know less about how best to channel public support for R&D than we would like; it is good news, therefore, that considerable new work is being done on this topic, including recent initiatives on science policy by the National Science Foundation.

Certainly, the characteristics of the research to be supported are important for the choice of the policy tool. Direct government support or conduct of the research may make the most sense if the project is highly focused and large-scale, possibly involving the need for coordination of the work of many researchers and subject to relatively tight time frames. Examples of large-scale government-funded research include the space program and the construction and operation of “atom-smashing” facilities for experiments in high-energy physics. Outside of such cases, which often are linked to national defense, a more decentralized model that relies on the ideas and initiative of individual researchers or small research groups may be most effective. Grants to, or contracts with, researchers are the typical vehicle for such an approach.

Of course, the success of decentralized models for government support depends on the quality of execution. Some critics believe that funding agencies have been too cautious, focusing on a limited number of low-risk projects and targeting funding to more-established scientists at the expense of researchers who are less established or less conventional in their approaches. Supporting multiple approaches to a given problem at the same time increases the chance of finding a solution; it also increases opportunities for cooperation or constructive competition. The challenge to policymakers is to encourage experimentation and a greater diversity of approaches while simultaneously ensuring that an effective peer-review process is in place to guide funding toward high-quality science.

However it is channeled, government support for innovation and R&D will be more effective if it is thought of as a long-run investment. Gestation lags from basic research to commercial application to the ultimate economic benefits can be very long. The Internet revolution of the 1990s was based on scientific investments made in the 1970s and 1980s. And today’s widespread commercialization of biotechnology was based, in part, on key research findings developed in the 1950s. Thus, governments that choose to provide support for R&D are likely to get better results if that support is stable, avoiding a pattern of feast or famine.

Government support for R&D presumes sufficient national capacity to engage in effective research at the desired scale. That capacity, in turn, depends importantly on the supply of qualified scientists, engineers, and other technical workers. Although the system of higher education in the United States remains among the finest in the world, numerous concerns have been raised about this country’s ability to ensure adequate supplies of highly skilled workers. For example, some observers have suggested that bottlenecks in the system limit the number of students receiving undergraduate degrees in science and engineering. Surveys of student intentions in the United States consistently show that the number of students who seek to major in science and engineering exceeds the number accommodated by a wide margin, and waitlists to enroll in technical courses have trended up relative to those in other fields, as has the time required to graduate with a science or engineering degree. Moreover, although the relative wages of science and engineering graduates have increased significantly over the past few decades, the share of undergraduate degrees awarded in science and engineering has been roughly stable. At the same time, critics of K-12 education in the United States have long argued that not enough is being done to encourage and support student interest in science and mathematics. Taken together, these trends suggest that more could be done to increase the number of U.S. students entering scientific and engineering professions.

At least when viewed from the perspective of a single nation, immigration is another path for increasing the supply of highly skilled scientists and researchers. The technological leadership of the United States was and continues to be built in substantial part on the contributions of foreign-born scientists and engineers, both permanent immigrants and those staying in the country only for a time. And, contrary to the notion that highly trained and talented immigrants displace native-born workers in the labor market, scientists and other highly trained professionals who come to the United States tend to enhance the productivity and employment opportunities of those already here, reflecting gains from interaction and cooperation and from the development of critical masses of researchers in technical areas. More generally, technological progress and innovation around the world would be enhanced by lowering national barriers to international scientific cooperation and collaboration.

In the abstract, economists have identified some persuasive justifications for government policies to promote R&D activities, especially those related to basic research. In practice, we know less than we would like about which policies work best. A reasonable strategy for now may be to continue to use a mix of policies to support R&D while taking pains to encourage diverse and even competing approaches by the scientists and engineers receiving support.

We should also keep in mind that funding R&D activity is only part of what the government can do to foster innovation. As I noted, ensuring a sufficient supply of individuals with science and engineering skills is important for promoting innovation, and this need raises questions about education policy as well as immigration policy. Other key policy issues include the definition and enforcement of intellectual property rights and the setting of technical standards. Finally, as someone who spends a lot of time monitoring the economy, let me put in a plug for more work on finding better ways to measure innovation, R&D activity, and intangible capital. We will be more likely to promote innovative activity if we are able to measure it more effectively and document its role in economic growth.

The Human Future

Michio Kaku is the rare individual who is both a top-flight scientist (he is a theoretical physicist who has done pathbreaking work in string theory) and a successful popularizer of science and technology. His new book, Physics of the Future, although not on par with his best effort, Hyperspace, is thoroughly enjoyable and definitely worth reading.

Physics of the Future explores what humanity can expect to occur during three time frames: the present to 2030, 2030 to 2070, and 2070 to 2100, in the fields of computing and artificial intelligence, medicine, nanotechnology, energy, and space travel. His predictions will not be stunning to those familiar with many of the scientific and technological issues raised in the book and certainly not to specialists in those areas. Nonetheless, he writes about complex matters in an easy and accessible style. The educated public will find the book highly engaging.

Any forecast 100 years into the future is inevitably going to have some predictions that seem farfetched. Examples here include the construction of space elevators (enormous structures using special ultra–high-strength materials that would extend from Earth’s surface several thousand miles into space and be used for transportation) and the beginning of terraforming projects on Mars (the creation of Earth-like conditions using genetic engineering and other technologies to transform the planetary surface) before the end of this century. Much more plausible are Kaku’s predictions of permanent lunar bases, the development of advanced propulsion systems, and human voyages to Mars. His description of nanoscale automated space probes—molecular-sized spacecraft that could be deployed in very large numbers and offer an extremely cost-effective way of exploring large areas of our immediate galactic neighborhood—is fascinating.

In the area of medicine, Kaku mixes the plausible and the ethically troublesome. His predictions include almost fully computerized visits to physicians’ offices, medical treatments based on precise analysis of one’s genetic makeup, the ability to grow human organs, designer children, and, most alarming, human cloning. Although he argues that cloning would be an achievement equal to the creation of artificial intelligence (AI), he states that clones will “represent only a tiny fraction of the human race and the social consequences will be small.” This is truly a breathtaking assertion.

For the most part, however, Kaku, a technological optimist, makes clear that he understands that technology can be used for dark as well as for good purposes. For example, he cites how the development of sonograms has led to dramatic increases in abortions, primarily of female fetuses, in certain developing countries. More of this would have improved the book. He would have done readers a service by discussing, for example, the potential implications of new health care technologies for health care costs, given that the life-extension technologies he discusses could radically extend the human life span. Indeed, throughout the book, he tends to downplay the fact that advances of the magnitude he is predicting will probably require major public investments. The political will to make these commitments seems notably absent today.

A chapter on nanotechnology covers topics ranging from the highly probable, such as the development of nanoscale diagnostic devices for use in medicine, to the more speculative use of this technology in developing quantum computers. Other possibilities, more akin to science fiction, include the creation of replicator devices à la Star Trek. Kaku also discusses the development of shapeshifting capabilities, which would seem to be one of the more implausible future wonders. Yet significant research on this subject has already been done. It may actually be possible to program various kinds of material so that they respond to changing electrical charges, leading to a rearrangement of the material into dramatically different shapes.

More generally, Kaku’s effort would have benefited from a more nuanced analysis of the possibilities for some of the technologies. Although some, perhaps many, will probably come to fruition, others will not see the light of day.

Some projections made in Physics of the Future are fairly prosaic. For example, continued gains in computing power are prophesied, courtesy of Moore’s Law, although Kaku recognizes its fundamental physical limits. He thinks that by 2020, Moore’s Law will run out of gas for current semiconductors, although he is relatively confident that new technologies, including carbon nanotubes, will replace them. Assuming that some successor to standard chip technology is found and computing power continues to advance, the possibilities for the future are tantalizing, with virtual reality beginning to compete in a serious way with traditional physical reality.

Even without the technological breakthroughs that would allow the continuation of Moore’s Law, the ability to access the Internet anywhere at any time through, for example, contact lenses or computers built into clothing is not that many years away. Kaku sees a time before the end of the century when humans, interfacing with machines at a distance through computers capable of understanding human thought patterns, will be able to lift objects through mental commands.

No book about the future of technology would be complete without a discussion of AI. Like futurists such as Ray Kurzweil, Kaku sees the steady development of machines beyond their mere computational capacity to the point, late in this century, when machines become self-aware. Kaku sees this advance, which in certain respects would be the most important achievement in human history, as being substantially more difficult and time-consuming than do Kurzweil and others, who believe that true AI is probable before 2050. Whoever is correct—and there are those such as the eminent Roger Penrose who believe that true AI is not possible—the policy and ethical concerns associated with this breakthrough will be legion. Unfortunately, Kaku only begins to scratch the surface of the implications for society posed by self-aware, superintelligent machines.

The technological future that Michio Kaku sees will require energy, and plenty of it. Moreover, the future he envisions will require that energy be generated in ways that minimize humanity’s carbon footprint. He believes this will be possible eventually through the successful development and deployment of nuclear fusion and other technologies, while in the shorter run the further development of existing technologies, such as electric cars, wind, and solar, will help bridge the gap to the future. His analysis of nuclear fission is prescient, given the recent disaster at the Fukushima nuclear plant in Japan; he has serious questions about the dangers of meltdown and the never-solved issue of nuclear waste disposal. Nonetheless, it seems clear that despite Kaku’s doubts about the viability of nuclear fission, many countries will continue their drive toward increasing reliance on this technology, as oil and coal become more expensive during the next several decades.

Kaku thinks the successful transition to a low-carbon future is critical; he takes the threat of global warming very seriously and entertains several ideas that would at least supplement the drive toward noncarbon energy sources. These include geoengineering solutions and technologies, including the genetic engineering of plants and trees that could absorb larger amounts of carbon dioxide. Many readers will probably balk at some of these possibilities, but Kaku argues that as the cost of global warming becomes more evident, future society will be willing to consider an array of possibilities that are considered to be too far outside the box today.

Some readers will also have doubts about Kaku’s hopes regarding the ability to deploy commercially viable nuclear fusion sometime between 2050 and 2070, given the numerous false starts we have seen over the years. And his tempered optimism regarding the short-run viability of solar energy will surely be disputed by many. If renewable sources fail to do the job and if society decides to reduce carbon-based fuels over the next few decades, we may need to swallow hard and accept the reality of a long-term presence of nuclear fission as a key energy source.

Social scientists will be particularly attracted to the final chapters. Although Kaku’s optimism is evident throughout the book, that optimism is tempered by the daunting tasks facing the United States and the rest of humanity. He speaks movingly about the U.S. education system: “The United States will eventually have to overhaul its archaic, sclerotic education system. At present, poorly prepared high school students flood the job market and universities, creating a logjam … and the universities are burdened by having to create new layers of remedial courses to compensate for the poor high school education system.”

Nonetheless, it is not evident from reading Physics of the Future that the author is fully cognizant of the extent of the problems facing the country. Of particular concern is an increasing lack of understanding of, and even hostility toward, science and its practitioners in U.S. society. It may be that some of the difficulties science has in communicating with the mass public involve the sense that science and the technologies it creates operate in a moral and ethical vacuum. Kaku, although very good at developing interesting scenarios about various future developments, may not fully appreciate the challenges that lie ahead in building an adequate foundation of support among the public.

The advances Kaku envisions will not occur automatically; the future is still to be written. Without support from both elites and the public, the future may not be nearly as bright as portrayed by the author. And if there is to be a future anything like that described by Kaku, it may be one in which the United States will play a far less prominent role in scientific and technological advances than it has during the past century. In short, if the scientific and technological wonders envisioned in Physics of the Future are to come about, it will in substantial part depend on the actions taken in the next decade to address profound deficiencies in our culture, economy, and educational system.

Notwithstanding these caveats, Physics of the Future is highly recommended. Kaku’s vision of our technological future and its implications is an important contribution that will appeal to a wide-ranging audience.


The Life of Feynman

The general public may recall physicist Richard Feynman for his televised demonstration when he was a member of the Presidential Commission on the Space Shuttle Challenger Accident. He dipped a rubber O-ring into a glass of ice water to show how it stiffened and thus could have leaked in the sub-freezing temperatures at the spacecraft’s fatal launch. Students revere Feynman’s widely published lectures, which cleverly explain how science advances through riotous discovery and rigorous discipline. Readers, too, have long enjoyed his spirited books about science and its wonders, with titles such as What Do You Care What Other People Think? and Surely You’re Joking, Mr. Feynman!

But although Feynman could be lighthearted and amusing when explaining science, he was starkly serious when in its thrall; once enticed by a high-school teacher to study a spin-off of Fermat’s Principle called the “principle of least action,” he was smitten. This principle, which reveals how light waves seek the most direct course through, for example, the barrier between air and water, inspired his own search for simplicity in science. It also helped that Feynman was not only strikingly brilliant at math but also ruthlessly systematic in his concentrated and seemingly hypnotic thought. “Feynman needed to fully understand every problem he encountered by starting from scratch, solving it in his own way and often in several different ways,” writes Lawrence Krauss in this engaging and instructive biography. Through such mental and personal effort, Feynman would advance and alter 20th-century physics in ways that led to a Nobel Prize and, more vital to him, the thrill of discovery. “The prize,” declared Feynman, “is in the pleasure of finding things out.”

Richard Phillips Feynman was born in Queens, New York, in 1918 and emulated both his father’s skepticism of authority and his mother’s sense of humor. He took every physics course offered when an undergraduate at MIT, then earned a physics Ph.D. at Princeton in 1942. When he was hired to work on the Manhattan Project at Los Alamos during World War II, Feynman’s mathematical flair so impressed Theoretical Division Director Hans Bethe that he made him head of the computation group. The two physicists admired and liked each other, beginning their lifelong collaboration with the Bethe-Feynman Formula for calculating the yield of a nuclear bomb. Joining Bethe at Cornell after the war, Feynman labored intensely to apply a new type of calculus he had developed to fundamental puzzles in quantum electrodynamics (QED). He also created “Feynman diagrams,” simple schematic sketches whose lines revealed how subatomic particles behave or how numerical and symbolic formulas appear in space/time.

Feynman shared the 1965 Nobel physics prize with Sin-Itiro Tomonaga and Julian Schwinger for “fundamental work in quantum electrodynamics, with deep-ploughing consequences for the physics of elementary particles.” Krauss here gives us useful perspective on how physicist Freeman Dyson was crucial in Feynman’s career and fame. In 1948 and 1949, Dyson wrote two papers that “as much as anything … provided the window through which Feynman’s ideas could change the way physicists thought about fundamental physics.” Dyson showed that although Tomonaga, Schwinger, and Feynman used different approaches to explain radiation theories in QED, all three ways were equivalent. Further, Dyson concluded, Feynman’s approach was more enlightening and useful for using QED to solve physics problems.

When at Caltech in the 1950s and 1960s, Feynman enjoyed a hyperactive collaboration with future Nobelist Murray Gell-Mann as they studied the “weak interaction” that causes subatomic particles to decay. Krauss, himself a theoretical physicist at Arizona State University, clearly admires Feynman’s brilliant achievements and here uses his life and labors to trace and explain how quantum theory evolved in its many surprising applications: quantum thermodynamics, QED, and most recently, quantum computing. Like Feynman, Krauss also relishes the quirky surprises that science sometimes throws at researchers, recalling his own thrill when first seeing that Feynman’s Nobel work “explained how an antiparticle could be thought of as a particle going backward in time.” That’s right, backward and forward in time, with just the bouncy perspective Feynman personified.

In contrast to his brilliant scientific career, Feynman’s personal life was mostly tragic and troubled. His beloved high-school sweetheart died of tuberculosis in 1945, after just three years of marriage and a month before the A-bomb he had helped create was first exploded. Her death tipped him into years of nihilism and depression, although, as Bethe remarked, “Feynman depressed is just a little more cheerful than any other person when he is exuberant.” Feynman’s partying and womanizing included affairs with many faculty colleagues’ wives at Cornell and Caltech as well as a sabbatical in Brazil, where he drummed in samba bands and calculated a set of simple rules for seducing women in bars. His second marriage in 1952, to a flame he had met at Cornell, soon ended because, as she complained during the divorce proceedings, “He begins working calculus problems in his head as soon as he awakens. He did calculus while driving his car, while sitting in the living room, and while lying in bed at night.”

Yet from this emotional turmoil came—at last—a gentle and stable life. Feynman was romancing two married women in 1958 when, at a conference in Geneva, he met a 24-year-old English au pair and wooed her away from two boyfriends to California as his live-in maid. The two eventually married, raised two children, and lived together happily in Pasadena until his death from cancer in 1988.

Ever the contrarian, Feynman was elected a member of the National Academy of Sciences (NAS) in the 1950s but then spent a decade trying to resign because, he complained, the NAS unfairly decided which scientists were “in.” He worked day and night at his science but often found the most creative settings for his calculations to be topless bars and strip joints. In their on-and-off collaborations, Gell-Mann once complained that Feynman spent his time “generating anecdotes about himself,” and even Krauss’s account of his scientific life reveals histrionic and egotistical episodes. Krauss’s only doubt about his subject’s achievements concerns his compulsion to derive proofs by himself, suggesting that had Feynman been a better collaborator, the brainstorming could have produced even more splendid discoveries.

On a personal level, Feynman gladly spent hours listening to students and colleagues as they presented their research results. Yet, like his idol the Austrian theoretical physicist Wolfgang Pauli, Feynman could also be curt and critical. Pauli is remembered for his put-down of a bad science paper as being “not even wrong,” and Feynman once groused, “I have only to explain the regularities of nature; I don’t have to explain the methods of my friends.” Feynman voiced his impatience with “string theory” by complaining that instead of being a theory of “everything,” as it aspired to be, it had become a theory of “anything” that could not be verified experimentally. “String theorists don’t make predictions, they make excuses!”

Feynman desperately sought to discover a law or theorem as true and useful as Pauli’s quantum-mechanical Exclusion Principle (explaining the behavior of subatomic particles) and eventually said that his own ideas were useful but not profound. Krauss disagrees, asserting that although Feynman was a “showman” among colleagues and with wider audiences, he nevertheless became “perhaps the greatest, and probably the most beloved, physicist of the last half of the 20th century.” A grand claim, to be sure, but one Krauss demonstrates with examples showing just how Feynman influenced and inspired contemporary and later colleagues. Here we learn that in a 1959 lecture Feynman outlined the new field now called nanotechnology, predicting how records could be stored on pinhead-sized surfaces, biologists could manipulate life on the molecular and even atomic scale, and nanomachines could run on the rules of quantum mechanics. By 1981, Feynman was thinking “deeply about the theoretical foundations of computing itself,” Krauss writes, seeking a computer that was quantum-mechanical in nature.

As the author of several books about science, Krauss is a clear writer who shares Feynman’s sense of wonder. But while skillfully explaining Feynman’s science in the expanding field of quantum physics, Krauss too often skips among decades and concepts without reminding readers just when and where they are. Some confusion can be overcome by using the book’s detailed index, but had Krauss added dates to the text more often, the book would have been an easier read. Perhaps Krauss could have drawn Feynman diagrams to trace the paths of modern physics? This distraction aside, Krauss has brought to life a marvelous genius whose energy and insights shaped our understanding of the world—in ways both profound and fun.

The Little Reactor That Could?

A week before Halloween 2009, John R. Deal, an entrepreneur who goes almost exclusively by “Grizz,” took the stage at the Denver Art Museum to deliver the headline talk at an evening seminar titled “The Truth About Nuclear Energy.” Though slightly less bearded, barrel-chested, and commanding than his nickname suggests, Deal was exceedingly casual in style. A long-sleeved, amply pocketed khaki shirt included a shoulder patch with his company’s logo, eliciting an association somewhere between park ranger and scoutmaster—both of which match his cheerful, disarming demeanor. Before launching into the benefits of his company’s miniature nuclear reactor, he began with a joke.

“It turns out that most of the … mishaps [in nuclear plants] actually involve humans. So we were thinking today, what do we do to create a power plant control system to minimize that kind of impact? We came up with the following. The power plant of the future will have three control devices: a computer, a dog, and a guy. The computer runs the power plant because, as I said, most power plant mishaps happen because of human interaction. The dog keeps people away from the computer. And the guy is just there to feed the dog.”

After lingering on the title slide a moment longer—“New. Clear. Energy.” in yellow letters—he advanced the screen and gave his opening line, a message he would revisit throughout his talk. “It’s more of a battery metaphor.”

As the co-founder and president of Hyperion Power Generation, Deal was referring to his company’s starring product, which he believes will represent a radical revolution for nuclear power. He has also described the Hyperion Power Module (HPM), which is only a few feet wide and not much taller, as the iPhone of nuclear power: a compact, technologically elegant device that will be a worldwide sensation for its portability, ease of use, and applications. These first moments of a normal overview presentation contain two of Hyperion’s prominent talking points: a piece of imagery and a problem solved. HPMs are batteries that eliminate nuclear energy’s obstacles related to human error and expertise. For the latter point, his Denver talk and many others refer to the goal of taking Homer Simpson out of the equation.

When Sonja Schmid and I set out to capture the story of small modular reactors, it quickly became clear that this technological coming-of-age tale is really, at least for now, a story about stories—the imagery industry leaders use to both envision their designs and communicate them to policymakers and the public. Behind the technical fact sheets, and in the years that remain before designs become physical machinery, small reactors are a movement of metaphors.

On many topics, imagery doesn’t carry substantive weight. It is added for flavor, to simplify, clarify, or restate content in more vivid terms. But in the house of small nuclear reactors, metaphors seem to be weight-bearing walls. They also come in the context of a debate that couldn’t have higher stakes. On one hand, our world must quickly scale up new sources of carbon-neutral energy. On the other, the nuclear accident in Fukushima, Japan, reminded us that our attempts to do so in the nuclear sector may result in unforeseen complications that can spiral into disasters. In today’s proposals for a new nuclear approach, presentation matters. But how much does corporate imagery reveal about the technology itself and its implications, and how accurate are the pictures the industry paints?

Is small beautiful?

Overall, the emerging vision of small modular reactors is a major downshift from the custom-built giants of yesteryear to new railcar-ready, factory-manufactured, standardized machines with an electricity output in the range of 25 to 200 megawatts (MW), rather than the 1,000 or more MW that is typical in today’s commercial reactors. A growing faction of promoters believes that these small reactors can provide solid answers to the myriad risks nuclear energy continues to face: safety, weapons proliferation, waste management, and initial capital cost. Each small reactor design offers a unique narrative of how it will remove or reduce these risks. Recurring themes include built-in capsule-like containment, passive cooling features, pledges for more effective disposal or recycling of waste, and a kind of inverse “economies of scale”: advantages offered by small capital investment, standardization, and mass production.

Because none of these small designs has yet been licensed by the Nuclear Regulatory Commission (NRC), and all of them are still several years from market deployment in even the most optimistic scenarios, they make a convenient canvas on which to paint metaphors. In the case of radically advanced reactor designs and deployment strategies, both corporations and journalists readily put vivid colors to use. Others are cast in more muted, evolutionary tones: They are miniature versions of the world’s tried-and-true light-water reactors, with substantially improved safety features. Leading revolutionary approaches in fuel, moderation, and cooling include reactors by Hyperion, Toshiba, and GE Hitachi, whereas efforts in favor of a more incremental design change include NuScale, Westinghouse, and Babcock & Wilcox.

All of the leading small reactors offer a modular option, allowing them to be pieced together like LEGO blocks to build up a customized power supply. Customers could potentially receive their prepackaged mini-reactors anywhere in the world, as long as the site is accessible by boat, truck, or rail.

Judging by a rising emphasis on small modular reactors within President Obama’s past two budget requests, not to mention Energy Secretary Steven Chu’s outspoken affection for the technology, small reactors are increasingly being considered a highly exportable clean energy innovation and therefore prime candidates to implement the administration’s “win the future” message.

Returning to Hyperion, the way the company presents its technology shows that subtlety is not a priority. In some sense, there is space for this; the small reactor market is already revolutionary in that it allows room for entrepreneurs to join the nuclear energy ranks alongside giant, buttoned-up corporations. And some entrepreneurs have a habit of making big, bold claims—early and often.

Most recently, a February 2011 Time magazine article titled “Nuclear Batteries” prominently features the “tanned and enthusiastic” Grizz Deal. Curiously, the author of the piece uses the phrase “nuclear battery” throughout, not as a metaphor but as the default label for Hyperion’s small reactor. Along the way, Deal outlines his goals for the HPM, a commercialized design that is based on work performed at Los Alamos National Laboratory. By the end of the article, he is quoted offering to “take care” of much of the world’s nuclear fuel, precluding the need for new nations to pursue enrichment or reprocessing programs, because these countries will presumably rely entirely on leasing Hyperion’s product.

The Time article is not an outlier. In dozens of trade and popular press articles, interviews, and blog posts, the character of Grizz and his imagery shine through. In November 2008, he was quoted in the Guardian on Hyperion’s safety and nonproliferation features: “You could never have a Chernobyl-type event; there are no moving parts,” said Deal. “You would need nation-state resources in order to enrich our uranium. Temperature-wise it’s too hot to handle. It would be like stealing a barbecue with your bare hands.”

Seeking out the origins of the venture helped us fill in some of the history behind the enthusiasm. It began with an initial shared motivation, which was recounted to us in an interview with Deborah Deal-Blackwell, Deal’s sister and cofounder of Hyperion. “My brother and I—neither of us have kids,” she said. “About five years ago, we started asking, what can we do to leave a legacy in the world? After some searching, we found that clean water was the answer.”

Deal-Blackwell explained the leap from clean water to nuclear reactors. She and Deal had quickly found that providing clean water on large scales, such as through desalination, can be quite energy-intensive. So they began to explore options. After briefly looking into renewable energy sources, they decided on a nuclear solution to pursue their clean water mission. Deal had worked at Los Alamos as an entrepreneur in residence, and he knew of an advanced reactor design by the lab’s Otis Peterson that he thought would be perfect to commercialize. The HPM concept was born.

Peterson’s design was technically intriguing to say the least. It would use uranium hydride, a novel nuclear fuel with unique self-regulating features that control the core’s temperature. But in 2009, foreseeing licensing delays with such a revolutionary approach, Hyperion decided on an entirely different design Los Alamos had produced: a uranium nitride–fueled fast reactor cooled by molten lead-bismuth. In other words, instead of forcing the NRC to create a new classification, Hyperion intends, for now, to fit its reactor within the somewhat more familiar, but still far from commercial, Generation IV category. Interestingly, the only previous application of a lead-bismuth cooled reactor was in the Alfa-class Soviet submarines developed in the 1960s.

The HPM is also revolutionary in its size and its approach to spent fuel. The smallest of the leading design proposals, each unit would produce 25 MW of electricity, enough to power 20,000 U.S. homes—or considerably more homes in any other nation. Also unique is the approach of providing a factory-sealed unit that would be removed completely for refueling and waste removal every 5 to 10 years, alleviating proliferation concerns related to sensitive material accumulated in spent fuel. This is a clear innovation that, if successful, would be a positive step forward from traditional practice. As a result, the approach offers an advantage over other small reactor designs, which do not seem to contain substantively new solutions for dealing with the on-site accumulation of spent fuel.

However, returning to the notion of human expertise reveals a clear weakness. Deal-Blackwell also told a version of the “feed the dog” joke during our interview, a repetition that implies that, in Hyperion’s view, human expertise is best handled by sealing it inside an automated technology. Although concerns about human error are legitimate, neither the public nor government regulators are ready to accept that scenario. Emerging technologies such as Hyperion’s call for a new and robust regulatory plan to determine what kind of human expertise is necessary for their safe operation, as well as how relevant knowledge can be created and maintained, transferred when appropriate (such as during export), and secured from illicit applications.

For three years, the “battery” metaphor has been the centerpiece of Hyperion’s identity. Although some of this language seems to have been scrubbed from the company’s Web site, former statements are easy to find on other sites devoted to the leading edge of nuclear technology. One example, from an early Hyperion Web page, began with the text “Hyperion is different. Think Big Battery …” and ended with, “Think battery, with the benefits of nuclear power. Think Hyperion.” With this direct exhortation to nontechnical audiences on exactly how they should think about a small reactor, Hyperion is unmatched in its brazen communications. And as the Time article shows, the image has stuck.

The question is whether it fits. In one way, it does. The HPM is envisioned as a self-contained sealed unit, delivered and used until its fuel has depleted, then carefully returned to a proper facility. But the comparison doesn’t hold much further than procedural similarities. A battery is a static device that converts stored chemical energy to electrical energy. It arguably does not belong in the same conversation as harnessing a nuclear chain reaction, the results of which include highly radioactive materials. Images on Hyperion’s Web site of buried, unattended nuclear reactors would make sense if they were merely batteries, but they are not. For this reason, more than one of the nuclear energy experts we interviewed used the term “fantasy” in reference to such scenarios that deploy “walk-away-safe” nuclear reactors.

In the middle of his Denver talk, Deal began flipping through some artist-drawn images. The most striking of these showed a small nuclear reactor, buried and unattended at what looked to be less than 15 feet below the surface. Two simple tubes snaked upward from the reactor, drawing the eye to a pair of gray above-ground tanks with the words “Potable Water” stamped on the side. The setting? An impoverished African village, complete with about a dozen mud-constructed, thatch-roofed huts. A handful of people were drawn into the image, all of them walking to or from the clean water source, which is apparently powered by a $50 million HPM.

Although the humanitarian goals that launched Hyperion are admirable, this quaint portrait of a Third-world problem goes beyond vivid jokes, iPods, batteries, and barbecues to reveal a full savior narrative that casts Hyperion’s small reactor as a solution to some of humanity’s direst needs. And the message is reinforced again and again. A recent news article in South Carolina’s Aiken Standard led with the following sentence: “Nuclear power is the only thing that can save the human race, Hyperion Power Generation CEO John ‘Grizz’ Deal told a crowd of more than 150 in Augusta on Wednesday.”

A utopian narrative is not without precedent in the history of nuclear power. In fact, it harkens back to the early 1950s, when the American public first heard rumors that “atoms for peace” would soon yield “electricity too cheap to meter.” Early in our search for the story of small reactors, we began to notice something familiar: The shift to small modular reactors has the nuclear industry playing out the plot of The Little Engine That Could, a slice of mid–20th-century Americana that became a hallmark of children’s self-esteem building. Where the large have failed to try, or tried and failed, the Little Reactor will come along and prevail, pulling the heavy load of toys and goodies over the mountain. Or at least the Little Reactor thinks he can.

An emphasis on evolution

The Little Reactor character appears in many forms, most of which are far less colorful than Hyperion’s version. We spoke to Bruce Landrey, chief marketing officer at NuScale Power, a small-reactor startup based in Corvallis, Oregon. Landrey has spent his career communicating information about nuclear reactors for various companies. The arc of his experiences harmonizes well with his current employer’s approach.

When Landrey graduated from the University of Oregon in the mid-1970s, he didn’t have a job, and he wasn’t necessarily looking to go into the energy sector. But soon his father was paired on the golf course with a stranger from an electric company that happened to be seeking new communications talent for the rollout of a new nuclear power plant. Eighteen holes later, Landrey’s father had positioned him, without his knowledge, as a prime candidate for the job. He applied, and was hired.

“I was thrown into the deep end,” he said, remembering how little he knew about nuclear power. He also encountered an odd phenomenon related to public perception in his region. “We had a lot of protesters and demonstrations at the plant, people chaining themselves to the fence and so on,” he remembers. “But it was ironic, because the protesters were the same people I was drinking beer with the previous year at the university. But here I was, on the other side of the issue.”

Landrey decided that if he was going to earn his living speaking in favor of nuclear power, he would use his first six months on the job to learn everything he possibly could about the technology and its implications. He did so, immersing himself in the technical side of nuclear reactors deeply enough to become confident discussing them from an environmental and safety perspective.

“But what I was never comfortable with was the tremendous business risk a large nuclear power plant poses to an electric company, its customers, and its shareholders,” he said. And over the next several years, he had a front-row seat to the downsides of this risk. “The company I worked for tried to build two additional nuclear plants, which became caught up in licensing delays. Then, after the Three Mile Island accident, they were finally just abandoned.”

Three decades later, Landrey still finds himself speaking up for nuclear energy, but now for NuScale. He is as risk-averse as ever when it comes to the financial challenges presented by nuclear power. So is NuScale, and this perspective guides both its technical approach and its communications. As the company sees it, its strategy builds on proven, market-ready technology, familiar to regulators and the community of existing experts. Compared with revolutionaries such as Hyperion, the essence of NuScale’s metaphor is much less splashy: Our small reactor is really an improved version of the reactor down the road. It is a light-water design, which means it uses normal water as its coolant, and it shares this feature, along with standard fuel rods, with the majority of active nuclear power reactors in the world.

Landrey explained some differences between NuScale and its larger predecessors, while also evoking a metaphor: a Thermos. Rather than a large concrete containment building, each reactor module comes inside its own steel vessel, which performs the containment’s safety purposes while also forming a Thermos-like vacuum between the vessel and the reactor module. This enables the reactor’s passive cooling feature, which uses natural circulation by a convection process, eliminating the need for a normal light-water reactor’s mechanical equipment or backup power generation to cool the reactor. Of course, the loss of backup power generation was the key failure that set off the Fukushima disaster, and it remains the Achilles’ heel of all existing nuclear power plants.

When we asked about Hyperion and other small reactor designs, Landrey was quick to draw a line in the sand between NuScale and a less traditional approach. “You have to be very careful with small modular reactors,” he said, “to distinguish what goes in the near-term commercialization category and what continues to remain a concept in a laboratory someplace. There is a big gulf—it’s really apples and oranges.”

He also mentioned key differences on the topic of human expertise. Rather than automation, Landrey spoke of the importance of education and training in any context that will use NuScale reactors. The company’s plans call for an expert staff to operate the facility. For example, the top image on the company’s “Our Technology” Web page is an overhead view not of a reactor itself, but of the control room and user interfaces for plant operators.

For Landrey, the evolution-versus-revolution question is a central issue to explore when looking into small reactors: Which designs, or aspects of the design, grow out of widely used commercial power reactors, and which represent completely new attempts? The unstated perspective is that the evolutionaries represent realistic near-term solutions, whereas the revolutionaries are still far more futuristic than their promoters will admit.

Dusting off a design

Also quick to emphasize this gulf is Babcock & Wilcox, one of the world’s preeminent suppliers of nuclear reactors. B&W is now partnering with engineering and construction giant Bechtel to develop and produce the “mPower,” a compact new light-water reactor similar in many ways to the NuScale design. Last summer, Christofer Mowry, president of B&W, told the Wall Street Journal, “Bechtel doesn’t get involved in science projects. This [agreement] is a confidence builder that the promise of this small reactor is going to materialize.” Of course, as with Landrey’s comment, such a quote cleverly plants the question in the reader’s mind: Which of today’s small reactors should be dismissed as mere “science projects”?

Although the mPower is certainly an advanced project, its first draft has been around for quite a while; our interviewees spoke of their small-reactor effort beginning by “dusting off a technology from the early eighties.” Compared to a conventional pressurized water reactor, the mPower reactor has the distinction of integrating the entire primary system (the reactor vessel, the steam generator, and the pressurizer) in one containment structure, which, according to one of the B&W engineers, “gives us a lot of inherent safety features that the large reactors don’t have.”

The tendency to look backward before moving forward arose not only from B&W’s vast experience with light-water designs. It was also a conscious response to the company’s perception of the market. Many potential mPower customers are utilities that run today’s fossil fuel plants (not exactly the most venturesome bunch) and that will perhaps one day need to turn their turbines using a carbon-neutral technology. Hypothetically, a significant number of these utilities, priced out of a large reactor, would be interested in a more manageably sized, and priced, option. This thinking took shape after an executive said flatly, “show me a customer,” when the company’s technical leaders approached him with their idea for a small, budget reactor. But a related and perhaps greater motivation for B&W’s design conservatism is the current regulatory gatekeeper.

“The Nuclear Regulatory Commission… is a light water reactor regulatory agency,” one of our B&W interviewees said. “It takes a very long time to come up with a regulatory framework to be able to license another type of technology, and we wanted to get the technology to market as quickly as we could.”

Another interjected, “The idea was to come up with a design that capitalizes on the tremendous knowledge base that surrounds light water reactors, and then make some evolutionary changes. But when you get into revolutionary changes, the market isn’t looking for that right now.”

The design includes a plan to bury the mPower underground. Although this feature is widely shared across the small-reactor industry, B&W offered an interesting reason when we asked why. They first referred to aesthetics; their initial rationale had been to avoid the stigma associated with the physical appearance of a nuclear power plant. The typical cooling towers and containment structures have acquired almost emblematic status among opponents of nuclear energy. Only after having volunteered these reasons did they add that the underground placement also earned them safety advantages with regard to earthquakes and missile impact.

Like Mowry’s reference to “science projects,” B&W’s presentation is subtle but quick to make use of the public’s associations. Rather than taking a direct approach to force positive associations through imagery, B&W and others find the negative associations we already hold, and offer just the opposite. As they do so, the message comes back to their historical credentials, familiar technology, and the inclusion of credible players such as Bechtel. And the continuity of mPower’s design sends its loudest message to the regulatory community: This is a well-known, mastered technology, but upgraded to add significant improvements.

The appeal to history

Our foray into the light-water approaches coalesced in one question: Does inertia trump innovation in the U.S. nuclear industry? It would seem so, at least judging by NuScale’s and B&W’s carefully chosen paths. To some extent, even Hyperion’s shift in reactor fuel for its initial small reactor sends a similar signal. A familiar picture emerges, in which the very entities that serve as the guarantors of safety also represent an obstacle to new, potentially better ideas. Perhaps unintentionally, they provide incentives for companies to continue down the well-trodden path, in exchange for faster licensing approval and a shorter time to market.

In terms of accounting for human expertise, evolutionary approaches do have a marked advantage. They do not seek a technical fix that eliminates the operator’s crucial role and ignores organizational and educational structures. On the downside, however, slow incremental innovations tend to neglect nuclear energy’s historical problems. The known hurdles with traditional light-water reactors, including low efficiency and unresolved waste management concerns, will arguably continue to live on for another generation, and if their industrial promoters get their way, these problems will be mass-produced and widely exported.

Other potentially valuable lessons from history are also ignored; for example, why there is so little commercial experience with small nuclear reactors. In the past, small reactors have been used in research settings, for naval propulsion, and, rarely, to power research or industrial facilities at remote locations. But until recently, most small reactors for research and on submarines and icebreakers operated on highly enriched uranium, material that in sufficient quantities could be used to produce a nuclear weapon. When converted to fuel with lower enrichment, these reactors require more frequent refueling. Furthermore, the United States abandoned small reactors altogether in the 1970s to take advantage of the anticipated economies of scale to be achieved with larger power reactors. As the story has gone, in many cases the word “economy” hasn’t proven to apply.

In the 1970s and 1980s, the U.S. nuclear industry was embroiled in a debate over the safety of scaling up. Would substantially increasing the size of nuclear reactors allow extrapolation from existing safety protocols, or would it in fact produce qualitatively new problems? Similar questions should be asked in today’s opposite scenario. It is far from self-evident that a compressed scale automatically produces smaller risks or that the data gathered from similarly fueled and cooled large reactors transfers down.

And if the evolutionary approach does lower the risk of a given small modular reactor, who can say whether reduced risks in individual power plants are outweighed by an overall global risk of dispersing a much greater number of nuclear reactors across the planet? The Fukushima disaster has inconveniently exposed a problem inherent in installing multiple reactors at one plant. After watching unique failures unfold in several reactors at once, is the prospect of a dozen or more interrelated small modular reactors on one site still as attractive?

An overarching question is whether any of these risks are significantly curbed by an approach that offers familiarity, or whether this would encourage complacency. Pyotr Neporozhni, who served as the Soviet minister of energy and electrification for three decades, is reported to have dismissed concerns about nuclear safety with the quip: “A nuclear reactor is just another boiler.” Neporozhni retired in 1985, one year before Chernobyl. Although it is true that the end task is to boil water, it would be a mistake to ignore the intricate, wholly new ways in which small modular reactors will attempt to go about that task, even if widely known materials are used. A small design is not “just another light-water reactor.”

Even if, as one B&W representative said, the NRC has traditionally been a “light-water–reactor agency,” its leadership does not seem to be glossing over the novel questions small modular reactors are raising. During a summer 2010 keynote address at a conference devoted to small reactors, William Ostendorff, a current member of the NRC, indicated that the question is open regarding how much history counts toward confidence about new small reactors.

“There are substantial differences between the proposed concepts for SMRs [small modular reactors] and the large, light-water reactors that the NRC’s regulations were based upon,” he said. “How will prototype reactors be licensed? How will risk insights be used? How do SMRs fit into the Price-Anderson nuclear liability framework? Questions like these are not easy ones to answer.”

Mixing metaphors

During her dissertation research on the Soviet nuclear industry, Schmid spent a year in Moscow, mostly penned inside musty archival reading rooms. But with a single tape recorder and without a quiet office at her disposal, she also set out to preserve a primary resource that was, and is, dying out. Former dons of the Soviet-era nuclear power program spoke with her on trains and buses, in homes and coffee shops, and over sometimes-obligatory shots of vodka. One of these interviews yielded an image that stuck with her, a counterweight to the simplifying metaphors we had encountered.

Like her other interviewees, “Yuri” had been eager to speak to Schmid, but visibly relieved when she offered not to use his real name. For an elderly Russian nuclear engineer whose Cold War career had comprised stints in both military and civilian reactors, secrecy fell somewhere between a reflex and a superstition.

After two terse hours with her microphone on a desk between them, they shared a cigarette break. They stood in a stairwell, holding cigarettes over the public ashtray, a large metal trash bin painted, rather sternly, the same gray as the walls. Then, in two sentences separated by a narrow downward stream of smoke, Yuri abandoned his technical talking points.

“The reactors are like children; each one is different,” he said, as if suddenly remembering something he had forgotten, the central point. “You come to know their peculiarities by spending time with them; you begin to feel how each reactor breathes.”

The large traditional reactors he had operated during his career were supposedly identical in design, but as he said, their personalities were quite distinct, as if there was something immeasurably complex happening beyond the components of these machines, something relational.

Historically, nuclear energy has been entangled in one of the most polarizing debates in this country. Promoters and adversaries of nuclear power alike have accused the other side of oversimplification and exaggeration. For today’s industry, reassuring a wary public and nervous government regulators that small reactors are completely safe might not be the most promising strategy. People may not remember much history, but they usually do remember who let them down before. It would make more sense to admit that nuclear power is an inherently risky technology, with enormous benefits that might justify taking these risks. So instead of framing small reactors as qualitatively different and “passively safe,” why not address the risks involved head-on? This would require that the industry not only invite the public to ask questions, but also that they respond, even—or perhaps especially—when these questions cross preestablished boundaries. Relevant historical experience with small compact reactors in military submarines, for example, should not be off limits, just because information about them has traditionally been classified.

The examples we discussed show that metaphors always simplify the complex technical calculations underlying nuclear technologies. Vivid illustrations often obscure as much as they clarify. Small reactors are not yet a reality, and the images chosen to represent them are often more advertisement than explanation. What information do we need to navigate among the images we are presented with?

Clearly, some comparisons are based more on wishful thinking than on experience. A retrievable underground battery and a relationship with a child, for example, invoke quite different degrees of complexity. Carefully scrutinized, the selection of metaphors often reveals the values that go into the design of these new reactors: why one approach is safer than another, which level of risk is acceptable, and whom we should trust.

Ultimately, the images offered by our interviewees are based on projections. Although it may make intuitive sense that smaller plants will be easier to manage, nuclear power involves non-nuclear, and even nontechnical, complexities that will not disappear with smaller size, increased automation, or factory-assembled nuclear components. For instance, nuclear reactors, and by extension nuclear power plants, need reliable organizations to train experts, provide everyday operation and maintenance, address problems competently when they arise, and interact effectively with the public in case of an emergency. This is not a trivial list even for high-tech nations like the United States, and it presents an even larger challenge for prospective importers of small modular reactors, particularly the developing countries with no domestic nuclear infrastructure that are clearly a major target of Hyperion’s efforts.

The same goes for the projected cost of small reactors. The few numbers that have been publicized at this point tend to increase monthly, not least because of the recent events in Japan. The nuclear industry may need to rethink nuclear safety issues, revisiting problems it had considered long resolved. Small modular reactors do not offer easy solutions to multiple-point failures. In fact, the modular arrangement of multiple cores at one site might increase this particular risk. These questions remain, regardless of whether a new reactor follows an evolutionary or a revolutionary track.

Whether the “nuclear battery” or the “just another light-water reactor” message appeals to us, we would be well advised to keep in mind the connotations of familiarity and controllability they offer in the face of unpredictable novelty. That should make us suspicious. Is what we are being sold as advantageous in fact the biggest vulnerability of these small designs? Easy transportability may look less like an asset when considered from the standpoint of proliferation. Multiple small cores might not necessarily turn out to be safer than one large one. We may remember that taking apart a machine, looking inside, and trying to figure out what is wrong ourselves can be more appealing than a machine that, like an iPod, needs to be shipped back to the factory for repair. Distributed generation sounds like a good idea when we talk about solar roof panels, but may not be as attractive when it requires highly trained expertise and accumulates radioactive waste.

We don’t know all the answers yet, but we should avoid being drawn in too quickly by these metaphors, even those that are more muted than Hyperion’s. Yuri’s realization that reactors are like children, an image based on profound experience and devoid of any marketing bias, presents a different and competing picture. Rather than simplifying, Yuri’s image goes in the opposite direction. Thinking of small reactors as more like children offers a lesson in humility in the face of complexities, both technical and nontechnical. Reactors, like children, may come with their own complicated personality; they can be quite unpredictable, but they also hold the promise of a better future.

Today’s small-reactor narrative isn’t a children’s story but an immensely complex novel, rife with layers of context, relationships, and flawed characters. But even children’s literature can temper itself against its own oversimplifications, as we are urging the nuclear industry to do. In 1974, Shel Silverstein published his reaction to The Little Engine That Could, flipping the empowerment narrative to a cautionary tale. The last stanza of his poem “The Little Blue Engine” warns against allowing confidence and optimism to become hubris:

He was almost there, when — CRASH! SMASH! BASH!

He slid down and mashed into engine hash

On the rocks below … which goes to show

If the track is tough and the hill is rough,

THINKING you can just ain’t enough!

For the small-reactor movement to truly come of age, the metaphors we use to describe it must also mature. Convenient images, save-the-day narratives, and a we-think-we-can reliance on a purely technical fix must be balanced by a broader examination of a full range of metaphors, the complexities they capture or ignore, and the social, political, and organizational contexts in which these machines will ultimately be used.


This is the third in the New Voices, New Approaches series of articles that have emerged from the “To Think, To Write, To Publish” workshop at Arizona State University. Funded by the National Science Foundation and directed by Lee Gutkind, the program pairs young academic scientists with professional writers to produce articles that use narrative to communicate more effectively and more engagingly with a broad readership.

Archives – Summer 2011

LEE LAWRIE, Floor Cover Medallion for the Great Hall of the National Academy of Sciences (detail), Bronze, 1924. Photo by Mark Finkenstaedt.

This bronze medallion is one of many sculptural and bas-relief elements created by Lee Lawrie (1877-1963) for the National Academy of Sciences building on Constitution Avenue in Washington, D.C. It is based on a map of the solar system published in the Harmonia Macrocosmica star atlas by Andreas Cellarius, Amsterdam, 1660. In this image detail, one finds symbols for the Sun, Earth, Moon, Mercury, and Venus.

Known for his Atlas statue in New York City’s Rockefeller Center, Lawrie has been called the dean of American architectural sculptors. Over the course of his career, he collaborated with NAS Architect Bertram Grosvenor Goodhue on many projects. They believed that sculpture should be integrated into the architecture of a building and not merely applied to it.

This medallion functions as a floor cover when the Foucault pendulum and spectroscope base (no longer in operation) is lowered below the Great Hall’s floor. The pendulum and spectroscope were among several scientific exhibits maintained by the NAS for the general public until World War II. Several galleries adjacent to the Great Hall were also used for scientific exhibitions. The NAS building is currently undergoing a major restoration project. When it reopens in 2012, the galleries will once again be used for exhibitions.

Forum – Summer 2011

The climate/security nexus

Richard A. Matthew has published “Is Climate Change a National Security Issue?” (Issues, Spring 2011) at just the right time. I would answer his question with a resounding “yes”; however, his piece clarifies why the emerging field of analysis on the climate/security nexus is in need of fresh thinking.

The critics Matthew cites are correct in saying that the past literature linking climate change and security suffered from weak methods. For those of us analyzing this topic years ago, it was a conceptual challenge to define the range of potential effects of the entire world changing in such rapid (on human time scales) and ahistorical ways.

We relied (too) heavily on future scenarios and therefore on the Intergovernmental Panel on Climate Change’s consensus-based and imprecise scientific projections. Much early writing was exploratory, as researchers attempted simply to add scope to a then-boundless debate. Serious consideration of causal relationships was often an afterthought. Even today, the profound lack of environmental, social, and political data needed to trace causal relationships consistently hampers clear elucidation of climate/security relationships.

Yet these weaknesses of past analysis are no cause to cease research on climate change and security now, in what Matthew accurately describes as its “imperfect” state. Today’s danger is that environmental change is far outpacing expectations. Furthermore, much of the early analysis on climate change and security was probably too narrow and off track in looking primarily at unconventional or human security challenges at the expense of more precisely identifying U.S. national security interests at stake. This is reflected in Matthew’s categorization of previous research as focused on three main concerns: those affecting national power, diminishing state power, or driving violent conflict. The author accurately describes the past literature and his categories still resonate, especially in cases in which local-level unrest is affecting U.S. security goals in places such as Somalia, Afghanistan, and Mexico.

Still, the author’s categories are not broad enough to capture the climate change–related problems that today are the most worrisome for U.S. security (a weakness not in Matthew’s assessment, but in past research). Climate change is contributing to renewed interest in nuclear energy and therefore to materials proliferation concerns. Environmental change has already affected important alliance relationships. Natural resources top the list of strategically important issues to China, the most swiftly ascending power in the current geopolitical order. Fear of future scarcity and the lure of future profit from resources are amplifying territorial tendencies to the point of altering regional stability in the Arctic, the South China Sea, and elsewhere.

Matthew’s article makes an important contribution in serving as the best summary to date of what I’d call the first wave of climate change and national security research. I’m hopeful that it will further serve as a springboard for launching a much-needed second wave—one that is more methodologically rigorous and includes greater consideration of conventional U.S. security challenges.

CHRISTINE PARTHEMORE

Fellow

Center for a New American Security

Washington, DC


A national energy plan

The nation is fortunate to have Senator Jeff Bingaman, an individual who is both knowledgeable and committed to energy policy, as chairman of the Senate Energy and Natural Resources Committee. In “An Energy Agenda for the New Congress” (Issues, Spring 2011), Bingaman identifies key energy policy initiatives, and he is in a position to push for their adoption.

Bingaman highlights four initiatives that any informed observer would agree deserve government support: robust energy R&D, a domestic market for clean energy technologies, assistance to speed commercialization of new energy technologies, and support for related manufacturing technologies. However, the recent debate on the 2011 continuing budget resolution, the upcoming deliberation on the 2012 budget, and the discussion of how to reduce the deficit highlight Bingaman’s silence on how much money he believes the government should spend on each of these initiatives and on the extent of the public assistance he is advocating. For example, there is a vast difference in the financial implications of supporting deployment rather than demonstration of new energy technologies and of supporting building manufacturing capacity rather than developing new manufacturing technologies.

I am less enthusiastic than Bingaman about a renewable energy standard (RES) for electricity generation. The RES will increase electricity bills for consumers, but without comprehensive U.S. climate legislation and an international agreement for reducing greenhouse gas emissions, will do little to reduce the risks of global climate change.

Effective management is essential to realize the benefits of each of these initiatives. Bingaman proposes a Clean Energy Deployment Administration as “a new independent entity within DOE” to replace the current loan guarantee programs for the planning and execution of energy technology demonstration projects. Here it seems to me that Bingaman wants to have it both ways: an independent entity that has the flexibility, authority, and agility to carry out projects that inform the private sector about the performance, cost, and environmental effects of new energy technologies, but that is also part of the government, with its inevitable personnel and procurement regulations and annual budget cycles, and that remains susceptible to the influence of members of Congress and their constituencies. I advocate the creation of a quasi-public Energy Technology Corporation, funded by a one-time appropriation, because I believe it would be more efficient and produce information more credible to private-sector investors.

But, in my view, something is wrong here. Congressional leaders should not be expected to craft energy policy; that is the job of the Executive Branch. Congressional leaders are supposed to make judgments between different courses of action, whose costs and benefits are part of a comprehensive energy plan formulated by the Executive Branch and supported by economic and technical analysis. The current administration does not have such a national energy plan, and indeed no administration has had one since President Carter. The result is that members of Congress, depending on their philosophies and interests, come forward with initiatives, often at odds with each other, with no action being the probable outcome. Bingaman’s laudable article underscores the absence of a thorough, comprehensive, national energy plan.

JOHN DEUTCH

Institute Professor

Massachusetts Institute of Technology

Cambridge, Massachusetts

The author is a former undersecretary of the U.S. Department of Energy.


A better process for new medical devices

The U.S. Food and Drug Administration’s (FDA’s) public health mission requires a balance between facilitating medical device innovation and ensuring that devices are safe and effective. Contrary to Paul Citron’s assertion in “Medical Devices: Lost in Regulation” (Issues, Spring 2011), applications to the FDA for breakthrough technologies increased 56% from 2009 to 2010, and FDA approvals for these devices remained relatively constant.

In fact, our device review performance has been strong: 95% of the more than 4,000 annual device applications that are subject to performance goals are reviewed within the time that the FDA and the device industry have agreed on.

In the few areas where we don’t meet the goals, our performance has been improving. Part of the problem lies with the quality of the data submitted to the FDA. For example, 70% of longer review times for high-risk devices involved poor-quality clinical studies, with flaws such as the failure to meet primary endpoints for safety or effectiveness, or a significant loss of patients to follow-up. This submission of poor-quality data is inefficient for the FDA and industry and unnecessarily diverts our limited resources.

Citron attempts to compare the European and U.S. systems. But unlike the FDA, Europe does not report review times, provide a basis for device approvals and access to adverse event reports, or have a publicly available database of marketed devices, making it difficult to draw meaningful comparisons.

Some high-risk devices do enter the market first in Europe in part because U.S. standards sometimes require more robust clinical data. The FDA requires a manufacturer to demonstrate safety and effectiveness, a standard Citron supports. Europe bases its reviews on safety and performance.

For example, if a manufacturer wishes to market a laser to treat arrhythmia in Europe, the manufacturer must show only that the laser cuts heart tissue. In the United States, the manufacturer must show that the laser cuts heart tissue and treats the arrhythmia. This standard has served U.S. patients well but represents a fundamental difference in the two systems.

The FDA sees data for all devices subject to premarket review and can, therefore, leverage that information in our decisionmaking; for example, by identifying a safety concern affecting multiple manufacturers’ devices. In Europe, manufacturers contract with one of more than 70 private companies to conduct their device reviews, limiting the perspective of individual review companies.

Just as the European Commission has recognized shortcomings of its regulatory framework, so too has the FDA acknowledged limitations in our premarket review programs, and we are addressing them. Earlier this year, we announced 25 actions we will take to provide industry with greater predictability, consistency, and transparency in our premarket review programs and additional actions to facilitate device innovation.

The solution is not to model the U.S. system after that of Europe—both have their merits—but to ensure that the U.S. system is both rigorous and timely, striking the right balance between fostering innovation and approving devices that are safe and effective. Ultimately, this will best serve patients, practitioners, and industry.

JEFFREY SHUREN

Director, Center for Devices and Radiological Health

U.S. Food and Drug Administration

Washington, DC


Disingenuous corporate rhetoric in “Medical Devices: Lost in Regulation” proposes that medical manufacturers’ main goal in their race to market inadequately tested products is to “save lives” in America.

It is not.

Rather, it is to decrease the time and expense of testing new products and to begin selling sooner. Yet to state the obvious, that this comes at the expense of first fully proving safety for patients, would be an unacceptable argument for them to make.

Lax FDA oversight due to political pressure from multibillion-dollar companies (such as Medtronic, the company Paul Citron retired from as Vice President of Technology Policy and Academic Relations) has allowed patient safety to take a back seat to profits, at the cost of patient injuries and deaths. Downplaying “occasional device recalls”—as Citron calls them—is thoughtless and belies the truth. They are, in fact, a major problem. The current “recall” of the DePuy ASR hip replacement, unlike a car recall, in which a new part is merely popped in, will result in thousands of revision operations with undisputedly worse results, if not the “occasional” deaths from reoperation on the elderly. The “recalls” of Vioxx and Avandia came only after an estimated 50,000 deaths had occurred.

The expense of testing should burden industries, not patients. Devices should be tested for three to five years, not two. As the joke goes, “it’s followup that ruins all those good papers,” though Citron would have us believe Voltaire’s “perfect is the enemy of good” to justify substandard testing for patient safety, based on European experiences. (Does he mean like Thalidomide?) Complications recognized early should be bright red flags and not minimized statistically to keep the process moving. Study redesigns with delays are better than injuring or killing patients. In the past, numerous products with early complications were pushed through, resulting in catastrophic problems and ultimately recalls. This is unacceptable.

The FDA should not cater to industry’s push for shortcuts. FDA advisory panels should be free of any industry consultants. Nonconflicted experts are not hard to find. Their recommendations should be followed by the FDA. The spinal device X-Stop was voted down by the FDA advisory panel yet nevertheless approved by the FDA without explanation. The revision rate is now 30% after longer follow-up. Also, devices approved for one use via the 510(k) pathway, such as cement restrictors for hip replacements, should not be allowed to be used for other purposes, such as spinal fusion cages, despite specific FDA warnings not to do so.

Corporate scandals abound, and Citron’s Medtronic has had more than its fair share of headlines. Such occurrences support the notion that patient safety is not every company’s highest priority, despite the public relations rhetoric.

All should remember that the purpose of the FDA first and foremost is the safety of patients—nothing else—and the process of approval needs to be more rigorous, not less.

CHARLES ROSEN

President, Association for Medical Ethics

Clinical Professor of Orthopaedic Surgery

School of Medicine

University of California, Irvine

Irvine, California


Paul Citron’s thoughtful perspective resonated with me. Indeed, I see a recent destructive and hyperbolic demonization of innovators and the medical “industry” that seems to have influenced the way devices are evaluated and regulated in the United States. I am concerned that goodwill is less a factor in creating the regulatory morass that Citron details than are narcissistic compulsion and political machination.

To be sure, Citron never contends that the FDA and the regulatory process are unnecessary, illegitimate, or inappropriate. Also never stated is that “all things and all folks FDA are evil.” His contention is simply that the current state of affairs is too “complex and expensive” and this leads to unnecessary delay in getting new devices and innovation to our sick patients. Certainly some demonize the FDA, and just as we should stomp out that type of rhetoric leveled at the “medical-industrial complex,” we should level only constructive criticism at the FDA. However, having participated in the development of many implantable devices, ranging from pacemakers to defibrillators, hemodynamic monitoring systems, left ventricular remodeling devices, and artificial hearts, I agree with Citron that the regulatory process has become a huge and overly burdensome problem in the United States. This has resulted in the movement of device R&D outside of the United States. Citron’s argument that this is detrimental to our patients in need of new, novel, perhaps even radical, devices as well as to our academic and business community is on target. Citron is not arguing for ersatz evaluation of devices but for a more reasoned development perspective. An approach that gives the utmost consideration to all parties, including the most important one, the ill patient, needs to be developed.

There is a way forward. First, rational, thoughtful, and fair evidence-based (as Citron has done) critique of the present system with reform in mind is mandatory. Next, adopt the concepts and approach to device study and development that the INTERMACS Registry has. Though perhaps as the Study Chair for this Interagency Registry of Mechanical Circulatory Support I have a conflict of interest, I see this as an exemplary model for academic and clinical cooperation with our federal partners, the National Institutes of Health/National Heart Lung and Blood Institute, FDA, and Centers for Medicare & Medicaid Services, to develop common understanding of the challenge. A constructive and collegial, though objectively critical, environment has been created where the FDA has worked closely with the Registry and industry to harmonize adverse event definition, precisely characterize patients undergoing device implantation, and create high-quality data recovery and management that are essential to decisionmaking during new device development and existing device improvement.

Under the expert management of the Data Coordinating Center at the University of Alabama, Birmingham, and the watchful eyes of Principal Investigator James K. Kirklin (UAB) and Co-Principal Investigators Lynne Stevenson (Brigham and Women’s Hospital, Harvard University), Robert Kormos (University of Pittsburgh), and Frank Pagani (University of Michigan), about 110 centers have entered data from over 4,500 patients undergoing mechanical circulatory support device insertion (FDA-approved devices that are meant to be long-term and allow discharge from the hospital).

The specific objectives of INTERMACS include collecting and disseminating quality data that help to improve patient selection and clinical management of patients receiving devices, advancing the development and regulation of existing and next-generation devices, and enabling research into recovery from heart failure. The Registry has, as an example of constructive interaction with the FDA, provided device-related serious adverse event data directly to the FDA, allowed post-marketing studies to be done efficiently and economically, and created a contemporary control arm for the evaluation of a new continuous-flow ventricular assist device.

Yes, there is a way forward, but it requires commonness of purpose and teamwork. Unfortunately, this is sometimes difficult when industry, academics, politicians, and the FDA are main players. We all too often forget that it is, in the end, simply about the patient. However, I am encouraged by the INTERMACS approach, philosophies, and productivity.

JAMES B. YOUNG

George and Linda Kaufman Chair

Professor of Medicine and Executive Dean

Cleveland Clinic Lerner College of Medicine

Case Western Reserve University

Cleveland, Ohio


Moving to the smart grid

Lawrence J. Makovich’s most fundamental point in “The Smart Grid: Separating Perception from Reality” (Issues, Spring 2011) is that the smart grid is an evolving set of technologies that will be phased in, with modest expectations of gains and careful staging to prevent backlash. We agree. However, we think that he too strongly downplays the ultimately disruptive nature of the smart grid and some near-term benefits.

First, we stress that the smart grid is a vast suite of information and communications technologies playing widely different roles in the electricity system, stretching all the way from power plants to home appliances. Many “upstream” technologies are not visible to the consumer and improve the reliability of the current system; they are nondisruptive and are being adopted slowly and quietly. “Downstream” smart grid elements, which involve smart meters and pricing systems that customers see, have been more controversial and disruptive.

Makovich is correct when he says that the smart grid will not cause rates to reverse their upward trend and that near-term benefits do not always outweigh costs. Smart Power (Island Press, 2010) and our report to the Edison Foundation document the factors increasing power rates, such as decarbonization and high commodity prices—factors that the smart grid cannot undo. However, the near-term economics of some downstream systems are not as black and white as Makovich suggests. Our very recent report for the Institute for Energy Efficiency examines four hypothetical utilities and finds that downstream systems pay for themselves over a 20-year time horizon, the same horizon used to plan supply additions.

We also agree with Makovich that the smart grid is not a substitute for a comprehensive climate change policy, including a price on carbon and strong energy efficiency policies. Although dynamic pricing can defer thousands of megawatts (MW) of new capacity (up to 138,000 MW in our assessment for the Federal Energy Regulatory Commission) and smart grid systems enable many innovative energy efficiency programs, a robust climate policy goes beyond these features.

Finally, as argued in Smart Power, the downstream smart grid will ultimately be highly disruptive to the traditional utility business model. Physically, the smart grid will be the platform for integrating vastly more customer- and community-sited generation and storage, sending power in multiple directions. It will also enable utilities and other retailers to adopt much different pricing than today’s monthly commodity tariffs. It isn’t a near-term development, but we think that the downstream smart grid will ultimately be seen as the development that triggered vast changes in the industry.

We agree with Makovich that the United States is not poised to move toward dynamic pricing of electricity any time soon, but we believe that such a change is inevitable; it is just a question of time before dynamic pricing is widely deployed. We have been working with regulatory bodies and utilities in North America at the state, provincial, and federal levels and are optimistic that once regulatory concerns are addressed, dynamic pricing will be rolled out, perhaps initially on an opt-in basis.

The Puget Sound example cited by Makovich deals with an early case where the specific rate design provided very little opportunity for customers to save money. The rate could have been redesigned to promote higher bill savings, but a change in management at the company prevented that from happening. More recently, in the Olympic Peninsula, dynamic pricing has been shown to be very successful. In fact, pilot programs across the country and in Canada, Europe, and Australia continue to show that consumers do respond to dynamic pricing rates that are well designed and clearly communicated. They also show that customer-side technologies such as programmable communicating thermostats and in-home displays can significantly boost demand response. The Maryland Commission has approved in principle the deployment of one form of dynamic pricing in an opt-out mode for both BGE and Pepco and is likely to approve other forms once more data have been gathered.

PETER FOX-PENNER

Principal and Chairman Emeritus

AHMAD FARUQUI

Principal

The Brattle Group

Washington, DC


The smart grid has been hailed on many fronts as a game changer: Its very name evokes a desirable Jetsons-like future, where advanced technologies help consumers make well-informed choices about the flow of electrons into (and in some cases out of) their homes.

Lawrence J. Makovich predicts that consumers will eschew the interactive aspects of demand reduction, deterred by the fluctuating pricing and overall effort required for relatively small economic gains, thus pushing the benefits of the smart grid largely to the supply side. A fundamental shift from a “business as usual” mentality to dynamic pricing and engaged consumers, he avers, is simply not in the offing.

Although Makovich may be right about consumers’ lack of interest in benefiting from real-time dynamic pricing, we cannot be certain. And while he may also be right about the slower than expected pace of smart grid implementation, there is a bigger point to be made here: that no system should be designed or deployed without an understanding of the human interface and how technology can best be integrated to enhance the human experience in a sustainable, reliable, and efficient manner.

Incentives and motivation play no small role in the adoption and acceptance of new technologies. Research at the University of Vermont, for example, indicates that human beings respond remarkably well to incentive programs. Behavioral studies conducted by my colleagues on smoking cessation for pregnant women show that in contrast to health education alone, a voucher-based reward system can be highly motivating. Although not exactly analogous to electricity usage, these studies suggest that consumer interest in the electric grid can be evoked by properly framing mechanisms that can find their basis in real-time pricing.

One such incentive might be the associated environmental benefits of consumer-driven load management. Smart meters that allow people to visualize the flow of electrons, and its imputed environmental costs, to specific appliances and activities could raise awareness of the environmental impact of electricity generation. Consumers might even come to view electricity not as an amorphous and seemingly infinite commodity but as an anthropogenic resource with an environmental price tag linked to fuel consumption and carbon emissions. For us as stewards of the planet, this may in fact be one of the smart grid’s most important contributions: making possible a fundamental change in how we view energy.

There is a long and vibrant history of resistance to technological innovation. Indeed, a recent article in Nature (3 March 2011) titled “In Praise of Luddism” argues that skeptics have played an important role in moving science and technology forward. Consider the telephone, which took a technologically glacial 70 years to reach 90% market penetration. In addition to requiring the construction of a significant new physical infrastructure, the telephone was viewed by many people as intrusive and at odds with their lifestyles. The cell phone needed only about one-seventh of the time to reach similar market penetration. The land-line telephone was a new technology requiring entirely new infrastructure and behavior and encountering stiff skepticism, but the cell phone was an adaptation of a familiar technology, eliciting a behavior change that was arguably welcome and resulted in positive externalities (such as more efficient use of time). Presuming that cybersecurity and privacy issues are addressed satisfactorily, smart meters, building on a grid that is already partially smart on the supply side, are much more likely to follow a cell phone-like trajectory.

In any case, Makovich is right to acknowledge that the issues are complex. But rather than discount the value of the smart grid to the consumer or assume that its deployment will be slow, we need to better understand how consumers will interact with a smarter grid. A holistic approach is needed that can transcend the technological challenges of grid modernization to include related disciplines such as human behavior, economics, policy, and security. In addition, we must frame the opportunity properly—over what time frame do we expect measurable and significant demand-side change? And what are the correct incentives and mechanisms to catalyze this behavior?

In Vermont, we are working toward just such a goal, forming a statewide coalition of stakeholders, involving university faculty from a range of disciplines, utilities professionals, government researchers, energy executives, and policymakers. Although still in the early stages, this collaborative approach to statewide deployment of a smart grid appears to be a sensible and effective way forward.

Recalling the words of Thomas Edison, who proclaimed that society must “Have faith and go forward,” we too must have the courage to move toward an electricity game-changing infrastructure befitting the 21st century.

DOMENICO GRASSO

Vice President for Research

Dean of the Graduate College

University of Vermont

Burlington, Vermont


Improve chemical exposure standards

Gwen Ottinger’s article “Drowning in Data” (Issues, Spring 2011), written with Rachel Zurer, provides a personal account of a fundamental challenge in environmental health and chemical policy today: determining what a safe exposure is. As Ottinger’s story details, not only are there different standards for chemicals because of statutory boundaries (workplace standards set by the Occupational Safety and Health Administration, for example), but standards are also often based on assumptions of exposure that don’t fit with reality: single chemical exposure in adults as opposed to chemical mixtures in children, for example. A regulatory standard sets the bar for what’s legally safe. However, most standards are neither based on real-life exposure scenarios nor determined by the health risks of greatest public health concern, such as asthma, cardiovascular disease, diabetes, and cancer. Who then are such standards protecting? The polluters, communities living alongside the chemical industry might argue.

The discordant and fractured landscape of safety standards is reflective of statutory boundaries that in no way reflect the reality of how molecules move in the environment. We are not exposed to chemicals just in the workplace, in our neighborhoods, in our homes, or through our water, but in all of the places where we live and work. Communities living alongside and often working in chemical plants understand this perhaps better than anyone. Similarly, we’re not exposed just at certain times in our lives. Biomonitoring of chemicals in blood, breast milk, amniotic fluid, and cord blood conducted by the Centers for Disease Control and Prevention tells us that humans are exposed to mixtures of chemicals throughout our life spans. What is safe for a 180-pound healthy man is not safe for a newborn, but our safety standards for industry chemicals, except for pesticides, treat all humans alike.

I agree with Ottinger’s call for improvements in health monitoring. I would add that in addition to air monitoring, personal monitoring devices can improve the capture of real-life exposure scenarios. The National Institute of Environmental Health Sciences is developing monitoring devices as small as lapel pins that could provide real-time, 24-hour monitoring.

The frustration of the communities living and working alongside polluting industries detailed by Ottinger is a far too familiar story. It doesn’t take more exposure monitoring data to state with certainty that the regulatory process has failed to adequately account for the exposure reality faced by these communities. So too has chemical regulation failed the rest of the public, who are silently exposed to chemicals in the air, water, consumer products, and food. For some of these chemicals found in food or air, there might be some standard limit to exposure in that one medium. But for thousands of other chemicals, there are no safety standards. It is exciting then to see environmental justice communities joining with public health officials, nurses, pediatricians, and environmentalists to demand change to the way in which we monitor chemical safety and set standards in this country through reform of the Toxic Substances Control Act.

SARAH A. VOGEL

Program Officer

Johnson Family Foundation

New York, New York


More public energy R&D

The nation’s energy and climate challenge needs new thinking and approaches, like those found in William B. Bonvillian’s “Time for Climate Plan B” (Issues, Winter 2011). Unfortunately, when dealing with any policy area, particularly energy policy, one must necessarily confront conventional wisdom and ideological blinders from the left and the right alike. Case in point is Kenneth P. Green’s response to the Bonvillian article (Issues, Spring 2011), which makes several egregious errors.

First, Green asserts that public R&D displaces private R&D and goes so far as to say that this is “well known to scholars.” Yet if one takes the time to examine the literature in question, the story is much different. Although a full review is impossible here, several studies over the years have found that public R&D tends to be complementary to private R&D rather than substitutive, and can in fact stimulate greater private research investment than would otherwise be expected. One of the most recent is a study published last year by Mario Coccia of the National Research Council of Italy, using data from several European countries and the United States. Others include a 2003 study of France by the University of Western Brittany’s Emmanuel Duguet, multiple studies of German industry by Dirk Czarnitzki and coauthors for the Centre for European Economic Research, and studies of Chilean industry by José Miguel Benavente at the University of Chile. These are just a few from the past decade, and there are many others that predate these. This is not to say that consensus has been reached on the matter, yet the evidence that public R&D complements and stimulates private R&D is strong, contrary to Green’s statement.

Green also argues that there is “plenty of private R&D going on.” But the evidence suggests that greater investment is sorely needed from both the public and private sectors. The persistent long-term declines in private energy R&D are of ongoing concern, as those who follow the issue will know. Princeton’s Robert Margolis, Berkeley’s Daniel Kammen, and Wisconsin’s Greg Nemet have all demonstrated the sector’s shortcomings. And many leading thinkers and business leaders, including the President’s Council of Advisors on Science and Technology and the industry-based American Energy Innovation Council, have called for major increases in R&D investment. Further, Green estimates private energy R&D spending to be about $18 billion—a substantial overestimate, to the point of being outlandish. A study sponsored by R&D Magazine and Battelle reported domestic expenditures at less than $4 billion, with a rate of only 0.3% of revenues, far less than the figure Green uses. This fits data recently compiled by J. J. Dooley for the Pacific Northwest National Lab, which indicates that private-sector R&D spending has never risen above its $7 billion peak from 30 years ago. So Green is off by quite a bit here.

Green also points to public opposition to high energy costs as a reason not to act. I’d argue that Green’s use of polling data is a selective misreading. Gallup has 20 years of data demonstrating at least moderate levels of public concern over climate change, and polls also show big majorities in favor of clean alternatives to fossil fuels and even in favor of Environmental Protection Agency emissions regulation. Green rightly points out that the public is wary of energy cost increases, but an innovation-based approach would inherently have energy cost reduction as its overriding goal, which is an advantage over other regulatory approaches. Green’s main error here is mistaking public unwillingness to shoulder high energy costs for a public desire for federal inaction.

Green closes his response with, to borrow his own phrase, a “dog’s breakfast” of the usual neoclassical economics tropes against government intervention in any form. Neoclassical thinkers may not care to admit it, but history is replete with examples of the positive role government has played in technology development and economic growth. Public investment is critical to accelerate innovation, broaden the menu of energy technology options, and facilitate private-sector takeup and market competition for affordable clean new energy sources. Bonvillian’s piece represents a terrific step toward this end.

MATT HOURIHAN

Clean Energy Policy Analyst

The Information Technology & Innovation Foundation

Washington, DC


Helpful lessons from the space race

I very much enjoyed reading “John F. Kennedy’s Space Legacy and Its Lessons for Today” by John M. Logsdon in your Spring 2011 edition of Issues. As usual, my friend has turned his sharp eye toward the history of space policy and produced an incisive and provocative analysis. Although I find myself agreeing with much of what he has to say, there is one point on which I would take some exception. Logsdon notes that “the impact of Apollo on the evolution of the U.S. space program has on balance been negative.” This may seem true from a certain perspective, but I think that this point obscures the broader truth about the space program and its role in our society.

For those of us with great aspirations for our space program and high hopes for “voyages of human exploration,” he makes a clear-eyed and disheartening point. I am one of the many people who expected that by the second decade of the 21st century I’d be flying my jetpack to the nearest spaceport and taking Eastern or Pan Am to a vacation in space. The sprint to the Moon and the Nixon administration’s decision to abandon the expensive Apollo technologies as we crossed the finish line certainly crushed the 1960s aspirations of human space exploration advocates. From a 2011 point of view, it is easy to marvel at the folly of the huge financial expenditures and the negative long-term impact of the expectations that those expenditures inspired.

However, I can’t help but think that, from a broader perspective, going to the Moon was far from a “dead end.” Much as it may be hard for any of us to conceive of this now, in the Cold War context of 50 years ago, President Kennedy faced a crisis in confidence about the viability of the Western capitalist system. It was an incredibly bold stroke to challenge the Soviet Union in the field of spaceflight; a field in which they had dominated the headlines for the four years since Sputnik. Yet by the end of the 1960s, serious discussion about the preeminence of the Marxist model of development (and of the Soviet space program) had vaporized. Instead, human spaceflight had become the icon of all that was right with America. So from a larger geopolitical perspective, the Apollo program was a dazzlingly bold success. Moreover, consider the broader related impacts of the space race on our educational system, technology, competitiveness, and quality of life. Certainly, the ripple effects of our investment in the Apollo program have radically changed our lives, though perhaps not in the ways we had originally dreamed.

Nonetheless, Logsdon is correct when he observes that although we face difficult space policy choices in 2011, we are not (nor seem ever likely to be) in a “Gagarin moment.” President Kennedy’s call for a Moon mission was not about space exploration, it was about geopolitics. We can choose to emphasize the negative impact of that decision on our aspirations, but I am heartened by the prospect that President Obama and our elected representatives might draw a different lesson from the space legacy of President Kennedy. That broader lesson is that investment in space exploration can have an enormous positive strategic impact on our country and our way of life and, most of all, that we should not be afraid to be bold and imaginative in pursuing space exploration.

WILLIAM P. BARRY

Chief Historian

National Aeronautics and Space Administration

Washington, DC


John Logsdon has provided a valuable service in his detailed and insightful analysis of President Kennedy’s decision to launch the Apollo program. In his Issues article and more completely in his new book John F. Kennedy and the Race to the Moon, he helps us understand a conundrum that we “space people” have lived with for the past 35 years: How could something as great and significant as the American achievement of Apollo yield so little to build on for further achievements?

It is, as Logsdon repeatedly observes, because the lunar landing and the politics that enabled it were peculiar products of their time and circumstances and not about the Moon, space, or even science. This understanding is important not just as history but as policy. The failure of the United States to come up with human space goals worthy of the program’s risk and cost may indeed be because we have never come to grips with what the Apollo legacy is rather than what we wish it could have been. Certainly NASA has never shaken its Apollo culture or its infrastructure, a problem Logsdon recognizes all too well, no doubt strongly influenced by his service on the Columbia Accident Investigation Board.

I would like to see more attention paid to other parts of Kennedy’s legacy. The space program of the 1960s didn’t just send humans to the Moon; it created a robotic program of exploration that has taken us throughout the solar system with many adventures and discoveries of other worlds.

Logsdon hints at possible alternate histories, and I hope that in his next work he will explore such directions more deeply. What if Kennedy had lived and we had shifted to a cooperative U.S.-Soviet human lunar initiative? Conversely, what if Eisenhower’s lower-key, no-grand-venture approach had been taken and NASA had been built up more gradually? Would we be further out in space with humans by now or more deeply hidebound on Earth? The space program has brought us many achievements. Is it as good as we should have expected, or should we have expected more?

Logsdon’s history and his links to today’s policy questions should help those in the political system as they try to fix the mess they have created for future space planning. I hope it does.

LOUIS FRIEDMAN

Executive Director Emeritus

The Planetary Society

Pasadena, California


Where once a wag said that the only thing we learn from history is that we never learn from history, my esteemed colleague John M. Logsdon in all seriousness now asserts that the lessons of the Apollo program are that there is “little to learn” from the Apollo program, except Kennedy’s admonition shortly before his assassination that the only sustainable space strategy is “making space exploration a cooperative global undertaking.” And this perhaps reminds us of another principle we already know, that in general, people learn from experience mostly what they already believe, whether actually true or not.

The uniqueness of Apollo was that it was a marshalling of existing technological skills into an awesome engineering task that would be an unambiguous marker of preeminent hi-tech capability, in a world where that status had become uncertain due to Sputnik (an uncertainty reinforced by Gagarin), even as that quality retained profound scientific, commercial, military, diplomatic, and societal significance. That motivation for the space race a priori doomed any significant U.S.-Soviet joint exploration, the dreams of diplomats notwithstanding. And the prestige gained by the United States and its endeavors in the Apollo triumph turned out (exactly as Kennedy and his advisors hoped in 1961) to be an immense multiplier factor to U.S. strengths in the decades that followed, up to the ultimate collapse of the Soviet Union and its replacement by a government in Moscow with which the United States and its allies could at last genuinely cooperate.

The real benefits of international cooperation in big integrated space projects such as the International Space Station (ISS) were slow to materialize, even as the original promises of cheaper, faster, better all turned out to be delusions. This is despite the way those myths continue to be dogmatically and defiantly touted as motives for future space cooperation, by historians who ought to have learned better and by diplomats who want to constrain the United States from any unilateral space activities unapproved by the international community.

For the ISS, the operational robustness, mutual encouragement of each nation’s domestic commitment, reassuring transparency, and inspirational aspects of the ultimate configuration may indeed validate the project’s expense. This is along with the gamble that attaining access to new environments has almost always paid off immensely in unpredictable ways, eventually.

There are conceivable future space projects, including major “crash” projects, that could benefit from an understanding of the significant lessons of the Apollo program regarding leadership (which country is in charge and which partners take on subtasks), team staffing (civil servants in the minority), duration (long enough to accomplish something, short enough to allow the best people to sign on for the duration and then return to their original careers), reaping of outside capabilities and intuition (no “not invented here” biases), creative tension between realistic deadlines and an experience-based (not wish-based) safety culture, resonance with national culture (helping define who we are and our degree of exceptionalism), and a well-defined exit strategy or finish line.

From my personal experience, the U.S. response started with Sputnik: as an 8th-grade “space nut” and inattentive student, I was recruited for an enriched math class within weeks, began Russian classes within months, and within two years was taking Saturday calculus classes, all before Gagarin.

JAMES E. OBERG

Galveston County, Texas

www.jamesoberg.com

The author worked for 22 years at NASA’s Mission Control in Houston and has written a dozen books on aspects of world spaceflight.


As usual, John Logsdon accurately recounts the principal facts surrounding John F. Kennedy’s space legacy. As he points out, the 1957 Sputnik moment referenced by modern officials did not occur. Kennedy seriously considered alternatives to his famous 1961 goal, notably a joint mission with the Soviet Union. After Apollo, the National Aeronautics and Space Administration (NASA) “entered a four-decade identity crisis from which it has yet to emerge.”

One could quibble over whether Project Apollo “required no major technological innovations and no changes in human behavior.” Many technological developments needed to reach the Moon were under way in 1961, as Logsdon says. Some, such as the J-2 engines that powered the Saturn V rocket’s second and third stages and the integrated circuits in the Apollo Guidance Computer, needed further work. Others, such as orbital rendezvous, had never been tried before.

One change certainly occurred: Project Apollo transformed NASA. In 1961, NASA Administrator James Webb oversaw a collection of relatively independent research laboratories adept at managing small projects. To get to the Moon, he had to alter the way in which people in the agency did their work. Against significant resistance, Webb and his top aides imposed an integrating practice known as Large Scale Systems Management. The technique had been developed by the U.S. Air Force for the crash program to build the first fleet of intercontinental ballistic missiles but was new to NASA.

What similar transformational changes might accompany a new Apollo-type space mission? Most significantly, the mission will not resemble the national big science projects of which government employees have grown so fond. The current period of fiscal austerity and the dispersion of aerospace talent around the world and into the commercial sector preclude that.

Logsdon is right when he identifies the prospect of global cooperation as one of the prime reasons why Kennedy continued to support a Moon landing. That sort of cooperation could provide a rationale and a method for a new Project Apollo. So might commercial partnerships and less costly types of mission management.

Based on the experience of Apollo, one thing is sure. The NASA that completes such an undertaking will not resemble the agency that exists today.

HOWARD E. MCCURDY

American University

Washington, DC


Science for Natural Resource Management under Climate Change

Emerging applications of climate change research to natural resource management show how science provides key information for agencies to take action for vulnerable ecosystems.

Climate change poses a fundamental challenge for natural resource management: Climate patterns are shifting in space and time, but national parks, national forests, and other natural areas remain at fixed locations. Research shows that climate change has shifted the ranges of plant and animal species and biomes (major vegetation types). Warming has also altered the timing of events such as plant flowering and animal migration. Climate change has even driven some frog species to extinction. Intergovernmental Panel on Climate Change (IPCC) assessments and other research indicate that unless we substantially reduce greenhouse gas emissions from motor vehicles, power plants, and deforestation, the resulting warming may overwhelm the ability of many species to adapt. Climate change could convert extensive land areas from one biome to another, increase wildfire, transform global biogeochemical cycles, and isolate or drive more species to extinction.

Climate change affects the 2.6 million square kilometers of land owned by the people of the United States and managed by the federal government. This is nearly a third of the country’s total land area and is managed mainly by, in order of land area, the Bureau of Land Management (BLM), the Forest Service (FS), the Fish and Wildlife Service (FWS), and the National Park Service (NPS). The missions of these agencies all seek to manage ecosystems for future generations. They are stewards of places of national and often global significance, ranging from Yellowstone National Park (NPS) to the Arctic National Wildlife Refuge (FWS) to Tahoe National Forest (FS) to Grand Staircase-Escalante National Monument (BLM).

Presidential Executive Order 13514 (October 5, 2009) directed Executive Branch agencies to develop adaptation approaches. Department of the Interior Secretarial Order 3289 (September 14, 2009) established department-level climate change response programs that include the BLM, FWS, and NPS. Each of those agencies and the FS has issued a climate change strategy or plan.

Natural resource managers are attempting to move from general written strategies toward specific field actions to improve the resilience of species and ecosystems to climate change. Because the Executive and Secretarial orders have established strong enabling conditions and because existing agency policies generally support actions that promote resilience, policy does not constitute the primary obstacle for resource management agencies to take action on climate change. Rather, existing workloads, limited budgets, and lack of targeted climate change science information constrain full integration of climate change into natural resource management. Concerning the last factor, emerging experience at the NPS offers insight on how science can provide key information for agencies to manage natural resources under climate change. Certain specific science activities merit continued emphasis.

Focus on adaptation

Climate change science should ideally aim to answer resource management questions and contribute to scientific knowledge. Answering resource management questions will directly support the stewardship of land and water. Contributing to scientific knowledge will improve the rigor of the information. In the case of climate change, questions from resource managers and gaps in scientific knowledge point to the need to analyze the vulnerability of species and ecosystems to climate change and to develop and implement adaptation measures.

Adaptation, as defined by the IPCC, is an adjustment in natural or human systems in response to climate change in order to moderate harm or exploit new conditions. The IPCC identifies three types of adaptation: anticipatory (proactive adjustment before climate change occurs), autonomous (spontaneous, unplanned response to climate change), and planned (deliberate adjustment to observed or projected climate change). Adaptation occurs through diverse mechanisms. Natural selection of plants and animals with resilient characteristics will, as individuals pass their genes to offspring, drive the evolution of species more adapted to changed climate conditions. Agencies and individuals can adapt management practices at specific sites to help individual species undergo the first type of adaptation. Also, agencies can adapt management plans across broad landscapes.

Numerous general reports on adaptation exist. For U.S. natural resource management agencies, the most relevant is the U.S. Global Change Research Program 2008 report Preliminary Review of Adaptation Options for Climate-Sensitive Ecosystems and Resources, which reviewed the experience and policies of each agency. Although this and other reports describe numerous case studies of work in progress, it seems that resource management agencies have only implemented a small number of adaptation measures that were developed using climate change science information and specifically targeted to respond to climate change. Also, only a very small number of the official management plans for operational field units explicitly examine climate change and adopt climate change adaptation measures.

NPS is advancing through a process of resource management with modifications that take account of climate change. Science supports the entire process. Although the NPS manages natural, cultural, and historical resources, infrastructure, and visitor experiences and seeks to develop adaptation measures for each, this article focuses on natural resources. The steps of the resource management process under climate change move end to end from science to specific adaptation actions.

Reduce emissions and naturally store carbon. Eliminating the cause of a problem is the most effective way to attack it. Reducing the greenhouse gas emissions that cause climate change will reduce the need for adaptation. Human activities have raised carbon dioxide (CO2), the principal greenhouse gas, to its highest level in the atmosphere in 800,000 years. The accumulation of greenhouse gases has raised global temperatures to their warmest levels in 1,300 to 1,700 years. IPCC analyses confirm that orbital cycles and other natural factors account for only 7% of observed warming. Motor vehicles, power plants, deforestation, and other human sources emit twice the amount of greenhouse gases that vegetation, soils, and the oceans can naturally absorb. That is the fundamental imbalance that causes climate change.

The world can avoid the worst impacts of climate change by improving energy efficiency, expanding public transit, installing renewable energy systems, conserving forests, and using other currently available measures to reduce greenhouse gas emissions. The important science component of any emissions reduction effort is the use of the IPCC National Greenhouse Gas Inventory Guidelines to quantify emissions. These guidelines provide the scientific methods that parties to the United Nations Framework Convention on Climate Change use to report their emissions.
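
To illustrate the general shape of such an inventory calculation (a minimal sketch only, not the Guidelines themselves), the basic approach multiplies activity data by an emission factor for each source and sums the results. The sources, quantities, and factors below are hypothetical placeholders, not official NPS or IPCC values.

```python
# Minimal sketch of a Tier 1-style inventory calculation: emissions = activity data x emission factor.
# The activity data and emission factors below are hypothetical placeholders, not official values.

# Hypothetical annual activity data for a single park unit
activity_data = {
    "visitor_vehicle_gasoline_liters": 1_200_000,
    "facility_electricity_kwh": 850_000,
}

# Hypothetical emission factors in kg CO2-equivalent per unit of activity
emission_factors = {
    "visitor_vehicle_gasoline_liters": 2.3,   # kg CO2e per liter (illustrative)
    "facility_electricity_kwh": 0.5,          # kg CO2e per kWh (illustrative)
}

def estimate_emissions(activity, factors):
    """Return total emissions in metric tons CO2e and a per-source breakdown."""
    by_source = {k: activity[k] * factors[k] / 1000.0 for k in activity}  # kg -> metric tons
    return sum(by_source.values()), by_source

total, breakdown = estimate_emissions(activity_data, emission_factors)
print(f"Estimated total: {total:,.0f} t CO2e")
for source, tons in breakdown.items():
    print(f"  {source}: {tons:,.0f} t CO2e")
```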

The NPS Climate Friendly Parks program is reducing greenhouse gas emissions from park operations. As a first step, NPS staff have been conducting emissions inventories using methods based on the IPCC Guidelines. These show that visitors’ cars account for two-thirds of estimated emissions within parks. Many national parks and other federal areas have increased shuttle bus services and installed renewable energy systems.

Vegetation naturally reduces global warming by removing CO2 from the atmosphere and storing it in biomass. Forests in Redwood National Park and Sequoia National Park contain carbon at some of the highest densities in the world. To assess the carbon balance (the difference between storage and emissions) of fire management and other resource management actions, managers need information on the spatial distribution of carbon across the landscape over time. Scientific research that integrates field measurements of trees and satellite remote sensing data can map the spatial distribution of vegetation carbon. The United States, however, currently does not have a time series of spatial data that shows the distribution of vegetation carbon over time across the country.

The FS Forest Inventory and Analysis program has estimated forest carbon in individual plots at 5- to 10-year intervals since the 1980s. An experimental FS effort has also combined field inventory and MODIS remote sensing data to map U.S. forest carbon in 2001. The U.S. Geological Survey (USGS) land cover maps of the United States for 2001 and 2006 potentially provide the basis of a vegetation carbon change estimate. The lack of spatial data on vegetation carbon over time for the entire United States contrasts with Australia, where the National Carbon Accounting System has produced 17 nationwide Landsat mosaics and analyzed data from hundreds of field plots to generate a time series of vegetation carbon across the country since 1972. Such a system in the United States would enable land managers to estimate the carbon implications of resource actions.
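
As a rough sketch of why repeated, spatially explicit carbon maps matter to managers, the hypothetical example below differences two co-registered carbon-density grids to estimate a landscape-level change in vegetation carbon stock. The grid values, cell size, and dates are invented for illustration, not drawn from any of the programs described above.

```python
# Minimal sketch: estimate vegetation carbon change between two co-registered
# carbon-density grids (t C per hectare). The arrays here are synthetic stand-ins
# for maps that would come from field inventory plus remote sensing.
import numpy as np

cell_area_ha = 25.0  # hypothetical grid cell size (hectares)

# Synthetic carbon-density maps for two dates (t C / ha); NaN marks non-vegetated cells
carbon_t0 = np.array([[120.0, 95.0], [60.0, np.nan]])
carbon_t1 = np.array([[118.0, 99.0], [45.0, np.nan]])

change_per_ha = carbon_t1 - carbon_t0                       # t C / ha in each cell
net_change_tons = np.nansum(change_per_ha) * cell_area_ha   # landscape total, t C

print(f"Net change in vegetation carbon stock: {net_change_tons:+.0f} t C")
# A negative total indicates a net carbon loss (for example, from fire or harvest) over the interval.
```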

Identify management questions. The importance of national parks, national forests, and other natural areas should prompt scientists to conduct research that answers questions on the effective management of those areas. Each NPS unit has identified management questions that require scientific input, either implicitly in a General Management Plan (the official master plan for a park) or explicitly in Resource Stewardship Strategies, Wilderness Plans, or other management plans for specific resources.

Individual NPS units often form direct relationships with individual scientists and discuss management questions with them. For example, Saguaro National Park works with University of Arizona researchers on the management of buffel grass, an invasive species that may be favored by climate change. In addition, a consortium of government agencies operates Cooperative Ecosystem Studies Units, a network of universities directly connected to resource managers. The Department of the Interior, through USGS, its main science agency, is establishing eight Climate Science Centers to connect agencies with universities specifically to conduct climate change research.

Academic research can successfully connect theoretical science and resource management applications. For example, Anthony Westerling of the University of California, Merced, and colleagues (Science, 2006) conducted statistical analysis of climate and fire that documented an increase in fire across western federal lands since 1970, coincident with warming. This work advanced scientific knowledge and contributed to resource managers’ understanding of wildfire. Peer review for scientific publication improves the rigor of information that is also used for resource management.

Detect changes and attribute causes. Detection of changes and attribution of causes provide basic information on whether or not a species or ecosystem is changing and whether or not climate change is the cause. Detection is the measurement of historical changes that statistically are significantly different from natural variability. Attribution is the determination of the relative importance of different factors in causing detected changes. Field measurements from national parks have contributed to the detection and attribution to climate change of warmer winters, decreased snow, and earlier spring snowmelt in western U.S. parks, upslope shifts of vegetation and small mammal species in Yosemite National Park, and northward shifts of vegetation in Alaska and winter bird ranges in the lower 48 states.
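
The logic of those two steps can be shown with a toy calculation: test whether a trend stands out from year-to-year variability (detection), then ask which driver best explains it (attribution). The data below are synthetic; real analyses use long observational records and formal attribution methods.

    # Toy detection-and-attribution calculation on synthetic data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    years = np.arange(1970, 2011)

    # Detection: is the trend in spring snowmelt date larger than natural variability?
    snowmelt_day = 130 - 0.3 * (years - 1970) + rng.normal(0, 3, years.size)
    slope, intercept, r, p_value, se = stats.linregress(years, snowmelt_day)
    print(f"trend = {slope:.2f} days/yr, p = {p_value:.4f}")   # small p -> change detected

    # Attribution: which factor explains more of the detected change?
    temperature = 0.03 * (years - 1970) + rng.normal(0, 0.2, years.size)  # climate driver
    visitation = rng.normal(1.0, 0.1, years.size)                         # nonclimate driver
    X = np.column_stack([temperature, visitation, np.ones(years.size)])
    coef, *_ = np.linalg.lstsq(X, snowmelt_day, rcond=None)
    print(f"temperature coefficient = {coef[0]:.1f}, visitation coefficient = {coef[1]:.1f}")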

Attribution can guide resource management toward the predominant factor that is causing change. Whereas resource managers have developed many measures that address invasive species, overharvesting, urbanization, wildfire, and other nonclimate factors, ecological changes driven by climate change might require the development of new adaptation measures.

Existing workloads, limited budgets, and lack of targeted climate change science information constrain full integration of climate change into natural resource management.

Analyze vulnerabilities. Vulnerability to climate change is the degree to which a system is susceptible to and unable to cope with adverse effects. Design features of robust vulnerability analyses include:

  • Examination of all three components of vulnerability: exposure, sensitivity, and adaptive capacity. Exposure is the extent of climate change experienced by a species or ecosystem: for example, degrees of annual temperature change per century. Sensitivity is the change in a species or ecological variable for each increment of change in climate: for example, increased tree mortality of 5% per degree of average temperature increase. Adaptive capacity is the ability of a species or ecosystem to adjust: for example, increased germination to compensate for the increased tree mortality. (A simplified calculation combining these three components appears in the sketch after this list.)
  • Detection and attribution of historical changes.
  • Analyses of observed and projected data. Because of time lags between the emission of greenhouse gases, the expression of changes in climate, and ecological responses, vulnerability is a function of historical and future climate changes.
  • Quantification of uncertainties. Computer model errors, future emissions scenario assumptions, field measurement errors, and statistical variation all combine to create a range or probability distribution of possible values for any calculation.
  • Identification of vulnerable areas and potential refugia. Spatial analyses that map patterns of vulnerability will identify the locations of the most vulnerable areas and potential refugia. This provides the scientific data needed to prioritize areas for adaptation.
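
As a purely illustrative sketch of how the three components might be combined into a relative score for each landscape unit, consider the following; the weighting scheme and every value are hypothetical, whereas published analyses rely on IPCC uncertainty criteria and far richer spatial data.

    # Toy vulnerability score: exposure and sensitivity raise it, adaptive capacity lowers it.
    def vulnerability(exposure, sensitivity, adaptive_capacity):
        """All inputs assumed normalized to a 0-1 scale."""
        return exposure * sensitivity * (1.0 - adaptive_capacity)

    areas = {
        "valley grassland":  (0.8, 0.6, 0.2),   # high exposure, little capacity to adjust
        "subalpine forest":  (0.5, 0.7, 0.4),
        "riparian corridor": (0.3, 0.4, 0.7),   # a potential refugium
    }
    for name, (e, s, a) in sorted(areas.items(), key=lambda kv: -vulnerability(*kv[1])):
        print(f"{name:18s} vulnerability = {vulnerability(e, s, a):.2f}")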

In a published analysis of the vulnerability of ecosystems around the world to vegetation shifts because of climate change (see figure), colleagues and I employed these design features. We conducted a meta-analysis of published research literature for cases of biome shifts detected in the field and attributed to climate change. We examined exposure through spatial analyses of 20th century observed and 21st century projected climate. We analyzed sensitivity and adaptive capacity through spatial analyses of observed and projected vegetation biomes. Using IPCC criteria, we quantified uncertainties and classified areas into five vulnerability classes. We found 15 historical cases of biome shifts detected and attributed to climate change. Spatial analyses indicated that one-tenth to one-half of global land is highly vulnerable to further vegetation shifts.

In another vulnerability analysis, Kenneth Cole of the USGS and colleagues analyzed observed and projected climate and vegetation data on the Joshua tree in the U.S. Southwest. They identified potential refugia for Joshua trees but found high vulnerability in Joshua Tree National Park. In addition, NPS and its partners are conducting other vulnerability analyses across the country for species such as pika and bristlecone pine and ecosystems such as salt marshes.

The use of computer models and simulations in vulnerability analysis requires care. In the design stage, the accurate calibration of models, especially climate downscaling models, depends on observed field data. After the generation of results, the validation of the accuracy of models requires an independent set of field data.
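
The discipline described here amounts to fitting on one set of observations and testing on another that played no part in the fit. A minimal sketch, with synthetic data standing in for a downscaling relationship:

    # Calibrate on one subset of field observations, validate on an independent subset.
    import numpy as np

    rng = np.random.default_rng(1)
    coarse_temp = rng.uniform(0, 25, 60)                              # coarse model output
    station_temp = 1.1 * coarse_temp - 0.5 + rng.normal(0, 1.0, 60)   # field observations

    calib, valid = slice(0, 40), slice(40, 60)                        # independent subsets
    a, b = np.polyfit(coarse_temp[calib], station_temp[calib], 1)     # calibration

    predicted = a * coarse_temp[valid] + b                            # validation
    rmse = np.sqrt(np.mean((predicted - station_temp[valid]) ** 2))
    print(f"validation RMSE = {rmse:.2f} degrees")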

Scenario planning. Scenario planning is a method to consider potential future effects of uncertain driving forces that affect a system and to determine which decisions may offer a better chance of meeting future goals. Herman Kahn of RAND Corporation developed the method in the 1960s for military planning. Pierre Wack of Royal Dutch Shell, Peter Schwartz of Global Business Network, and others refined the method for oil-company planning. The Global Business Network is assisting NPS in applying the method to resource management planning for climate change.

Scenario planning starts with the organization of interdisciplinary groups of resource managers and scientists who work in a specific landscape. Using data from vulnerability analyses and other scientific sources, the group examines pairs of climate variables that are important for resource management and exhibit large uncertainties. Each pair of climate variables defines four possible future management scenarios that are plausible, divergent, and challenging. Groups formulate qualitative descriptions so that each management scenario becomes a story about the future that can help with decisions today. Ideally, groups develop adaptation measures that can respond to each situation, generating a set of options available for managers as conditions unfold.
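
The mechanics of building four scenarios from one pair of uncertain variables are simple enough to sketch; the variables and their end states below are hypothetical examples, not NPS planning products.

    # Build four divergent scenarios from two uncertain climate variables.
    from itertools import product

    drivers = {
        "warm-season precipitation": ("wetter", "drier"),
        "fire season length": ("similar to today", "much longer"),
    }
    for i, combo in enumerate(product(*drivers.values()), start=1):
        narrative = "; ".join(f"{name}: {state}" for name, state in zip(drivers, combo))
        print(f"Scenario {i}: {narrative}")
    # Managers then write a story for each scenario and look for adaptation
    # options that perform reasonably well across all four.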

NPS has conducted scenario planning training workshops in landscapes covering most of the 50 states. Assateague Island National Seashore in Maryland and Sequoia National Park in California are developing adaptation options for key ecosystems based on scenario planning results.

Develop adaptation measures. Applied scientific research provides important information to guide the development of new adaptation measures. Some adaptation measures in development include:

  • Propagation of coral for reef restoration in the Florida Keys. The Coral Restoration Foundation, University of Miami, and partners have established staghorn coral nurseries in Biscayne National Park and other parts of the Florida Keys National Marine Sanctuary. The nurseries propagate coral that has survived recent ocean warming and bleaching episodes. Experimental planting of resilient corals at two dozen sites seeks to restore bleached areas and increase reef resilience.
  • Fire management in the southern Sierra Nevada. The Southern Sierra Conservation Cooperative, which involves the NPS, FS, USGS, University of California, Davis, and others, is developing adaptation measures for fire management. Vulnerability analyses of observed and projected climate, fire, and vegetation data are identifying areas vulnerable to future wildfire regime changes and potential refugia. Scenario planning is providing management response options. The group will provide scientific information that NPS and FS fire managers will use to modify official fire management plans and to implement measures such as wildland fire and prescribed burning based partly on climate change information.
  • Resource management on the Olympic Peninsula. Olympic National Forest, Olympic National Park, and the University of Washington conducted vulnerability analyses and scientist-manager discussions. As an adaptation measure to maintain fish habitat, road culverts are being enlarged or replaced with bridges to reduce erosion and prevent road failure from possible increases in storms. Possible adaptation measures to manage forest ecosystems include creation of forest gaps for the generally shade-intolerant species projected to increase under climate change and reforestation with seed selected for resistance to bark beetles, also projected to increase.
  • General Management Plan for Assateague Island National Seashore. NPS, in consultation with the FWS and the state of Maryland, has drafted a General Management Plan with one alternative designed specifically for climate change. IPCC sea-level rise estimates and a scenario planning workshop provided information for the plan. Proposed adaptation measures include flexible placement and light construction of infrastructure threatened by sea-level rise and adjustment of visitor zones in case of increases in storms.

Prioritize locations. To identify geographic priorities under climate change, managers can broadly consider three options: areas of high, medium, or low vulnerability. For acquisition of new areas, it may be prudent to prioritize areas of low vulnerability, known as refugia, and avoid areas of higher vulnerability. Conversely, for management of existing areas, it may be necessary to prioritize places of higher vulnerability because those locations may require more intensive management. Areas of unique ecological or cultural value may continue to merit high priority.

Implement actions. Resource management agencies have reached this step in only a few cases. For example, Blackwater National Wildlife Refuge in Maryland is using local sediment to raise and restore wetlands inundated by rising sea level. Alligator River National Wildlife Refuge in North Carolina is building up oyster reefs and planting flood-tolerant trees in coastal areas vulnerable to sea-level rise. Examples of other possible site-specific adaptation measures include wildland fire and prescribed burning to avert catastrophic wildfires, natural regeneration and enrichment planting of adapted plant species, and reforestation of native riparian tree species along stream banks to provide shade and cool water for fish. At the landscape scale, agencies can adjust large area management plans, establish corridors to facilitate species dispersal and migration, and plan land acquisitions in potential climate change refugia.

The Northwest Forest Plan demonstrates how cooperative efforts can manage for habitats across a landscape rather than managing for individual species at specific sites. In the context of climate change, managing for habitats rather than individual species would involve the identification and conservation of functional groups, such as perennial grasses in a grassland ecosystem, or habitat types, such as a subalpine forest. Assuring the healthy functioning of an ecosystem could conserve more species more effectively than dedicating scarce resources to a few individual endangered species.

Recognizing the importance of conservation planning based on ecological landscapes rather than administrative boundaries, the Department of the Interior is now establishing 21 Landscape Conservation Cooperatives across the country to bring resource management agencies together to develop adaptation strategies.

Because national forests surround many national parks, the FS is one of the most important partners for the NPS in many landscapes. The two agencies collaborate closely on six landscape-scale science and adaptation projects in the Cascade Range, Olympic Mountains, Rocky Mountains, and the Sierra Nevada.

Monitor effects. Monitoring permanent ecological plots can provide essential data to track the effectiveness of adaptation measures. The NPS inventory and monitoring program tracks key physical and ecological characteristics of parks, such as glacier extent and animal populations. NPS did not establish its system to trace effects of individual management actions at specific locations. Also, most sites do not have the 30 years of data needed for a statistically significant sample to examine temporal trends. Agencies will need to address these types of issues in existing monitoring programs, or in some cases establish new monitoring programs, to track whether or not adaptation measures increase the resilience of species and ecosystems.

Adjust adaptation measures. Adaptive management uses the lessons of the past to redesign management for the future. As NPS pursues end-to-end science and adaptation projects in Sequoia National Park, Assateague Island National Seashore, and other areas, changes in climate, ecological response, and future emissions will necessitate changes in adaptation measures.

Policy implications

General reports on adaptation are informative but not adequate for the management of specific natural areas. The most effective approach for natural resource management agencies is to work through a complete process of science and adaptation in specific landscapes. Policy initiatives on climate change can facilitate this work by supporting particular types of applied science. These include detection and attribution of historical change, analysis of vulnerability of species and ecosystems, and quantification of vegetation carbon over time across the United States using field and remote-sensing data. Detection and attribution of historical change guide resource management toward the predominant factor that is causing change. Analyses of vulnerability provide the scientific data needed to prioritize areas for adaptation. Spatial data on vegetation carbon over time could enable land managers to estimate carbon effects of resource management actions.

The Executive and Secretarial orders that established greenhouse gas emission reduction and climate change adaptation as priorities have set strong enabling conditions for future action. Other policies could further facilitate action. For example, integration of climate change information into Resource Management Plans (BLM), National Forest Plans (FS), Comprehensive Conservation Plans (FWS), and General Management Plans (NPS) would adapt these official management plans for operational field units to future change. Moreover, because separate agencies in the Departments of Agriculture, Commerce, Defense, and Interior manage substantial natural areas, further collaboration within and outside of the Department of the Interior Landscape Conservation Cooperatives would facilitate landscape-scale adaptation. Also, the National Climate Assessments of the U.S. Global Change Research Program are useful analyses of climate change science whose continuation is essential for adaptation.

Science and policy are only two of many factors determining resource management decisions. Resource managers combine scientific information and national policies with other considerations: financial costs, human resource requirements, community needs, ethics, and values. Natural resource management under climate change will balance exigencies of the present and the needs of future generations. Science must provide robust and objective information. Ultimately, people will use climate change science to help make decisions on the survival of species and ecosystems.


Patrick Gonzalez is the climate change scientist for the National Park Service and a lead author on two reports of the Intergovernmental Panel on Climate Change.

From the Hill – Summer 2011

Budget bill cuts R&D spending

R&D funding escaped major cuts in the bill signed by President Obama on April 15 that funds the federal government for the remainder of fiscal year (FY) 2011. However, the budgets of most key R&D funding agencies suffered reductions.

In a budget that cuts spending by $38.5 billion below FY 2010 levels, R&D investment was cut by 3.5% or $5.2 billion to $144.4 billion. However, $4.7 billion of that cut comes from defense R&D programs, with most of the cuts coming from demonstration, testing, and evaluation programs at the Department of Defense. Nondefense R&D received cuts of just 0.9%.

R&D spending will increase at the National Aeronautics and Space Administration (NASA) (up $605 million to $9.9 billion), but other agencies lost ground, as follows:

  • National Institutes of Health, down $260 million to $30.7 billion
  • National Science Foundation, down $67 million to $6.8 billion
  • Department of Energy’s (DOE’s) energy programs, down $357 million to $2.1 billion
  • DOE’s Office of Science, down $20 million to $4.9 billion
  • Department of Agriculture, down $501 million to $2.1 billion
  • Department of Homeland Security, down $175 million to $712 million

In a budget that favored basic over applied research, DOE’s Energy Efficiency and Renewable Energy (EERE) program, an applied research effort, was cut by 18.4% or $408 million to $1.8 billion. EERE had been slated for a 35% or $775 million cut in the original House-passed budget bill. DOE’s Advanced Research Projects Agency-Energy, largely an applied research program with some development spending, received $180 million in the funding bill, less than the $300 million requested by the president. These funding levels do not include a 0.2% across-the-board cut for all nondefense agencies.

The budget bill includes some controversial policy riders. One amendment inserted by Rep. Frank Wolf (R-VA), a vocal critic of Chinese policy and especially its lack of progress on human rights, prohibits the White House Office of Science and Technology Policy (OSTP) and NASA from using federal funds to “develop, design, plan, promulgate, implement, or execute a bilateral policy, program, order, or contract of any kind to participate, collaborate, or coordinate bilaterally in any way with China or any Chinese-owned company.” Wolf is concerned that the Chinese space program is being led by the People’s Liberation Army (PLA), stating that “there is no reason to believe that the PLA’s space program will be any more benign than the PLA’s recent military posture.”

At a May 4 hearing of Wolf’s appropriations panel, however, OSTP Director John Holdren stated that the prohibition does not apply to the president’s “constitutional authority to conduct negotiations.” Holdren said that applications of the provision to administration policy would be considered on a case-by-case basis, a statement that did not satisfy Rep. John Culberson (R-TX), who warned that OSTP and NASA funding might be jeopardized by efforts to collaborate or coordinate in any way with China.

Another provision included in the bill removed gray wolves in Idaho, Montana, and parts of Oregon, Utah, and Washington from protection under the Endangered Species Act. The provision was championed by western members Rep. Mike Simpson (R-ID) and Sen. Jon Tester (D-MT) and opposed by many environmental groups because it circumvented the usual delisting process. On May 4, the U.S. Fish and Wildlife Service published a rule to comply with the directive.

The budget bill did not include a provision, pushed strongly by Republicans, to strip the Environmental Protection Agency (EPA) of its authority to regulate greenhouse gas emissions.

Although the budget fights over FY 2011 have just ended, they have already begun for FY 2012. A House-passed budget resolution approved on April 15 includes cuts in discretionary spending of about $100 billion below the president’s budget request.

In an April 13 speech, the president discussed his long-term budget vision. Although he showed signs of flexibility in his desire to “keep annual domestic spending low by building on the savings that both parties agreed to,” he said he was committed to “not sacrifice the core investments that we need to grow and create jobs.” He reaffirmed his State of the Union address pledge to invest in medical research, clean energy technology, new roads and airports, broadband access, education, and job training to “do what we need to do to compete.”

Court rules in favor of funding for human embryonic stem cell research

The U.S. Appeals Court for the D.C. Circuit on April 29 vacated a preliminary injunction imposed by a district court judge last summer that blocked federal funding of human embryonic stem cell research, temporarily causing a shutdown of National Institutes of Health (NIH) stem cell projects.

The injunction was issued in August 2010 by U.S. District Judge Royce C. Lamberth in response to a lawsuit involving two adult stem cell researchers who argued that the funding of embryonic stem cell research would cause them “irreparable injury” by increasing competition and therefore potentially taking funds away from their work. Lamberth agreed and deemed human embryonic stem cell research illegal under the Dickey-Wicker Amendment, an annual feature in NIH’s appropriations bill that prohibits the use of federal funds for research that destroys an embryo. NIH was forced to shut down its intramural human embryonic stem cell experiments and halt any grants or renewals that had not yet been paid out.

Judges Thomas Griffith and Douglas Ginsburg of the Appeals Court disagreed with Lamberth’s contention that the harm to stem cell researchers caused by the injunction was “speculative.” Instead, they said the harm would be “certain and substantial. … Their investments in project planning would be a loss, their expenditures for equipment a waste, and their staffs out of a job.” They also concluded that the Dickey-Wicker Amendment “is ambiguous and the NIH seems reasonably to have concluded that although Dickey-Wicker bars funding for the destructive act of deriving an ESC [embryonic stem cell] from an embryo, it does not prohibit funding a research project in which an ESC will be used.”

Although the Appeals Court has ruled on the injunction, Lamberth still must rule on the underlying lawsuit in the case, which means that for now legal questions will continue to hamper the stem cell field.

FDA’s medical device approval process scrutinized at Senate hearing

The Food and Drug Administration (FDA) has made “limited” progress in implementing recommendations made in a 2009 Government Accountability Office (GAO) report on its procedures regarding medical devices, according to testimony at an April 13 hearing of the Senate Special Committee on Aging.

Marcia Crosse, director of the GAO’s Health Care Team, outlined the preliminary findings of an ongoing GAO investigation into the FDA’s management of medical device review, postmarket monitoring, and recall processes.

“Concerns persist about the effectiveness of the 510(k) process in general, including its ability to provide adequate assurance that devices are safe and effective,” Crosse said. “Gaps in FDA’s postmarket surveillance show that unsafe and ineffective devices may continue to be used, despite being recalled.”

Of the 27 types of devices classified as high risk in the 2009 GAO report, FDA has issued new final rules on just one, Crosse said.

William Maisel, the chief scientist of the FDA Center for Devices and Radiological Health, acknowledged at the hearing that the agency is “strained” by limited funds but is working to improve adverse-event reporting. He expects the FDA to issue rules on the remaining 26 device categories by the end of 2012. The FDA has already issued strategic goals to improve its medical device approval process. It has also commissioned the Institute of Medicine to study the issue, and the results of the study are expected this summer.

Sen. Herb Kohl (D-WI), chairman of the Senate committee, said “I am encouraged by the numerous initiatives that FDA is implementing for more effective medical device approval and postmarket surveillance. Nevertheless, I’m concerned that the agency’s oversight of medical products still remains on the GAO’s ‘high risk’ list… and that is unacceptable.”

The hearing highlighted the story of Katie Korgaokar, a Denver resident who received a DePuy ASR hip implant to treat a congenital condition called Perthes disease. In 2010, the DePuy hip was recalled, and Korgaokar endured a second hip-replacement surgery in early 2011. Korgaokar was one of 96,000 patients affected by the DePuy hip recall.

“FDA has had over 20 years to tackle these high-risk devices,” Kohl said. “As we have seen with the Johnson & Johnson hip implant today, it’s high time to protect patient safety and correctly classify these devices.”

Kohl also suggested that FDA develop a “more robust postmarket surveillance program,” signaling his interest in addressing this concern in the Medical Device User Fee and Modernization Act reauthorization next year.

A recent study led by Diana Zuckerman and published in the Archives of Internal Medicine found that “from 2005 through 2009, the 113 highest-risk device recalls involved 112.6 million recalled products.” Zuckerman testified at the hearing that “In the first six months of 2010, the FDA recalled more than 437 million additional products because of high risks, including death.”

The medical device industry is concerned that management problems at the FDA have slowed medical device innovation. In his testimony, David Nexon, senior executive vice president of the Advanced Medical Technology Association, said that there are “inefficiencies at FDA that delay patient access to new treatments and cures and erode U.S. global competitiveness in the development of medical technology.”

Hydraulic fracturing debated in House

The House Science, Space and Technology Committee held a May 5 hearing to examine whether additional studies need to be conducted to determine the safety of hydraulic fracturing (also called fracking), a method used to extract natural gas from underground.

The hearing took place in the wake of a natural-gas well eruption and leak, a report published in the Proceedings of the National Academy of Sciences stating that hydraulic fracturing can contaminate drinking water with methane, and a report from House Democrats asserting that chemicals used in hydraulic fracturing could contaminate drinking water.

Chairman Ralph Hall (R-TX), who is opposed to additional government studies, called an Environmental Protection Agency (EPA) study that is being drafted “yet another example of this administration’s desire to stop domestic energy development through regulation.”

Paul Anastas, assistant administrator of EPA’s Office of Research and Development, defended the EPA’s hydraulic fracturing study: “The study is designed to examine the conditions that may be associated with the potential contamination of drinking water resources and to identify the factors that may lead to human exposure and risks. The scope of the proposed research includes the full lifespan of water in hydraulic fracturing, from acquisition of the water, through the mixing of chemicals and actual fracturing, to the postfracturing stage, including the management of flow back and produced water and its ultimate treatment and disposal.” Anastas tried to assure the committee that the EPA would not presuppose any results of the study.

Most witnesses on the first panel said they were not concerned that hydraulic fracturing could be a danger, arguing that fracking has never resulted in unsafe drinking water. They said that regulators already had enough studies to analyze and that additional studies were unnecessary. In response to a question from Rep. Ben Ray Lujan (D-NM), every witness on the first panel, except for Robert Summers of the Maryland Department of the Environment, said that states should create their own standards and that a set of nationwide baseline hydraulic fracturing safety standards was unnecessary.

Although Democrats, for the most part, acknowledged the need for natural gas, they also thought that studies needed to be conducted to prove that natural gas extraction using hydraulic fracturing is safe. Ranking Member Eddie Bernice Johnson (D-TX) reminded members that although hydraulic fracturing has been used in the past, the technology and the drilling processes have evolved rapidly in the past few years. Although not disagreeing with Johnson, Harold Fitch, the director of the Office of Geological Survey for the Michigan Department of Environmental Quality, stated that when aquifers have been contaminated, the cause was well construction, not the practice of hydraulic fracturing itself.

Patent reform bill moves ahead

On March 30, three weeks after the Senate passed its version of a patent reform bill by a 95 to 5 vote, House Judiciary Chairman Lamar Smith (R-TX) introduced the House version of the bill, H.R. 1249, which was similar to the Senate bill. On April 14, after the bill was amended to look even more like the Senate version, it was approved by the House Judiciary Committee 32 to 3.

Like the Senate bill, the House bill would create a first-to-file system that would align the U.S. patent system with that of other countries. Inventors would still have a one-year period from the time they publish information to file a patent without their invention being considered prior art.

Moreover, the bill would allow the U.S. Patent and Trademark Office (PTO) to set its fees and keep the income. Previously, Congress would allocate excess PTO income to other programs, rather than reinvest it in the PTO. This change in funding is anticipated to expedite patent reviews and shrink the patent backlog, now at about 700,000 applications. Both bills would create three or more satellite patent offices.

The House and Senate bills allow patent holders to correct misfiled patents that either inaccurately portray or omit information. They also give third parties the opportunity to challenge the validity of a patent once it is awarded. The rationale for this review is to weed out weaker patents or ones covering material that should have been considered prior art.

Science and technology in brief

  • Sens. Sheldon Whitehouse (D-RI) and Olympia Snowe (R-ME) introduced legislation (S. 973) to establish the National Endowment for the Oceans. The bill would establish a permanent funding source for ocean research and restoration funded primarily by interest accrued from the Oil Spill Liability Trust Fund and revenues from off-shore energy development.
  • On April 7, Sen. Al Franken (D-MN) introduced legislation to improve science, technology, engineering, and mathematics (STEM) education training for teachers. The STEM Master Teacher Corps Act (S. 758) would implement a recommendation outlined in a report on STEM education by the President’s Council of Advisors on Science and Technology. It would create a mentoring program and a financial reward system for high-performing teachers and schools.
  • The Senate Energy and Natural Resources Committee held a May 12 hearing on legislation to encourage the development of carbon capture and storage (CCS) technology. Committee Chairman Jeff Bingaman (D-NM) and Sen. John Barrasso (R-WY) introduced The Carbon Dioxide Capture Technology Act (S. 757), which would create incentives for development of new CCS technologies through a prize system. The senators also introduced S. 699, which would create a long-term liability program that would provide incentives for large-scale, early-mover deployment of integrated geologic CCS.
  • On March 30, Sens. Charles Schumer (D-NY), Susan Collins (R-ME), and Joseph Lieberman (I-CT) introduced legislation (S. 679) to eliminate the need for Senate confirmation of almost 200 executive branch positions, including all members of the National Science Board, the four associate directors of OSTP, and the chief scientist of the National Oceanic and Atmospheric Administration. The bill to streamline the confirmation process is supported by Majority Leader Harry Reid (D-NV) and Minority Leader Mitch McConnell (R-KY).
  • On May 5, OSTP Director John Holdren announced that the office has requested that every covered agency provide its draft scientific integrity policy within 90 days. At the time of the announcement, 30 executive departments or agencies had submitted progress reports to OSTP on their scientific integrity policies, and six of them had submitted draft policies.
  • The Office of Government Ethics (OGE) issued a proposed rule that would allow federal employees to serve or participate in an official capacity in nonprofit organizations, including professional scientific societies. The OGE proposal reflects a recommendation made in the OSTP Scientific Integrity Memorandum.
  • EPA announced that it would delay the effective date of its new standards for major source air boilers and commercial incinerators to allow the agency to seek additional public comment. EPA will accept additional data and information on these standards until July 15, 2011.

“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science (www.aaas.org/spp) in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

Investing in Perennial Crops to Sustainably Feed the World

The dramatic increases in yields of annual crops are approaching their limits. But similar advances are possible in hundreds of underused perennial species.

The world’s food supply is insecure and inadequate and growing more so. But that gloomy prospect could be altered dramatically if the world adopted a novel but simple strategy: supplement the annual food crops that will soon be unequal to the task of sustaining us with improved perennial plants such as food-bearing and bioenergy-producing trees, shrubs, forbs, and grasses. Making this shift will not be easy and will require significant additional research. But we believe it is a practical and relatively inexpensive approach that will not only increase food and energy security, but will also improve soil quality, protect water resources, reduce floods, harvest carbon dioxide (CO2), and provide jobs for millions of people.

During the past half-century, the world’s population has doubled, increasing by more than three billion people. Fortunately, grain production has more than doubled during that time. Availability of nutritious food at the lowest cost in history underlies whatever peace, prosperity, and progress human society has enjoyed.

In the next half-century, however, the planet is projected to add another three billion people. We will once again need to increase food production by an equivalent amount just to stay in place. But staying even will not be enough. Today there are millions of people who don’t have enough to eat and millions more who yearn to move away from mainly plant-based diets. The recent push to produce crops for energy use complicates the situation.

Increases in the yields of the annual crops on which we now rely are certainly possible and should be pursued vigorously. However, these efforts are unlikely to provide a complete and sustainable solution to the growing scarcity of food and energy. Much of the land on which we depend is losing productivity because of deforestation, development, overgrazing, and poor agricultural practices. Erosion, pollution, and the expansion of deserts are among the consequences.

Water tables are falling as aquifers are pumped at rates exceeding their ability to recharge. Even the water in deep fossil aquifers, which was laid down millions of years ago and cannot be recharged, is being depleted. Nearly 90% of all fresh water used by humans goes for irrigation. According to the United Nations Food and Agriculture Organization (FAO), just 16% of the world’s cropland is irrigated, but this 16% produces 36% of the global harvest.

The stripping of forest and grassland and the cultivation of sloping land have led to rapid runoff of rainwater that normally would help recharge near-surface aquifers. In many regions, inadequate drainage has increased the salt content of the soil, leading to a loss of productivity and sometimes abandonment of agriculture altogether. The once-fertile crescent of the Middle East is a striking example, and similar salinization is accelerating in the United States, China, and elsewhere. It is certainly possible and imperative to increase the efficiency of agricultural water use, but it is not clear whether this will fully compensate for water losses or increase yields of annual crops enough.

Dust bowls and desertification are serious in many parts of the world. Depletion of the fossil aquifer under the North China plain, for example, has led to huge dust storms that choke South Koreans every year. Increasingly frequent storms from Africa routinely drop irreplaceable soil into the Caribbean, endangering the coral and thus the ecosystem there while depleting African lands.

Worldwide cereal crop production appears to have leveled off during the past few years, and per capita cereal crop production has been declining since the 1980s. One result of this decline has been steadily increasing pressure to convert environmentally sensitive land to annual crop production. A vicious cycle of burgeoning population and diminishing soil productivity has led to farming marginal land.

In addition, tropical forests are being cut for timber and burned to create grassland for cattle and farmland for soybeans and other annual crops. But many tropical and subtropical soils are fragile and cannot sustain nontree crops even with regular supplies of increasingly expensive nutrients from petroleum-based fertilizers.

When rainforests are depleted, the climate is altered because moisture, no longer retained in the standing biomass and soil, runs back to the ocean. Whereas evaporation and transpiration from tall, deep-rooted trees normally lead to further rainfall inland, depleted forests become more and more dry, reducing productivity and increasing fire danger.

The problems above are compounded by the increasing diversion of land and water to nonagricultural uses such as factories, residences, and other development. Countries that had been self-sufficient in grain production or even exported it now face declining harvests and need to import food. It takes roughly 1,000 tons of water to grow a ton of grain, so a country that imports another’s grain is also importing its water. Saudi Arabia, China, and other countries are buying or leasing large tracts of land in South America, Africa, Australia, and elsewhere to grow food, often displacing local people who depend on that land.
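
The grain-to-water conversion is easy to make explicit. Using the rule of thumb above, a hypothetical country importing 10 million tons of grain is in effect importing about 10 cubic kilometers of water; the import figure is illustrative, not a statistic for any particular country.

    # Back-of-the-envelope "virtual water" embedded in grain imports.
    WATER_PER_TON_GRAIN = 1_000          # tons of water per ton of grain (rule of thumb)
    grain_imports_tons = 10_000_000      # hypothetical annual imports

    virtual_water_tons = grain_imports_tons * WATER_PER_TON_GRAIN
    virtual_water_km3 = virtual_water_tons / 1e9   # 1 ton of water is about 1 cubic meter
    print(f"{virtual_water_km3:.0f} cubic kilometers of embedded water")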

Social unrest and increased political conflict over shortages of food, energy, and water will be among the likely results. There is evidence that a significant contributing factor to the genocide in Rwanda was the pressure put on the land by rising population and diminished productivity, leading to social decay and murderous political instability. Food riots in a number of countries are now reported frequently. Rising food prices have been cited as contributing to the revolutions in Egypt and Tunisia and to unrest in other countries, and the price increases appear to be driven by long-term trends rather than caused by one or a few unusual events.

The perennial solution

We propose a conceptually simple approach that could make a serious dent in these problems: plant more perennial crops, whose genetic potential and agronomic possibilities have barely been tapped. There are hundreds of species of perennial grasses, shrubs, grains, legumes, and trees that could be selected, improved, and grown on hundreds of millions of hectares of damaged land. The result would be a dramatic increase in high-protein food for people and livestock and also wood and other biomass for fuel and construction. Many perennial plant species could be converted to biodiesel, ethanol, biochar for soil enrichment, and other useful products, even plastics.

The benefits would not stop with more food and biomass. Perennial crops would increase soil organic matter, reduce pollution, and stabilize soils against erosion. They would help fields, forests, and rangelands retain water, thereby reducing flooding and helping aquifers recharge. Perennials would also sequester large quantities of CO2, helping to slow climate change.

Our approach is essentially an adaptation of the Green Revolution. Its advances, which have largely held food insecurity at bay in recent decades, were based on hybrid seeds of high-yielding annual cereal grains, plus pesticides, synthetic fertilizers, and irrigation. The ecological price has been high, however, and current agricultural practices are not sustainable.

To be sure, the further development of existing annual crops is not only possible but imperative, especially in Africa. R&D should be intensified. Still, the Green Revolution’s dramatic increases in crop yields and feed efficiency may be approaching their limits. Fortunately, similar advances in productivity could be made in hundreds of underused perennial species, as already demonstrated in oil palm, rubber, and eucalyptus.

Many Green Revolution techniques could be applied to perennial crops, such as classical plant breeding to improve yield, stress tolerance, disease resistance, and other characteristics. But Green Revolution plants typically achieved their high yields via fertilizer, irrigation water, and pesticides. The aim of our proposal would be to develop perennial crop plants that require as little external input as possible.

If perennial plants have so much potential, why have they not received more attention? One reason is that the Green Revolution’s massive improvements in the food supply made it seem best to invest in further development of annuals. Another is that it takes relatively few years to develop improved strains of annual plants. New varieties of perennials take longer. It is often necessary to observe the results of crossbreeding for several years to know whether a new plant strain is really an improvement. Technology can speed up that process, fortunately. The newer laboratory methods, including high-throughput sequencing and comparative genomics, can more rapidly determine whether a given cross has the desired characteristics.

What to plant

Tree plantings would be of three types: nut and other food-bearing trees; oil palm, coconut, and other perennial oil-producing plants for fuel and food; and fast-growing species such as poplar and eucalyptus for rapid production of woody biomass. Nut-producing trees include species adapted to many different climates. Because of their high concentration of nutritious protein and healthy fats, tree nuts are excellent substitutes for or complements to meat in the human diet. (At current yields, one hectare of walnuts alone could supply 10% of a 2,000-calorie daily diet for 47,000 people.) Nuts also offer opportunities for improvement via classical plant breeding, complemented as appropriate by biotechnology. Research should focus on improvements in yield, nutrition, adaptation to different soils and environments, pest resistance, stress tolerance, and timber qualities. It is important to realize that food security comes not from one single source of nourishment but from a multiplicity, especially in the face of rising prices and growing climatic instability.

Some substitution of nuts for meat could have significant environmental benefits. Producing the world’s more than 75 million metric tons of meat from cattle, sheep, pigs, and goats with current livestock management techniques often results in overgrazing and rangeland deterioration, especially in developing countries. This deterioration has been associated with severe erosion, soil compaction, violent sandstorms, desertification, and reduced harvest. Rapid population growth and the resulting increase in meat consumption will accelerate these problems, although improved varieties of livestock and grasses and better management practices, including rational and rotational grazing, would greatly increase grassland productivity and soil quality.

More than two billion of the world’s people acquire their energy for heating and cooking from the burning of wood, crop residues, and animal manure, and that is unlikely to change any time soon. In addition to nut trees, therefore, fast-growing nonfood trees would be needed. These could be intercropped among food-bearing plants, promoting a genetically diverse perennial polyculture. More fuel trees also would mean that crop residues and manure that are now burned for fuel could instead be used for enriching the soil.

Perennial grasses, shrubs, and forbs should be planted as polycultures on land unable to sustain trees and would also be used as ground cover and among tree plantings. Because some of these plants are legumes, they would enrich the soil with nitrogen. Perennial legumes and grasses are also being developed to yield edible grains that could be harvested every year without replanting.

Perennial grasses and in many cases trees can be planted on mountainsides and other sloping land that is not suitable for annual crop cultivation but has been cultivated and degraded nonetheless. There is, in fact, more area suitable for production of perennial food, pasture, and bioenergy-producing crops available on steep and marginal land than on the land currently being used for sustainable production of cultivated annual crop plants. Grasses, legumes, and hardy native “weeds” could in many cases serve to restore degraded soil to the point where other perennial plants, including trees, could grow.

Perennials can also help reduce CO2. Fuel from biomass is often said to be carbon neutral because CO2 converted into biomass is subsequently returned to the atmosphere. But it is really better than carbon neutral because some of the carbon fixed by plants, along with other nutrients, is added to the soil, thereby enriching the productivity of the land.

By planting improved perennials, it would be possible to produce many times the biomass originally harvested from plants on degraded lands and thus reduce atmospheric CO2. This would be achieved by harvesting existing perennial plants at intervals of several years and replacing them with much more productive varieties developed by plant breeders. Most important, harvesting plants just at the peak of their growth would add greatly to biomass production because mature plants generally show little net growth and harvest of CO2. Planned harvesting and replanting, including soil enrichment by cycling some of the harvest back into the soil, would thus maximize yield and the fixation of atmospheric carbon. These ideas could be implemented with currently available technologies and are sustainable into the distant future.

Current efforts

There are successes in the development of perennial crops that we can build on. In a set of reports entitled “Trees Outside Forests,” the FAO has described, for example, how farmers in the Sahel in West Africa leave trees scattered in their fields of annual crops in what is called parkland agriculture. The trees provide food during the dry season and longer-term droughts, as well as other products. The sheanut tree (also known as karité) is prized for producing oil used in cooking. The oil is also used in expensive face creams sold on international markets. Fruit and seeds from baobab trees provide vitamin C, and the leaves are used in sauces.

But the sheanut and other trees grown by the Sahel farmers are hard to propagate. Few seeds germinate, and there is much potential scope for improvement in yield, drought tolerance, and other characteristics. Because these trees are already valued by the local people, integrating improvements should, with reasonable preparation, go smoothly.

Wherever successful development has taken place, researchers have worked closely with the local farmers and farmers’ associations, local governmental and university staff, and with NGOs. We think the same model would apply to developing perennial crops. Parkland improvement is only one area to be explored. Forest and orchard work, windbreaks, village-owned plantations, and many other forms of development have received attention and could use more.

Change along the lines suggested here is taking place in Europe, the United States, New Zealand, Iceland, South Korea, and China among other places. For example, large-scale plantations of perennial sea buckthorn shrubs on more than 1.2 million hectares in northwest China have reduced soil erosion and land degradation and have formed a new sustainable sea buckthorn industry for the local economy. Buckthorn berries are high in vitamin C and are used to make nutritious juice and jam, as well as face creams and medicinal products. The plant also fixes nitrogen.

Wide implementation of constantly improving strains of oil palm and improved agronomy could help supply fuel as well as food. No additional tropical forests would need to be taken for the production of added food and biodiesel from palm oil or coconuts. Intercropping oil palm trees with other perennial crops to avoid large-scale monocultures would increase the land area needed to produce a given amount of oil, but there is more than enough degraded land available. In addition, polyculture would mimic some of the diversity of healthy ecosystems and could provide improved habitat for wildlife. What is more, palm-oil production yields more than 20 tons of additional dry biomass per hectare per year, some of which could be used as an additional bioenergy source and some to enrich the soil.

The benefits would be more than ecological. As fossil fuels become more expensive, food costs will rise because current agricultural practices require large inputs of fossil fuel energy. In fact, food costs are already rising because of competition between the use of agricultural output for food and energy.

The approach described here would provide employment for large numbers of people, reducing poverty in the developing and developed worlds. It would also reduce dependence on petroleum. In addition, biofuels produced in developing countries would reduce fuel imports, stimulate the countries’ economies, and reduce the price of crude oil worldwide.

Bioethanol and biodiesel from annual crops such as corn, wheat, and oilseed rape often use nearly as much energy as they save because of the large amounts of fossil fuels required for production, transportation, and processing. In addition, variable amounts of soil carbon are lost to oxidation and erosion, and nitrous oxide (a potent greenhouse gas) is emitted.

In contrast, biofuels and other bioproducts produced from perennial crops such as oil palm, jatropha, switchgrass, miscanthus, sugar cane, poplar, hybrid willow, and eucalyptus use much less energy in their production, add organic carbon to the soil, reduce or eliminate soil erosion by maintaining ground cover, and have a longer period of photosynthesis during the year, thereby fixing more carbon over time.

Oil palm cultivation for biodiesel has recently been criticized because of destructive practices such as monocrop plantings, cultivation on unsuitable land leading to ecological deterioration, rapacious cutting or burning of virgin forests, and other practices that release significant amounts of heat-trapping gases. Like any agricultural activity, oil palm cultivation can be done well or badly, but it need not be done badly.

How to start

The first step toward increasing the proportion of perennial crop plants is to establish several agricultural research stations to study varieties of trees, grasses, and other perennial plants adapted to the local climate, soil, and people’s cultural practices. Studying local soil types and finding ways of improving them are critical.

Three to five stations each would be needed in China, South Asia, Africa, South America, Europe, Japan, Australia, North America, and the Middle East, or 27 to 45 stations worldwide. An initial endowment of $20 million to $40 million for each station would yield investment income of between $1 million ($20 million at 5%) and $2.8 million ($40 million at 7%) per year. This would be enough to initiate and run a station up to the point at which implementation of the plan it develops could become a reality.

The total cost of between $540 million and $1.8 billion would be incredibly inexpensive by world budgetary standards. All of it could be supplied over several years by non-governmental sources such as foundations and individuals.
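
The arithmetic behind those figures is straightforward; the ranges below are simply the endowment sizes, return rates, and station counts given above.

    # Endowment income per station and total endowment required.
    stations_low, stations_high = 27, 45
    endowment_low, endowment_high = 20e6, 40e6     # dollars per station
    return_low, return_high = 0.05, 0.07           # assumed annual investment return

    income_low = endowment_low * return_low        # $1.0 million per year
    income_high = endowment_high * return_high     # $2.8 million per year
    total_low = stations_low * endowment_low       # $540 million
    total_high = stations_high * endowment_high    # $1.8 billion
    print(f"annual income per station: ${income_low:,.0f} to ${income_high:,.0f}")
    print(f"total endowment required: ${total_low:,.0f} to ${total_high:,.0f}")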

We aim for private self-sustaining support because government support is often unreliable and cannot be counted on for long term R&D, and private industry is focused on short-term return. Support from governments in many countries would eventually be necessary as large-scale implementation begins, but initial support from foundations and individuals would give the research stations time to develop political networks that could eventually garner government support.

Many of the stations could be additions to or outgrowths of existing research stations and educational institutions, where infrastructure is already in place. Some could be newly created freestanding entities. One healthy byproduct of that approach would be to build R&D capacity in countries that need it.

In addition to creating useful plant varieties through long-term breeding programs, stations would initiate the agronomic, ecological, cultural, and economic analyses needed to implement sustainable local programs. They would work closely with local populations, along the lines of U.S. agricultural extension services. Local people should be brought into the planning process early, because without them any plans will probably founder. The effectiveness of local farmers’ participation in Africa and elsewhere has been well documented.

We suspect that nut and other tree cultivation would yield ecological, dietary, and economic benefits, as in the case for walnuts, but the staff of the stations envisaged here would have to examine questions such as these and many others in detail. The oil palm example illustrates some of the factors that would have to be considered for any crop in any location.

In addition, many dozens to hundreds of species of tree, grass, and forb would have to be evaluated; we have mentioned only a few of the possibilities. No one station could work on hundreds of species. A particular station might work on perhaps a dozen, depending on local soil conditions, topography, and other factors. That is one reason why many stations would be necessary. These species would have to be studied in groups to avoid monocultures, groups whose compositions would vary from place to place. This would enhance biodiversity and maximize economic and environmental benefits. By comparison, annual crop agriculture depends primarily on a small number of species. Worldwide coordination to facilitate the exchange of germplasm as well as information would be mostly informal among the stations, with some help from international organizations such as the FAO, the World Food Program, the Consultative Group on International Agricultural Research (CGIAR), universities, and national governments. No new worldwide bureaucracy need be created.

A major area of inquiry for the stations would be ecological questions, such as what yields might be achievable on hillsides and other types of land; the answers would vary with locale. It might be desirable to accept lower yields on hillsides, for example, if that would help restore the soil or reduce flooding and landslides. Yields lower than those on the best bottomland soils would be better than no yield at all, and yields would rise as soils improve and the land is replanted with superior varieties developed at the stations. Throughout their work, the stations would employ the principles of restoration ecology.

We are aware of the potential ecological damage that could arise from ill-considered plantings of potentially invasive species or contaminated seed. However, nearly all of the annual crops on which we currently depend for food are exotics in most of the places they are grown. This is also true of agriculturally valuable perennials in current use worldwide. The fact that a species is nonnative does not by itself disqualify it as a potential component of a sustainable perennial cropping system. Potential ecological risks and other environmental problems would be a major area of investigation.

As we have said, the amount of money these projects would need is not large by world standards, and the funds can be raised over several years. When stations are set up, especially in poor countries, the principal dangers will be corruption and political interference. Some countries may have to be avoided, at least at first. Networking to find people who are knowledgeable about local conditions will have to take place. Ties to local universities and agricultural agencies will have to be forged to encourage joint work by faculty, staff, and students. The stations could also help train people from other countries through formal programs, internships, and the like. Existing organizations, such as the CGIAR, the World Agroforestry Centre, and others can assist. Close and continuing contact would have to be established with them, with other NGOs, and with ministries of national governments. Fortunately, they are open to such contact. It would be useful, especially in the beginning, to augment the work of these and other centers directly by adding staff and programs to develop perennial plants as mentioned above. As funds are raised and specific regional needs are identified, additional stations could be set up or existing ones expanded.

There are many possible models of how to set up these research stations and how they might operate. Here is one potential scenario: With money available, a station would need a few people trained in plant breeding and in agronomy. These could be either local people or expatriates recruited through networking. Local farmers could also be hired to work. They would provide not just labor but empirical expertise and cultural knowledge on which the scientific staff could draw. A person trained in laboratory work would be essential to speed determination of whether a given cross of two plant strains has the characteristics desired. Work would begin with efforts to improve a few species already in local use while simultaneously testing and adapting a few from outside the local area. The latter would be chosen because the local soil, topography, and climate appear suitable and because they are likely to fit within local dietary and economic needs. Soil and water surveys would be undertaken from the start. One or two people trained in agricultural economics and others who understand the area’s cultures and politics would also be hired. The working staff of a functioning station might thus involve two plant breeders/agronomists, a laboratory worker unless one of the breeders has those skills, perhaps half a dozen farmers full and/or part time, an economist, and a cultural/political analyst. Some of the staff could be shared with other local organizations. This comes to perhaps 10 full- or part-time people, about $500,000 at an average cost of $50,000 per person. That is well within the per station annual income of between $1 million and $2.8 million we have already described. The remainder would go for supplies, equipment, and land.
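
As a quick check on the arithmetic in this scenario, the sketch below (in Python) totals the staff costs and the remainder available for supplies, equipment, and land. The roles, head counts, and the $50,000 average cost per person come from the scenario above; everything else is our own simplifying framing for illustration.

```python
# Illustrative budget sketch for one hypothetical research station,
# using the staffing scenario and cost figures described above.

AVERAGE_COST_PER_PERSON = 50_000  # USD per full- or part-time position per year (from the text)

# Roles and head counts from the scenario above; some positions may be shared
# with other organizations, which is why the text rounds the total to about 10.
staff = {
    "plant breeder/agronomist": 2,
    "laboratory worker": 1,
    "farmer (full or part time)": 6,
    "agricultural economist": 1,
    "cultural/political analyst": 1,
}

def station_budget(annual_income_low=1_000_000, annual_income_high=2_800_000):
    """Return head count, staff cost, and the remainder left for supplies, equipment, and land."""
    headcount = sum(staff.values())
    staff_cost = headcount * AVERAGE_COST_PER_PERSON
    return headcount, staff_cost, annual_income_low - staff_cost, annual_income_high - staff_cost

headcount, staff_cost, low_rem, high_rem = station_budget()
print(f"Staff of {headcount} at ~${AVERAGE_COST_PER_PERSON:,} each: ${staff_cost:,} per year")
print(f"Left for supplies, equipment, and land: ${low_rem:,} to ${high_rem:,} per year")
```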

Land would have to be leased or purchased for test plantings. It is difficult to be detailed here, as one cannot know without examining a particular area whether a single contiguous farm or plots distributed over various types of terrain and soil would be best. Local farmers might be willing in some cases to lend plots for testing, as is the case in the United States. Local rules and laws on land use, of course, would have to be followed.

The world’s food supply is precarious today, and intensifying environmental deterioration is making it more so. It is time to add improved perennial crops to the worldwide food and energy agendas.

Recommended reading

G. Conway, The Doubly Green Revolution: Food for All in the Twenty-First Century (Ithaca, NY: Cornell University Press, 1997).

R.H.V. Corley and P.B. Tinker, The Oil Palm (Malden, MA: Wiley-Blackwell, 2003).

T.S. Cox, J.D. Glover, D.L. Van Tassel, C.M. Cox, and L.R. DeHaan, “Prospects for Developing Perennial Grain Crops,” Bioscience 56:649-659 (2006).

L.T. Evans, Feeding the Ten Billion: Plants and Population Growth (Cambridge, UK: Cambridge University Press, 1998).

J.D. Glover and J.P. Reganold, “Perennial Grains: Food Security for the Future,” Issues in Science and Technology, Winter 2010.

D. Hillel, Out of the Earth: Civilization and the Life of the Soil (Berkeley, CA: University of California Press, 1991).

W.F. Laurance, L.P. Koh, R. Butler, N.S. Sodhi, C.J.A. Bradshaw, J.D. Neidel et al., “Improving the performance of the Roundtable on Sustainable Palm Oil for nature conservation,” Conservation Biology 24:377-381 (2010).

R.R.B. Leakey and A.C. Newton, Tropical Trees: The Potential for Domestication and Rebuilding of Forest Resources (Midlothian, Scotland: Institute of Terrestrial Ecology, Edinburgh Center for Tropical Forests, 1996).

L.H. MacDaniels and A.S. Lieberman, “Tree Crops: A Neglected Source of Food and Forage from Marginal Lands,” Bioscience 29:173-175 (1979).

R.L. Naylor, “Energy and resource constraints on intensive agricultural production,” Annual Review of Energy and the Environment 21:99-123 (1996).

S.L. Postel, Pillar of Sand: Can the Irrigation Miracle Last? (New York: W.W. Norton, 1999).

M.W. Rosegrant and S.A. Cline, “Global Food Security: Challenges and Policies,” Science 302:1917-1919 (2003).

M.W. Rosegrant, X. Cai, and S.A. Cline, World Water and Food to 2025: Dealing with Scarcity (Washington, DC: International Food Policy Research Institute, 2002).

J.R. Smith, Tree Crops: A Permanent Agriculture (New York: Devin-Adair, 1953).

D.G. Tilman, J. Hill, and C. Lehman, “Carbon-Negative Biofuels from Low-Input High-Diversity Grassland Biomass,” Science 314:1598-1600 (2006).

M. Williams, Deforesting the Earth: From Global Prehistory to Global Crisis (Chicago, IL: University of Chicago Press, 2003).


Peter C. Kahn is professor of biochemistry, Thomas Molnar is assistant professor of plant biology, and C. Reed Funk is professor emeritus of plant biology at Rutgers University. Gengyun G. Zhang is general manager, Department of Agriculture and Bioenergy, Beishan Industrial Zone, Shenzhen, China.

Agriculture’s Role in Cutting Greenhouse Gas Emissions

Agriculture is responsible for 7% of total emissions of greenhouse gases into the atmosphere in the United States. Although agriculture is not the major source of greenhouse gas emissions—that title belongs to industrial plants that burn fossil fuel—it is nevertheless an important one and deserves increased attention. The good news is that useful remedies are at hand. Using various best management practices in a number of agricultural operations can reduce greenhouse gas emissions by nearly a third.

The federal government therefore should include in its next farm bill, scheduled to be debated in 2012, incentives to promote the use of these practices. The potential rewards are considerable. Cutting emissions of greenhouse gases by whatever means will help to minimize the risk of climate change. Cutting emissions from agricultural sources also will help to improve air and water quality in a number of important ways. For example, some of the practices will help reduce the runoff of nitrogen and phosphorus compounds that can contribute to the eutrophication of aquatic systems and subsequent harm to aquatic organisms. In an extreme case, large quantities of these compounds carried by rivers throughout the middle of the country have found their way into the Gulf of Mexico and created seasonal “dead zones” where commercial fish, oysters, and other organisms experience increased mortality.

The main source of greenhouse gases from agriculture is the emission of nitrous oxide (N2O) from soils treated with nitrogen-based fertilizers to aid in growing crops and grazing livestock. The next leading source is emission of methane (CH4) by ruminant livestock, especially cattle, through their burps and other digestive outgassings. Collectively, soils and livestock (at 40% and 25%, respectively) are responsible for nearly two-thirds of all agricultural greenhouse gas emissions. Other sources include carbon dioxide (CO2) from the operation of farm equipment (13%), CH4 and N2O from manure management operations (11%), and CO2 from cropped and grazed soils (5%). The rest (6%) results from a variety of minor sources.

N2O emitted from soils is particularly significant, because it has a heat-trapping greenhouse effect that is approximately 310 times greater than that of CO2. N2O emissions result from the biological processes of nitrification and denitrification. Put simply, nitrification occurs when a nitrogen-containing compound called ammonium, which is a main ingredient in many fertilizers, is transformed by microbes in the soil into another compound called nitrate. (To a lesser degree, manure applied to fields is also a source of ammonium.) Denitrification occurs when microorganisms metabolize the nitrate and convert it to N2O (and other byproducts). This process proceeds especially fast when soils are wet and nitrate levels are high. Ultimately, 1 to 5% of the nitrogen added to agricultural fields in fertilizer and manure is lost to the atmosphere via soil N2O emissions.
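
To give a feel for the magnitudes involved, here is a rough back-of-the-envelope sketch in Python that converts a hypothetical nitrogen application into CO2-equivalent emissions, using the 1 to 5% loss range and the roughly 310-fold warming potency cited above. The application rate and field size are assumed example values, not figures from this article; the 44/28 factor simply converts a mass of nitrogen into the corresponding mass of N2O.

```python
# Back-of-the-envelope estimate of soil N2O emissions for a hypothetical fertilized field.
# The 1-5% loss range and the ~310x CO2-equivalence factor come from the text;
# the application rate and field size are assumed example values.

N_APPLIED_KG_PER_HA = 150   # assumed fertilizer nitrogen rate, kg N per hectare
FIELD_SIZE_HA = 100         # assumed field size, hectares
GWP_N2O = 310               # heat-trapping potency of N2O relative to CO2 (from the text)
N_TO_N2O = 44.0 / 28.0      # convert mass of nitrogen lost to mass of N2O (molar masses)

def n2o_co2e_tons(loss_fraction):
    """CO2-equivalent emissions (metric tons) from soil N2O for a given N loss fraction."""
    n_lost_kg = N_APPLIED_KG_PER_HA * FIELD_SIZE_HA * loss_fraction
    n2o_kg = n_lost_kg * N_TO_N2O
    return n2o_kg * GWP_N2O / 1000.0  # kg CO2e -> metric tons CO2e

for frac in (0.01, 0.05):  # the 1% and 5% bounds cited above
    print(f"{frac:.0%} loss: roughly {n2o_co2e_tons(frac):,.0f} t CO2e from a {FIELD_SIZE_HA}-ha field")
```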

New tools can help

Recent advances in fertilizer technology could help to reduce N2O emissions. Timed-release fertilizers and fertilizers with nitrification inhibitors provide a gradual supply of nitrogen to the crop, synchronous with plant demand for nitrogen. When the supply of nitrogen in soils coincides with the demand by plants for nitrogen, less nitrogen is available to be converted to nitrogen gas or leached from the system. These stabilized fertilizers are an improvement over traditional fertilizers that provide a large, immediate supply of ammonium upon application.

Research conducted by the U.S. Department of Agriculture (USDA) has shown that stabilized fertilizers can produce similar yields with lower emissions of greenhouse gases. As a case in point, use of these fertilizers in western regions of the nation has resulted in a 60% reduction in N2O emissions from soils in irrigated cropping systems and a 30% reduction for nonirrigated crops. Recent results also suggest that combining no-tillage agriculture (in which soils are not turned over with farm implements) with the use of slow-release fertilizer can result in a reduction of 75% or more in soil N2O emissions from irrigated systems. However, the use of improved fertilizers in central and eastern regions of the country has demonstrated inconsistent results, with some studies showing little impact on emissions and others showing reductions of 30% or more.

The planting of winter cover crops also is beneficial. Cover crops absorb the nitrogen that remains in the soil after the previous crop has been harvested, making it unavailable for conversion into N2O that could be emitted into the air. In addition, the crops reduce the volume of water that flows over and into the soil during the winter months, thereby reducing the leaching of nitrates from the soil into waterways and slowing soil erosion, which is an important problem in some locations.

The potential for successful growth of winter cover crops is greatest in the middle and southern sections of the nation, due to the longer growing seasons and milder winters. Winter wheat and barley are both cover crops that are planted in the fall and harvested in the spring. Winter rye is planted in the fall and then plowed into the soil or killed with herbicide before the planting of summer crops, such as corn and soybeans.

To address CO2 emissions, a variety of best management practices are available. For example, tractors and many other types of farm implements run on fossil fuels, so reducing the number of times that a tractor passes over a field or using the most efficient implements available will immediately reduce CO2 emissions. No-tillage agriculture and other forms of conservation agriculture, in which fewer tractor passes are needed because the previous year’s crop residue, such as corn stalks or wheat stubble, is left on fields before and after the next crop, are potentially useful in this regard.

Another strategy is to adopt management practices that promote soil’s role as a carbon sink. As a sink, soil currently offsets about 15% of agricultural greenhouse gas emissions. Among ways to boost this capacity, planting winter cover crops can add organic matter that locks in more carbon. Other options include planting marginal croplands with perennial grasses, planting hay or pasture in rotations with annual crops, and changing grazing practices on environmentally sensitive lands. Beneficial grazing practices may include keeping livestock from stream banks, properly resting pastures to restore degraded land, and determining the proper duration and season for grazing pastures.

Reducing tillage intensity also can help. Recent studies suggest that reduced tillage intensity is most effective in the more arid agricultural systems, as well as in warmer regions of the nation. Reduced tillage farming in these drier, warmer regions has the additional benefits of increased soil water storage and water use efficiency, because of the improved soil structure and reduced evaporation when mechanical soil mixing is reduced. In cold, humid agricultural lands, however, reduced tillage agriculture can lead to decreased plant production because of lower soil temperatures. In addition, the wetter soil conditions in these regions may promote denitrification and increase N2O emissions.

These various practices may offer environmental advantages beyond limiting CO2 and N2O emissions. Agriculture is a major source of many types of air and water pollutants. Soil conditions conducive to N2O emissions also make agricultural soils an important source of nitrogen dioxide and nitric oxide emissions. These gases, collectively called NOx, are health threats in their own right, as well as precursors for smog formation. In addition, inefficient plant nutrient management is considered to be the leading source of the nitrates and phosphates responsible for eutrophication in aquatic systems, as unused fertilizer nutrients are eroded or leached from soils. Excessive soil tillage degrades soil structure, increases erosion, and decreases water use efficiency. Most of the best management practices that address greenhouse gas emissions also address these other environmental concerns.

Barriers to adoption

Despite their potential benefits, the adoption of best management practices by agricultural producers is lagging. In general, the capacity of producers to adopt the practices is influenced by farm size, income, and available capital. Within this context, a number of barriers commonly stand in the way.

For example, stabilized fertilizers cost approximately 30% more than conventional fertilizers. In addition, fall applications of nitrogen fertilizer are common in some northern agricultural regions, leading to the potential for increased soil N2O emissions and leaching of nitrates during the winter and early spring before crops are planted. Fertilizer applied during the early growing season is more likely to be available when crop demand for nutrients is high. Even though spring fertilizer applications are more efficient from an environmental perspective, farmers sometimes apply fertilizer in fall because fertilizer prices and opportunity costs are often lower at this time.

Similarly, growing a winter rye cover crop involves energy and labor costs associated with fall planting, in addition to the extra costs incurred in the spring by plowing the rye, applying herbicide, or doing both. Often, the cover crop is not harvested and there is no benefit of substantially increased summer crop yields, so farmers do not recoup these additional costs.

Reduced tillage practices sometimes require farmers to purchase new equipment, which can be a significant investment. In the long run, reduced tillage can be expected to reduce fuel expenses and personnel time while maintaining or increasing crop yields, but the upfront cost of converting tillage equipment may prohibit farmers from adopting the practice.

Cultural barriers also impede the adoption of best management practices among some agricultural producers. A farmer’s age, education level, and access to information can influence the choices made. Even with adequate capacity and access to information, decisionmaking is influenced by individual perceptions, attitudes, and past experiences. The business of agriculture is inherently risky, and farmers tend to make decisions to minimize that risk.

Supporting the movement

If federal and state governments want to encourage the adoption of best management practices, they will need to take into consideration the risk aversion strategies and cultural barriers that limit their use by the farming community. One approach is to link use of these practices to farm subsidies included in the upcoming 2012 farm bill. The bill is the federal government’s primary agricultural and food policy tool and is updated every five years or so.

For example, USDA can expand promotion of and support for best management practices through its Conservation Reserve Program (CRP), which has proved quite popular and successful in many regions. Funded through the Commodity Credit Corporation and administered by the Farm Service Agency, CRP helps farmers to convert idle or highly erodible cropland or other environmentally sensitive acreage to grasses or other vegetative cover. Participants sign program contracts for 10 to 15 years. Cost sharing is provided to establish the vegetative cover practices, and participants receive an annual rental payment for the term of the contract.

The Environmental Quality Incentives Program, administered through USDA’s Natural Resources Conservation Service, also deserves to be expanded. The program provides technical and financial assistance to farmers and ranchers to address soil, water, and related natural resource concerns on their lands in an environmentally beneficial and cost-effective manner. It covers up to 75% of the costs of a variety of implementation practices and up to 90% for historically underserved producers, such as economically or socially disadvantaged producers or tribes. Among projects covered are nutrient management practices that can improve the use of nitrogen fertilizers and practices that increase levels of crop residues left in the field, thus sequestering carbon in the soil.

Partnerships among government and nongovernmental organizations can be promoted to encourage adoption of best management practices. For example, local and state governments in Maryland, Pennsylvania, and Virginia, along with the federal government, have joined with a number of private organizations in a program intended to decrease runoff of nitrates from agricultural lands into the Chesapeake Bay. The program, which incorporates a combination of regulations and incentives, aims specifically at optimizing the timing of fertilizer applications and reducing the rates of fertilization on winter cover crops. Although its main goal is decreasing nitrate leaching, the program has realized other benefits as well, such as increased carbon sequestration in soils. Such joint programs should be encouraged elsewhere, too.

In modifying the farm bill, one general need will be to increase incentives for practices that reduce greenhouse gas emissions, rather than using punitive measures. That is, agricultural producers should not be charged per unit of emissions, but rather should be paid per unit of reduction. For example, some observers advocate regulations to reduce fertilizer application as a mitigation option, but for most farmers, a significant reduction in fertilizer application rates would likely lead to reduced yields. A better approach would be to give farmers economic incentives to use stabilized nitrogen fertilizers with an optimal timing of application. In this way, farmers could maintain current yields while substantially reducing soil N2O emissions and nitrate leaching.

Programs supported under the farm bill also should be tailored to be region-specific, to encourage best management practices that make sense on the ground. One-size-fits-all approaches are ineffective and create skepticism among agriculturalists. The differences in where reduced tillage and stabilized fertilizers work best and where they do not illustrate the importance of such regionalization. Fortunately, considerable information is already available on where particular best management practices will be most effective, and policymakers and agricultural producers can draw on a number of recently developed support tools in matching practices to regions. For example, the COMET-VR decision-support tool provides farmers and ranchers an easy way to estimate expected greenhouse gas reductions achieved by different mitigation options and is freely available on the Web (http://www.comet2.colostate.edu/). Preliminary analysis using the COMET tool suggests that the universal use of improved fertilizers across the United States would reduce nitrous oxide emissions by about one-third.

The good news, then, is that many of the programmatic building blocks for promoting best management practices in agriculture are already in place. A focused expansion of these efforts in the 2012 farm bill could stimulate widespread adoption of the practices and reduce greenhouse gas emissions, without alienating the agricultural community. Indeed, with proper incentives, many agricultural producers are likely to be willing volunteers. As these efforts expand, the nation would achieve greater climate security and enjoy co-benefits such as improved soil, water, and air quality.


William J. Parton is professor emeritus at Colorado State University (CSU) and senior research scientist at CSU’s Natural Resource Ecology Laboratory. Stephen J. Del Grosso is a soil scientist for USDA’s Agricultural Research Service. Ernie Marx and Amy L. Swan are research associates at CSU’s Natural Resource Ecology Laboratory.

Science and the Arab Spring

As we all know, this has been the “Arab Spring.” Ordinary citizens have toppled autocrats and still battle dictators, armed with little more than their convictions. Ultimately, they cannot be denied, for as Victor Hugo said: “No army can defeat an idea whose time has come.” And freedom, human rights, and democracy are ideas whose time has come for even the most remote corners of the globe.

Sparked by the successes of Tunisia and Egypt, the people speak. From the Syrian demonstrators of Damascus and Deraa to the embattled Libyan defenders of the encircled Misrata to the chanting Yemeni crowds in Sanaa, they are the embodiment of the unconquerable spirit described by William Ernest Henley’s “Invictus”:

It matters not how strait the gate,

How charged with punishments the scroll,

I am the master of my fate,

I am the captain of my soul.

This surge for freedom, reminiscent of the best in American history, from the founding fathers to Lincoln to Martin Luther King, will face setbacks to be sure. But ultimately it must triumph.

Today there are those who fear that the Arab Spring will give way to the Islamist winter, that the idealism of the revolutionary democrats will only pave the way for theological autocrats. Yes, Islamist sentiment is rising, and zealotry is expanding in parts of the public realm. But the defense against extremism is not censorship or autocracy; it lies in embracing pluralism and defeating ideas with ideas.

And here science has much to say. Science has much to say to the Islamist zealots who preach an intolerant doctrine. It has much to say to young democrats enamored of the new technologies. It has much to say to those who yearn for a better economic future. And more important, it has much to say about the kind of values we must adopt if our societies are to be truly open and democratic, for these are the values of science.

To the Islamists, who yearn to return to their particular vision of the Muslim past, we say that there is a great Arab and Muslim tradition of science and tolerance that you must be aware of. Indeed, throughout the Dark Ages, it was the Muslims who held up the torch of rationality and reason, while Europe was in the throes of bigotry and intolerance.

Centuries before Bacon, Descartes, and Galileo, in the 10th century, Ibn Al-Haytham laid down the rules of the empirical approach, describing how the scientific method should operate through observation, measurement, experiment, and conclusion: “We start by observing reality … We then proceed by increasing our research and measurement, subjecting premises to criticism, and being cautious in drawing conclusions … In all we do our purpose should be … the search for truth, not support of opinions.”

Likewise, listen to the voice of Ibn Al-Nafis from the 13th century on accepting the contrarian view, subject only to the test of evidence and rational analysis: “When hearing something unusual, do not preemptively reject it, for that would be folly. Indeed, horrible things may be true, and familiar and praised things may prove to be lies.”

This is the Muslim tradition that must be revived if the Arab World, Muslim and non-Muslim alike, is indeed to join the ranks of the advanced societies of our time. Rejecting politicized religiosity and reviving these traditions would promote the values of science in our societies.

To the youth, enamored with new technologies or simply seeking a better economic future, we say: Remember science and the scientific method, for it is scientific insight and knowledge that give birth to technology. We must be the producers of knowledge, not just the consumers of technology. That will not happen unless we open our minds to science and the scientific approach and open our hearts to the values of science.

What are these values of science that I keep returning to as the basis for enhancing human capabilities and ensuring the public welfare? As Jacob Bronowski observed more than half a century ago, the enterprise of science requires the adoption of certain values: Truth, honor, teamwork, constructive subversiveness, engagement with the other, freedom, imagination, and a method for the arbitration of disputes. The values of science are adhered to by its practitioners with a rigor that shames other professions.

Truth. Any scientist who manufactures data is ostracized forever from the scientific community. She or he may err in interpreting data, but no one can accept fabrication of data. In no other field of human activity is this commitment to truth so absolute.

Honor. Scientists reject plagiarism. To give each his or her due is essential, a sentiment well captured in Newton’s statement that “if I have seen farther than most, it is because I have stood on the shoulders of giants.”

Teamwork. Collaboration has become essential in most fields of science. And the essence of teamwork is to ensure that all the members of the team receive the recognition that they deserve.

Science advances by overthrowing the existing paradigm, or at least significantly expanding or modifying it. Thus there is a certain constructive subversiveness built into the scientific enterprise, as a new generation of scientists makes its own contribution. And so it must be. Without that, there would be no scientific advancement. But our respect and admiration for Newton are not diminished by the contributions of Einstein. We can, and do, admire both. This constant renewal and advancement of our scientific understanding are features of the scientific enterprise. They require tolerant engagement with the contrarian view and a willingness to arbitrate disputes by the rules of evidence and rationality.

Science requires freedom: freedom to enquire, to challenge, to think, to imagine the unimagined. It cannot function within the arbitrary limits of convention, nor can it flourish if it is forced to shy away from challenging the accepted.

The content of the scientific work is what is discussed, not the person who produced it, regardless of their nationality or the color of their skin or the god they choose to worship or the ethnic group they were born into or their gender. These are societal values worth defending, not just to promote the pursuit of science, but to have a better and more humane society. These are the central core of universal values that any truly modern society must possess.

The Public Welfare Medal is not just a great honor; it is an inspiration for me and for others to redouble our efforts to spread these humane values that I have called the values of science. This is especially true for the young people who sparked our revolution, just as other young people transformed societies, reinvented business enterprise, and redefined our scientific understanding of the world we live in.

To our youth I say: You have been called children of the Internet or the Facebook generation, but you are more. You are the vanguard of the great global revolution of the 21st century. So embrace the values of science, and go forth into the journey of your lives, to create a better world for yourselves and for others. Think of the unborn, remember the forgotten, give hope to the forlorn, include the excluded, reach out to the unreached, and by your actions from this day onward lay the foundation for better tomorrows.


Ismail Serageldin is founding director of the New Library of Alexandria, Egypt. This article is adapted from the speech he gave after being awarded the 2011 National Academy of Sciences Public Welfare Medal.

Disappearing Bees and Reluctant Regulators

Imagine this: You’re a commercial beekeeper who relies entirely on keeping honeybees to make a living. You head out one morning to examine your bees and find that thousands of your previously healthy hives have “collapsed” mysteriously, after your bees pollinated crops in the fields of one of the farmers with whom you contract. Your bees have abandoned their hives, and they’ve not returned.

Beginning in the winter of 2004–2005, many US beekeepers, especially commercial ones, saw this happening. Several commercial beekeeping operations lost between 30 and 90% of their hives, a figure significantly higher than the roughly 15% that is common when hives are afflicted with parasitic mites or common diseases or when bees suffer from poor nutrition. Half a decade later, losses have remained troublingly high, hovering around 30% in each subsequent year.

Bee researchers dubbed this new phenomenon colony collapse disorder (CCD), and more than a half decade after beekeepers first saw their bees ravaged by it, controversy and uncertainty remain about what causes it. The field observations of commercial beekeepers suggest a causal role for systemic agricultural insecticides such as imidacloprid. However, the Environmental Protection Agency’s “sound science” approach to regulation does not permit the use of informal observational data such as that gathered by beekeepers in federal rulemaking. And traditional scientific research consistent with the EPA’s Good Laboratory Practice policy has thus far not established a definitive role for imidacloprid in causing CCD. Accordingly, the EPA has refused to take imidacloprid and other similar agrochemicals off the market. Importantly, the laboratory research on which the EPA based its determination is premised on a preference for type II (false negative) over type I (false positive) errors. A false negative result incorrectly labels as safe a substance that is dangerous; a false positive incorrectly labels as dangerous a substance that is safe. We suggest that given the commercial stakes for beekeepers and the health impacts on bees, the regulatory preference for false negative over false positive results is misguided, and serious consideration should be given to precautionary regulatory policy.

The term CCD was coined by bee researchers to refer to a phenomenon in which managed honeybees abandoned their colonies en masse, leaving behind the queen, young bees, and large stores of honey and pollen. CCD threatens the viability of over 90 different US fruit, nut, and vegetable crops, whose quantity and quality of production depend on the pollination services provided by managed honeybees. Emerging scientific investigations of CCD suggest that microbial pathogens such as viruses are causally involved. However, the fact that different studies identify different sets of associated microbial pathogens has led CCD researchers to surmise that the discovered pathogens are secondary infections. The identity of the primary causal factor(s) that render honeybees susceptible to such secondary infections is a flashpoint within and between groups of beekeepers, researchers, agrochemical representatives, regulatory officials, and environmentalists.

CCD was first discovered by commercial beekeepers, who travel around the country renting out their colonies for pollination purposes to farmers. Several beekeepers observed CCD unfolding in the fields of the commercial growers with whom they contract. They consistently noted connections between the occurrence of CCD and the proximity of their hives to fields treated with relatively new systemic insecticides such as the neonicotinoid imidacloprid. Affected beekeepers reported that CCD occurred in colonies several months after initial exposure to neonicotinyl insecticides. This suggested to the beekeepers that foraging bees, instead of dying immediately (as experienced in bee kills resulting from exposure to more traditional pesticides), were bringing back pollen and nectar contaminated with low levels of the systemic insecticide to the colony. This, the beekeepers surmised, had long-term progressive effects on developing bees that were chronically exposed to accumulating insecticidal stores. To date, US regulators have dismissed beekeepers’ on-the-ground evidence. Government officials view beekeeper evidence as anecdotal, and they will not consider it in promulgating regulations, since beekeepers do not isolate causal variables in the way done in formal laboratory and field experiments. From the perspective of many commercial beekeepers, however, with high stakes in maintaining strong and healthy colonies, their hypothesis provides sufficient justification for developing regulations that lead to limiting bee exposure to imidacloprid while more-conclusive evidence is sought. Theirs is a precautionary approach predicated on a false positive error norm.

Lab and field studies

Some ecotoxicological laboratory studies of the influence of the newer systemic insecticides on honeybees have shown adverse effects that can potentially culminate in CCD. Chronic feeding of neonicotinyl insecticides to honeybees at sublethal doses comparable to levels found in pollen and nectar of treated field crops had deleterious effects on learning, memory, behavior, and longevity. Lab studies also suggest that synergistic interactions between the newer systemic insecticides and other environmental toxins and pathogens could enhance the toxicity to honeybees.

EPA officials recognize that these data on the ecological effects of the newer systemic toxins are a cause for some concern but maintain that they are too inconsistent to justify restricting the use of these toxins. And although regulatory officials point to the agency’s own risk assessments conducted during the registration process in order to support the claim that these insecticides pose minimal risks to honeybees, they also acknowledge that their current risk assessments do not systematically consider the effects of either short-term or chronic exposure to sublethal doses of these insecticides on honeybees. Neither do they assess the effects of multiple interactions between insecticidal toxins and other environmental variables on honeybees. Insecticidal effects on younger honeybee brood are not part of the EPA’s evaluation scheme either. In effect, the EPA’s sound science approach permits the release of the newer systemic insecticides based on experimental practices that tend to ignore the findings highlighted by some laboratory- and many beekeeper-initiated studies. EPA officials note that indirect laboratory findings on individual bees do not necessarily translate to what is actually occurring to whole colonies in the field. The agency persists in demanding more direct causal experimental evidence from field studies on colonies. The direct causal experimental evidence available to date is inconclusive.

Experimental field studies typically impose conditions whereby one set of colonies receives no pesticide while other sets receive known doses, with other variables of interest ideally controlled. But the actual environmental settings in which commercial beekeepers work expose honeybees throughout their life cycle to a multitude of local environmental variables such as nutrition, other toxins, pathogens, and parasites, many of which are known to interact with the newer systemic insecticides. Contemporary field study designs, which tend to focus on only one or two toxins, do not test real-life scenarios in which low levels of the toxins by themselves may not cause CCD but may do so through intricate interactions with multiple other environmental variables across the life cycle. Additionally, the statistical norm for accepting field experiment findings (95% confidence that a result is not a product of chance) is an academic convention with no intrinsic justification. It is predicated on a preference for false negative conclusions, and this in turn reflects a predilection to overlook potentially valuable findings rather than suffer the embarrassment of having to withdraw results later determined to be incorrect. These are matters of social history, not nature.
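
To make concrete how this convention trades false positives against false negatives, the sketch below estimates the statistical power of a hypothetical two-group field trial using a standard normal approximation. Every number in it (effect size, variability, colonies per group) is invented for illustration; the point is only that with few colonies and noisy data, a real effect can easily fail to reach significance, which is exactly a preference for false negatives.

```python
# Illustration of the type I / type II trade-off discussed above, using a
# normal-approximation power calculation for a hypothetical two-group field trial.
# All numeric inputs are invented; only the statistical logic is general.

from math import erf, sqrt

def normal_cdf(x):
    """Cumulative distribution function of the standard normal."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def power_two_sample(effect, sd, n_per_group, z_crit=1.96):
    """Approximate power to detect a true mean difference `effect` between two groups
    of size n_per_group with common standard deviation `sd`, using a two-sided test
    at the conventional 95% confidence level (z_crit = 1.96)."""
    se = sd * sqrt(2.0 / n_per_group)   # standard error of the difference in group means
    return 1.0 - normal_cdf(z_crit - effect / se)

# Hypothetical trial: colony losses differ by 10 percentage points between treated and
# untreated groups, with a standard deviation of 20 points across colonies.
for n in (5, 10, 40):
    p = power_two_sample(effect=10.0, sd=20.0, n_per_group=n)
    print(f"{n} colonies per group: power ~ {p:.2f}, chance of a false negative ~ {1 - p:.2f}")
```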

Following this logic, field experiments tend toward finding no significant difference between pesticide-treated and untreated colonies, when in fact there might be. These historically established biases in field studies are further compounded by the fact that the EPA gives greater weight to studies that comply with the regulatory standards of good laboratory practice (GLP) than those that do not. GLP standards specify how a study should be constituted, performed, recorded, and interpreted, and by whom. In order to be GLP-compliant, an investigation has to be validated by regulatory bodies composed of academic and agrochemical company researchers. GLP requires traditional standards of isolating causal factors and establishing experimental controls. As a result, cutting-edge studies on the effects of sublethal chronic doses of the newer systemic insecticides on honeybee adults and brood, which are academically sound but have yet to be validated as GLP, are typically not considered in federal rulemaking. Moreover, the exorbitant expenditure required to meet GLP standards means that public researchers and beekeepers will have difficulty undertaking investigations that are GLP-compliant.

Although ecotoxicological field study designs may appear sound from the standpoint of established regulatory standards, they bear little resemblance to the reality that beekeepers and honeybees face. Consequently, we should not take their policy relevance for granted. It is time for the EPA to take seriously innovative ecotoxicological practices that push at the very limits of what is seen as experimentally feasible. Of course, because such studies will probably not be able to sharply isolate and control for the effects of the myriad factors plausibly at play in CCD, these kinds of investigation are likely to produce only suggestive results. Virtually inevitably, they will not provide the kind of unambiguous proof that the EPA’s regulators demand as part of their sound science approach. Instead of dismissing such studies, however, we suggest that the CCD epidemic should prompt us to revisit the bases for pesticide regulation.

The precautionary approach

Instead of a sound science approach to pesticide regulation, we advocate a broadly precautionary orientation. This entails a regulatory preference for false positives over false negatives. Regulators must accept suggestive data when all uncertainties are not resolved. Government decision makers would need to seriously value a much broader array of knowledge forms, practices, and actors, both certified and noncertified, in discussions that frame research questions, study designs, data interpretations, and policy decisions regarding pesticides than the EPA currently considers. This approach shifts the onus of showing no harm from at-risk groups, such as commercial beekeepers, to those who produce or deploy the technology of concern, which in this case would be the manufacturers of systemic agricultural insecticides such as imidacloprid.

In 1999, the French government set the precautionary precedent for the regulation of newer systemic insecticides in the case of honeybee exposure. French policymakers decided to limit the use of Gaucho (imidacloprid) and Regent TS (fipronil) in the face of uncertainty surrounding the risks they pose to honeybee health. They drew on a preponderance of indirect evidence from observations in actual crop settings by French beekeepers and followup studies by researchers affiliated with the government. This research suggested that sublethal levels of the systemic insecticides were available in the pollen and nectar of treated crop plants and were retained in soils over multiple years and reentered crops during subsequent cultivations. These studies also provided evidence that chronic exposure to systemic insecticides in laboratory and semi-field settings significantly impaired honeybee foraging, learning, and longevity.

Advocates for the established sound science approach to pesticide regulation tout it as unbiased. In fact, all research requires choices and thus has biases. There is nothing inherently superior about type II (false negative) over type I (false positive) errors. There is nothing intrinsically better about the preference for higher levels of certainty on more narrowly construed problems as against greater uncertainty in understanding more complex relationships. These matters are value-laden and political, and in the case of CCD they affect different stakeholders differently. The current approach to sound science–based regulation benefits the short-term interests of agrochemical producers by treating the absence of conclusive evidence of pesticide harm as justification for allowing a given chemical to remain on the market. A precautionary approach in the case of CCD, in contrast, could hurt agrochemical companies, because indirect evidence of the sublethal effects might justify removing certain systemic insecticides from the market or, more likely, restricting their use in some fashion. For commercial beekeepers, on the other hand, sound science regulatory policy in the case of CCD offers no immediate advantage. If certain agricultural systemic insecticides contribute to CCD, then beekeepers are helped by restricting bee exposure to these chemicals. If it turns out that the toxins of concern are not involved in CCD, beekeepers will be harmed less by removing those chemicals from use than they would be if the chemicals did contribute to CCD but exposure had not been restricted.
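
The asymmetry of stakes described here can be made explicit with a toy decision table. The harm scores below are entirely hypothetical placeholders; the sketch only shows the structure of the argument, namely that when the cost of leaving a harmful pesticide in use greatly exceeds the cost of restricting a harmless one, an expected-harm comparison can favor the precautionary choice even at modest probabilities of harm.

```python
# Toy expected-harm comparison for the regulatory choice discussed above.
# The harm scores are hypothetical placeholders chosen only to illustrate
# the structure of the argument, not estimates of real costs.

HARM_TO_BEEKEEPERS = {
    # (pesticide actually contributes to CCD?, exposure restricted?) -> relative harm
    (True,  True):  1,   # restricted and it was harmful: most losses avoided
    (True,  False): 10,  # not restricted and it was harmful: continued colony collapse
    (False, True):  2,   # restricted but it was harmless: some cost of changed practices
    (False, False): 0,   # not restricted and harmless: no harm
}

def expected_harm(restrict, p_harmful):
    """Expected harm of a regulatory choice, given the probability the pesticide is harmful."""
    return (p_harmful * HARM_TO_BEEKEEPERS[(True, restrict)]
            + (1 - p_harmful) * HARM_TO_BEEKEEPERS[(False, restrict)])

for p in (0.2, 0.5):
    print(f"P(harmful) = {p}: restrict -> {expected_harm(True, p):.1f}, "
          f"do not restrict -> {expected_harm(False, p):.1f}")
```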

There are those who express fears that removing or limiting the use of the newer systemic insecticides, which are categorized by the EPA as reduced risk, would force growers to revert to older pesticides considered more harmful to human and environmental health. These fears are not entirely unreasonable, given the current structure of US agriculture, with its predilection for large monoculture crops, which depend heavily on pesticides and herbicides in order to survive. Consequently, any significant reduction in the use of these insecticides will not ultimately be effective without a broader shift toward more sustainable forms of agriculture, including an increase in smaller-scale farm production, polycultures, and ecological strategies of pest management. Perhaps the case of CCD can serve as an opportunity to prompt broad dialogue about the future of US agriculture and lead to experiments on the advantages and drawbacks of a wide array of alternative agricultural practices.

At a minimum, the complicated knowledge landscape surrounding CCD should lead the EPA to consider supporting methodologically innovative research that would improve our understanding of CCD and the multitude of factors that may interact in complex ways to cause it. The decision to seek an understanding of real-world environmental complexity and not to base regulation on artificially reductive experimental designs requires different standards of statistical rigor and experimental control than those that are currently practiced. This research would monitor real-time effects on long-term colony health from chronic exposure to toxins used in commercial beekeeping and farming practices. Crucially, it would be transdisciplinary, combining traditional honeybee research with beekeepers’ on-the-ground knowledge and drawing in sociologists and humanists versed in the social, economic, and political dimensions of scientific and agricultural practices.

More generally, the CCD case should lead us to consider the value and drawbacks of EPA’s sound science approach to pesticide regulation. If sound science is not inherently superior to a precautionary approach, why should we use it? Should the federal government have regulatory policies whose scientific foundations systematically support the interests of some economic actors over others? If not, then debates that inform policy on pesticide regulation need to represent more equitably the methodological and epistemological commitments and values of a broader range of actors than what is currently occurring under the paradigm of sound science. A precautionary approach, broadly along the lines of what we have outlined, would allow scientifically justifiable and fairer means of serving environmental health and the interests of those involved in agricultural production.

Reforming Regulation of Research Universities

In recent years, research universities and their faculty have seen a steady stream of new federal regulations and reporting requirements imposed on them. These new requirements, in combination with other factors, have exacerbated already significant institutional financial stress and diverted faculty time from research and education.

The oversight of research that uses human subjects or animals, involves select agents, chemicals, or other potentially dangerous substances, or involves export-controlled technologies is necessary and important. Universities and researchers take seriously their responsibilities to comply with requirements and account for their use of federal resources. However, increasing regulatory and reporting requirements are not only costly in monetary terms; they also reduce faculty productivity and result in inefficient use of federal research dollars.

Quantifying the monetary and productivity costs of regulations is often difficult. Whereas the cost of each individual regulation may not appear to be significant, the real problem is the gradual, ever-increasing growth or stacking of regulations.

The fiscal situation of our universities requires a reexamination of regulatory and reporting requirements to ensure a proper balance between accountability and risk management and to ensure that federal and institutional resources, as well as researchers’ time and effort, are being used effectively and efficiently.

The current climate of fiscal austerity has sparked a renewed interest in reforming and streamlining government regulations to eliminate waste and improve productivity. In January, President Obama released Executive Order 13563 (“Improving Regulation and Regulatory Review”), along with two presidential memoranda focused on regulation. These documents require federal agencies to develop plans for regulatory review to ensure that regulations become more effective and less burdensome.

Congress is also interested in regulatory reform. Rep. Darrell Issa (R-CA), the chairman of the House Committee on Oversight and Government Reform, sent a letter to nearly 200 companies, trade associations, and other organizations, requesting information on existing and proposed regulations that have negatively affected job growth, and soliciting suggestions on reforming regulations and the rule-making process. The committee received nearly 2,000 pages of responses.

Universities deserve attention

Higher education has largely been absent from recent governmental discussions of regulatory reform, despite evidence contained in a report prepared for the U.S. Commission on the Future of Higher Education that “there may already be more federal regulation of higher education than in most other industries.” As documented by Catholic University of America’s Office of General Counsel, more than 200 federal statutes affect higher education, and the list keeps growing. Sen. Lamar Alexander (R-TN) recognized this when he asked the National Research Council’s (NRC’s) Committee on Research Universities, at its November 2010 meeting, to identify ways to improve the health of U.S. research universities that would not cost the federal government money, pointing specifically to the problem of overregulation.

In addition to research, regulatory issues extend into universities’ educational activities. For example, the Government Accountability Office said in a 2010 report that the Department of Education underestimated the burdens placed on universities by mandatory reporting for the Integrated Postsecondary Education Data System. A 2010 survey of financial aid administrators by the National Association of Student Financial Aid Administrators found that 85% of respondents at institutions with enrollments of more than 1,000 identified greater regulatory compliance workloads as a major cause of current resource shortages.

Increasing regulatory burdens are occurring during a period of severe financial pressure on universities. Measured in constant 2010 dollars, state educational appropriations per full-time student were 21% lower in 2010 than two decades earlier and 25% lower than a decade earlier. Endowments have yet to recover from the substantial losses incurred in the recent financial crisis. Gifts and donations have declined. Raising tuition is not a realistic option for filling this gap, especially for public universities facing heightened scrutiny from state legislators or bound by state constitutions to minimize tuition rates.

At the same time that other funding sources have become constrained, the cost of performing research has become increasingly expensive for universities, in part because of the expanded costs of federal compliance. Between 1972 and 2009, the proportion of total academic R&D expenditures drawn from institutional funds nearly doubled from 11.6% to 20.4%. At the same time, the proportion funded by federal, state, and local governments decreased from 78.5% to 66%. Because of White House Office of Management and Budget (OMB) rules, universities are restricted in how much they can be reimbursed by the federal government to pay for compliance costs.

Heavy compliance burdens affect not only institutions, but also the morale and productivity of researchers within them. According to an often-cited and illustrative figure from the 2007 Federal Demonstration Partnership (FDP) Faculty Burden Survey, 42% of faculty time relating to the conduct of federally funded research is spent on administrative duties. Some of this additional time is the result of increased activities relating to compliance with federal regulations. In effect, at a time of limited resources, compliance requirements are taking researchers out of the laboratory and reducing their ability to perform the research that leads to the innovations that improve our quality of life.

Numerous research institutions provided us with data indicating that compliance costs have grown during the past decade. Recovery of these costs is determined by rules set by OMB. Most of the research compliance costs are accumulated in a pool of costs classified by OMB as “sponsored projects administration” (SPA), and analysis of SPA can be insightful in measuring the growth of research compliance costs. One private institution in the Midwest estimated that its SPA costs increased from $4.2 million in 2002 to $7.3 million in 2008. A prominent medical school in the Southeast reported that its compliance and quality assurance costs increased from about $3 million in 2000 to $12.5 million in 2010.

More telling than the increases in SPA and associated research compliance costs are trends showing that these costs have increased more rapidly than the associated direct research expenditures, such as salaries, lab supplies, and research equipment. For example, the medical school mentioned above had a cumulative increase in compliance and quality-assurance costs of more than 300% between 2001 and 2010, whereas sponsored expenditures associated with the direct costs of research increased by only 125% during the same period. A private university in the South told us that its SPA-related costs associated with research increased by nearly 120% between fiscal years 2002 and 2010, whereas its direct research expenditures increased by less than 100%. No data that we received ran contrary to these trends.
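
The growth comparisons in the preceding paragraphs are straightforward to reproduce. The short sketch below computes percentage increases from the start and end figures reported above; nothing in it goes beyond those reported numbers.

```python
# Reproduce the cost-growth comparisons from the start/end figures reported above.

def pct_increase(start, end):
    """Percentage increase from start to end."""
    return (end - start) / start * 100.0

examples = {
    # label: (starting cost, ending cost) in dollars, as reported in the text
    "Midwestern private institution, SPA costs (2002-2008)": (4_200_000, 7_300_000),
    "Southeastern medical school, compliance costs (2000-2010)": (3_000_000, 12_500_000),
}

for label, (start, end) in examples.items():
    print(f"{label}: +{pct_increase(start, end):.0f}%")

# The medical school's direct research expenditures grew about 125% over roughly the same
# period, so its compliance costs grew more than twice as fast as the research they support.
```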

It is important to note that this is not a case of administrative inefficiency. University-wide administration rates and department- and school-specific academic administration rates have fallen over the past decade, due mainly to drastic cuts in state appropriations and a strong emphasis on administrative efficiency and effective management. At the same time, SPA costs, which are closely linked to the cost of research compliance, have increased. The onslaught of research compliance regulation and unfunded mandates has overwhelmed the strong downward pressures of budget cuts and the emphasis on administrative efficiency.

Precisely answering the seemingly simple question “How much does it cost universities to comply with any particular regulation?” is difficult. The cost of compliance frequently results from the time that faculty, staff, and administrators spend in fulfilling compliance and reporting responsibilities. This results in both monetary costs and the diversion of faculty time away from research and teaching, reducing productivity.

Productivity declines are a challenge to measure, with the 2007 FDP survey providing perhaps the best data. With regard to monetary costs, estimates of compliance for the same regulation or research area may range widely among different universities. This is not unexpected; the range reflects variability among universities in the size and nature of their research endeavors, as well as the differing degree to which institutional research engages in areas requiring compliance. For example, one university may conduct more human subjects studies, whereas another has more researchers working with hazardous materials or select agents.

Universities account for compliance costs in different ways. Compliance burdens are spread across many offices and units at an institution, and in many cases compliance costs are difficult to separate from other associated research operating costs. Finally, new compliance requirements, even when they seem small, can strain university systems. For instance, new regulations on export controls have added considerable burden to the one or two employees who typically deal with such matters, in some cases requiring the hiring of additional personnel. Proposed new National Institutes of Health (NIH) guidelines on conflict of interest are yet another example that will probably increase the workload.

A framework for evaluation and solutions

Although the ever-growing array of research regulations affecting universities can seem bewildering, solutions for problematic regulations fit within a relatively small number of categories:

  • Eliminate outright or exempt universities from the regulation
  • Harmonize the regulation across agencies to avoid duplication and redundancy
  • Tier the regulation to levels of risk rather than assuming that one size fits all
  • Refocus the regulation on performance-based goals rather than on process
  • Adjust the regulation to better fit the academic research environment.

Table 1 is a matrix that associates examples of regulations with the solutions defined above. In most cases, regulatory relief does not mean simply eliminating a regulation. Solutions tend to fall within several categories (for example, harmonization and tiering to risk) rather than only one, and should be pursued carefully to ensure that they make sense and are not counterproductive. Below we discuss specific examples from the table in more detail:

Effort reporting. Effort reports show the percentage of total effort that individuals contribute to university activities. Faculty commit to devote a certain fraction of their work time to specific projects funded by the federal government, and must regularly certify that they are devoting this amount of time to those activities.

Effort reporting has been widely criticized for imposing significant cost without adding value. For example, according to FDP, “…effort reporting is based on effort which is difficult to measure, provides limited internal control value, is expensive, lacks timeliness, does not focus specifically on supporting direct charges, and is confusing when all forms of remuneration are considered.”

Effort reporting can be eliminated without any detriment to the accountability or oversight of the research enterprise, for five reasons. First, it is redundant. Requirements that faculty provide regular progress reports to funding agencies serve the same function as effort reporting, but do so more effectively because they better align with incentives for faculty performance such as research accomplishments, success on subsequent grant proposals, and ultimately promotion and tenure. Second, it is unnecessary. Faculty rarely spend less time than they initially commit to federally funded research. Indeed, as acknowledged by the OMB A-21 Clarification Memo of January 2001, faculty routinely spend more time than they committed. Third, it lacks precision. It is incompatible with an academic research environment in which researchers do not work in billable hours and in which responsibilities such as student supervision often cannot realistically be billed to a single project. Fourth, it is expensive and wasteful of government funds. The federal government must spend money auditing effort reports and the associated administrative processes. Finally, effort reporting adds considerably to universities’ administrative costs and takes faculty away from research. Virtually every institution that responded to our request for information identified effort reporting as an area with significant cost and productivity implications.

The costs are significant. For example, one public university in the Midwest told us that nine employees spend about one quarter of their time each year monitoring certifications, at an estimated annual cost of $117,000. For many schools, effort reporting also requires the development or purchase and the continuing maintenance of specialized software systems. A public university in the Midwest reported that the cost of the necessary software was more than $500,000, exclusive of implementation and training costs. Several universities reported that they spent in the range of $500,000 to $1 million annually on effort reporting.

Chemical Facilities Anti-Terrorism Standards (CFATS). The Department of Homeland Security (DHS) Appropriations Act of 2007 granted DHS the authority to regulate chemical facilities that present “high levels of security risk.” Under this authority, DHS promulgated CFATS. Since 2007, the research community has urged DHS to reconsider the manner in which CFATS is applied to research laboratories located at universities.

The current regulations fail to recognize the differences between university research laboratories and major chemical manufacturing and production facilities, including how chemicals are used and stored for research purposes. Chemical plants often store large volumes of toxic substances; universities generally do not. Rather, they distribute regulated “chemicals of interest” in very small quantities among laboratories in multiple buildings and generally in more than one geographic location. Given this distributed environment, research organizations present a low risk for serious toxic releases through theft, sabotage, or attack.

Nonproduction research laboratories with similar chemical use patterns located at noncommercial, nonprofit research organizations such as colleges and universities should be regulated differently. DHS should establish separate but robust standards, protocols, and procedures for assessing vulnerabilities and improving the security of chemicals of interest in a research setting. Several other federal agencies have established separate and successful standards for research laboratories; these standards include separate chemical safety regulations at the Occupational Safety and Health Administration and separate hazardous waste management regulations at the Environmental Protection Agency, both of which are distinct from those applied to industrial production and other facilities.

The current CFATS regulations take an inappropriately broad look at campuses, treating an entire campus as a single entity. Although CFATS allows some flexibility in defining the boundaries of facilities, site security plans or alternative security plans must be developed in the aggregate and may not be developed specifically for a lab or unit operation. DHS should take an approach in which the security requirements apply only to individual laboratories where chemicals of interest exist in quantities greater than the threshold planning quantity.

U.S. Citizenship and Immigration Services changes to Form I-129. In early 2011, the U.S. Citizenship and Immigration Services (USCIS) added a question about export control licenses to its Form I-129, which employers must complete when petitioning for a foreign worker to come to the United States temporarily to perform services. As a result, I-129 petitioners now have to complete a new certification for H-1B visas and certain other specialty occupation visa petitions. This new requirement puts substantial burdens on universities with questionable benefit for national security.

The value and purpose of the new Form I-129 certification remain unclear, especially considering that USCIS has no responsibility for export control enforcement or compliance and that other security checks are already incorporated into the existing visa process. Under the Visa Mantis program, for example, the State Department provides extra screening of visa applicants who are seeking to study or work in certain fields that are deemed to have national security implications. The change to Form I-129 is therefore redundant and unnecessary.

TABLE 1

A framework for remedies for some regulatory burdens faced by research institutions

Solution categories: exempt universities or eliminate; harmonize/avoid duplication and redundancy; tier to risk; focus on performance, not process; better synch with university R&D.

Human subjects
  • Harmonize human subjects protections between the Office of Human Research Protections (OHRP) and the Food and Drug Administration (FDA).
  • Eliminate the Health Insurance Portability and Accountability Act (HIPAA) from research, or harmonize HIPAA regulations with OHRP regulations.
  • Tier human subjects research for exemption from Institutional Review Board review (e.g., social science research vs. clinical trials).

Animal research
  • Consult on whether the Animal Enterprise Terrorism Act provides sufficient protection for animal researchers.

Export controls
  • Eliminate new regulations requiring deemed export certification for certain visa applications (Form I-129).
  • Harmonize International Traffic in Arms Regulations, Export Administration Regulations, and Office of Foreign Assets Control controls.
  • Tier export control lists to risk, removing much of what is currently on these lists or reclassifying items to lower control levels.
  • For purposes of enforcing deemed export control laws, require that individuals have knowledge or intent that controlled information will be exported or transmitted without proper authorization.

Effort reporting
  • Eliminate effort reporting.

Financial reporting
  • Expanded Form 1099 reporting requirements will create an additional burden on financial reporting.
  • Sub-recipient monitoring: Modify the requirement so that grantees would no longer be required to monitor sub-recipients who regularly receive federal awards.
  • Federal Funding Accountability and Transparency Act (FFATA): Raise the subaward reporting threshold from $25,000 to the simplified acquisition threshold, use the OMB definition of “subcontract” (which eliminates procurements), and report only the first tier.
  • FFATA: Make reporting annual or eliminate the more onerous requirements for universities.
  • Change the timing of the Quarterly Cash Transaction Report.

Conflict of interest/research integrity
  • Eliminate negative patent reports, which require form completion even when there are no intellectual property concerns.
  • Direct the Office of Science and Technology Policy to convene agencies to develop a conflict of interest policy like the Misconduct in Science Policy, which articulates general goals and objectives.

Select toxins and agents
  • Develop a tiered list and associated requirements, as documented by the American Society for Microbiology.

Hazardous materials
  • CFATS: Wherever possible, create an exception for research laboratories.
  • CFATS: Tier chemicals of interest to risk when exemption isn’t possible.
  • Examine and consider university facilities as different from large chemical facilities: design alternative approaches in light of these differences.


Most research conducted by foreign nationals at U.S. research universities is fundamental research, which is excluded from export control requirements. Whether technology is subject to Export Administration Regulations is irrelevant if a foreign national is performing fundamental research. Because of this exclusion, there will probably be very few instances in which export control licenses will be required for foreign nationals employed at research universities on H-1B visas. However, universities must do significant additional review for I-129 submissions to confirm that this is indeed the case.

The inclusion of the “Deemed Export Acknowledgment” makes filling out Form I-129 and the H-1B application process much more complicated for visa petitioners and university employers. At research universities, international affairs and human resources offices typically complete and file the form for prospective visa employees. However, to respond correctly to such a narrow question concerning export licenses, other university officials from the office of sponsored programs and technology licensing, campus compliance officers, and sponsoring faculty must become involved in the petition to hire temporary employees. This has dramatically increased the time it takes university staff to complete Form I-129.

It is also unrealistic in a research environment to expect that export-control issues and technologies connected to a particular line of research in which a researcher is involved will remain static from the time Form I-129 is completed. Universities cannot predict where scientific inquiry will go, and many technologies involved in conducting research may change during the course of the research project as findings and discoveries progress. It is thus easy for universities to inadvertently respond to this question in a way that could eventually turn out to be inaccurate.

Other examples. Several other examples of redundant and unnecessary research regulations exist. For example, many collaborative research projects involving investigators at different institutions require that subawards be made to other partnering institutions. In these instances, the prime award recipient is also required to “monitor” the business practices and internal controls at the subrecipient institution. Although there may be value in monitoring subrecipients that are not established recipients of federal funding, monitoring and reporting on other research universities that regularly receive federal awards is a wasteful exercise and should be eliminated.

Other examples involve tiering regulations to risk. In human subjects research, minimal-risk studies, such as many in the social sciences, should not require the same level of review as clinical trials. Similarly, not all research involving pathogens or biological toxins with potential risks to public health and safety poses the same level of risk. The requirements associated with the regulation of this “select agents” research should be tiered to risk, as documented by the American Society for Microbiology.

And finally, newly proposed conflict of interest guidelines from NIH that require public posting of faculty-industry relationships, even when potential conflicts are being effectively managed, will create public confusion and unnecessary work and could have a chilling effect on university-industry interactions. The full impact of these regulatory changes should be carefully evaluated before they are implemented.

Steps toward reform

The specific regulations in Table 1 and discussed here are just a small sample of the regulatory issues facing research universities. Beyond the matrix we have laid out for addressing such issues, several other actions would help universities and the federal government work better together to reduce regulatory burden while still ensuring safety and accountability.

First, we need to improve understanding of the costs of regulation. As we have already discussed, quantifying the costs and burdens of regulations is difficult. The NRC and the Department of Education should conduct the study on regulation in higher education called for by Section 1106 of the Higher Education Opportunity Act (H.R. 4137). That study would describe, by agency, the number of federal regulations and reporting requirements affecting institutions of higher education; estimate the time and costs required for institutions of higher education (disaggregated by type of institution) to comply with these regulations; and recommend ways to consolidate, streamline, and eliminate redundant and burdensome federal regulations and reporting requirements affecting institutions of higher education.

In addition, OMB and the Office of Science and Technology Policy should co-chair an interagency working group that regularly reviews regulations affecting research universities. This group could be organized as a new subcommittee of the National Science and Technology Council Committee on Science, or as part of the existing Research Business Models Subcommittee. Through an application process, research universities or university associations could submit proposals to fix or eliminate rules that add no value or that promote inefficiency and excessive regulatory burden. Such a group would also be able to examine the costs of regulation closely.

Government flexibility and responsiveness must be increased. New or enhanced relationships and pathways of communication between universities and the government will help improve efforts to reduce regulatory burdens. The administration’s EO 13563 provides an impetus for establishing these pathways. We should designate a high-level official within OMB’s Office of Information and Regulatory Affairs to serve as a federal ombudsman. This official would be responsible for addressing university regulatory concerns and seeking ways to increase efficiency and minimize regulatory burdens. The ombudsman would assist in harmonizing and streamlining federal regulations and would also have responsibility for reviewing specific simplification requests. The ombudsman should be OMB’s co-chair on the interagency working group recommended above.

Protocols should be established to address statutorily mandated regulatory concerns. When new laws are passed by Congress to achieve important public policy goals, unintended regulatory burden can be an unfortunate byproduct. When requirements create unintended regulatory burdens for universities, a fast-track approach to amending the law would be a useful tool for minimizing them.

Mechanisms should be developed to allow universities to be exempted from certain regulatory and reporting requirements, when appropriate, and if not exempted, to more easily be reimbursed for their associated costs. There are three ways in which this can be done.

First, research universities should be given exemptions similar to those provided to small entities under the Regulatory Flexibility Act (RFA). The RFA requires agencies to prepare and publish a regulatory flexibility analysis describing the impact of a proposed rule on small entities. In addition, agencies are encouraged to facilitate participation of the affected entities by holding conferences and public hearings on the proposed rule. The RFA encourages tiering of government regulations or the identification of “significant alternatives” designed to make proposed rules less burdensome. The law should be amended to include organizations engaged in conducting federally sponsored research and education activities.

Second, coverage provided under the Unfunded Mandates Reform Act (UMRA) should be extended to research universities. It is often not a single regulation that creates compliance challenges, but the stacking of regulations over time. Agencies rarely reevaluate, eliminate, or redesign regulatory schemes to reduce the burden of compliance. The UMRA requires Congress and agencies to give special consideration to the costs and regulatory impact of new regulations on state and local governments, as well as on tribal entities. Extending coverage to public and private universities would result in research funding agencies being more responsive to the cost burdens of new requirements.

Third, institutions should be allowed to better account for new regulatory costs and to charge these costs to federal awards. The Paperwork Reduction Act requires that all proposed regulations be analyzed for the paperwork that they require and that paperwork be reduced to a minimum. Regulations creating new paperwork requirements must be cleared by OMB. Unfortunately, agencies often underestimate the paperwork burden, and their projections do not recognize how new reporting requirements will be paid for. The American Recovery and Reinvestment Act reporting requirements and the recently proposed NIH reporting requirements related to financial conflicts of interest are two notable examples. In cases in which new requirements are not effectively controlled to minimize the imposition of additional and sometimes substantial new costs, institutions should be allowed to establish a cost reimbursement mechanism in which the incremental costs can be recovered as a direct charge to the federal award.

Finally, cost sharing policies that are appropriate for the research community and that differentiate universities from for-profit entities should be developed. Although a cost sharing commitment between government agencies and industry partners may be appropriate, requiring the same commitment from university partners ignores universities’ educational and public service roles and their nonprofit status. The President’s Council of Advisors on Science and Technology, in a 2010 report on energy R&D, recommended that universities be exempted from cost sharing requirements. The National Science Foundation (NSF) recently implemented a new policy that prohibits voluntary cost sharing on NSF programs, while also reaffirming its policy that mandatory cost sharing be required only in exceptional situations where it is necessary for long-term program success. Congress and other research funding agencies should follow NSF’s lead and prohibit cost sharing policies that inappropriately impose additional costs on universities.

To better address regulatory issues at research universities, we need new and more timely and flexible mechanisms for universities and associations to work with federal officials. We have proposed a set of recommendations that would begin to establish these mechanisms. Only by working together can research universities and the federal government reach the shared goal of reducing undue regulatory requirements while maintaining safety and accountability. A more balanced regulatory load would help ease financial burdens on universities and improve the morale and productivity of the researchers whose discoveries and innovations will drive our nation’s economy in this century.

Forum – Spring 2011

Technology innovation: setting the right policies

In “Fighting Innovation Mercantilism” (Issues, Winter 2011), Stephen Ezell has identified a truly vexing problem: the proclivity of important countries (notably China) to stimulate domestic innovation by using a wide variety of subsidies, such as public grants, preferential government procurement, a sharply undervalued currency, and other techniques. Elements of “innovation mercantilism” are not particularly novel, but the current scale of these practices poses a distinct threat to U.S. leadership on the innovation frontier.

To be sure, from the early days of the Republic, the U.S. government has deployed an array of public policies to promote innovation; not only patents and copyrights, but bounties and land grants to promote canals and railroads, easements to build out electricity, telegraph and telephone networks, military outlays to lay the foundations for nuclear power, civilian aircraft, the Internet, and much more.

Using Ezell’s terminology, it’s overly simplistic to say that U.S. innovation supports have historically been “good”—benefitting both the United States and the world—while Chinese supports are “ugly”—benefitting China at the expense of other nations. However, two features distinguish contemporary Chinese policies.

First, Chinese subsidies are combined with less-than-energetic enforcement of intellectual property rights (IPRs) owned by foreign companies. In fact, China often requires foreign companies to form joint ventures with Chinese firms, and in other ways part with their technology jewels, as the price of admission to the Chinese market. Second, during the past five years, China’s sharply undervalued renminbi has enabled the nation to run huge trade surpluses, averaging more than $200 billion annually, and build a hoard of foreign exchange reserves approaching $3 trillion. A decade ago, the trade surpluses corresponded to exports of toys and textiles; increasingly, Chinese trade surpluses are now in areas such as sophisticated manufactures, electronics, and “green” machines (like wind turbines).

The burst of Chinese innovation mercantilism coincides, unhappily, with languishing U.S. support. Federal R&D outlays declined from 1.3% of U.S. gross domestic product (GDP) in 2000 to 0.9% in 2007. Equally important, adverse features of the U.S. corporate tax system prompt U.S.-based multinationals not only to locate production abroad but also to consider outsourcing their R&D centers.

What should be done? I agree with many of the specifics in Ezell’s policy recommendations, but let me highlight three broad themes:

  • Instead of carping at U.S.-based multinationals over taxes and outsourcing, President Obama and Congress should listen to what business leaders prescribe for keeping innovation humming in the United States.
  • Any U.S. company that assembles the specifics on unfair subsidy or IPR practices by a foreign government should be warmly assisted by the U.S. Trade Representative in bringing an appropriate case, especially when high-tech products are at stake.
  • The United States should no longer tolerate trade deficits that exceed 2% of GDP year after year. Balanced trade, on a multilateral basis, should become a serious policy goal.

GARY CLYDE HUFBAUER

Reginald Jones Senior Fellow

Peterson Institute for International Economics

Washington, DC


As Stephen Ezell argues, protection is not the long-term route to growth and competitiveness. Although trade protection has helped to incubate local steel industries, for instance, most protected or publicly owned steel industries have lagged behind global best practices and often led to high local steel prices. In the automotive industry, India combined trade barriers to protect its infant automotive sector with a ban on foreign direct investment (FDI) to create local industries, but it could not close the cost and performance gap with global companies. India’s decision to remove both trade and investment barriers meant that productivity more than tripled in the 1990s, and some local players emerged as innovative global competitors. Protecting local producers usually comes at a cost to consumers. The high prices and limited growth of the Indian and Brazilian consumer electronics sectors can be attributed largely to the unintended consequences of policies such as Brazil’s informatics law, which protected the nascent local computer industry, and India’s high, yet poorly enforced, national and state-level tariffs.

Ezell rightly argues, too, that overemphasizing exports is mistaken. Providing incentives for local export promotion can be very expensive. For instance, Brazilian state governments competing to host new automotive plants offered subsidies of more than $100,000 for each assembly job created, leading to overcapacity and very precarious financial conditions for Brazilian local governments. And in any case, manufacturing is not the sole answer to the global challenge of job creation.

Research by the McKinsey Global Institute (MGI, McKinsey & Company’s business and economics research arm) finds that promoting the competitiveness and growth of service sectors is likely to be much more effective for creating jobs. Productivity improvements are a key factor in all sectors, but most job growth has come from services. In high-income economies, service sectors accounted for all net job growth between 1995 and 2005. Even in middle-income countries, where industry contributes almost half of overall GDP growth, 85% of net new jobs came from service sectors.

Another message that emerges from MGI’s research is that, as your article suggests, an emphasis on local production in innovative sectors is not nearly as important as the impact of innovation on productivity in the broader economy. Innovative emerging sectors are too small to make a difference to economy-wide growth. In the case of semiconductors, the sector employs 0.5% or less of the total workforce even in mature developed economies and makes a limited direct contribution to GDP. But the sector’s innovation has contributed hugely to the information technology adoption that has improved business processes and boosted productivity in many other sectors—and in that way has made a difference for economy-wide growth. These benefits often don’t require local suppliers. In fact, policy efforts to protect local-sector growth can halt that growth if they increase costs and reduce the adoption and use of new technologies. For instance, low-tech green work in local services, such as improving building insulation and replacing obsolete heating and cooling equipment, has greater potential to generate jobs than does the development of renewable technology solutions.

LENNY MENDONCA

Director

McKinsey & Company

San Francisco, California


Stephen Ezell’s article captures an unhappy reality of our present world economy: that some governments are pursuing technology innovation policies that are deliberately designed to favor their domestic firms. Ezell highlights China as the contemporary archetype of purveyors of what he calls “ugly” technology innovation mercantilism—“ugly” in that the behavior hurts competing U.S. and international firms and workers. He rightly calls for U.S. government economic diplomats and trade negotiators to take aggressive multilateral, regional, and bilateral actions.

I argue that although Ezell is right to label these technology innovation mercantilist policies ugly, they pointedly fit his “bad” and even “self-destructive” categories, too, because they contradict both our long-term interests and theirs. The United States has built the world’s best technology innovation system by investing in public-good basic research in our national laboratories and universities. But the strength of U.S. technology innovation is not money alone. European scholars, searching for explanations for why the United States has emerged as the technology innovation center of the world, say that Americans integrate public research institution science with private enterprise technology developers better than is done in Europe or anywhere else. U.S. contributions of new medicines, medical devices, clean energy, and information technologies are due to technology laws that encourage public research laboratories and universities to license patented technologies to private enterprises, whether an established large business or a small entrepreneurial venture, whether American or foreign. Many big European, Japanese, and Korean firms conduct their most innovative work at their U.S. R&D centers. Fueled by risk-tolerant capital markets, U.S. and international firms operating in the United States share patented technologies and collaborative know-how to get new products into the marketplace, first in the United States and then in other markets.

Nobody else has such an effective technology innovation system. Americans should not be shy about recommending our technology innovation system as a model for other countries. Studies consistently find that the most innovative companies keep their best technologies out of China and out of anywhere else where their intellectual property is not respected. Technology competitors and consumers suffer when the locally available technology is second-rate. Brazil’s Embraer became the world’s dominant midsized aircraft maker after the Brazilian government opened its borders to international information technologies. Brazilian intellectual property and technology law reforms and public science and technology (S&T) investments are resulting in technology innovation unprecedented in Brazil. India’s people will get access to the newest innovative medicines when Indian policymakers and judges implement policies that encourage global innovators to sell their patented medicines in the country and that make the local biomedical S&T community another hub of global innovation. The vast Indian generic marketplace will not be diminished; rather, Indian generic makers will benefit from the global know-how entering their country. All of us, not just our trade negotiators, should participate in dialogues with S&T leaders in countries around the world, especially in developing countries, where policy choices are being made about S&T institution/market relationships that will encourage new dynamism to everybody’s benefit.

MICHAEL P. RYAN

Director

Creative and Innovative Economy Center

George Washington University Law School

Washington, DC


Climate Plan B

If one set out to assemble some of the worst possible policy responses to the threat of climate change, and to implement them with maximum opacity to the general public, one could not do much better than William B. Bonvillian’s “Plan B,” as elucidated in “Time for Climate Plan B” (Issues, Winter 2011).

Bonvillian’s plan is fundamentally undemocratic: The public, through its elected representatives, has repeatedly rejected greenhouse gas (GHG) emission controls, and polls show that the public is unwilling to pay for GHG reductions. Bonvillian’s plan is also fundamentally dishonest, hiding a GHG reduction agenda behind an energy policy façade. Americans want energy policy that offers affordable and abundant energy; Bonvillian’s plan would use government muscle to force consumers to buy more expensive energy, appliances, automobiles, and more.

Aside from being undemocratic, Bonvillian’s plan is a dog’s breakfast of failed economic thinking. His call for increased R&D spending flies in the face of what is well known to scholars: Government-funded R&D only displaces private R&D spending. As Terence Kealey puts it in The Economic Laws of Scientific Research, “… civil R&D is not only not additive, and not only displacive, it is actually disproportionately displacive of private funding of civil R&D.” It’s also unnecessary: Contrary to Bonvillian’s claim, there’s plenty of private R&D going on. According to the Energy Information Administration, the top 27 energy companies had revenues of $1.8 trillion in 2008. At Bonvillian’s estimate of energy-sector R&D spending of 1% per annum, that’s $18 billion. Thus, Bonvillian’s support for President Obama’s desired $15 billion in annual government R&D spending would simply displace what’s already being spent.

The rest of Bonvillian’s plan rests on the “fatal conceit” that government planners can centrally plan energy markets. Thus, he wants more government subsidies and loan guarantees to pick winning and losing technologies. He wants more regulations that burden the private sector and retard economic growth. He wants more appliance standards that reduce consumer choice and increase the cost of appliances and automobiles. He wants more government mission creep, focusing the Department of Defense on energy conservation rather than actually defending the country. These are old, economically illogical, historically failed public policy approaches. This is not so much a Plan B as a rerun of the big-government nonsense of the pre-Clinton era.

Rather than pouring market-distorting subsidies, tax credits, regulations, “performance standards,” and other such economically nonsensical things into an already bad economy with tragically high levels of unemployment, what we need to do is to take the “resilience option.” We should address threats of climate variability—manmade or natural—by increasing the resilience of our society, while revving up our economy through the use of free markets. We can do this best by eliminating subsidies to climatic risk-taking, streamlining environmental regulations, removing subsidies to all forms of energy, removing housing and zoning restrictions that make relocation harder, and making maximum use of free markets to deliver goods and services that are fully priced, incorporating the price of climatic risk. That is a true Plan B.

KENNETH P. GREEN

Resident Scholar

American Enterprise Institute

Washington, DC


Reducing access barriers

In “Reducing Barriers to Online Access for People with Disabilities” (Issues, Winter 2011), Jonathan Lazar and Paul Jaeger do an excellent job of raising a warning and calling for action. If anything they understate the case, and the implications of their arguments should extend beyond regulation and procurement to research, standards, and policies shaping America’s digital future.

Lazar and Jaeger note that roughly 20% of the U.S. population has at least one disability. By age 45, most people face changes in their vision, hearing, or dexterity that affect their use of technology. Everyone will experience disability in their lifetime. An even larger proportion of the population has, at any given time, a limitation that is not typically tracked as a disability but that nevertheless affects their ability to leverage technology to achieve their full potential and live rich lives (for example, illness, injury, poverty, or mild impairment). We are also seeing a growing prevalence of cognitive disorders that can affect and be affected by the use of technology. Further, everyone at some point experiences contextual disability (such as noisy environments, cognitive load from distractions, and glare from bright sunlight). A 2003 Forrester Research study suggests that 60% of adult computer users could benefit from accessibility features. Although the focus of Lazar and Jaeger is appropriately on those formally identified as having disabilities, the goal should be a world in which everyone is achieving their potential irrespective of individual differences.

Lazar and Jaeger note that although the Internet has clearly opened opportunities for people with disabilities, many Web sites are inherently problematic, depending on a given person’s set of disabilities and goals. This is an issue today, but it will become more of an issue tomorrow. It is clear that the digital future that is emerging will require even greater dependence on technology in order to fully engage with the world. This future can be the fulfillment of a dream, or it can be a nightmare.

To increase access to the wealth of information, communications, and services that are emerging, Lazar and Jaeger call for a more aggressive stance within federal and state governments. We can aim higher. We have the ability to create a digital world that adapts to each individual’s personal characteristics. Cloud computing, the processing power and intelligence that are evolving behind it, and the increasing ubiquity of wireless networks mean that most individuals will rarely if ever need to be isolated. The variety of devices available to the individual is increasing, more and more information about the world and how we can interact with it is available, and the palette of technologies that extend the range of natural user interactions and experiences is increasing ever more rapidly. Everyone should be able to appropriate the set of technologies that makes sense to accomplish their goals and extend their potential.

Government, academia, and industry should be working together, not just reactively to ensure that the digital world is accessible but collaborating to create the infrastructure for a fully accessible digital future and to drive the innovation that embracing full diversity can unleash.

ARNOLD M. LUND

Director, User Experience

Microsoft Corporation

Redmond, Washington


Jonathan Lazar and Paul Jaeger effectively articulate the importance of accessible technology. I’d like to emphasize that the field of accessible technology is broad-reaching and a rich source of innovation.

The market for accessible technology extends far beyond people with severe disabilities. Naturally, there is wide variation in people’s abilities. One person may experience a persistent disability, such as permanent vision loss. Another person may experience vision strain at the end of a long working day. The value of making technology accessible is that it can be used by a broad set of people, in a way that meets their unique requirements. And that technology can adapt as a person’s abilities change, whether because of changing health, aging, or merely being in an environment or situation that reduces vision, hearing, mobility, or speech or increases cognitive load. Therefore, the market for accessible technology expands to include people with mild impairments, occasional difficulties, the aging population, and the mainstream population in various situations.

The technology industry should realize that a powerful outcome of making technology accessible is that it drives innovation in the computing field as a whole. The resulting innovations are core building blocks for new, exciting computing experiences. Take, for example, screen-reading software, which reads aloud information on the screen with a computer-generated voice. A person who is blind relies on the screen reader to interact with their computer, listen to documents, and browse the Web. Other groups of people also benefit from screen readers, such as people learning another language and people with dyslexia. Listening to information read aloud helps with language acquisition and comprehension. Yet another application of screen-reading technology is the growing trend of eyes-free computing, such as listening to driving directions or email or interacting with entertainment devices while driving a car.


This dynamic ecosystem of services and devices needs to be engineered so that all the pieces work together. Our engineering approach at Microsoft is one of inclusive innovation. The principle behind inclusive innovation is that the entire ecosystem of products and technologies needs to be designed from the ground up to be usable by everyone. This will result in robust solutions that benefit a broad population. Building accessible technology from the ground up requires dedication across the entire software development cycle. From product planners to engineers, teams need to incorporate accessibility into their fundamental approach and mindset. At Microsoft, our accessibility initiatives include outreach, education, and research with public and private organizations. These collaborations are key to delivering accessible technology and to reaching our goal of educating others who are creating technology solutions.

ANNUSKA PERKINS

Senior Program Manager, Accessibility Business Unit

Microsoft Corporation

Redmond, Washington


No free energy

“Accelerating the Pace of Energy Change” (Issues, Winter 2011) by Steven E. Koonin and Avi M. Gopstein is a refreshingly frank look at the challenge we face to protect our climate’s and nation’s futures. We in the United States are likely to assume that as a nation we can accomplish anything if we have the will to do so. After all, we designed the nuclear bomb in less than 5 years and accomplished the goal of the Apollo program in less than 10. But these projects constructed a few items, albeit very complex ones, from scratch. As the article points out, the existing energy system is huge, even by U.S. government standards. It consists of an enormous capital investment in hardware, matched by a business strategy that generates a modest but reliable return on investment.

It’s tempting to hope that one or more technical innovations will be discovered to solve the problem, such as cheaper solar cells, economical means to convert grass into ethanol, inexpensive CO2 sequestration, etc. As an applied scientist, I enthusiastically endorse R&D to improve all potential contributors to our future energy supply and energy conservation. But if we follow the authors’ reasoning, technical innovations can contribute only a small part of the solution. Even after the benefits of an innovation are obvious, there will be a long delay before the capital structure catches up with it; that is, waiting for existing equipment, which has already been paid for, to approach the end of its useful life and require replacement.

The alternative, investing in new equipment and infrastructure before the normal replacement cycle, is expensive, as is forcing the use of less economical alternative energy supplies. The money will not come from existing utility company profits, nor from current government revenues. It must be provided by citizens, either through increased taxes or increased energy costs. There is no free lunch or free green energy. It is time for our political leaders to tell us honestly that it’s going to cost us a lot to preserve the future for our grandchildren. It is also time to stop spending precious resources on illusions of green energy, like corn ethanol.


As the authors point out, essential ingredients for inducing energy companies to make changes are stability and predictability. Unfortunately, the U.S. Congress rarely commits itself even one year ahead. That matches poorly with energy investments whose useful life may be 50 years. The only alternative I can imagine is to formulate a long-term plan that receives sufficient public endorsement that future legislators are hesitant to abandon it. There are precedents; each is called a “third rail of American politics.” One requirement of such a plan is absolute honesty: If we agree to pay its cost, we don’t want to be surprised later, except by savings we didn’t expect. Please don’t tell us about savings that may never appear, and don’t assume that the economy will always remain at peak levels.

VICTOR VAN LINT

1032 Skylark Drive

La Jolla, California


Telling science stories

I see considerable irony in the fact that Meera Lee Sethi and Adam Briggle (“Making Stories Visible: The Task for Bioethics Commissions,” Issues, Winter 2011) begin their analysis of the role of narrative in explaining science with a story of their own: a story about David Rejeski’s childhood fascination with Captain Marvel, ham radio, and rockets. To do so mythologizes their human subject (Rejeski) just as surely as Craig Venter’s analogies serve, in the view of these authors, to tell us a fairy story about synthetic biology. We are invited here to see Venter as an evil genius bent on misleading the public by oversimplifying synthetic biology and downplaying its risks, while Rejeski comes across as the authentic superhero who can bring him to task for this transgression.

A scientific journal article is, in its own way, a narrative story, with a tendency to mythologize its subject: the experiment or study that it reports. Everyone working in science knows that research does not proceed as neatly, cleanly, or predictably as the tersely worded research publications that survive peer review tend to suggest. So it is not just “the public” (whoever they are) that needs stories to explain the complex nature of scientific truth. Scientists tell stories to one another all the time. The problem for the rest of us often amounts to deciding which stories we should believe. On this point I agree with Sethi and Briggle.

I also agree that there is money in synthetic biology, and that Venter and others can certainly smell it. What I am less certain of is whether Rejeski’s use of scary images from science fiction helps his credibility as a spokesperson for “the public.” He may hope that such images can scare regulators into fearing a panicked populace, thus pushing for more aggressive regulation, but this is a rhetoric that may be self-defeating to the extent that it suggests public fears are simply silly.

As someone who taught media studies for 20 years, I know how easy it is to mistake popular-culture images for what various publics are actually thinking. Worth noting in this context: Research by Michael Cobb and Jane Macoubrie at North Carolina State has suggested that Americans who have read Prey might be less fearful of nanotechnology than those who have not, a phenomenon probably attributable to the fact that science fiction fans tend to like science.

Indeed, Americans in general tend to like science, and I know of no hard evidence that they fear synthetic biology. They certainly do not fear nanotechnology, which in some ways, as Rejeski’s shop has helped publicize, perhaps they should. Science fiction is one of the few truly popular forums in which our hopes and our fears about new technology can be explored, but its significance should not be overstated. As someone who would like to see a stronger voice for various publics in making science policy, I believe we should think more carefully about how public opinion is actually formed, as well as how it is best consulted. Media content is not “what people think.”

SUSANNA HORNIG PRIEST

School of Environmental and Public Affairs

Editor, Science Communication

University of Nevada, Las Vegas

Las Vegas, Nevada


Reversing urban blight

Michael Greenstone and Adam Looney present an excellent overview of how economists think about the household-level consequences of local job destruction (“Renewing Economically Distressed American Communities,” Issues, Winter 2011). During a deep recession, job destruction increases and job creation slows. Those who own homes in cities that specialize in declining industries will suffer from the double whammy of increased unemployment risk and declining home prices. Poverty rises in such depressed cities. In such a setting featuring bleak job prospects for young people, urban crime, unwed pregnancy rates, and school dropout rates will rise, and a culture of poverty is likely to emerge.

Empirical economists continue to try to identify effective public policies for reversing such blight. The broad set of policies can be divided into those aimed at helping the depressed place and those aimed at improving the quality of life of the people who live in the place. Greenstone and Looney sketch out three innovative proposals. The first is place-based, whereas the second and third are person-based.

I am least optimistic about the beneficial effects for depressed communities from introducing empowerment zones. Rents will already be quite low in these depressed areas. I am skeptical about whether a tax cut and grants would lure new economic activity to the area. It is more likely that the new tax haven would attract firms that would have located within the city’s boundaries anyway but now choose the specific community to take advantage of this tax break. The intellectual justification for luring firms does exist in the case of firms that offer sharp agglomeration benefits. In his own research, Greenstone (along with Moretti and Hornbeck) has identified cases of significant beneficial spillovers to other local industries from luring specific plants (http://emlab.berkeley.edu/~moretti/mdp2.pdf).

I have mixed feelings about the proposal to retrain displaced workers. James Heckman’s evaluation of the Job Training Partnership Act in the 1990s convinced me that the returns from such programs for adult workers are low (http://ideas.repec.org/p/nbr/nberwo/6105.html). I wish this were not the case.

I am most optimistic about the potential benefits from the mobility bank. The United States consists of hundreds of local labor markets. From a macro perspective, we need young workers to move from depressed areas to booming areas. The mobility bank would help to finance the short-run costs of making such a move.

Although such a mobility bank helps the people, how can we help the depressed cities? Depressed cities feature low rents. New immigrants often seek out such communities. Utica, New York, has experienced an infusion of immigrants from Colombia and Somalia. The United States has a long history of immigrant success stories, and increased immigration might be one strategy for revitalizing these cities.

Housing demolition in blighted neighborhoods is a second strategy for reducing local poverty. Housing is highly durable. When Detroit was booming in the 1960s, it made sense to build houses there, but now Detroit has too many houses relative to local labor demand. Cheap housing can act as a poverty magnet. The mayor of Detroit recognizes this point and has instituted a policy for knocking down low-quality homes and building up new green space (http://www.nytimes.com/2010/06/21/us/21detroit.html).

MATTHEW E. KAHN

Professor of Economics

University of California at Los Angeles

Los Angeles, California


Washington’s Media Maze

Policy analysis should not be merely an academic exercise. The goal is to inform and influence public policy, and therefore it has to reach the movers and shakers and the decisionmakers. That means it has to arrive at the right time via the right medium. But how does one do that in a world of network and cable TV, traditional and satellite radio, print newspapers and magazines, the online sites of the traditional media outlets and the proliferating Internet-only sources of information, email news services and listservs, laptops and tablets, BlackBerrys and iPhones and Androids, tweets and social networks, YouTube and TiVo?

Well, one does it in many different ways because the target audience absorbs information via numerous routes. Fortunately, a remarkably helpful guide to the media maze has recently become available online thanks to the generosity of the National Journal. After years of proprietary surveys of how Washington insiders acquire their information, National Journal has decided to make the results available for free online at www.nationaljournal.com/wia. Washington in the Information Age is a fascinating treasure trove of data about how Capitol Hill staff, federal officials, and the Beltway cognoscenti use a wide variety of information sources. And the data are all presented in an addictive interactive format that is easy to use and difficult to surf away from.

The online site enables one to look at responses to dozens of questions and to break out the results by the sector where the respondent works, by political party, and by age. Some results are predictable: Republicans read George Will and Democrats read Paul Krugman. Others are not: In many respects the 20-somethings are not that different from the 50-somethings in how they seek information. I’m not going to try to pinpoint all these distinctions. In what follows, all the percentages reflect the answers of the total pool of respondents. Although interesting, the differences among subgroups do not alter the overall picture.

As one would expect, when respondents are asked about their sources of information for breaking news, the overwhelming favorites are email alerts, news websites, and television, with TV being particularly important for Capitol Hill staff, who are rarely out of sight of a news channel. Twitter and RSS feeds rank almost as low as print magazines.

But when the question is how to acquire analysis and opinion about a national news story, print newspapers rival news websites for the lead, with more than 60% of respondents listing them among their top four sources. Only 20% list blogs among their top four, trailing behind radio. Blogs are making more inroads on Capitol Hill, where 35% of staff list them among their top four.

When asked how they read their daily news, the respondents vastly prefer screens to paper. About 40% rely on digital sources primarily or completely, and an additional one-third use print and digital equally. Fewer than 3% use print exclusively. This is not encouraging news for a magazine such as Issues, which is primarily a print medium. But Issues is not delivering daily news, and this audience has a very different approach to less time-sensitive information.

When they were asked how they read monthly magazines, the response was dramatically different. Three out of four respondents read them solely or mostly in print. Only 6% read them only in digital form. This probably reflects the length of the articles and the fact that they are reading them at home or on airplanes. It is reassuring to know that the magazine is not yet ready for the trash bin of history.

As significant as the medium in which information is consumed is the timeframe in which it is wanted. National Journal has been conducting this survey for many years, but in the past only small pieces of information were shared with outsiders. One critical insight that did emerge was the overwhelming importance of timeliness to Capitol Hill staff operating under the enormous pressure of the legislative agenda. Most staff focus on specific areas of policy and have little time to stay broadly informed. Even within their areas, they typically can concentrate only on the specific questions being actively debated in Congress. If the topic of your report or article is not on the agenda when it is published, do not expect Hill staff to read it right away. But when a topic is on the agenda, Hill staff often find it hard to acquire as much information as they want. For those who produce information and analysis, the key is to feed that information to the staff when they need it. It might be stale to you, but it could be a revelation to congressional staff.

The current survey provides finer detail on the importance of timeliness. When asked where they would look for information they needed within the next two hours (not an unusual situation), the respondents overwhelmingly favored the major news sites and an Internet search. Only about 10% mentioned an academic expert. But if they had a couple of days to obtain the information, the leading sources would be the think tanks and academic experts, with about 65% of respondents naming them. Only about a quarter of the respondents listed blogs.

This should be very reassuring to those whose stock in trade is intellectual rigor. Although we hear plenty of moaning about the shallowness of policy debates and the dominance of bumper-sticker analysis, this survey indicates that the people who make and directly influence national policy value expertise and thorough analysis. For those of us who provide it, the key is to make certain that our contributions reach the target audience when they are wanted. Issues maintains a free searchable online archive of published articles and also assembles collections of articles on major topics such as energy, competitiveness, public health, and national security.

Washington is a noisy place, and the clamor for attention seems to create a cacophony of faceless voices of which only the loudest and crudest can be heard. When asked what word best describes their response to the proliferation of media content, the most common response was “overwhelmed.” But it appears that the voices of the better informed, the more thoughtful, and the more responsible are the ones that are being listened to.

When asked which sources of information they trust, 90% of respondents named the mainstream media such as the New York Times, CNN, and National Public Radio. Only 20% cited online-only sources such as the Huffington Post and Drudge Report, and 10% named blogs. The results were consistent when they were asked which columnists, bloggers, or opinion makers they follow regularly online. The favorites come from the print world: Krugman, Will, Thomas Friedman, and David Brooks. The online commentators such as Matt Drudge and Josh Marshall appear much further down the list.

The upshot of the survey is that although the paths by which news and analysis reach the political elite are changing because of new technology, the sources of authoritative opinion are weathering the storm. Whether read online or in print, the New York Times, the Washington Post, and the Wall Street Journal are still recognized as having the editorial judgment and journalistic standards that instill confidence. Uninformed opinion and simplistic analysis may seem to dominate debate in the crisis of the day, but when time allows—and eventually there is time—Washington turns to the intellectuals in think tanks and universities because they understand the value of deep knowledge and careful reasoning.

OK, this isn’t true of everyone in Washington, and perhaps it’s true only on the best days of those who participated in the survey. But it’s still a reminder to those capable of providing informed expert opinion that this is a valued commodity in Washington. We shouldn’t be tempted by the siren call of instant headlines, catchy one-liners, and volume-driven debates. That is not what will drive policy in the long run, and besides, we pointy heads aren’t very good at it.

Clearly written, evidence-based, made-available-when-needed policy analysis and prescriptions, even when produced on paper, do have power in Washington, and this survey shows that the users are asking for them.

Greening the Built Environment

There was a stunted debate in Washington and the country in 2009 about climate change that ended the way many debates do these days: with a hung jury and no action. Yet at some point the United States will have to seriously address climate change. As one of the world’s highest per capita emitters of greenhouse gases (GHGs), the United States must lead on this issue. Time will force action, and the longer policymakers wait, the higher the economic, social, and environmental costs the country and the planet will be forced to bear.

When the debate resumes in earnest, let us hope that the supply-side argument—energy efficiency, renewable energy, and alternative fuels—will not be the dominant thrust. Instead, demand mitigation should be the number-one means by which we meaningfully address the issue, as Peter Calthorpe compellingly argues in Urbanism in the Age of Climate Change. To quote the immortal Mr. Miyagi in the 1984 movie The Karate Kid, when advising the teenage Daniel on how to avoid being beaten up by his high-school chums: “Best defense, no be there.” The best way to emit fewer GHGs is to live in a place that does not require the burning of fossil fuels: a walkable urban place where nearly all daily trips from home can be made on foot, by bike, or by a short car or transit ride. Although demand mitigation alone will not enable the United States to achieve the 90% reduction in GHG emissions (from the 1990 base) needed by 2050, without demand mitigation the supply side will be insufficient, as Calthorpe points out.

There has been a crying need for this short, richly illustrated, cogent book to demonstrate the connection between the built environment (buildings and the transportation infrastructure used to travel between those buildings) and energy use and GHG emissions, and Calthorpe is the ideal author. He is the president of an international urban planning firm, one of the founders of the Congress of the New Urbanism, and the author of some of the most important books on architecture and urbanism of the past three decades. These include Sustainable Communities (1986, co-written with Sim Van der Ryn); The Pedestrian Pocket Book (1991, with Doug Kelbaugh), in which he introduced the concept of transit-oriented development; and The Regional City (2001, with Bill Fulton). With his track record of consistently being well ahead of the curve, one can understand why Newsweek named him one of 25 innovators on the cutting edge.

To solve the climate change challenge, Calthorpe writes, we need to focus on ends, not means. For example, the goal of transportation is access, not movement or mobility; movement is a means, not the end. Thus, bringing destinations closer together when developing the built environment is a simpler, more elegant solution than assembling a fleet of electric cars and the acres of solar collectors needed to power them. Calthorpe calls it “passive urbanism.”

To Calthorpe, where and how we live matters most. The energy use of an average U.S. single-family household (living in a detached home and driving to work) totals just less than 400 million British thermal units per year. If this family bought a hybrid car and weatherized its home, it could cut its energy use by 32%—not bad for what Calthorpe dubs “green sprawl.” In contrast, a typical townhome located in a walkable urban neighborhood (not necessarily in a center city but near transit) without any solar panels or hybrid cars consumes 38% less energy than the green sprawl household. Traditional urbanism, even without green technology, is better than green sprawl. Greening that city townhouse and improving transit options results in 58% less energy use than an average suburban household, Calthorpe calculates. A green in-town condo is even better: 73% less in energy use than the average single-family home in a distant suburb.
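To make those percentages concrete, here is a minimal back-of-the-envelope sketch, purely illustrative: the roughly 400 million Btu per year baseline and the reductions expressed relative to the average suburban household are the round figures quoted above, and nothing else is implied about Calthorpe’s underlying calculations.

```python
# Back-of-the-envelope conversion of the percentage reductions cited in the
# review (relative to an average suburban household of ~400 million Btu/year)
# into absolute annual energy use. The arithmetic is purely illustrative.

AVERAGE_SUBURBAN_MMBTU = 400.0  # average single-family suburban household, million Btu/year

reductions = {
    "Green sprawl (hybrid car + weatherized home)": 0.32,
    "Green urban townhome with improved transit": 0.58,
    "Green in-town condo": 0.73,
}

for household, cut in reductions.items():
    remaining = AVERAGE_SUBURBAN_MMBTU * (1 - cut)
    print(f"{household}: ~{remaining:.0f} million Btu/year ({cut:.0%} below average)")
```

Even in this rough form, the ordering echoes Calthorpe’s point: urban form does at least as much work as green technology bolted onto a suburban house.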

Calthorpe argues that a supply side–only approach will not lead us to the most cost-effective, socially rewarding, or environmentally robust solutions. Combining supply-efficiency and demand-mitigation strategies will reduce demand for travel through urbanism, reduce oil consumption through more efficient cars, and reduce electricity use through intelligent building design. New energy sources and technologies, the supply-side approaches, can be deployed in more modest doses, ultimately at less cost.

Calthorpe demonstrates that those who create the built environment have the number-one means of reducing energy use and GHG emissions. “Urbanism is, in fact, our single most potent weapon against climate change, rising energy costs, and environmental degradation,” he writes.

His logic is as follows:

  • The built environment is responsible for more than two-thirds of U.S. energy use and GHG emissions.
  • The spectrum of options by which the built environment is constructed can affect energy use and GHG emissions dramatically.
  • Building “green urbanism,” which today achieves a tremendous per-square-foot market price premium, thereby showing its market appeal, will move the country to where scientists say it needs to be in order to avoid irreparable climate change.

Calthorpe does not stop at making his case for green urbanism; he also demonstrates how green urbanism can be implemented. In doing so, he makes use of his pioneering work in “scenario planning” for metropolitan areas, envisioning how the built environment will evolve over coming decades. Scenario planning links the many direct and indirect effects of the built environment: land use and conservation, transportation and other infrastructure, water and air pollution, economic growth, net fiscal effects, health, and so on. Calthorpe is currently using the tools he and his colleagues developed to help California implement its own sweeping vision of the future. In my view, metropolitan scenario planning should be the foundation for the next federal transportation bill.

Perhaps the most persuasive example of the value and power of scenario planning is Calthorpe’s work during the past decade in the Salt Lake City metropolitan area. This politically conservative and fast-growing area essentially decided to embrace a green urban future. For example, it has invested in an extensive light rail and commuter rail system. The reason the area chose this path was because the total infrastructure cost would be far lower than for continued sprawl, and economic growth was projected to be greater.

Urbanism in the Age of Climate Change could prove to be the most important book of the year regarding the built environment, the most important book in the environmental movement, and the most important real estate business book as well—quite a hat trick.

An Energy Agenda for the New Congress

At the beginning of this new Congress, it is already becoming clear that energy policy will have a major place on the agenda. That is partly because the president made clear in his State of the Union speech that energy will be a major priority for his administration. It is also because our energy security is dependent on overseas supplies and global stability. The events that we have seen unfold in North Africa and the Middle East are stark reminders that the world is an unpredictable place. Whenever geopolitical events potentially affect our access to affordable energy supplies, it is a spur to consider energy policies that might reduce those geopolitical risks.

But perhaps more important than any of those reasons is the competitive pressure the United States is experiencing from other major world economic powers as they take a leading role in clean energy markets. According to Bloomberg New Energy Finance, new investment in clean energy globally reached nearly a quarter of a trillion dollars in 2010. That was a 30% jump from 2009 and a 100% increase from the level in 2006.

China alone invested $51.1 billion in clean energy in 2010, making it the world’s largest investor in this sector. China now manufactures over half of the photovoltaic modules used globally. In 2010, China installed about 17 gigawatts of new wind capacity, roughly half of the total capacity installed globally, with virtually all the equipment being supplied by its domestic manufacturers.

But the concern about the competition for clean energy jobs is not just about China. Europe also made major strides last year toward competing in these markets. Countries such as Germany, the Czech Republic, Italy, and the United Kingdom have emphasized small-scale distributed electricity-generation projects. In Germany, 8.5 gigawatts of new photovoltaic capacity were added in 2010. The United States must be aware of these initiatives as it considers its course of action.

It is also significant that other countries consume energy more efficiently than does the United States. According to the International Energy Agency, Japan, the United Kingdom, and Canada are all ahead of the United States in implementing policies to make sure they get the most out of every BTU that they consume. Japan, for example, has its Top Runner program, which encourages competition among appliance and equipment manufacturers to continuously improve the efficiency of those appliances and that equipment.

So the question is: How does the United States respond to this competition for clean energy jobs? I believe that to remain at or near the forefront of this strongly developing market, the United States needs to do at least four things:

  • First, it needs to ensure that it remains at the forefront of energy R&D, because innovation is the source of its greatest competitive strength. The president made that point in his State of the Union Speech and in other forums as well.
  • Second, it must ensure that it has a strong domestic market for clean energy technologies. Without clean energy market pull in the United States, there will not be the incentive to manufacture and deploy these technologies here.
  • Third, it has to ensure that it has the necessary financial infrastructure and the incentives to provide the capital needed to build advanced energy technology projects.
  • Finally, it needs to have explicit policies to promote the development of U.S. manufacturing capabilities for these clean energy technologies.

I think these four items or elements should be at the heart of whatever comprehensive energy legislation we undertake in this Congress. Let me say a few more words about each of them.

R&D

The first item to consider is support for advanced energy technology R&D. The United States has traditionally led the world in many of the characteristics that are essential to having an innovation economy. It has the predominant share of the world’s best research universities. It is the world’s largest source of financial capital. It has a disproportionate share of the world’s leading innovators in high technology. But these advantages are shrinking rapidly. In 2007, U.S. energy research expenditures were at about 0.3% of gross domestic product (GDP). Japan was at about 0.8% of GDP, and even China was at about 0.4%. Since then, overseas competitors have significantly increased their research investments in energy, while U.S. investments in this area have grown only modestly. It is clear that if Congress is to put together any kind of bill that deserves to be labeled as comprehensive energy legislation, we need to address the huge gap between where the nation’s investment in energy technology research is and where in fact it ought to be.

In his State of the Union address, President Obama correctly identified this as a major priority for the appropriations process this year. He followed up on that speech by submitting a budget proposal for the Department of Energy (DOE) in February that increased the department’s budget by nearly 12%, with strong funding increases proposed for basic energy sciences, the Advanced Research Projects Agency–Energy, and expanded technology programs for solar, wind, geothermal, and biomass energy. And he did all this at a time when he was proposing government-wide budget cuts to deal with the deficit. His willingness to make thoughtful and forward-leaning investments in energy R&D demonstrates the priority he has given to this area.

The second item is ensuring robust domestic demand for clean energy technologies. It is not enough just to support the research. Getting clean technologies developed, manufactured, and deployed in the United States will require a robust and certain demand for clean energy in the marketplace. This reality was underscored to me during a recent trip to Silicon Valley. I spoke to various people there involved in financing and developing clean energy projects. The message I heard consistently was that uncertain U.S. demand for clean energy is preventing many promising clean technologies from being developed in this country. Companies will not establish a manufacturing base where they do not see a strong market. Private capital sources are, in fact, exerting intense pressure on U.S. clean energy innovators to establish their manufacturing base overseas, where government policies are creating this strong clean energy demand.

We have to take seriously the marketplace reality that the high-wage clean-energy manufacturing of the future will be located both close to demand and in countries with the most favorable clean energy policies. My desire is to see the United States lead the world in renewable energy manufacturing, so that all of the solar panels and wind turbines that are installed around the country are not stamped “Made in China” or “Made in Germany.” This is the key reason why I have long supported a Renewable Electricity Standard. The country needs to have long-term market predictability for renewable electricity. On-again, off-again production tax credits are no match for the comprehensive approaches being put in place by other countries.

The third item is support for deployment. Although end-use demand is certainly one of the first things an entrepreneur or potential investor looks at when deciding where to locate operations, the analysis does not end there. There is an equally important question: Is there a path to full commercialization of this technology? How can one build the first-of-a-kind project (or the first-few-of-a-kind projects) using a new clean energy technology to demonstrate its actual cost and performance? This is what the private sector wants to see before it will invest in a technology.

This is a particular problem for clean energy technology, because the capital costs in this area are higher than those of previous U.S. high-tech success stories such as information technology or biotechnology. No investor in today’s marketplace can match these capital requirements alone. Asian and European countries have set up institutions to address the problem. They have already successfully lured companies to commercialize and manufacture their U.S.-developed clean energy technologies in those markets. The United States needs to set up similar institutions if it hopes to support clean energy jobs at home.

The fourth element is support for manufacturing. If the nation wants clean energy jobs, it needs to have policies to encourage domestic manufacturing. In addition to providing a predictable market for clean energy and a robust financing capability for first-of-a-kind projects, domestic companies need to have incentives for manufacturing the critical components for clean energy technologies. Other countries, most notably China, have complemented their clean energy market standards with robust tax incentives and other fiscal subsidies specifically targeted at manufacturing clean energy components. And as a result, the United States has gone from being a world leader in producing clean energy technologies and enjoying a green trade surplus of more than $14 billion in 1997, to a green trade deficit of nearly $9 billion in 2008. The country cannot afford to sit idly by as its economic competitors move clean energy manufacturing steadily overseas, and deprive Americans of solid job opportunities.

So these are four key strategic elements that need to be included in any energy legislation in this Congress, if an energy bill is to help us compete in global energy markets in the future. None of these individual ideas are new, but their interconnection is now more apparent. A few years ago, it seemed possible that the country could do just one or a few of these things and be successful. It is now clear that action is required on all four of them and on a level that is competitive with what other countries are doing.

Policy prescriptions

Let me now describe some of the specific policy initiatives that I think will be very timely to pursue in the Senate this year. Most of these initiatives will be items I hope to champion in the Committee on Energy and Natural Resources. This is not intended to be an all-inclusive list. The committee has 22 members, many of whom have just been appointed. I anticipate numerous meetings and extensive bipartisan dialogue over the next few weeks as we work out our legislative roadmap for this Congress. But the following topics are issues that I think are particularly crucial for us to address. They are also issues where we did have strong bipartisan consensus in the 111th Congress. This gives us a good place to start our deliberations this year.

The cheapest energy is the energy we do not have to use because we operate more efficiently. So, clearly, energy efficiency is where I would start. In the last Congress, we had a very productive dialogue in the Energy Committee and among businesses, manufacturers, and efficiency advocates interested in appliance and equipment energy efficiency. The result was a package of legislative provisions that codified consensus agreements to update certain existing appliance standards, to adopt new appliance standards, and to improve the overall functioning of DOE’s efficiency standards program. Many of these efficiency provisions were part of the comprehensive energy bill we reported out of committee in 2009. Others were subsequently approved by the committee or incorporated into bipartisan bills.

These sorts of standards are essential if U.S. appliance manufacturers are to remain competitive in world markets, which will increasingly demand highly efficient appliances and equipment. By ensuring a strong domestic market for energy-efficient products, we keep innovation and jobs here in the United States, while realizing significant energy and water savings and major cost savings to the consumer.

Obviously, we had great difficulty getting any sort of legislation through in the lame-duck session of the last Congress, and we were not able to enact these consensus provisions. We had broad, though not unanimous, bipartisan support in the Senate. This is an important piece of our early agenda in this Congress, and I have introduced a follow-on bill along with Senator Murkowski and other colleagues. At a recent hearing before the Energy Committee, the bill was broadly endorsed by industry, consumer, and environmental groups. I look forward to advancing it to consideration by the full Senate.

There is also much that can and should be done to promote efficient use of energy in other parts of the economy. In residential and commercial buildings, a broad coalition supported Home Star, a program for residential building efficiency. Similar interest was apparent with commercial buildings in a program called Building Star. I plan to continue to advance the goals of these proposals in this Congress, although the form in which we provide funding to promote these goals may need to change. In transportation, two proposals from the previous Congress deserve a closer look. First, we should provide a greater point-of-sale incentive to vehicle purchasers, with dealership rebates that would be larger for the more fuel-efficient cars. Senators Lugar, Snowe, and others cosponsored this legislation with me in the previous Congress. A second set of proposals dealt with diversifying the sources of energy that we use in transportation. This bill, which was proposed by Senators Dorgan and Alexander, passed out of the Energy Committee on a 19-4 vote.

Energy efficiency in manufacturing and industrial operations is also important. The legislation reported by the committee last year contained a comprehensive program on manufacturing energy efficiency that had good bipartisan support. Again, I hope we can move forward with this legislation.

Another priority is the one highlighted by the president in his State of the Union speech: moving to a cleaner energy mix in the way we generate electricity. For a number of years, I have advanced a proposal for a Renewable Electricity Standard to ensure long-term and predictable demand for renewable clean energy resources. The president proposed to expand on that concept by including a broader suite of technologies such as nuclear energy, coal with carbon capture and storage, and natural gas generation. The president’s goal, as he described it, is to obtain 80% of the nation’s electricity from such clean energy sources by 2035. The White House has asked us to work with them to see how the provisions for this Clean Energy Standard would be developed. Obviously, there are a lot of details to work out. I am pleased that the administration has reached out to the committee to consult on this subject.

Perhaps no topic garnered more scrutiny during the previous Congress’s markup than the Renewable Electricity Standard. I plan to work with colleagues on both sides of the aisle in the committee to determine how we can craft a workable legislative proposal to achieve what the president has set out as his goal. As we do so, a number of key design questions will need to be answered: What counts as a clean energy technology? How does the proposal account for existing clean energy sources? Does the credit trading system that we developed for renewable resources fit with these other resources?

With respect to financing assistance for energy projects, I think there are at least three top priorities for early attention in this Congress: reforming the current loan guarantee program for clean energy projects, providing financing support for advanced energy manufacturing in this country, and providing reasonable stability and predictability in the tax provisions that apply to clean energy projects and technologies.

The first of these is to replace the current loan guarantee program for clean energy technologies with a Clean Energy Deployment Administration. CEDA would be a new independent entity within DOE, with autonomy similar to that of the Federal Energy Regulatory Commission. It would provide various types of credit to support the deployment of clean energy technologies, including loans, loan guarantees, and other credit enhancements.

This proposal received strong bipartisan support in the Energy Committee as part of the larger energy bill we reported. It also had a broad range of external support from clean energy developers, innovators, and venture capital firms. Fixing the problems of the current DOE loan guarantee program and ensuring that we have an effective financing authority for a broad range of clean energy technologies, including renewables, nuclear, energy efficiency, and carbon capture and storage, needs to be one of our highest priorities. I am committed to moving ahead with that legislation in this Congress.

The second priority in the area of financing assistance relates to encouraging the domestic location of manufacturing facilities and replenishing the fund to award tax credits under section 48C. This section provides up to a 30% tax credit for the costs of creating, expanding, or reequipping facilities to manufacture clean energy technologies.

The initial funding was vastly oversubscribed; the government received $10 billion in applications for $2.3 billion in tax credits. This is a powerful demonstration of the potential for clean energy manufacturing that exists in this country. In the previous Congress, Senators Hatch, Stabenow, and Lugar joined me in filing the American Clean Technology Manufacturing Leadership Act. This bill would have added another $2.5 billion in tax credit allocation authority. President Obama has since called for an additional $5 billion. I hope we can help reintroduce bipartisan legislation to ensure this credit’s continuation at the president’s proposed level. Although this is a matter that will be handled in the Finance Committee, it is an important near-term bipartisan opportunity in this Congress.

The third essential element is to bring stability and predictability to this part of the tax code in order to attract private capital to clean energy projects. If you look at this part of the tax code, many of the energy-related tax incentives will expire at the end of 2011, including the section 1603 program; the credit for energy-efficient residential retrofits; the credit for construction of new energy-efficient homes; the credit for energy efficient appliances; and the incentives for alcohol fuels (mostly ethanol), biodiesel, and renewable diesel. Other energy-related tax incentives are set to expire at the end of 2012, 2013, and 2016.

One other major challenge and priority for the committee in this Congress will be to address the proper and effective regulation of energy development in order to protect public health and safety and the environment. I have discussed this with Michael Bromwich, the director of the Bureau of Ocean Energy Management, Regulation, and Enforcement, and he is working very hard to get his arms around this critically important issue.

One of the important lessons learned from the National Commission on the Deepwater Horizon Oil Spill is that in the long run, no one—least of all the regulated industry—benefits from inadequate regulation and underfunded regulators. In the aftermath of the Deepwater Horizon disaster, the Committee on Energy and Natural Resources last June came together and unanimously voted out a bipartisan bill to address the key problems uncovered by our hearings on the disaster. Unfortunately, Congress did not enact our bipartisan bill.

At its first hearing in the current Congress, the committee heard from the co-chairmen of the President’s Commission on their recommendations. I hope to introduce in the near future a bipartisan follow-on bill to last year’s legislation. I hope that we can repeat our bipartisan success of the previous Congress in developing a bill that recognizes the need to develop the rich resources of the outer continental shelf but also to minimize the potential impact on the marine and coastal environment and on human health and safety.

Finally, an item that I hope the Energy Committee can address early in this Congress deals with perhaps the nation’s most pressing energy security problem: the vulnerability of the electrical grid to cyber attack. A major disruption of the electric transmission grid, or the equipment it contains, as part of a cyber attack could have disastrous consequences. We need to ensure that adequate preventative measures are in place across the grid. The problem is that we don’t currently have mechanisms to ensure that these needed steps are being taken. The whole grid is as vulnerable as its weakest link. In the previous Congress, the Energy Committee twice passed legislation to address this need. The House of Representatives also sent a bill to the Senate on this subject, but again, due to the inability to process legislation in any mode other than unanimous consent in the Senate, we were not able to pass the legislation into law. I hope to work with the members of the committee on both sides to deal with this issue early in this Congress.

In conclusion, this Congress has before it an aggressive agenda of issues and proposals that relate to energy in all its forms and uses. At the same time, we face a daunting partisan environment in Congress for legislation of any type, as well as the added challenge of responding to higher prices for fuels and electricity that are being occasioned both by the energy demand created by global economic recovery and by instability in North Africa and the Middle East. My plan is to work to achieve bipartisan engagement with both the returning and new members of the Senate Energy and Natural Resources Committee, so that we make visible progress on a suite of energy bills that the full Senate could consider in the first several months of this year.

From the Hill – Spring 2011

Obama proposes essentially flat 2012 R&D budget

On February 14, the Obama administration proposed a fiscal year (FY) 2012 R&D budget of $147.9 billion, a $772 million or 0.5% increase from FY 2010. Although the overall budget is essentially flat, the president carves out increases for his priorities in areas such as clean energy R&D, education, infrastructure, and innovation.

The White House released its budget request the same week as the new Republican majority in the House approved a bill to provide funding for the remainder of the 2011 fiscal year that includes significant cuts in R&D spending.

In releasing the federal R&D budget request, John Holdren, director of the White House Office of Science and Technology Policy, said that “This is a budget that our nation can be proud of. It provides solid research and development investments to achieve game-changing advances in areas of crucial importance to [the nation’s] future.”

Overall, basic and applied research and nondefense research fare very well in the president’s budget request. Basic research would grow almost 12% to $32.9 billion. Applied research would increase 11.4% to $33.2 billion. Total nondefense research would increase 6.5% to $66.8 billion.

The president’s budget keeps the National Science Foundation (NSF), the Department of Energy’s (DOE’s) Office of Science, and the National Institute of Standards and Technology (NIST) on a multiyear path to doubling their budgets. The NSF R&D budget would increase 16.1% to $6.3 billion. The DOE Office of Science budget would increase 9.1% to $4.9 billion. The NIST budget would increase dramatically by $284 million to $872 million, mostly because of a ramping up in investments in cyberinfrastructure research and advanced manufacturing technologies. Because funding for part of FY 2011 still has not been approved, all figures for FY 2012 use a FY 2010 baseline for comparison.

Climate change is also a priority in the administration’s budget. Funding for the U.S. Climate Change Research Program, an interagency initiative, would increase more than 20% to $2.6 billion.

Several key agencies would see modest increases in their budgets, including the National Institutes of Health, which would receive a $1 billion or 3.4% increase to $31.2 billion. The National Aeronautics and Space Administration R&D budget would rise by $559 million or 6% to $9.8 billion. The National Oceanic and Atmospheric Administration (NOAA) budget would increase by $36 million or 5.2% to $728 million.

Other agencies did not fare so well. The U.S. Department of Agriculture (USDA) budget would decline by 17.7% to $2.15 billion, mostly because of reductions in building and facilities, congressionally designated projects, and extramural research. The Department of Interior R&D budget would drop by $49 million to $727 million. The U.S. Geological Survey budget would decrease by 8.2% to $607 million. The Environmental Protection Agency (EPA) budget would decline by more than 12% to $579 million. The Department of Defense R&D budget would decline by 4.9% to $76.6 billion, although most of the decrease is because of cuts in development. Basic research would increase by 14.5% to $2.1 billion.

The president’s FY 2012 budget request stands in stark contrast to the bill passed by the House on February 19 that would cut FY 2011 discretionary funding by $61 billion below FY 2010 enacted levels. Under the bill, which was rejected by the Senate, R&D as a whole would be cut by $6.41 billion, 4.4% less than FY 2010. Overall, the president’s budget request totals $7.4 billion or 12.5% more in nondefense R&D investment than the House bill. Some of the biggest differences are in funding for energy R&D, the NIH, and the NSF.

R&D in the FY 2011 and FY 2012 Budgets by Agency (budget authority in millions of dollars)

Agency | FY 2011 Current CR | FY 2011 House | % change from FY 2010 | FY 2011 Senate | % change from FY 2010 | FY 2012 Budget | % change from FY 2010
Defense (military) | 81,442 | 77,189 | -4.2% | 76,739 | -4.8% | 76,633 | -4.9%
  S&T (6.1-6.3 + medical) | 13,307 | 13,308 | 0.0% | 13,309 | 0.0% | 13,311 | 0.0%
  All Other DOD R&D | 68,135 | 63,881 | -5.1% | 63,430 | -5.7% | 63,322 | -5.9%
Health and Human Services | 31,948 | 30,345 | -3.4% | 31,943 | 1.7% | 32,343 | 2.9%
  National Institutes of Health (1) | 30,157 | 28,583 | -5.2% | 30,159 | 0.0% | 31,174 | 3.4%
  All Other HHS R&D | 1,791 | 1,762 | 38.8% | 1,784 | 40.5% | 1,169 | -7.9%
Energy | 10,783 | 9,328 | -13.9% | 10,133 | -6.5% | 12,989 | 19.9%
  Atomic Energy Defense | 4,074 | 4,074 | 5.7% | 3,851 | -0.1% | 4,522 | 17.3%
  Office of Science | 4,481 | 3,515 | -22.4% | 4,141 | -8.5% | 4,940 | 9.1%
  Energy Programs | 2,228 | 1,739 | -29.1% | 2,141 | -12.8% | 3,527 | 43.7%
NASA | 9,911 | 9,820 | 6.0% | 9,979 | 7.7% | 9,821 | 6.0%
National Science Foundation | 5,374 | 5,223 | -4.1% | 5,355 | -1.7% | 6,320 | 16.1%
Agriculture | 2,619 | 2,239 | -14.2% | 2,548 | -2.4% | 2,150 | -17.7%
Commerce | 1,331 | 1,199 | -10.8% | 1,298 | -3.4% | 1,720 | 28.0%
  NOAA | 684 | 593 | -14.3% | 660 | -4.6% | 728 | 5.2%
  NIST | 589 | 542 | -7.8% | 573 | -2.5% | 872 | 48.3%
Transportation | 1,054 | 970 | -9.3% | 1,049 | -1.9% | 1,215 | 13.7%
Homeland Security | 887 | 803 | -9.4% | 727 | -18.0% | 1,054 | 18.8%
Veterans Affairs | 1,162 | 1,162 | 0.0% | 1,162 | 0.0% | 1,018 | -12.4%
Interior | 776 | 750 | -3.4% | 770 | -0.8% | 727 | -6.3%
  U.S. Geological Survey | 661 | 646 | -2.2% | 657 | -0.6% | 607 | -8.2%
Environmental Protection Agency | 590 | 552 | -6.4% | 576 | -2.3% | 579 | -1.9%
Education | 356 | 350 | -0.9% | 356 | 1.0% | 480 | 36.0%
Smithsonian | 226 | 224 | 5.1% | 226 | 6.1% | 212 | -0.5%
All Other | 575 | 575 | 1.8% | 575 | 1.8% | 650 | 15.0%
Total R&D | 149,034 | 140,730 | -4.4% | 143,435 | -2.5% | 147,911 | 0.5%
Defense R&D | 85,516 | 81,263 | -3.8% | 80,590 | -4.6% | 81,155 | -3.9%
Nondefense R&D | 63,518 | 59,467 | -5.1% | 62,845 | 0.3% | 66,756 | 6.5%

Source: OMB R&D data, H.R.1 as passed by the House, Senate bill as posted on appropriations website, agency budget justifications, and agency budget documents.

Note: The projected GDP inflation rate between FY 2010 and FY 2012 is 2.7 percent.

All figures are rounded to the nearest million. Changes calculated from unrounded figures.

(1) H.R. 1, Sec. 1812 sets the average total cost of all Competing RPGs awarded during FY 2011 at a maximum of $400,000; Sec. 1850 directs NIH to award at least 9,000 new competing research grants in FY 2011.

Major R&D cuts in the House bill, compared to FY 2010, include: the USDA, $415 million; NIST, $160 million; NOAA’s Operations, Research, and Facilities budget, $454 million; NSF, $360 million; fossil energy R&D, $131 million; the Department of Education’s Mathematics and Science Partnership Program, $180 million; and NIH, $1.6 billion. Additionally, the House bill would prohibit the use of federal funds for NOAA’s Climate Service, the Intergovernmental Panel on Climate Change, and EPA programs involving greenhouse gas registry, greenhouse gas regulation, offshore drilling, mountaintop mining, mercury emissions from cement plants, and Chesapeake Bay cleanup.

Because Congress could not agree to a bill funding the government for the full fiscal year, it approved a temporary bill that extended funding through March 4 but which also cut spending by $4 billion below enacted FY 2010 levels. The cuts included $41 million in the Department of Homeland Security’s Science and Technology Program and $77 million and $292 million, respectively, in DOE’s Office of Science and Energy Efficiency and Renewable Energy program.

In a March 3 letter sent to Senate Majority Leader Harry Reid (D-NV) and Minority Leader Mitch McConnell (R-KY), the Task Force on American Innovation, which is made up of about 170 scientific and other organizations, said the cuts in the House bill would have a “devastating impact” on the NSF, DOE’s Office of Science, NIST’s core research programs, and science, technology, engineering, and math (STEM) education programs contained in the America Competes law, a major priority of the research community.

In a flurry of activity in the lame-duck session in December 2010, Congress unexpectedly approved reauthorization of the America Competes Act. The primary goal of the Act is to authorize increased funding over three years, from FY 2011 to FY 2013, for the NSF, NIST, and the DOE’s Office of Science. NSF would receive $7.4 billion, $7.8 billion, and $8.3 billion; NIST would receive $919 million, $971 million, and $1.04 billion; and the Office of Science would receive $5.3 billion, $5.6 billion, and $6 billion. In addition, the legislation provides modest increases for DOE’s Advanced Research Projects Agency-Energy to $300 million, $306 million, and $312 million, respectively. Given the new political landscape, these increases are now in question.

Scientific integrity guidelines released

More than 21 months after President Obama requested them, the White House Office of Science and Technology Policy (OSTP) on December 10, 2010, released government-wide guidelines on scientific integrity. The document elaborates on the principles laid out by the president on March 9, 2009, and provides guidance to executive departments and agencies on how to develop policies on issues involving scientific integrity.

The guidelines are in response to controversies that occurred during the George W. Bush administration. A number of scientists, scientific organizations, and congressional leaders accused Bush officials of taking steps that politicized science.

The memorandum states that science should be free from “inappropriate political influence.” To strengthen government research, the memo states that job candidates should be hired “primarily” on their merits, that data and research used to support policy decisions should undergo peer review when possible, and that clear conflict-of-interest standards and appropriate whistle-blower protections should be promulgated. Additionally, when appropriate, agencies should make scientific and technological information readily available, communicate scientific findings to the public in a clear and accurate manner, and detail assumptions, uncertainties, probabilities of outcomes, and best- and worst-case scenarios of scientific findings.

The memorandum states that for media interview requests, agencies should make available an “articulate and knowledgeable spokesperson” who can portray a research finding in a nonpartisan and understandable manner. Also, after appropriate coordination with their immediate supervisor and the public affairs office, federal scientists may speak to the media and the public about their findings, and the public affairs office cannot ask or direct scientists to change their findings.

The guidelines call on agencies to establish policies that promote professional development of government scientists and engineers and encourage research publication and the presentation of research at professional meetings. Also, the guidelines say that government scientists and engineers should be allowed to be editors and editorial board members of scholarly and professional journals, serve as officers and board members of professional societies, and receive honors and awards.

Reaction to the guidelines was mixed, with some observers saying they left too much discretion to individual agencies.

Climate negotiations inch forward in Cancun

Expectations for the 2010 international climate negotiations in Cancun were far more modest than for 2009’s Copenhagen conference, which allowed many to declare the December 2010 meeting of the 190 nations that are party to the United Nations Framework Convention on Climate Change a success. But key decisions on how to move forward on a global system to reduce greenhouse gas (GHG) emissions after the Kyoto Protocol ends in 2012 were left until the next meeting, to be held in Durban, South Africa, from November 28 to December 9, 2011. Delegates did, however, agree that cuts will be needed by both developed and developing countries, and they made progress on other significant issues.

The Cancun agreements established a Green Climate Fund to help developing countries mitigate and adapt to climate change. Developed and developing countries will share control of the fund, with the World Bank initially serving as trustee. Much of the funding for the fund’s adaptation efforts will come from a “fast track finance” fund with an initial commitment of $30 billion and a goal of increasing the amount to $100 billion by 2020, although how the funds will be raised has yet to be resolved. In addition, a new framework and committee were established to promote action on adaptation.

Several agreements were advanced to help reduce GHG emissions through the use of technology and incentives for reducing deforestation. Governments agreed to boost technological and financial support for curbing emissions from deforestation and forest degradation in developing countries. Technology transfer mechanisms were established.

Progress was made in developing standards for the monitoring, reporting, and verification of emissions reductions, for both developed and developing countries, which has been a sticking point between China and the United States.

Patent reform moves ahead

On March 8 the Senate passed the America Invents Act by a vote of 95-5. Meanwhile, Rep. Lamar Smith (R-TX), chairman of the House Judiciary Committee, said that he plans to introduce similar legislation in the House. Both Congress and the Obama administration see reform of the patent system as a means of jumpstarting the U.S. economy and increasing innovation.

The bill would convert the U.S. patent system to a first-to-file regime, the method used in most countries, from the first-to-invent system currently used. It would allow the U.S. Patent and Trademark Office to set its own fees, thus raising the funds needed to hire more patent examiners and decrease the patent backlog, now estimated at more than 700,000 applications.

Furthermore, the bill creates three satellite patent offices, allows certain technology to receive priority for approval, and requires courts to transfer a patent infringement case to a venue that is more convenient than the one in which the action is pending. The bill also gives third parties the opportunity to challenge the validity of a patent once it is awarded.

Food safety reform bill finally passes

After months of congressional debate and delay, President Obama on January 4, 2011, signed major food safety legislation that will greatly expand the authority of the Food and Drug Administration (FDA) to regulate food production.

The FDA Food Safety Modernization Act will, for the first time, allow the FDA to issue a mandatory recall of food deemed tainted or unsafe. In the past, the agency has relied on voluntary recalls. The bill also gives the FDA the authority to detain food and suspend a facility’s operations should either be found to pose a health risk.

The new law calls on the FDA to create a system to facilitate the tracing of any product back to its origin. Should any shipment of produce, for example, be found tainted with harmful bacteria, the tracing system would make it simple to track down the farm from which it originated. The law also calls on the Secretary of Health and Human Services to conduct a comprehensive evaluation of common food contaminants and create a nationwide educational program on food safety.

Although the legislation enjoyed widespread support, some critics pointed out that it failed to resolve key jurisdictional issues. Notably, although the FDA generally oversees most food products, the Department of Agriculture (USDA) handles meat, poultry, and eggs. Because many food products are processed and packaged in facilities that handle foods under both FDA and USDA jurisdiction, overlaps between the two agencies are understandable, as are gaps in oversight.


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science (www.aaas.org/spp) in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

The Smart Grid: Separating Perception from Reality

There is a widespread expectation in the United States and around the world today that the smart grid is the next big thing, a disruptive technology poised to transform the electric power sector. The belief is that the use of smart meters and other devices and systems will allow consumers to manage their own electricity use to radically reduce energy costs. The implementation of a smart grid system will enable the widespread use of renewable energy sources, allow more-distributed electricity generation, and help reduce carbon emissions.

The reality, however, is more complex and sobering. The smart grid idea is more accurately characterized as an extension of innovations that have been ongoing for decades. Change will continue but will be incremental because the technology is still evolving and because most consumers do not want the more flexible and uncertain pricing schemes that would replace the predictable and stable pricing of today’s system. Indeed, it appears that most consumers, at least in the short term, will not benefit from moving to a smart grid system. Although a smart grid would probably help slow increases in electricity bills in the long run, it will not reduce them, because too many other factors will be pushing prices and power usage up in the years ahead.

The evidence from an IHS Cambridge Energy Research Associates study, which draws on the knowledge and experience of those closest to smart grid implementation, is that the smart grid “revolution” is off to a bumpy start and that there will be many more bumps in the road ahead. That road is still worth pursuing, but we will need to develop a more realistic understanding of how the electric power system in the United States is evolving. Instead of a demand-side–driven transformation of consumer behavior and the elimination of future capacity needs, expect a supply-side, engineering-driven application of smart grid technologies to improve network operation and reliability in the short term and to slow growth in generating capacity needs in the long run. In many respects, we already have a smart grid in the United States. In coming decades, we will be moving to a “smarter” grid. The pace will be gradual, but the eventual benefits will be real.

The smart grid narrative

In the United States and other developed countries, an appealing and optimistic vision of the future smart grid has gained credence, even though the move toward a smarter grid is likely to turn out quite differently. In this narrative, the United States and others are crippled by a balkanized “dumb” grid with endemic cascading failures, a result of continued reliance on antiquated, century-old technology. The solution is the smart grid: a continental-scale network of power lines incorporating advanced meters, sensing, and communication and control technologies that are linked through universal standards and protocols. It will be coordinated with advanced two-way broadband communication technologies that feed data into complex optimization software systems, allowing control technologies to deliver a more secure, self-healing, higher-quality, and lower-cost power network.

Smart grid deployment, the story continues, will dramatically reshape power use. The smart grid will present consumers with real-time power prices and displays of information regarding power use by specific end uses. These price signals and information streams will empower consumers to have more control over their power consumption. Consequently, the smart grid will alter consumer decisions either directly through behavioral changes or indirectly through preprogrammed smart appliances and control applications. As a result, market failures will be fixed and much of the low-hanging fruit of the efficiency gap will be harvested. These efficiency gains will provide enough savings to drive monthly power bills lower. In addition, the gains in reducing peak power demand will be more than enough to offset the baseline growth in power in the future. Consequently, the smart grid will eliminate the need to build conventional power plants in the years ahead.

The smart grid will also enable a transformation in power supply, the narrative says. Indeed, eventually the smart grid will allow renewable sources such as wind and solar to supplant traditional sources. The use of small-scale, distributed-generation resources will lead to a significant decarbonization of future power production. “Smart systems may well be mankind’s best hope for dealing with pressing environmental problems, notably global warming,” said the Economist in a November 6, 2010, special report.

The smart grid narrative also envisions a rapid increase in electric vehicles, which will generate power or act as batteries in the grid. In time, there will no longer be a need to build conventional power plants to deal with peak power periods because of the new distributed, small-scale power generation.

Finally, according to the current narrative, the pace of smart grid investment, including widespread installation of smart meters, demonstrates that smart grid technology is reliable, economical, and gaining enough momentum that the smart grid will be ubiquitous in power systems within a decade.

The above story about the smart grid has been repeated so often by industry leaders, technologists, and the media that it has taken on a life of its own. It is appealing because it reflects optimism that a disruptive technology can transform the power sector by solving problems that otherwise appear difficult and expensive to address with current technology, and that it can do so without downsides. But this vision is also too good to be true. In reality, forcing a technological transformation of the power sector through the deployment of smart grid technologies along with real-time power prices appears to be not only a formidable task but also not a very likely outcome any time soon.

Killer app?

Dynamic or real-time pricing, the ability to price electricity based on moment-to-moment changes in production costs, is expected to be the killer app of an emerging smart grid. The reality is that although some consumers can benefit from smart grid capabilities and dynamic pricing schemes, the majority cannot.

Real-time pricing is not a new idea. Economists have long considered the ability to use real-time prices that reflect the marginal cost of electricity at different times of the day as a more economically efficient way to price electricity. The Public Utility Regulatory Policies Act of 1978 encouraged utilities to use time-of-use–based rates to price electricity. Congress, in the Energy Policy Act of 2005, encouraged state regulators and utilities to shift from fixed rates to time-varied electric rates in order to increase energy efficiency and demand response.

But most consumers focus on their pocketbook rather than the theoretical basis of this supposedly more efficient pricing system. After all, the prospect of real-time pricing involves higher and more unpredictable prices; on an hour-to-hour basis, the marginal cost of electricity is hard to predict and can change by a factor of 100 during any given day. Research clearly indicates that most consumers far prefer the stable and predictable power pricing schemes they currently have.

Real-time power prices are usually higher than traditional rates during peak periods and lower during off-peak periods. But most consumers use more electricity during peak periods than during off-peak periods. Thus, unless they can shift enough of their power use, typical consumers face a higher bill with a move to real-time pricing. Most consumers, according to research, doubt they can do this and expect that real-time pricing will increase their bills.
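A stylized arithmetic example may help illustrate why. Everything below is hypothetical: the rates and kilowatt-hour quantities are invented solely to show the logic and are not drawn from any actual tariff or study.

```python
# Hypothetical illustration of why a peak-heavy household can pay more under
# time-varying rates unless it shifts usage. All rates and quantities are
# invented for illustration only.

FLAT_RATE = 0.12      # $/kWh, traditional stable rate (hypothetical)
PEAK_RATE = 0.20      # $/kWh, time-of-use peak price (hypothetical)
OFFPEAK_RATE = 0.06   # $/kWh, time-of-use off-peak price (hypothetical)

def monthly_bill(peak_kwh, offpeak_kwh, time_varying=True):
    """Compute a monthly bill under flat or time-of-use pricing."""
    if time_varying:
        return peak_kwh * PEAK_RATE + offpeak_kwh * OFFPEAK_RATE
    return (peak_kwh + offpeak_kwh) * FLAT_RATE

# A household that uses most of its electricity during peak hours
print(f"Flat rate:            ${monthly_bill(600, 300, time_varying=False):.2f}")
print(f"Time-of-use, as is:   ${monthly_bill(600, 300):.2f}")

# The same household after shifting half of its peak use to off-peak hours
print(f"Time-of-use, shifted: ${monthly_bill(300, 600):.2f}")
```

With these made-up numbers, the peak-heavy household’s bill rises from $108 to $138 under time-of-use pricing and drops below the flat-rate bill only after it shifts half of its peak consumption off-peak, which is precisely the kind of shift most consumers doubt they can make.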

Policy designed to support smart grid investments should avoid setting unrealistic expectations, especially the belief that smart grid programs will reduce power bills.

The residential consumers who are more supportive of dynamic pricing tend to be higher-income people with bigger homes who have more space to heat and to cool and more electric appliances. They are more likely to find an adequate payoff from investing in systems to manage this consumption across time and against dynamic prices. Pilot studies show that electric-intensive nonindustrial consumers respond favorably to enabling technologies such as programmable thermostats, price-alert mechanisms, or direct-load controls. In contrast, consumers with smaller homes and fewer electric appliances generally have less flexibility in shifting their power use. It is not surprising that consumer participation rates in dynamic pricing programs have usually been extremely low.

Participation in almost all dynamic pricing programs in the United States has been voluntary. Currently, time-of-use rates are offered by more than half of investor-owned utilities. Many of these programs have been offered for years, and in some cases decades. The average participation rate in such programs is estimated at 1%.

Participation in programs in Illinois is typical. Commonwealth Edison ran a residential real-time pricing pilot program from 2003 to 2006, and for the past four years has made it available to all of its residential consumers. A neighboring utility, Ameren, has a similar program. As of September 2010, Ameren Illinois and Commonwealth Edison each had about 10,000 participating customers, representing 1% and 0.3% respectively of their eligible consumers. In the eastern United States, Baltimore Gas and Electric made time-of-use rates available to residential consumers for several years, but only 6% of residential consumers opted to participate.

Arizona provides an example of how the characteristics of the customer base affect the outcomes. Consumers there tend to be more electric-intensive because of above-average cooling loads. In addition, the nature of these loads provides greater-than-average flexibility in the time pattern of electric use and thus a higher-than-average probability that shifting power use could lower a consumer’s power bill. The Salt River Project and Arizona Public Service (APS) have about half of their customers on a dynamic pricing scheme. APS offers four time-of-use rates to customers. A 2010 analysis of two of the rates indicated that customers saved 21% on their electricity bills as compared to being on a flat rate.

The same economic logic that helps explain the Arizona versus Illinois results also applies to nonresidential consumers. Some industrial and commercial consumers find that power bills make up a large percentage of their operating costs. They also have the flexibility to alter their consumption pattern and can thus benefit from dynamic pricing schemes. Still, it appears that only a minority of nonresidential consumers can benefit from dynamic pricing. For example, although Georgia Power runs one of the most successful real-time pricing programs in the country, it has signed up only 20% of its largest commercial and industrial customers.

Even for large nonresidential consumers, switching to real-time pricing does not guarantee lower prices. Indeed, many face higher power bills, according to research by Severin Borenstein in a September 2005 National Bureau of Economic Research working paper. In a four-year study of 1,142 large industrial and commercial customers in Northern California, Borenstein found that holding all else constant, about 55% would see their bills rise under real-time pricing. He estimated that most customers would see their bills rise or fall by less than 10%, with more variability in their monthly payments.

A majority of power customers are not clamoring for access to dynamic pricing. So what explains the enthusiasm expressed by many who have participated in smart grid pilot projects? First and foremost is the fact that the programs have been voluntary. As a result, participants are self-selected members of a small set of the population who are inclined to try a new technology because they like experimenting with innovations. But self-selection bias can make pilot-project results unreliable as an indicator of how the larger population is likely to react to the new technology. It is risky to assume that if other consumers were to learn about these programs or were required to participate, they would end up loving them too. Mandatory participation could also lead to a backlash and derail any significant implementation of the technology.

Indeed, a bit of a backlash has already occurred. Many smart grid initiatives are going forward without any dynamic pricing schemes, and those that do use dynamic prices employ highly muted price signals. Currently, there are no real-time pricing mandates for small customers (residential or small commercial) anywhere in the United States. This outcome of the regulatory process aligns with lessons from the past. The Maine Public Utility Commission mandated time-of-use rates for large-use residential consumers during the late 1980s, and the state of Washington mandated such rates for 300,000 residential consumers served by Puget Sound Energy in 2001. But in both cases most consumers were not able to shift enough usage to lower their electric bills, and the programs were eliminated within two years. In addition, these consumer preferences often translate into laws and regulations. California passed a law prohibiting dynamic pricing for residential customers, and New York imposed restrictions on the use of such pricing.

Many states, however, have recognized that some residential customers have the flexibility in power use to benefit from dynamic pricing and have required utilities to install a smart meter at the customer’s request. As expected, only a minority of consumers have requested the meters. Also as expected, these consumers are primarily large industrial firms. However, even for larger consumers, the offerings typically involve dampened price signals that fall far short of real dynamic pricing.

In addition to lackluster consumer demand, there have also been bumps on the supply side, as utilities have struggled to install the equipment and systems needed to make the smart grid work. There have been notable examples of technology problems and cost overruns, indicating that smart grid technologies and their optimal technical configurations are not yet proven and fully commercially available.

  • In Boulder, Colorado, Xcel Energy’s costs to implement a smart grid program have soared from an estimated $15.2 million in 2008 to $42.1 million in February 2010.
  • In Texas, Oncor Electric Delivery Company installed smart meters that later turned out not to comply with the standards set by the Public Utilities Commission of Texas. Oncor was subsequently allowed to recover $686 million from customers to install meters incorporating the new standards, as well as recover the $93 million cost of obsolete smart meters that were never installed.
  • In California, the communication system included in the original smart meter deployment at Pacific Gas and Electric Company (PG&E) turned out to be incompatible with the communication and control needs of the evolving smart grid applications. PG&E was allowed to increase prices to recover almost $1 billion of associated costs. In addition, in November of 2009, PG&E was forced to temporarily stop deploying smart meters in Bakersfield, California—part of its $2.2 billion, 10-million smart meter deployment program—because of consumer complaints and lawsuits concerning perceptions of billing errors. Although these perceptions turned out to be wrong, the backlash illustrates the problem of attempting to roll out the smart grid program at the same time that power prices were increasing.
  • In Maryland, the public service commission refused Pepco’s request to implement one form of dynamic pricing, even on an opt-in basis, because it considered the risk too great that customers would opt into the system with the expectation of lower bills only to find that, at least initially, the new rate would result in higher bills.
  • Also in Maryland, after consumer advocates challenged the cost/benefit analysis of Baltimore Gas and Electric’s (BG&E’s) smart grid initiative, the company’s request for rate recovery of the $835 million cost of its smart grid meter deployment plan was initially denied. The state Public Service Commission (PUC) ruled against BG&E even though the company had received a $136 million grant from the U.S. Department of Energy to help fund the project. The PUC found that, “The Proposal asks BG&E’s ratepayers to take significant financial and technological risks and adapt to categorical changes in rate design, all in exchange for savings that are largely indirect, highly contingent and a long way off.” In rejecting the proposal, the PUC also noted that the cost estimate did not include the approximately $100 million in not-yet-depreciated value of existing meters that would be retired before the end of their useful lives.

As the above examples make clear, the direct benefits of smart grid investments have not yet proven certain or significant enough to fully offset the costs of implementation. The implication is clear: The United States is not moving to a rapid full-scale deployment of smart grid technologies and systems anytime soon. Future implementation is likely to be phased in by customer segment, to be geographically uneven, and to remain far from complete within a decade.

One way to manage expectations is to stop using the term smart grid because it implies a disruptive technology investment and instead portray the evolution toward a smarter grid as just business-as-usual grid automation and modernization.

A more realistic outlook

A more realistic vision of the future begins with the recognition that the smart grid is an incremental technology trend well under way rather than a disruptive technology that will transform the power sector in the next decade. The evolution toward a smarter grid has been taking place for several decades, as the power sector has incorporated available and emerging monitoring, automation, and control and communications technologies into the grid in varying degrees. These developments have already produced tangible gains: reduced costs for metering and for service connections and disconnections, as well as improved detection and isolation of problems during power outages and faster restoration of power. These gains in security and reliability have thus far reinforced the traditional grid and large central station power system rather than created economic forces pushing toward a distributed supply structure. As a result of these changes, it is inaccurate to think of the U.S. system as having a dumb grid that technology is poised to transform into a smart grid. Instead, smart technologies are already adding another layer of visibility to the condition and operation of the grid and also adding another layer of reliability by enhancing the capabilities needed to predict potential instabilities in the system. In short, the evolution to a smarter grid is helping to maintain and improve the high levels of reliability to which consumers have become accustomed.

The evolving smart grid will allow more experiments with various dynamic pricing schemes, but they should remain experiments, and they must be introduced gradually or risk a backlash from consumers, most of whom cannot benefit from dynamic pricing and value the stable and predictable prices of the current system. As dynamic pricing schemes evolve in the years ahead, they will mostly be used by larger, electric-intensive consumers who have the capability and the money to invest in and manage the new systems.

Investment in smart grid technologies in the years ahead will depend to some degree on the political tolerance for increases in power prices, because developing a smarter grid is not likely to reduce bills, for two reasons: First, any reduction in electricity use enabled by the smart grid will probably not be large enough to offset the percentage increase in prices. Second, smart grid implementation is occurring during a period of rising real power prices. Even if smart grid savings could offset costs, there are other factors that are continuing to push prices up. As a result, the case for smart grid investments will involve a different expectation: that although power prices are increasing, prices are going to be lower than they otherwise would have been but for the smart grid investments. This is a harder argument to demonstrate and thus a weaker driver for smart grid investment than the straightforward guarantee of a lower power bill.
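A small numerical sketch makes the counterfactual explicit. The figures below are assumptions chosen only for illustration, not price projections; the point is that the benefit of the investment is the gap between two futures, neither of which is cheaper than the present.

    # Assumed numbers, for illustration only: prices rise in both scenarios,
    # but they rise less in the scenario with smart grid investment.
    baseline_price = 0.12               # $/kWh today (assumed)
    rise_without_smart_grid = 0.25      # assumed 25% increase over the period
    rise_with_smart_grid = 0.18         # assumed 18% increase, net of smart grid savings

    price_without = baseline_price * (1 + rise_without_smart_grid)  # $0.1500/kWh
    price_with = baseline_price * (1 + rise_with_smart_grid)        # $0.1416/kWh

    # Consumers pay more than today either way; the avoided increase never
    # appears as a line item on anyone's bill.
    print(f"Without smart grid: ${price_without:.4f}/kWh")
    print(f"With smart grid:    ${price_with:.4f}/kWh")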

The evolution of smart grid technologies could allow the introduction of meaningful numbers of electric vehicles, but this process, too, will be slow. The big hope is that electric vehicles can act as roving batteries to the grid, thus reducing the need for new system capacity. But this outcome is unlikely anytime soon, because current electric batteries are technically not well suited to power system storage and their prices are extremely high. Still, effective coordination of smart grid policy and policy support for electric vehicles could help accelerate smart grid development.

Smart grid implementation is also not likely to reduce energy use enough to provide meaningful greenhouse gas emissions reductions. The reason is that the primary link between the smart grid and greenhouse gas emissions is not within the power sector—enabling renewable power or reducing demand—but rather outside the power sector by enabling the use of electric vehicles, something that adds to rather than detracts from power usage.

Finally, the pace of smart grid implementation will probably be slowed by consumer privacy and cybersecurity concerns. Many privacy advocates are concerned that smart grid data could provide a detailed profile of consumer behavior.

Policy implications

Policy designed to support smart grid investments should avoid setting unrealistic expectations, especially the belief that smart grid programs will reduce power bills. The long-run success of smart grid policies hinges on delivering what has been promised. Policies that fail to meet expectations will lead to disappointment, a search for a scapegoat, and a political backlash that will impede progress in the years ahead. One way to manage expectations is to stop using the term smart grid because it implies a disruptive technology investment. It would be wiser and more accurate to speak of the evolution toward a smarter grid as just business-as-usual grid automation and modernization.

The smarter grid rollout should start first with consumers that meet the profile of those most likely to benefit from smart grid programs: electric-intensive consumers with significant flexibility in their use of power over time. Because customer characteristics, particularly the flexibility to cost-effectively shift power use, are so varied from one place to the next, we can expect the implementation of smart grid capabilities to be geographically uneven.

The pace of implementation, especially of dynamic pricing schemes, should be phased in based on the political tolerance of consumers for power price increases. The move to real-time prices should begin with mildly time-differentiated prices that move gradually toward real-time price signals over the long run. Education of consumers will be necessary, but policymakers must recognize the limits of education in divorcing consumer preferences from underlying pocketbook issues.

A significant role remains for smart grid pilot projects to manage the technology risk associated with the evolving smart grid, although policymakers need to recognize the limits on generalizing the results of these projects. The focus for pilot programs should expand from testing dynamic pricing schemes to experimenting with new applications for smart grid capabilities.

In sum, by resetting our expectations and taking modest, gradual steps forward, we can eventually move toward a more robust, smarter power grid in the United States.

Is Climate Change a National Security Issue?

Around the planet there is growing momentum to define climate change as a security issue and hence as an agenda-topping problem that deserves significant attention and resources. In December 2010, for example, while poised to start a two-year term on the United Nations Security Council, Germany announced its intention to push to have climate change considered as a security issue in the broadest sense of the term. Germany’s objective captures a sentiment that has been expressed in many venues, including several recent high-level U.S. national security documents. The May 2010 version of the National Security Strategy repeatedly groups together violent extremism, nuclear weapons, climate change, pandemic disease, and economic instability as security threats that require strength at home and international cooperation to address adequately. The February 2010 Quadrennial Defense Review links climate change to future conflict and identifies it as one of four issues in which reform is “imperative” to ensure national security. This sentiment has met resistance, however, and today there is a serious debate about whether linking climate change to security, and especially to national security, makes sense.

The case in support of this linkage integrates three strands of argument. The first builds on efforts to expand a very narrow definition of the term “national security” that was dominant during the 20th century. The narrow meaning was shaped by a specific set of events. After World Wars I and II, a third major war involving nuclear weapons was widely regarded as the single greatest threat to the survival of the United States and indeed to much of the world. In response to this perception, the National Security Act of 1947 sought “to provide for the establishment of integrated policies and procedures for the departments, agencies, and functions of the Government relating to the national security.” Its focus was on strengthening the country’s military and intelligence capabilities, and the government was supported in this effort through the rapid buildup of independent think tanks and security studies programs at colleges and universities throughout the country. National security was seen by most experts as a condition that depended on many factors, and hence the broadest goals of the national security community were to build and maintain good allies, a strong economy, social cohesion and trust in government, democratic processes, civil preparedness, a skilled diplomatic corps, and powerful, forward-looking military and intelligence agencies. For more than four decades after World War II, however, efforts to improve national security were assessed against estimates of the threats of nuclear war and communist expansion, and invariably emphasized the paramount importance of military and intelligence assets. National security was largely about the military and intelligence capabilities necessary for preventing or winning a major war.

Because some resources are becoming increasingly scarce and others increasingly valuable, the prospects for environmental factors gaining weight in the security arena appear robust.

In the 1990s, this powerful architecture was challenged in several ways. First, with the rapid and largely unexpected collapse of the Soviet Union came the question: Since there were no other countries likely to launch a full-scale nuclear attack against us, could we now reduce our large military and intelligence expenditures and invest in other areas? Second, as the 20th century drew to a close, it became evident that the nature of violent conflict had changed from short, brutal, and decisive interstate wars to long, somewhat less brutal, and frequently inconclusive civil wars. Under the quagmire conditions of this new generation of warfare, superior military capability did not translate inexorably into victory.

Finally, having spent so much time focused on the particular threat of military-to-military conflict, analysts asked if we should now be looking at threats more broadly and even considering alternative ways of thinking about security. By mid-decade, human security and some variant of global security had gained support as alternative or complementary ways of thinking about security. Further, in the United States and abroad, conceptions of security threats expanded to include issues such as terrorism, disease, and global economic crisis.

As the era of great wars receded, some observers concluded that violence was now mainly structural, a fact hidden or ignored during the Cold War, when the threat of large-scale violence was linked to an ideologically based power struggle. From the structuralist perspective, victory and defeat were unproductive ways of thinking about security. Instead, improvements in security depended on extensive reform of the global economy, the international system of states, the divide between nature and civilization, and entrenched patterns of gender and ethnic inequality. Many others agreed that our new era of security underscored the limits of military force, which had been the centerpiece of much 20th-century security policy. Hence, at the very least, we needed to carefully rethink security and reconsider what was needed to provide it, a reflection that would certainly lead to important, if not necessarily structural, change.

One of the issues invigorating all of these challenges to Cold War security thinking (challenges that, incidentally, were not entirely new and had been voiced at various times throughout the 20th century) was a growing concern about environmental degradation and stress. Indeed, just as the Cold War ended, the Rio Summit on Environment and Development catalyzed global attention around climate change, biodiversity loss, and deforestation; underscored the need for national, regional, and global conservation strategies; and introduced a transformative vision that involved shifting the entire planet onto the path of sustainable development. In this context, a handful of observers argued that, in light of the trends observed by scientists from multiple disciplines, the Cold War peace dividend should be redirected toward environmental rescue, and that failing to do this could push the world toward higher and higher levels of insecurity.

The second strand woven into the case for integration picks up on this latter intuition. A central question of this strand of analysis is: What could happen if we fail to act to promote sustainable development and allow alarming environmental trends to continue more or less unchecked? Building on arguments that extend at least as far back as 18th-century demographer Thomas Malthus, who worried that population growth would outstrip increases in food production, leading to a period of intense famine, war, and disease, a contemporary generation of scholars used case studies and other methodologies to explore linkages between environmental stress and two national security challenges: violent conflict and state failure. Although simple causal relationships have proved elusive—a generic problem in the study of war and peace—patterns have been identified that many have found compelling. To simplify what is becoming a rich field of inquiry, certain natural resources, especially when they suddenly become scarce (water or arable land) or acquire high value (diamonds or timber), can become a significant factor affecting government behavior, development prospects, population flows, and forms of competition. Under certain conditions, such challenges trigger innovation and adaptation, but under other conditions they contribute to violent conflict and other types of insecurity. Because some resources are becoming increasingly scarce and others increasingly valuable, the prospects for environmental factors gaining weight in the security arena appear robust.

TABLE 1

Climate change and national security
(Rows list climate change impacts; columns list national security concerns.)

Climate change impact | Weakening of elements of national power | State failure | Disruption and violent conflict
Changes in water distribution | Job loss in rural areas | Reduce agricultural outputs, basic needs unmet | Increased competition for water
Severe weather events | Undermine economic strength | Funds diverted to disaster relief, away from infrastructure, etc. | Displace people into areas where they are not welcome
Heat waves | Pandemics | Greater demands to meet basic needs | Riots in urban areas
Drought | Undermine economic development | Deepen social inequality as some groups control food and water | Displace people into areas where they are not welcome
Sea-level rise | Destroy coastal military bases | Increase inequality and promote extremism as some people lose land | Put the survival of states such as the Maldives and Bangladesh at risk
Flooding | Reduce military effectiveness in the field | Destroy critical infrastructure | Increase urban strife

The examples in Table 1 are not meant to be definitive but rather to indicate how climate change impacts could affect national security. Many of the examples could clearly appear in more than one cell of the table.

Scholars such as Thomas Homer-Dixon, for example, focus on the adverse social effects of scarcity of water, cropland, and pasture. Scarcity, he argues, results from a decrease in the supply of a resource, an increase in the demand for a resource, or a socially engineered change in access to a resource. Under conditions of resource scarcity, Homer-Dixon contends that developing countries may experience resource capture (one group seizes control of the resource) or ecological marginalization (people are forced to move into resource-poor lands), either of which may contribute to violent conflict. Continuing this trajectory of thought, Colin Kahl argues that resource scarcity may generate state failure (a collapse of functional capacity and social cohesion) or state exploitation (in which a collapsing state acts to preserve itself by giving greater access to natural resources to groups it believes can prop it up). Although some researchers are not persuaded by arguments linking environmental stress to state failure and violent conflict, many others regard them as compelling, and many policymakers and practitioners have absorbed these arguments into their world views.

The third strand of analysis involved in integrating climate change and national security builds on the environment and security literature by focusing on the real and potential societal effects of climate change. Climate change scientists are observing changes in the distribution of water, increases in the intensity of severe weather events, longer heat waves, longer droughts, and sea-level rise and flooding. Some worry that continued global warming will move the planet across critical thresholds, causing “black swan” events such as massive gas releases, rapid glaciation, or microbial explosions. There are several ways in which such changes could generate threats to national security.

Summarizing the discussion above, challenges to national security can be organized into three groupings: anything that weakens the elements of national power; contributes to state failure; or leads to, supports, or amplifies the causes of violent conflict. Climate change has the potential to have a negative impact in each of these domains (see Table 1).

National power. National power depends on many variables, including environmental factors such as geography and resource endowment, military capacity, intelligence capacity, and a range of social factors, including population size and cohesiveness, regime type, and the size and performance of the national economy. Climate change has the potential to affect all of these elements of national power. For example, militaries may be less effective at projecting and exercising power if they have to operate in flooded terrain or during a heat wave. Warming that affects land cover could reduce a country’s renewable resource base. Intelligence is difficult to gather and analyze in a domain marked by uncertainty about social effects.

Perhaps the area of greatest concern, however, is that climate change might undermine economic development, especially in poor and fragile states. The economist Paul Collier has argued that the bottom billion people on the planet currently live in states that are failing to develop or are falling apart. He contends that these states are often enmeshed in interactive conditions and processes that inhibit development: chronic violent conflict, valuable natural resources such as oil or diamonds that groups vie to control, unstable neighboring countries creating chronic transboundary stress, and government corruption and inefficiency. An increase in costly and hard-to-manage events such as floods, droughts, heat waves, fires, pandemics, and crop failures would probably be an enormous additional burden on these countries, introducing a daunting new layer of development challenges and hence weakening a central element of national power.

State failure. The authors of the 2009 report of the International Federation of the Red Cross and Red Crescent Societies wrote that, “The threat of disaster resulting from climate change is twofold. First, individual extreme events will devastate vulnerable communities in their path. If population growth is factored in, many more people may be at significant risk. Together, these events add up to potentially the most significant threat to human progress that the world has seen. Second, climate change will compound the already complex problems of poor countries, and could contribute to a downward development spiral for millions of people, even greater than has already been experienced.” The 2010 report notes that the cost of climate-related disasters tripled from 2009 to 2010 to nearly $110 billion. Disasters are costly, and the costs appear to be mounting dramatically. From the perspective of state failure, disasters are deeply alarming because they shift scarce funds away from critical activities such as building infrastructure, investing in skills development, and implementing employment and poverty-reduction programs, and into emergency relief. Such a shift can have a direct and very negative impact on a government’s functional capacity.

The same argument can be advanced for the diffuse longer-term effects of climate change that might affect food security, public health, urban development, rural livelihoods, and so on. Under conditions of either abrupt or incremental change, people may be displaced into marginal lands or unwelcoming communities, enticed by extremist ideology, compelled to resort to crime in order to survive, or driven to take up arms, all of which risk overtaxing the government, deepening social divisions, and breeding distrust and anger in the civilian population.

The gravest climate change threat, however, is that states will fail because they can no longer function as their territories disappear under rising seas, an imminent threat to the Maldives and some 40 other island states. Glacial-outburst floods might cause similar devastation in countries such as Nepal, and a change in the ocean conveyor that warms the northeast Atlantic Ocean could cause countries such as the United Kingdom to disappear under several feet of ice within a few years. These starkly existential threats have become the single most important national security issue for many vulnerable countries. Last year, the president of the Maldives held a cabinet meeting underwater to bring attention to this type of threat.

Violent conflict. Building on the insights of Homer-Dixon, Kahl, and many others, it is reasonable to suggest that climate-induced resource scarcities could become key drivers of violent conflict in the not too distant future. On this front, another area of particular concern has to do with so-called climate refugees. In 2006, Sir Nicholas Stern predicted that 200 million people could be permanently displaced by mid-century because of rising sea levels, massive flooding, and long, devastating droughts. Large flows of poor people from rural to urban environments and across ethnic, economic, and political boundaries would cause epic humanitarian crises and be extremely difficult to manage. One can easily imagine such stress becoming implicated in violent conflict and other forms of social disruption.

Stern’s prediction is of the back-of-the-envelope variety and has faced criticism from researchers such as Henrik Urdal, who argues that the “potential for and challenges related to migration spurred by climate change should be acknowledged, but not overemphasized. Some forms of environmental change associated with climate change like extreme weather and flooding may cause substantial and acute, but mostly temporal, displacement of people. However, the most dramatic form of change expected to affect human settlements, sea-level rise, is likely to happen gradually, as are processes of soil and freshwater degradation.” The bottom line, however, is that nobody knows for sure what the scale and social effects of climate-increased population flows will be.

The basic concerns suggested above are well captured in the many publications that followed the publication of the 2007 Intergovernmental Panel on Climate Change (IPCC) reports. For example, the CNA Corporation report National Security and the Threat of Climate Change concluded that “climate change acts as a threat multiplier for instability in some of the most volatile regions of the world.” Further, it predicted that “projected climate change will add to tensions even in stable regions of the world.” Similarly, the German Advisory Council on Global Change’s report World in Transition: Climate Change as a Security Risk said that “Climate change will overstretch many societies’ adaptive capacities within the coming decades.” The tenor of much recent writing is that climate change will weaken states that are already fragile, and it will contribute to violent conflict, intensify population displacement, increase vulnerability to disasters, and disrupt poverty alleviation programs, especially in South Asia, the Middle East, and sub-Saharan Africa, where large numbers of people, widespread poverty, fragile governments, and agricultural economies conspire to create heightened vulnerability.

The counterargument

The case against linking climate change to national security is fairly intuitive and raises concerns about each of the strands of argument outlined above. Insofar as the language of national security itself is concerned, three important criticisms have been advanced. In a series of editorials in Foreign Policy magazine, Stephen Walt contends that a careful reading of the arguments about climate change made in the CNA report and in similar documents makes it clear that this is simply not a national security issue, at least not for the United States. In the foreseeable future, climate change may cause serious problems in places such as Bangladesh that spill over into places such as India, but these problems and the responses they will trigger are better described as humanitarian issues. For Walt and other realist thinkers, national security is about the survival of the state, and apart from black swan events we can imagine but not predict or prepare for, threats of this magnitude have been and continue to be threats of military aggression by other states. Walt asks us to consider what we gain in terms of analysis, strategy, and policy formulation by expanding the domain of national security into areas where immediate or near-term threats to the survival or even well-being of the United States are vague or unknown, even though the rhetoric used to describe them is often urgent and dramatic.

The tenor of much recent writing is that climate change will contribute to violent conflict, intensify population displacement, increase vulnerability to disasters, and disrupt poverty alleviation programs.

A very different concern comes from scholars such as Daniel Deudney, Barry Buzan, and Ole Waever, who worry about militarizing or securitizing climate change and the environment. Like Walt, they are not suggesting that climate change is a trivial matter; rather, they worry about whether framing it as a national security issue and thus linking it to military and intelligence tools is wise. This linkage, they suggest, might inhibit certain forms of global cooperation by drawing climate change into the zero-sum mentality of national security. It might encourage Congress to authorize significant funds, a good thing in principle, but insofar as these funds are expended through the defense community, this may prove a costly and inefficient way of promoting adaptation, mitigation, and disaster response. It might encourage the government to conclude that extraordinary measures are acceptable to fight climate change—actions that could make many scientists, development specialists, social entrepreneurs, business leaders, and environmentalists uncomfortable.

Finally, a third concern has been expressed within the United Nations (UN), largely in response to efforts by the secretary general and by countries such as Germany to frame climate change as an issue that should be considered by the UN Security Council. On the one hand, this could give the five countries of the world that are permanent members of the Security Council—China, France, Russia, the United Kingdom, and the United States—enormous leverage over this issue, and not all of the other member countries are convinced that this would lead to good, fair, and effective outcomes. On the other hand, some countries in the UN, especially the G77 countries, think that it may prove to be in their long-term interest to have climate change framed as primarily a development issue rather than as a national or even global security issue. Such a frame could serve as the basis for lucrative compensation payments, development assistance, and special funds for adaptation and mitigation. In short, then, linking climate change and national security may compromise responses to the former, muddy the rationale of the latter, reinforce global inequities, and reduce development assistance as resources are transferred to humanitarian and military activities.

The second strand of argument has to do with the relationship between environmental stress and major outcomes such as violent conflict and state failure. Critics of this literature, such as Nils Petter Gleditsch and Marc Levy, point to its methodological and analytical weaknesses. To date, studies have been inconclusive. There appears to be a correlation between certain forms of environmental change, such as sudden changes in water availability, and violent conflict or state failure, but the findings are tentative and must compete with other variables that correlate nicely with disastrous social outcomes. Case studies are often quite persuasive, but they are in some sense easier to shape and their authors may be selecting for relationships that in fact are atypical.

Insofar as the case for integrating climate change and national security draws on arguments that environmental stress contributes to violent conflict and state failure, these skeptics emphasize that this literature is young and flawed by speculation. A frequent concern is that after the initial outburst of largely theoretical claims advanced in the 1990s, there has not been much progress in weeding through these claims and bolstering and clarifying those that are most promising from the perspective of empirical data. Moreover, very little has been done to estimate the extent to which environmental stress has generated effective positive responses such as innovation, adaptation, and cooperation. If for every Haiti there are a dozen Costa Ricas, then the alarm bells may be ringing too loudly.

Finally, the third strand of the case for integrating climate change and national security is rooted largely in the IPCC reports, and especially AR4, released in 2007. But although increases in the amount of carbon in the atmosphere, the severity of storms, the average global temperature, and so on are well documented, the social effects of these trends are far more speculative. Will climate change really tend to intensify the (possibly weak) relationships between environmental stress and national security? Even if it does, is securitizing these relationships wise, or should they be cast more explicitly in terms of humanitarian crises, global inequities, development challenges, population displacements, and poverty alleviation?

The Danish political scientist Bjorn Lomborg has been vocal in this arena, arguing that the environmental/climate security community underestimates the vast stocks of human ingenuity that are available to ease adaptation. Lomborg argues further that it is not at all clear that investments in climate change response are the best investments to make in terms of the safety and welfare of the human species. Here the idea of the fungibility of different forms of capital is relevant. If over the next 50 years we can make great gains per dollar invested in technologies that can be used for multiple purposes, and much smaller gains in terms of shifting the alarming global trend in carbon emissions, is the latter really a wise course of action? A large stock of technological capital, enabled by shrewd investments today, might be far more beneficial to the current poor and to all future generations than steps that marginally reduce greenhouse gas emissions or add small amounts of forest cover, or than steps that do much more along these lines but only by radically reducing investments elsewhere.

Action or lethargy?

The case for linking climate change and national security is robust but imperfect. This is partly because there remains considerable uncertainty about how climate change will play out in different social contexts and partly because the term national security is loaded with expectations and preferences that some analysts find worrisome.

If one finds the linkage persuasive, then there is much the United States can and should be doing on this front. For the past decade, innovation and response have taken place mainly at the state and city levels. Although this activity has in many ways been remarkable, it has not been uniform across the United States, and it connects poorly into larger global initiatives. In this latter regard, the United States has been particularly lethargic, a lethargy nourished by massive but not clearly successful investments in the war on terrorism and the financial bailout.

A few more years of lethargy could be detrimental to the United States in several ways. It could strengthen China, which has an enormous amount of capital to invest and is directing some of this into alternative energy and green technology—far more than the United States is. With or without climate change, the world’s need for new sources of cheap and reliable energy is growing, and China is positioning itself for an emerging market that could be huge. Delaying might force the United States to contend with a considerably more robust multilateral framework for addressing climate change, a framework that it has not helped to design or synchronize with other multilateral institutions that it does support. Delaying could impose huge long-term costs on the U.S. economy, as it finds itself compelled to deal with water shortages, dust bowls, and hurricanes in an emergency mode. Katrina disabused everyone, except perhaps politicians and other government officials, of the notion that the nation is adequately prepared for the severe events that climate science predicts. Even if the United States does not increase its own vulnerability to megadisasters, inaction may not be cheap, as the country finds itself embroiled in costly humanitarian efforts abroad. And finally, in the worst-case scenario, lethargy might enable the sort of global catastrophe that climate scientists have described as possible. It is hard to imagine what competing investments of the nation’s resources would warrant ignoring this issue.

So is climate change a national security issue? Climate change is the most protean of science-based discourses, with an admixture of confidence and uncertainty that allows it to be integrated into any political agenda—from calls for sweeping reforms of the international system to those for more research and debate. Climate change does not mobilize agreement or clarify choices so much as engender reflection on the values we hold, the levels of risk we are comfortable assuming, the strengths and weaknesses of the institutions and practices that exist to meet our personal needs and allocate our shared resources, and the sort of world we want to bequeath to our children and grandchildren.

Medical Devices: Lost in Regulation

The implanted medical device industry was founded in the United States and has been a major economic success and the source of numerous life-saving and life-improving technologies. In the 1950s and 1960s, technological innovations such as the cardiac pacemaker and prosthetic heart valve meant that thousands of suffering Americans had access to treatment options where none had existed before. And because so many breakthrough devices were developed in the United States, the nation’s citizens usually had timely access to the latest technological advances. In addition, U.S. physicians were at the forefront of new and improved treatments because they were working alongside industry in the highly dynamic innovation process. In fact, they rose to worldwide preeminence because of their pioneering work on a progression of breakthrough medical therapies.

But that was then. Although the United States is still home to numerous medical device companies, these companies no longer bring cutting-edge innovations to U.S. patients first. And U.S. clinical researchers now often find themselves merely validating the pioneering work that is increasingly being done in Europe and elsewhere in the world. Worse still, seriously ill patients in the United States are now among the last in the world to receive medical innovations that have secured regulatory approval and clinical acceptance elsewhere in the developed world.

What’s behind this erosion of leadership and late access to innovations? Simply stated, an overreaching, overly burdensome, and sometimes irrelevant Food and Drug Administration (FDA) regulatory process for the most sophisticated new medical devices. To be fair, occasional device recalls have caused great political pressure to be placed on the FDA for somehow “allowing” defective products to harm patients. The agency’s response to political pressure has been to impose additional requirements and to ratchet up its tough-cop posture in order to assuage concerns that it is not fulfilling its responsibility to the public. It is presumed, incorrectly, that a lax approval process is responsible. In most instances, however, the actual cause of a recall is outside the scope of the approval process. The most frequent causes of recalls are isolated lot-related subcomponent failure; manufacturing issues such as operator error, processing error, or in-process contamination; latent hardware or software issues; and packaging or labeling issues. In addition, company communications that describe incorrect and potentially dangerous procedures used by some medical personnel are also considered a recall, even though the device is not faulty. Face-saving implementation of new and more burdensome clinical trial requirements, often called added rigor by the FDA, is an ineffective and wrong answer to such problems.

Excessive approval burdens have caused a once-vibrant medical innovation engine to become sluggish. The FDA’s own statistics show that applications for breakthrough approvals are near an all-time low. It is not that companies have run out of good ideas, but the regulatory risks have made it impractical to invest in the truly big ideas. A slow but inexorable process of added regulatory requirements superimposed on existing requirements has driven up complexity and cost and has extended the time required to obtain device approval to levels that often make such investments unattractive. It must be noted that the market for many medical devices is relatively small. If the cost in time and resources of navigating the regulatory process is high relative to the anticipated economic return, the project is likely to be shelved. The result is that companies will instead shift resources toward making improvements in existing products, which can receive relatively rapid supplemental approval and start generating revenue much sooner. Some patients will benefit from these updated devices, but the benefits are likely to be much less impressive than those that would result from a major innovation.


Perhaps the best measure of the FDA’s stultifying effect on medical device innovation is the delay, often of several years, between device approval in Europe (designated by the granting of the CE mark) and approval in the United States. The Europeans require that so-called Class III medical devices (products such as implanted defibrillators, heart valves, and brain stimulators) must undergo clinical trials to prove safety and functionality as well as compliance with other directives that relate to product safety, design, and manufacturing standards. In addition, the European approach relies on decentralized “notified bodies,” which are independent commercial organizations vetted by the member states of the European Union for their competence to assess and control medical device conformance to approval requirements. The primary difference in the U.S. system is a requirement for more and larger clinical trials, which can be extremely time-consuming and difficult to assemble. Ultimately, the European approach places more responsibility on physicians and their clinical judgment rather than on government officials who may have little appreciation of or experience with the exigencies of the clinical circumstance.

These Class III devices are complex and can pose a risk of significant harm to patients if they are unsafe or ineffective. It is for this reason that the FDA’s pre-market approval (PMA) pathway for these products is arduous and rigorous. It should be. Rigor, however, must be tempered with expert judgment that compares the demonstrable benefits with the possible risks to patients. And in setting requirements for evidence, regulators must distinguish between data that are essential for determining device safety and effectiveness and data that are nice to have.

Not to be lost in the FDA’s quest to avoid possible patient harm, however, is the reality that PMA devices offer the greatest potential for patient benefit. Delays in the approval of effective devices do result in harm to patients who need them. If we examine the date of approval for the identical device in Europe and the United States, we see that most devices are approved much later in the United States. Three examples illustrate this point. Deep brain stimulation for ineffectively managed symptoms of tremors and Parkinson’s disease was approved for use in the United States 44 months after European approval. A novel left ventricular assist device that permitted patients with severe heart failure to receive critical circulatory support outside the hospital was approved 29 months later. A pacemaker-like device that resynchronized the contraction sequence of heart muscle for patients suffering from moderate to severe heart failure was approved 30 months after it became available for patients in Europe.

These examples are drawn from experiences over the past 20 years. Each has matured into a treatment of choice. Table 1, which is based on data from the first 10 months of 2010, shows that delays continue to be long. Of the 11 new devices approved in this reporting period, 9 received the CE mark between 29 and 137 months earlier. It is not known whether the sponsor of the other two devices applied for a CE mark. In the case of an intraocular lens listed in the table, the FDA noted that more than 100,000 patients had already received the implant overseas. This level of utilization is significant by medical device standards and suggests strongly that its attributes have made it part of routine clinical practice. Yet U.S. patients had to wait more than five years for it to be available.

A legitimate question is whether the faster approval of Class III devices in Europe harms overseas patients. A study conducted by Ralph Jugo and published in the Journal of Medical Device Regulation in November 2008 examined 42 PMA applications that underwent review between late 2002 and 2007. Of the 42, 7 resulted in FDA disapproval, of which 5 had received prior CE mark approval. Reasons for disapproval were attributed to study design, failure to precisely meet primary study endpoints, and, in the FDA’s opinion, the quality of the collected data. In other words, the problem was that these devices failed to satisfy some part of the FDA protocol, not that the FDA found evidence that they were not safe. The majority (34 of 42) of applications garnered both European approval and a subsequent, but considerably later, PMA approval.

Examples of Class III devices that received the CE mark and were subsequently pulled from the market are few. In recent testimony before the health subcommittee of the Energy and Commerce Committee, the director of the FDA’s device branch cited two examples. One involved certain breast implants. The other was a surgical sealant. These events indicate that the European approval process is imperfect, but hardly one that has subjected its citizens to a large number of unsafe devices. It is simply unrealistic to expect an event-free performance history, given the complexities and dynamic nature of the device/patient interface and the incomplete knowledge that is available.

But what about the harm caused by delaying approval? Delay may not be of much consequence if the device in question serves a cosmetic purpose or if there are suitable treatment alternatives. Delay is of major significance if the device treats an otherwise progressive, debilitating, or life-threatening disease for which medical alternatives don’t exist or have only limited effects. Such afflicted patients can’t wait for what has become an inefficient process to run its course. The paradox is that the FDA’s current regulatory approach may be causing unnecessary patient suffering and death by virtue of the regulatory delay imposed by its requirements.

It is particularly frustrating that devices invented and developed domestically are unavailable here for significant periods of time whereas patients elsewhere receive tangible benefit. It is not unusual for second and third generations of some products to be available internationally before the now outdated device finally secures U.S. approval.

The example of a minimally invasive transcatheter heart valve for the treatment of inoperable aortic stenosis illustrates the implications of excessive delay on the well-being of ill patients. Patients suffering from severe aortic stenosis have an estimated 50% mortality within 2 years after symptom onset if they do not undergo open-heart surgery for valve repair or replacement. Quality of life is adversely affected because of shortness of breath, limited exercise capacity, chest pain, and fainting episodes. A definable subset of affected patients includes those who are too frail to undergo the rigors of open-heart corrective valve surgery. The transcatheter approach, whereby a new replacement valve is inserted via the vasculature, much the way in which coronary balloon angioplasty is done, offers a much less invasive and less traumatic therapeutic option for the frail patient. Even though the technology and procedure are still evolving, clinical results have been impressive, and thousands of patients have received it. In a recently published clinical study, one-year mortality has been reduced by 20 percentage points when compared to the mortality of patients in the standard medical care group. Quality-of-life measures also improved substantially. The transcatheter heart valve was approved in Europe in late 2007; it is still awaiting FDA approval. A transcatheter valve of different design was approved in Europe in March 2007 and has produced impressive results in high-risk patients. Over 12,000 patients in Europe and 40 other countries where approval has been granted have received this valve. It too is still not approved in the United States. In the case of a disease with a poor prognosis, years of delay do not serve the best interests of affected U.S. patients, especially if there is credible clinical evidence that a new intervention performs well.

A more subtle effect of over-regulation is the loss of a leadership position by U.S. physicians and clinical researchers. Whereas pioneering clinical trials used to be the province of U.S. physicians at major academic medical centers, today non-U.S. physicians and medical centers are conducting a substantial and growing number of safety and effectiveness trials. As a result, overall clinical expertise and identification of ways to further improve a new technology have shifted overseas. International physicians increasingly supplant U.S. clinical researchers as medical pioneers. The United States can no longer be assured that its physicians are the preeminent experts at the cutting edge or that U.S. patients are receiving world-class treatments.

The peer-reviewed medical literature serves as a good indicator of where innovation in clinical practice and technology is taking place. The role of journals is to publish findings that are new, true, and important. Reported findings inform the future course of medical practice. A review of the current medical literature concerning transcatheter heart valves, as an example, shows that non-U.S. investigators and centers dominate the field. Published reports not only document the initial clinical experience but also identify advances in technique, refine indications for use, and propose next-generation improvements. High-caliber clinical studies are, without question, being performed in the United States as part of the data package for the FDA, and they are producing valuable information. The point is that although they are adding layers of relevant confirmatory data, they are not driving the cutting edge of medical practice.

A rigorous approval process for medical devices is absolutely necessary. However, the process must be relevant for the safety and effectiveness questions that pertain to the product under review. The process must be efficient, streamlined, administratively consistent, predictable, and conducted with a sense of urgency. It must limit its scope of requirements to those data that are central to demonstrating safety and effectiveness. There are always more questions that could be asked of a new product. A patient-centered regulatory process prioritizes and limits questions to those that are essential to the demonstration of safety and effectiveness in the context of the disease. The FDA has a very legitimate role to play in ensuring that new technologies are sufficiently safe and effective for patient use. This is a relative, not absolute, standard. Benefits must be balanced against risk. As practiced today, the regulatory process is unbalanced at the expense of innovations that could help patients.

Current FDA processes for the approval of medical device innovations need to be reengineered to balance the quest for avoidance of possible harms with the potential for helping today’s seriously ill patients. The agency must also limit the scope of studies to address necessary questions rather than to aspire to scientific elegance and excessive statistical certainty. As Voltaire said, “The perfect is the enemy of the good.” The European experience demonstrates that it is possible to make safe and effective new medical devices available to patients much more quickly. Actual clinical experience demonstrates that an excessively cautious and slow regulatory process conflicts with the interests of patients suffering from serious and progressive diseases. They simply don’t have the luxury of time.

Why Don’t U.S. Women Live Longer?

Over the past 25 years, female life expectancy at older ages has been rising in the United States at a slower pace than has been achieved in many other high-income countries, such as France, Italy, and Japan. Consequently, the United States has been falling steadily in the world rankings for level of female life expectancy, and the gap between the United States and countries with the highest achieved life expectancies has been widening. International comparisons of various measures of self-reported health and biological markers of disease reveal similar patterns of U.S. disadvantage. The relatively poor performance of the United States over the past 25 years is surprising given that the country spends far more on health care than any other nation in the world, both absolutely and as a percentage of gross national product. Concerned about this divergence, the National Institute on Aging asked the National Research Council to examine evidence on possible causes. The panel concluded that a history of heavy smoking and current levels of obesity are two factors that are playing a substantial role in the relatively poor performance of the United States. All of the data in the following figures comes from the panel’s report Explaining Divergent Levels of Longevity in High-Income Countries (National Academies Press, 2011).

U.S. women trailing in life expectancy

In 1980, women in the United States, Japan, France, and Italy who reached the age of 50 could all expect to live an additional 30-31 years. Today, women aged 50 in Japan can expect to live an additional 37 years, whereas women in the United States can expect to live only an additional 33 years.

Heart disease and lung cancer are leading culprits

International comparative analysis of cause-of-death data is complicated by variation in coding practice across countries and over time. Nevertheless, it is clear that all four countries have made significant progress in reducing certain leading causes of death such as heart disease over the past 25 years. In contrast, deaths due to lung cancer—a reliable marker of the damage from smoking—have been increasing in the United States.

U.S. women finally forgoing smoking

Three to five decades ago, smoking was much more widespread in the United States than in Europe or Japan, and the health consequences of this prior behavior are still playing out in today’s mortality rates.

It’s the cigarettes, stupid

The yellow line shows the actual trend in female life expectancy, and the orange line represents what the trend would hypothetically look like if smoking-related mortality were removed. The difference between the two trend lines remained small until around 1975, when it began increasing rapidly. By 2005 it had grown to 2.3 years.

Obesity, the next cause for concern

Other factors, particularly the rapid rise in obesity in the United States, also appear to have contributed to the country’s lagging life expectancy, but there is still a good deal of uncertainty about the mortality consequences of obesity and how they are changing over time.

Energy in Three Dimensions

The United States has been unable to develop any coherent energy program that can last past changes in the control of our federal executive or Congress. The latest failure was the Waxman-Markey cap-and-trade bill that would have driven an enormous change in the country’s energy supply system in the name of controlling global warming. It barely passed the House and went nowhere in the Senate, where what had started as a nonpartisan and more moderate effort by Senators Graham, Kerry, and Lieberman died in the polarized atmosphere that developed in the campaign season leading up to the 2010 congressional elections.

I wonder if a big part of our current problem is an overemphasis on “greenness,” leading to a too-narrow focus on climate change as the sole driver for action. The public debate on energy is dominated by climate change, its deniers, and its exaggerators. The deniers say global warming is a fraud, or that it has nothing to do with human activities so we can’t do anything about it anyway. The exaggerators say that unless we move within a decade to obtain 30% of our electricity from a narrowly defined group of renewable energy sources, the world will descend into chaos by the end of the century. Between these two extremes are many members of Congress who see a need for government action on energy but do not believe that the nation needs to move immediately to run the country on wind generators and solar cells. This group includes many Democrats who did not support the Waxman-Markey bill in the House.

Making major changes in the country’s energy systems has major economic as well as technical consequences. There are potential benefits from changing energy sources that go beyond the climate issue, as important as that issue is. For example, the cost of the oil we import is about equal to our balance-of-trade deficit. If all cars, SUVs, pickups, and minivans traveled 50 miles per gallon of gas, our oil imports could be cut in half, reducing our balance-of-trade deficit by about $200 billion a year and decreasing emissions as well. The Energy Future: Think Efficiency study that I led for the American Physical Society concluded that 50-mile-per-gallon single-fuel vehicles can easily be produced in the 2020s.
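The arithmetic behind that claim is easy to check. Below is a rough back-of-the-envelope sketch in Python; the fleet mileage, barrels-per-day figures, and oil price are assumed round numbers chosen for illustration, not figures from the study.

```python
# Back-of-the-envelope check of the 50-mpg claim.
# All inputs are assumed round numbers for illustration, not figures from the study.

current_mpg = 22                  # assumed average for today's cars, SUVs, pickups, and minivans
target_mpg = 50                   # the efficiency level discussed above
light_duty_oil_bbl_per_day = 9e6  # assumed oil used by light-duty vehicles (barrels/day)
oil_imports_bbl_per_day = 10e6    # assumed U.S. oil imports (barrels/day)
oil_price_per_bbl = 100           # assumed oil price ($/barrel)

# For the same miles driven, fuel use scales inversely with miles per gallon.
fraction_saved = 1 - current_mpg / target_mpg
oil_saved_bbl_per_day = fraction_saved * light_duty_oil_bbl_per_day

share_of_imports = oil_saved_bbl_per_day / oil_imports_bbl_per_day
annual_savings_dollars = oil_saved_bbl_per_day * 365 * oil_price_per_bbl

print(f"Oil saved: {oil_saved_bbl_per_day / 1e6:.1f} million barrels/day "
      f"({share_of_imports:.0%} of imports)")
print(f"Trade-balance improvement: ~${annual_savings_dollars / 1e9:.0f} billion/year")
```

With these assumptions, the savings come out near 5 million barrels per day, roughly half of imports and close to $200 billion a year, in line with the figures above.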

National security must also be an essential consideration in energy policy. Delivering a single gallon of fuel to the front in Afghanistan requires a supply chain that consumes hundreds of gallons of fuel. Improvements in the efficiency of military vehicles would therefore produce enormous savings while also reducing the danger to military personnel throughout the supply chain. Small solar or wind systems to provide power to forward bases would be treasured. Reducing U.S. dependence on oil would also make an important difference in foreign policy, making it possible to be more assertive in relations with oil-supplying nations in the Middle East and perhaps even with President Chavez of Venezuela.

Too much of the energy debate in recent years has suffered from a one-dimensional focus on climate change. But the systems that need to be changed to do something about global warming affect all dimensions of society: the economy, national security, and a variety of environmental concerns. Energy policy is too important to be entirely subsumed under a debate about climate science. Federal action on energy will occur only after we confront a number of realities that are creating unnecessary barriers to progress:

  • The exclusive focus on climate change as a justification for action on energy has excluded potential allies.
  • The emphasis on ultra-green technologies that are not yet ready for the big time has let the desire for the perfect drive out the available good.
  • Policies as narrowly targeted as renewable portfolio standards have prevented emissions reductions that could have been larger, cheaper, and achieved sooner than those the renewables have delivered.

The one-dimensional focus of the energy debate on climate change has led to stalemate. The way to break out of it is to broaden the base of support by working in all three dimensions, where we may find allies ready for action even though their reasons differ from those driven by concern about climate change. This need not be difficult. In fact, across the country there are signs that some federal agencies, state governments, and private companies are already putting this strategy into practice. Motivated by a variety of concerns, they are taking actions that are moving the nation’s energy system in the right direction:

  • The Exelon Corporation plans to spend $5 billion in the next six years on efficiency and on increasing the output of its nuclear power plants.
  • No new coal plants have been started this year because natural gas prices are so low.
  • NASA has let two contracts, to Lockheed Martin and Northrop Grumman, for airplane concepts that might cut fuel consumption in half.
  • California soundly defeated the ballot proposition that would have suspended its greenhouse gas reduction program. Most of the counties that voted Republican in the California senatorial campaign voted against the proposition.

The states are coming together to do regionally what Washington is unwilling to do nationally. There are now three regional compacts on reducing greenhouse gas emissions:

  • The Regional Greenhouse Gas Initiative includes 10 Northeastern and mid-Atlantic states and has a cap-and-trade system.
  • The Midwest Greenhouse Gas Reduction Accord includes six states and one Canadian province.
  • The Western Climate Initiative includes seven states and four Canadian provinces.

Economic realities and enlightened self-interest can spur private companies to make the investments that will benefit the nation as well as their stockholders. There are many in government who may not accept a global warming argument, but who can be persuaded by an economic or security argument. Voters in the states are providing evidence that there is broad support for sensible action on energy. National policymakers need to hear the message that there is not just one rationale for setting policies that will transform the nation’s energy system. Although the reasons for action may differ, there is agreement on the general direction of the change that is needed.


Immature technology

It is easy to forget how long it takes a new energy technology to mature and become cost-effective at scale and how much longer it takes to make a major penetration into the national infrastructure. A November 2010 report from the President’s Council of Advisors on Science and Technology (PCAST) concluded that we have to plan for a 50-year period to transform the nation’s energy infrastructure.

A shortcoming of much of the proposed legislation in Washington and the states is that we are pushing too hard on what is not ready and not hard enough on what is ready. For example, the National Renewable Energy Laboratory’s Western Wind and Solar Integration Study, which examined supplying 35% of electricity from wind and solar across the Great Plains and the West, concluded that it could be done, but that the system could not be kept stable without backup available for 100% of the wind and solar capacity. Why? Because there are sometimes periods of days when the wind does not blow or the Sun does not shine, and we cannot afford blackouts of long duration.

When advocates of renewable power calculate the cost of wind and solar systems, they rarely mention the very high cost of building, maintaining, and operating a large backup system. They are also likely to ignore the cost of building the long-range, high-power transmission lines needed to deliver power from the remote locations where renewable systems are often built to the urban and suburban areas where the electricity is needed, and they seldom factor in the very long and difficult regulatory path that must be followed before the lines can be built. It can take longer to win approval for a transmission line than for a nuclear power plant.

When large-scale energy storage systems become available, and when part of the environmental movement stops suing to block the transmission lines that other parts of the environmental movement want to build to distribute renewable electricity, perhaps wind and solar can reach the scale being promoted. Meanwhile, up to 10 or 15% of demand is all that can be reasonably expected from renewable sources.

We cannot seem to stop doing things that make no sense at all. Hydrogen for transportation is an example. The program should be abandoned or sent back to the laboratory for the development of better catalysts and more efficient end-to-end systems that can make it deployable and affordable at scale. It makes no sense to use energy to produce hydrogen, distribute the hydrogen by an entirely new system, and put it into cars to be used by a fuel cell to produce electricity, when we can much more efficiently distribute the electricity and put it into batteries.
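The efficiency argument can be made concrete with a short sketch. The step efficiencies below are assumed, order-of-magnitude values chosen for illustration rather than figures from any particular study; the point is how losses compound along each path.

```python
# Illustrative end-to-end efficiency comparison for hydrogen versus batteries.
# Each step efficiency is an assumed, typical-order-of-magnitude value, not a
# figure from the text; the point is how losses compound along each path.

def chain_efficiency(steps):
    """Multiply the efficiencies of each conversion or transport step."""
    total = 1.0
    for eff in steps:
        total *= eff
    return total

hydrogen_path = {
    "electrolysis (electricity -> H2)": 0.70,
    "compression and distribution":     0.85,
    "fuel cell (H2 -> electricity)":    0.55,
    "electric drivetrain":              0.90,
}

battery_path = {
    "transmission and battery round trip": 0.85,
    "electric drivetrain":                 0.90,
}

h2 = chain_efficiency(hydrogen_path.values())
bev = chain_efficiency(battery_path.values())
print(f"Hydrogen path: ~{h2:.0%} of the original electricity reaches the wheels")
print(f"Battery path:  ~{bev:.0%} of the original electricity reaches the wheels")
```

Under these assumptions, only about 30% of the original electricity reaches the wheels via hydrogen, versus roughly three-quarters via the battery path.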

Mature technology

Many of the renewable energy systems being promoted may eventually reach full scale, but they are not ready for that now. On the other hand, natural gas has become cheap with the new ability to exploit shale gas. A modern gas plant emits one-third of the greenhouse gases of an average coal plant. Changing all the coal-fired power plants in the country to modern natural gas plants would eliminate 1.4 billion tons of carbon dioxide emissions annually, a quarter of total U.S. emissions.
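A quick consistency check on those numbers: if coal-fired plants emit roughly 2.1 billion tons of CO2 per year (an assumed figure, not one given in the text) and a gas plant emits one-third as much, the savings and the implied total line up with the 1.4 billion tons and one-quarter share cited above.

```python
# Consistency check on the coal-to-gas arithmetic. The coal-sector emissions
# figure is assumed (~2.1 billion tons CO2/year); the one-third ratio, the
# 1.4-billion-ton saving, and the one-quarter share come from the text above.

coal_emissions_tons = 2.1e9   # assumed CO2 from U.S. coal-fired power, tons/year
gas_vs_coal = 1 / 3           # a modern gas plant emits one-third as much as coal

savings_tons = coal_emissions_tons * (1 - gas_vs_coal)
implied_total_tons = savings_tons / 0.25  # savings are said to be a quarter of total emissions

print(f"Avoided by full coal-to-gas conversion: {savings_tons / 1e9:.1f} billion tons/year")
print(f"Implied total U.S. CO2 emissions: {implied_total_tons / 1e9:.1f} billion tons/year")
```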

California’s Million Solar Roofs project aims to install 2 to 3 gigawatts of photovoltaic (PV) capacity at a cost of $10 billion to $20 billion. For 15% of that cost, one could eliminate twice as much greenhouse gas by converting the 2-gigawatt Four Corners coal-fired power plant to natural gas. Even if PV were down to $1 per watt from today’s typical $4 to $5 per watt, the coal-to-gas conversion would still eliminate more greenhouse gases for the same cost. Alternatively, one could build two nuclear power plants for today’s PV cost and eliminate five times the emissions.
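The comparison can be sketched in rough numbers. The capacities and costs come from the paragraph above; the capacity factors and the coal emission rate are assumed, illustrative values.

```python
# Rough sketch of the PV-versus-coal-to-gas comparison. Capacities and costs
# come from the text; the capacity factors and the coal emission rate are
# assumed, illustrative values.

HOURS_PER_YEAR = 8760

pv_capacity_gw = 2.5          # 2 to 3 GW of rooftop PV
pv_capacity_factor = 0.20     # assumed for California rooftop PV
pv_cost_dollars = 15e9        # $10 billion to $20 billion

coal_capacity_gw = 2.0        # the Four Corners plant
coal_capacity_factor = 0.85   # assumed for a baseload coal plant
coal_tons_co2_per_gwh = 1000  # assumed emission rate for coal generation
gas_vs_coal = 1 / 3           # gas emits one-third as much as coal

# CO2 avoided if the PV output displaces coal generation one for one.
pv_avoided = (pv_capacity_gw * pv_capacity_factor * HOURS_PER_YEAR
              * coal_tons_co2_per_gwh)

# CO2 avoided by converting the coal plant to gas (two-thirds of its emissions).
conversion_avoided = (coal_capacity_gw * coal_capacity_factor * HOURS_PER_YEAR
                      * coal_tons_co2_per_gwh * (1 - gas_vs_coal))

print(f"PV avoids ~{pv_avoided / 1e6:.1f} million tons CO2/year "
      f"for ~${pv_cost_dollars / 1e9:.0f} billion")
print(f"Coal-to-gas avoids ~{conversion_avoided / 1e6:.1f} million tons CO2/year "
      f"for roughly 15% of that cost")
```

With those assumptions, the conversion avoids a bit more than twice the emissions of the PV installation, at a small fraction of the cost.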

If the goal is to do everything possible to reduce greenhouse gas emissions, is there any sound reason not to provide incentives for energy efficiency, natural gas, and nuclear power, all of which are relatively inexpensive, effective, and scalable now? What is the rationale for emphasizing renewable portfolio standards that target only solar, wind, geothermal, and small hydroelectric technologies? Is the goal to promote the Chinese PV and wind-turbine industries, or is it to reduce emissions?

Policy

In looking for ways to free energy policy from its narrow focus and to end the political stalemate, we can find some helpful, and not so helpful, suggestions in four recent reports:

  • The National Academy of Sciences has issued a sequel to its Rising Above the Gathering Storm report first issued in 2005. It emphasizes education and innovation and recommends spending more money on energy research. But it said this before. It did not happen then and is unlikely to happen now.
  • The PCAST report cited earlier says spend more money and base what you spend on a quadrennial energy review like the Department of Defense’s (DOD’s) quadrennial defense review. If you are thrilled at the weapons and policies coming from the DOD reviews, you might like this.

The next two are more interesting.

  • The American Energy Innovation Council, whose board includes well-known current and retired CEOs such as Jeff Immelt of General Electric, Bill Gates of Microsoft, and Norm Augustine of Lockheed Martin, may have more impact. Its report A Business Plan for America’s Energy Future discusses the multidimensional energy challenge we face and recommends a new national strategy board made up of nongovernmental people, a $16 billion-per-year innovation fund, and a better-defined role for the federal government. It won’t happen soon because of the money involved, but the council’s CEOs should be influential, and the report has some interesting ideas.
  • The most unusual is the tripartite report Post-Partisan Power, from the American Enterprise Institute, the Brookings Institution, and a new West Coast player, the Breakthrough Institute. It says invest in education, overhaul the energy innovation system, reform subsidies, and add a few small fees so that all of this can be done without adding to the deficit. Any time the names of Brookings and the American Enterprise Institute appear on the cover of the same report, it deserves attention.

Those who are waiting for a national cap-and-trade bill or a carbon tax will have to wait at least until we see the results of the 2012 election, and maybe longer. But significant progress is possible without these measures. The heavy lifting will have to be done by industry, and the key to industry success is to establish policies that specify what the nation wants to achieve, not how industry should do it.

Politically, it will be essential to support all proposals in as many dimensions as are appropriate. A further increase in the automobile mileage standard can be justified on economic and national security grounds as well as on environmental ones. The technology already exists with hybrids, diesels, and direct-injection gasoline engines.

Reject renewable portfolio standards and opt instead for emissions reduction standards. Because natural gas is cheaper and better than coal today, it should be encouraged. Government and, forgive me for saying so, environmentalists are better off focusing on the goals, not on how to reach them.

Tell the electric power industry to reduce emissions by some percentage by some date and then get out of the way. Competitive companies will determine what mix of efficiency management, natural gas, renewable sources, and other measures is quickest and cheapest. We will need solar and wind eventually, so they need some support, but not at the expense of limiting cost-effective action today.

Don’t be too clever by half, as the Brits say. One too-clever regulation is California’s low carbon fuel standard. It requires that one count all the carbon in a megajoule of each fuel, including the energy and emissions that go into making the fuel, and then reduce that amount by 10% by 2020. There are smart people who love this. California adopted it in April 2009, and more states are considering following its lead. The theory is that it forces the emissions embedded in fuel production to be counted, so that, for example, if one uses more oil from Canadian tar sands to make gasoline, the carbon score goes up. But emissions depend on both fuel and efficiency. Larger and less costly reductions in emissions can be made by focusing on the efficiency side: a diesel will reduce emissions by about 20% compared with a gasoline engine; a hybrid will reduce them by 50%. So why waste effort and money on the fuel side? Once again, set the goals and get out of the way.
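A simple per-mile calculation shows why the efficiency lever dominates. The baseline carbon intensity and energy use below are assumed illustrative numbers; the percentage reductions are the ones cited above.

```python
# Per-mile CO2 emissions are the product of fuel carbon intensity and energy
# used per mile. The baseline values are assumed for illustration; the
# percentage reductions are the ones discussed above.

baseline_intensity_g_per_mj = 95.0  # assumed gasoline carbon intensity (g CO2/MJ)
baseline_energy_mj_per_mile = 4.5   # assumed energy use of a gasoline car (MJ/mile)

def grams_per_mile(intensity, energy):
    return intensity * energy

base = grams_per_mile(baseline_intensity_g_per_mj, baseline_energy_mj_per_mile)

# Fuel-side lever: the low carbon fuel standard's 10% cut in carbon intensity.
cleaner_fuel = grams_per_mile(baseline_intensity_g_per_mj * 0.90, baseline_energy_mj_per_mile)

# Efficiency-side levers: diesel ~20% less fuel, hybrid ~50% less fuel.
diesel = grams_per_mile(baseline_intensity_g_per_mj, baseline_energy_mj_per_mile * 0.80)
hybrid = grams_per_mile(baseline_intensity_g_per_mj, baseline_energy_mj_per_mile * 0.50)

for label, value in [("baseline gasoline", base), ("10% cleaner fuel", cleaner_fuel),
                     ("diesel", diesel), ("hybrid", hybrid)]:
    print(f"{label:>17}: {value:.0f} g CO2/mile ({1 - value / base:.0%} reduction)")
```

Whatever baseline one assumes, the 10% fuel-side cut is swamped by the 20% to 50% reductions available on the efficiency side.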

The fundamental question is, can environmental, scientific, business, and policy organizations put together a coherent message that brings in as many allies as possible, starts large-scale action with things that are scalable and affordable today, and encourages the innovation we will need for tomorrow?

It will not be easy, but it is the only way we will turn things around.