U.S. Workers in a Global Job Market

Among the many changes that are part of the emergence of a global economy is a radically different relationship between U.S. high-tech companies and their employees. As late as the 1990s, a degree in science, technology, engineering, or mathematics (STEM) was a virtual guarantee of employment. Today, many good STEM jobs are moving to other countries, reducing prospects for current STEM workers and dimming the appeal of STEM studies for young people. U.S. policymakers need to learn more about these developments so that they can make the critical choices about how to nurture a key ingredient in the nation’s future economic health, the STEM workforce.

U.S. corporate leaders are not hiding the fact that globalization has fundamentally changed how they manage their human resources. Craig Barrett, then the chief executive officer (CEO) of Intel Corporation, said that his company can succeed without ever hiring another American. In an article in Foreign Affairs magazine, IBM’s CEO Sam Palmisano gave the eulogy for the multinational corporation (MNC), introducing us to the globally integrated enterprise (GIE): “Many parties to the globalization debate mistakenly project into the future a picture of corporations that is unchanged from that of today or yesterday….But businesses are changing in fundamental ways—structurally, operationally, culturally—in response to the imperatives of globalization and new technology.”

GIEs do not have to locate their high-value jobs in their home country; they can locate research, development, design, or services wherever they like without sacrificing efficiency. Ron Rittenmeyer, then the CEO of EDS, said he “is agnostic specifically about where” EDS locates its workers, choosing the place that reaps the best economic efficiency. EDS, which had virtually no employees in low-cost countries in 2002, had 43% of its workforce in low-cost countries by 2008. IBM, once known for its lifetime employment, now forces its U.S. workers to train foreign replacements as a condition of severance. In an odd twist, IBM is offering U.S. workers the opportunity to apply for jobs in its facilities in low-cost countries such as India and Brazil at local wage rates.

Policy discussions have not kept pace with changes in the job market, and little attention is being paid to the new labor market for U.S. STEM workers. In a time of GIEs, advanced tools and technology can be located anywhere, depriving U.S. workers of an advantage they once had over their counterparts in low-wage countries. And because technology workers not only create new knowledge for existing companies but are also an important source of entrepreneurship and startup firms, the workforce relocation may undermine U.S. world leadership as game-changing new companies and technologies are located in low-cost countries rather than the United States. The new corporate globalism will make innovations less geographically sticky, raising questions about how to make public R&D investments pay off locally or even nationally. Of course, scientists and engineers in other countries can generate new ideas and technologies that U.S. companies can import and put to use, but that too will require adjustments because this is not a strategy with which U.S. companies have much experience. In short, the geographic location of inputs and the flow of technology, knowledge, and people are sure to be significantly altered by these changes in firm behavior.

As Ralph Gomory, a former senior vice president for science and technology at IBM, has noted, the interests of corporations and countries are diverging. Corporate leaders, whose performance is not measured by how many U.S. workers they employ or the long-term health of the U.S. economy, will pursue their private interests with vigor even if their actions harm their U.S. employees or are bad prescriptions for the economy. Simply put, what’s good for IBM may not be good for the United States and vice versa. Although this may seem obvious, the policy and political processes have not fully adjusted to this reality. Policymakers still turn to the CEOs of GIEs for advice on what is best for the U.S. economy. Meanwhile, STEM workers have yet to figure out that they need to get together to identify and promote what is in their interest.

Most STEM workers have not embraced political activism. Consider employees in the information technology (IT) industry, one of the largest concentrations of STEM workers. They have by and large rejected efforts by unions to organize them. One might expect a professional organization such as the Institute of Electrical and Electronics Engineers (IEEE) to represent their interests, but IEEE is an international organization that sees little value in promoting one group of its members over another.

Because STEM workers lack an organized voice, their interests are usually neglected in policy discussions. There was no worker representative on the National Academies committee that drafted the influential report Rising Above the Gathering Storm. And although the Council on Competitiveness, which prepared the National Innovation Initiative, has representatives of labor unions in its leadership, they did not participate in any significant way in the initiative. Both studies had chairs who were CEOs of GIEs. It should come as no surprise, therefore, that neither of these reports includes recommendations that address the root problem of offshoring: the misalignment of corporate and national interests, in which firms compete by substituting foreign for U.S. workers. Instead, the reports diagnosed the problem as a shortage of qualified STEM workers and therefore advocated boosting R&D spending, expanding the pool of STEM workers, and recruiting more K-12 science and math teachers.

Low-cost countries attract R&D

Although everyone recognizes that globalization is remaking the R&D landscape, that U.S.-based companies are moving some of their high-value activities offshore, and that some low-income countries such as China and India are eager to enhance their capabilities, we actually have very little reliable and detailed data on what is happening. In fact, much of what we think we do know is contradictory. For example, in 2006, China was by far the leading exporter of advanced technology products to the United States, surpassing all of the European Union combined. On the other hand, the number of triadic patents—those filed in Europe, the United States, and Japan—awarded to Chinese inventors in 2002 was a mere 177 versus more than 18,000 for American and more than 13,000 for Japanese inventors. A mixed picture also emerges from India. On the one hand, India’s indigenous IT services companies such as Infosys and Wipro have become the market leaders in their sector, forcing U.S.-based competitors such as IBM and HP to adopt their offshore outsourcing business model. But in 2003, India produced only 779 engineering doctorates compared to the 5,265 produced in the United States.

The standard indicators in this area are backward-looking and often out of date by the time they are published. More timely and forward-looking information might be gleaned from surveys of business leaders and corporate announcements. A survey by the United Nations Conference on Trade and Development of the top 300 worldwide R&D spenders found that China was the top destination for future R&D expansion, followed by the United States, India, Japan, the United Kingdom, and Russia. A 2007 Economist magazine survey of 300 executives about R&D site selection found that India was the top choice, followed by the United States and China.

No comprehensive list of R&D investments by U.S. multinational corporations exists, and the firms aren’t required to disclose the location of R&D spending in financial filings. We must rely on the information that companies offer voluntarily. From public announcements we know that eight of the top 10 R&D-spending companies have R&D facilities in China or India (Microsoft, Pfizer, DaimlerChrysler, General Motors, Siemens, Matsushita Electric, IBM, and Johnson & Johnson) and that many of them plan to increase their innovation investments in India and China.

Although early investments were for customizing products for a local market, foreign-based facilities are now beginning to develop products for global markets. General Motors has a research presence in India and China, and in October 2007, it announced that it would build a wholly owned advanced research center to develop hybrid technology and other advanced designs in Shanghai, where it already has a 1,300-employee research center as part of a joint venture with the Shanghai Automotive Industry Corporation. Pfizer, the number two R&D spender, is outsourcing drug development services to India and already has 44 new drugs undergoing clinical trials there. The company has approximately 200 employees at its Shanghai R&D center, supporting global clinical development. Microsoft has a large and expanding R&D presence in India and China. Microsoft’s India Development Center, its largest such center outside the United States, employs 1,500 people. The Microsoft China R&D Group also employs 1,500, and in 2008, Microsoft broke ground on a new $280-million R&D campus in Beijing and announced an additional $1 billion investment for R&D in China. Intel has about 2,500 R&D workers in India and has invested approximately $1.7 billion in its Indian operations. Its Indian engineers designed the first all-India microprocessor, the Xeon 7400, which is used for high-end servers. Intel has been investing in startup companies in China, where it created a $500 million Intel Capital China Technology Fund II to be used for investments in wireless broadband, technology, media, telecommunications, and “clean tech.”

Although General Electric spends less than the above companies on R&D, it has the distinction of having the majority of its R&D personnel in low-cost countries. Jack Welch, GE’s former CEO, was an early and significant evangelizer of offshoring. The firm has four research locations worldwide, in New York, Shanghai, Munich, and Bangalore. Bangalore’s Jack Welch R&D Center employs 3,000 workers, more than the other three locations combined. Since 47% of GE’s revenue in 2008 came from the United States and only 16% from Asia, it is clear that it is not moving R&D to China and India just to be close to its market.

The fact that China and India are able to attract R&D indicates that they have improved their ability to attract mid-skill technology jobs in the design, development, and production stages. The true payoff of attracting R&D activities might be the downstream spillovers in the form of startup firms and design, development, and production facilities.

U.S. universities have been a magnet for talented young people interested in acquiring the world’s best STEM education. Many of these productive young people have remained in the United States, become citizens, and made enormous contributions to the productivity of the U.S. economy as well as its social, cultural, and political life. But these universities are beginning to think of themselves as global institutions that can deliver their services anywhere in the world.

Cornell, which already calls itself a transnational institution, operates a medical school in Qatar and sent its president to India in 2007 to explore opportunities to open a branch campus. Representatives of other top engineering schools, such as Rice, Purdue, Georgia Tech, and Virginia Tech, have made similar trips. Carnegie Mellon offers its technology degrees in India in partnership with a small private Indian college. Students take most of their courses in India, because it is less expensive, and then spend six months in Pittsburgh to complete the Carnegie Mellon degree.

If students do not have to come to the United States to receive a first-rate education, they are far less likely to seek work in the United States. More high-quality job opportunities are appearing in low-cost countries, many of them with U.S. companies. This will accelerate the migration of STEM jobs out of the United States. Even the perfectly sensible move by many U.S. engineering programs to provide their students with more international experience through study-abroad courses and other activities could contribute to the migration of STEM jobs by preparing these students to manage R&D activities across the globe.

Most of the information about university globalization is anecdotal. The trend is clearly in its early stages, but there are indications that it could grow quickly. This is another area in which more reliable data is essential. If the nation’s leaders are going to manage university activities in a way that will advance U.S. interests, they will need to know much more about what is happening and what is planned.

Uncertainty and risk

The emerging opportunities for GIEs to take advantage of high-skilled talent in low-cost countries have markedly increased both career uncertainty and risk for the U.S. STEM workforce. Many U.S. STEM workers worry about offshoring’s impact on their career prospects and are altering their career choices accordingly. For instance, according to the Computing Research Association, enrollment in bachelor’s programs in computer science dropped 50% from 2002 to 2007. The rising risk of IT job loss, caused in part by offshoring, was a major factor in students’ shying away from computer science degrees.

Offshoring concerns have been mostly concentrated on IT occupations, but many other STEM occupations may be at risk. Princeton University economist Alan Blinder analyzed all 838 Bureau of Labor Statistics standard occupation categories to estimate their vulnerability to offshoring. He estimates that nearly all (35 of 39) STEM occupations are “offshorable,” and he describes many as “highly vulnerable.” By vulnerable, he is not claiming that all, or even a large share, of jobs in those occupations will actually be lost overseas. Instead, he believes that those occupations will face significant new wage competition from low-cost countries. Further, he finds that there is no correlation between vulnerability and education level, so simply increasing U.S. education levels, as many have advocated, will not slow offshoring.

Workers need to know which jobs will be geographically sticky and which are vulnerable to being offshored so that they can make better choices for investing in their skills. But there is a great deal of uncertainty about how globalization will affect the level and mix of domestic STEM labor demand. The response of some workers appears to be to play it safe and opt for occupations, often non-STEM, that are likely to stay onshore. Further, most employers, because of political sensitivities, are very reluctant to reveal what jobs they are offshoring, sometimes going to great lengths to mask the geographic rebalancing of their workforces. The uncertainty introduced by offshoring aggravates the already volatile job market that is characteristic of the dynamic high-tech sector.

For incumbent workers, especially those in mid-career, labor market volatility creates a special dilemma. The two prior technology recessions, 1991 to 1992 and 2002 to 2004, were especially long, longer even than for the general labor force. At the same time, technology-obsolescence cycles are shortening, which means that unemployed STEM workers can find that their skills quickly become outdated. If unemployment periods are especially long, it will be even more difficult to reenter the STEM workforce when the market rebounds. An enormous amount of human capital is wasted when experienced STEM professionals are forced to move into other professions because of market vagaries.

Policy has done little to reduce risks and uncertainty for STEM workers. The government does not collect data on work that is moving offshore or real-time views of the STEM labor markets, both of which would help to reduce uncertainty. Trade Adjustment Assistance (TAA), the primary safety net for workers who lose their jobs due to international trade, has not been available for services industries, but it has been authorized as part of the recently passed stimulus legislation. This is one part of the stimulus that should be made permanent. In addition, Congress should ensure that the program is adequately funded, because it is often oversubscribed, and the Department of Labor should streamline the eligibility regulations, because bureaucratic rules often hamper the ability of displaced workers to obtain benefits. This will be especially true for services workers, whose employers are reluctant to admit that workers are displaced due to offshoring.

Response to competition

One of the most important high-technology stories of the past decade has been the remarkably swift rise of the Indian IT services industry, including firms such as Wipro, Infosys, TCS, and Satyam, as well as U.S.-based firms such as Cognizant and iGate that use the same business model. There is no need to speculate about whether the Indian firms will eventually take the lead in this sector; they already have become market leaders. By introducing an innovative, disruptive business model, the Indian firms have turned the industry upside down in only four years. U.S. IT services firms such as IBM, EDS, CSC, and ACS were caught flat-footed. Not a single one of those firms would have considered Infosys, Wipro, or TCS a direct competitor as recently as 2003, but now the U.S. firms are racing to catch up by adopting the Indian business model, which is to move as much work as possible to low-cost countries. The speed and size of the shift are breathtaking.

The Indian IT outsourcing firms have extensive U.S. operations, but they prefer to hire temporary guest workers with H-1B or L-1 visas. The companies train these workers in the United States, then send them home where they can be hired to do the same work at a lower salary. These companies rarely sponsor their H-1B and L-1 workers for U.S. legal permanent residence.

The important lesson is how the U.S. IT services firms have responded to the competitive challenge. Instead of investing in their U.S. workers with better tools and technologies, the firms chose to imitate the Indian model by outsourcing jobs to low-cost countries. IBM held a historic meeting with Wall Street analysts in Bangalore in June 2006, where its whole executive team pitched IBM’s strategy to adopt the Indian offshore-outsourcing business model, including an additional $6 billion investment to expand its Indian operations. IBM’s headcount in India has grown from 6,000 in 2003 to 73,000 in 2007, and is projected to be 110,000 by 2010. The U.S. headcount is about 120,000. And IBM is not alone. Accenture passed a historic milestone in August 2007, when its Indian headcount of 35,000 surpassed any of its other country headcounts, including the United States, where it had 30,000 workers. In a 2008 interview, EDS’s Rittenmeyer extolled the profitability of shifting tens of thousands of the company’s workers from the United States to low-cost countries such as India. He said outsourcing is “not just a passing fancy. It is a pretty major change that is going to continue. If you can find high-quality talent at a third of the price, it’s not too hard to see why you’d do this.” ACS, another IT services firm, recently told Wall Street analysts that it plans its largest increase in offshoring for 2009, when it will move many of its more complex and higher-wage jobs overseas so that nearly 35% of its workforce will be in low-cost countries.

As Alan Blinder’s analysis indicates, many other types of STEM jobs could be offshored. The initiative could come from foreign competitors or from U.S.-based GIEs.

Preserving STEM jobs

Private companies will have the final say about the offshoring of jobs, but the federal government can and should play a role in tracking what is happening in the global economy and taking steps that help the country adapt to change. Given the speed at which offshoring is increasing in scale, scope, and job sophistication, a number of immediate steps should be taken.

Collect additional, better, and timelier data. We cannot expect government or business leaders to make sound decisions in the absence of sound data. The National Science Foundation (NSF) should work with the appropriate agencies, such as the Bureau of Economic Analysis (BEA), the Bureau of Labor Statistics, and the Census Bureau, to begin collecting more detailed and timely data on the globalization of innovation and R&D.

Specifically, the NSF Division of Science Resources Statistics (SRS) should augment existing data on multinational R&D investments to include annual detailed STEM workforce data, including occupation, level of education, and experience for workers within and outside the United States. These data should track the STEM workforce for multinational companies in the United States versus other countries. The SRS should also collect detailed information on how much and what types of R&D and innovation activities are being done overseas. The NSF Social, Behavioral, and Economic Sciences directorate should do four things: 1) begin a research program to estimate the number of jobs that have been lost to offshoring and to identify the characteristics of jobs that make them more or less vulnerable to offshoring; 2) assess the extent of U.S. university globalization and then track trends; 3) identify the effects of university globalization on the U.S. STEM workforce and students, and launch a research program to identify and disseminate best practices in university globalization; and 4) conduct a study to identify the amount and types of U.S. government procurement that are being offshored. Finally, the BEA should implement recommendations from prior studies, such as the 2006 study by MIT’s Industrial Performance Center, to improve its collection of services data, especially trade in services.

Establish an independent institute to study the implications of globalization. Blinder has said that the economic transformation caused by offshoring could rival the changes caused by the industrial revolution. In addition to collecting data, government needs to support an independent institute to analyze the social and economic implications of these changes and to consider policy options to address the undesirable effects. A $40 million annual effort to fund intramural and extramural efforts would be a good start.

Facilitate worker representation in the policy process. Imagine if a major trade association, such as the Semiconductor Industry Association, were excluded from having any representative on a federal advisory committee making recommendations on trade and export control policy in the semiconductor industry. It would be unfathomable. But we have precisely this arrangement when it comes to making policies that directly affect the STEM workforce. Professional societies and labor unions should be invited to represent the views of STEM workers on federal advisory panels and in congressional hearings.

Create better career paths for STEM workers. STEM offshoring has created a pessimistic attitude about future career prospects for incumbent workers as well as students. To make STEM career paths more reliable and resilient, the government and industry should work together to create programs for continuing education, establish a sturdier safety net for displaced workers, improve information about labor markets and careers, expand the pool of potential STEM workers by making better use of workers without a college degree, and provide assistance for successful reentry into the STEM labor market after voluntary and involuntary absences. Some specific steps are:

  • The government should encourage the adoption and use of low-cost asynchronous online education targeted at incumbent STEM workers. The program would be coordinated with the appropriate scientific and engineering professional societies. A pilot program should assess the current penetration rates of online education for STEM workers and identify barriers to widespread adoption.
  • The Department of Labor should work with the appropriate scientific and engineering professional societies to create a pilot program for continuous education of STEM workers and retraining of displaced mid-career STEM workers. Unlike prior training programs, these should be targeted at jobs that require at least a bachelor’s degree. Funding could come from the H-1B visa fees that companies pay when they hire foreign workers.
  • The National Academies should form a study panel to identify on-ramps to STEM careers for students who do not go to college and recommend ways to eliminate barriers and identify effective strategies for STEM workers to more easily reenter the STEM workforce.
  • Congress should reform immigration policy to increase the number of highly skilled people admitted as permanent residents and reduce the number of temporary H-1B and L-1 work visas. Rules for H-1B and L-1 visas should be tightened to ensure that workers receive market wages and do not displace U.S. citizens and permanent resident workers.

Improve the competitiveness of the next generation of STEM workers. As workers in other countries develop more advanced skills, U.S. STEM workers must develop new skills and opportunities to distinguish themselves. They should identify and pursue career paths that are geographically sticky, and they should acquire more entrepreneurship skills that will enable them to create their own opportunities. The National Academies could help by forming a study panel to identify necessary curriculum reforms and best practices in teaching innovation, creativity, and entrepreneurship to STEM students. NSF should encourage and help fund study-abroad programs for STEM students to improve their ability to work in global teams.

Public procurement should favor U.S. workers. The public sector—federal, state, and local government—is 19% of the economy and is an important mechanism that should be used by policymakers. There is a long, strong, and positive link between government procurement and technological innovation. The federal government not only funded most of the early research in computers and the Internet but was also a major customer for those new technologies. U.S. taxpayers have a right to know that government expenditures at any level are being used appropriately to boost innovation and help U.S. workers. The first step is to do an accounting of the extent of public procurement that is being offshored. Then the government should modify regulations to keep STEM-intensive work at home.

We are at the beginning of a major structural shift in the global distribution of R&D and STEM-intensive work. Given the critical nature of STEM to economic growth and national security, the United States must begin to adapt to these changes. The responses that have been proposed and adopted so far are based on the belief that nothing has changed. Simply increasing the amount of R&D spending, the pool of STEM workers, and the number of K-12 science and math teachers is not enough. The nation needs to develop a better understanding of the new dynamics of the STEM system and to adopt policies that will advance the interests of the nation and its STEM workers.

From the Hill – Spring 2009

Economic stimulus bill provides major boost for R&D

The $790-billion economic stimulus bill signed by President Obama on February 17 contains $21.5 billion in federal R&D funding—$18 billion for research and $3.5 billion for facilities and large equipment. The final appropriation was more than the $17.8 billion approved in the Senate or the $13.2 billion approved in the House version of the bill. For a federal research portfolio that has been declining in real terms since fiscal year (FY) 2004, the final bill provides an immediate boost that allows federal research funding to see a real increase for the first time in five years.

The stimulus bill, which is technically an emergency supplemental appropriations bill, was approved before final work had been completed on funding the federal government for FY 2009. Only 3 of 12 FY 2009 appropriations bills have been approved (for the Departments of Defense, Homeland Security, and Veterans Affairs). All other federal agencies are operating at or below FY 2008 funding levels under a continuing resolution (CR) through March 6.

Under the CR and the few completed FY 2009 appropriations, the federal research portfolio stands at $58.3 billion for FY 2009, up just 0.3% (less than inflation). But after the stimulus bill, and assuming that final FY 2009 appropriations are at least at CR levels, the portfolio could jump to nearly $75 billion.

Basic competitiveness-related research, biomedical research, energy R&D, and climate change programs are high priorities in the bill. The National Institutes of Health (NIH) will receive $10.4 billion, which would completely turn around an NIH budget that has been in decline since 2004 and could boost the total NIH budget to $40 billion, depending on the outcome of NIH’s regular FY 2009 appropriation.

The National Science Foundation (NSF), the Department of Energy (DOE) Office of Science, and the National Institute of Standards and Technology (NIST)—the three agencies highlighted in the America COMPETES Act of 2007 and President Bush’s American Competitiveness Initiative—would all be on track to double their budgets over 7 to 10 years. NSF will receive $3 billion, DOE’s Office of Science $1.6 billion, and NIST $600 million.

DOE’s energy programs would also be a winner with $3.5 billion for R&D and related activities in renewable energy, energy conservation, and fossil energy, part of the nearly $40 billion total for DOE in weatherization, loan guarantees, clean energy demonstration, and other energy program funds. DOE will receive $400 million to start up the Advanced Research Projects Agency–Energy (ARPA-E), a new research agency authorized in the America COMPETES Act but not funded until now.

The bill will provide money for climate change–related projects in the National Aeronautics and Space Administration and the National Oceanic and Atmospheric Administration (NOAA). There is also additional money for non-R&D but science and technology–related programs, higher education construction, and other education spending of interest to academia.

The bill provides billions of dollars for universities to construct or renovate laboratories and to buy research equipment, as well as money for federal labs to address their infrastructure needs. The bill provides $3.5 billion for R&D facilities and capital equipment to pay for the repair, maintenance, and construction of scientific laboratories as well as large research equipment and instrumentation. Considering that R&D facilities funding totaled $4.5 billion in FY 2008, half of which went to just one laboratory (the International Space Station), the $3.5-billion supplemental will be an enormous boost in the federal government’s spending on facilities.

Obama cabinet picks vow to strengthen role of science

Key members of President Obama’s new cabinet are stressing the importance of science in developing policy as well as the need for scientific integrity and transparency in decisionmaking.

In one of his first speeches, Ken Salazar, the new Secretary of the Interior, told Interior Department staff that he would lead with “openness in decisionmaking, high ethical standards, and respect to scientific integrity.” He said decisions will be based on sound science and the public interest, not special interests.

Lisa Jackson, the new administrator of the Environmental Protection Agency (EPA), said at her confirmation hearing that “science must be the backbone of what EPA does.” Addressing recent criticism of scientific integrity at the EPA, she said that “political appointees will not compromise the integrity of EPA’s technical experts to advance particular regulatory outcomes.”

In a memo to EPA employees, Jackson noted, “I will ensure EPA’s efforts to address the environmental crises of today are rooted in three fundamental values: science-based policies and programs, adherence to the rule of law, and overwhelming transparency.” The memo outlined five priority areas: reducing greenhouse gas emissions, improving air quality, managing chemical risks, cleaning up hazardous waste sites, and protecting America’s water.

New Energy Secretary Steven Chu, a Nobel Prize–winning physicist and former head of the Lawrence Berkeley National Laboratory, emphasized the key role science will play in addressing the nation’s energy challenges. In testimony at his confirmation hearing, Chu said that “the key to America’s prosperity in the 21st century lies in our ability to nurture and grow our nation’s intellectual capital, particularly in science and technology.” He called for a comprehensive energy plan to address the challenges of climate change and threats from U.S. dependence on foreign oil.

In other science-related picks, the Senate confirmed Nancy Sutley as chair of the Council on Environmental Quality at the White House. Awaiting confirmation as this issue went to press were John Holdren, nominated to be the president’s science advisor, and Jane Lubchenco, nominated as director of NOAA.

Proposed regulatory changes under review

As one of its first acts, the Obama administration has halted all proposed regulations that were announced but not yet finalized by the Bush administration until a legal and policy review can be conducted. The decision means at least a temporary stop to certain controversial changes, including a proposal to remove gray wolves in the northern Rocky Mountains from Endangered Species Act (ESA) protection.

However, the Bush administration was able to finalize a number of other controversial changes, including a change in implementation of the ESA that allows agencies to bypass scientific reviews of their decisions by the Fish and Wildlife Service or the National Marine Fisheries Service. In addition, the Department of the Interior finalized two rules: one that allows companies to dump mining debris within a current 100-foot stream buffer and one that allows concealed and loaded guns to be carried in national parks located in states with concealed-carry laws.

Regulations that have already been finalized but that a new administration wants to change must undergo a new rulemaking process, often a lengthy procedure. However, Congress can halt rules that it opposes, either by not funding implementation of the rules or by voting to overturn them. The Congressional Review Act allows Congress to vote down recent rules with a resolution of disapproval, but this technique has been used only once and would require separate votes on each regulation that Congress wishes to overturn. House Natural Resources Chairman Nick Rahall (D-WV) and Select Committee on Global Warming Chairman Ed Markey (D-MA) have introduced a measure that would use the Congressional Review Act to freeze the changes to the endangered species rules.

Members of Congress have introduced legislation to expand their options to overturn the rules. Rep. Jerrold Nadler (D-NY), chair of the House Judiciary Subcommittee on the Constitution, Civil Rights and Civil Liberties, has introduced a bill, the Midnight Rule Act, that would allow incoming cabinet secretaries to review all regulatory changes made by the White House within the last three months of an administration and reverse such rules without going through the entire rulemaking process.

Witnesses at a February 4 hearing noted, however, that every dollar that goes into defending or rewriting these regulations is money not spent advancing a new agenda, so the extent to which agencies and Congress will take on these regulatory changes remains to be seen.

Democrats press action on climate change

Amid efforts to use green technologies and jobs to stimulate the economy, Congress began work on legislation to cap greenhouse gas emissions that contribute to climate change. At a press conference on February 3, Barbara Boxer (D-CA), chair of the Senate Environment and Public Works Committee, announced a broad set of principles for climate change legislation. They include setting targets that are guided by science and establishing “a level global playing field, by providing incentives for emission reductions and effective deterrents so that countries contribute their fair share to the international effort to combat global warming.” The principles also lay out potential uses for the revenues generated by establishing a carbon market.

Also addressing climate change is the Senate Foreign Relations Committee, which on January 28 heard from former Vice President Al Gore, who pushed for domestic and international action. Gore urged Congress to pass the stimulus bill because of its provisions on energy efficiency, renewable energy, clean cars, and a smart grid. He also called for a cap on carbon emissions to be enacted before the next round of international climate negotiations in Copenhagen in December 2009.

In the House, new Energy and Commerce Chair Henry Waxman (D-CA), who ousted longtime chair John Dingell (D-MI) and favors a far more aggressive approach to climate change legislation, said that he wants a bill through his committee by Memorial Day. Speaker Nancy Pelosi (D-CA) would like a bill through the full House by the end of the year.

A hearing of Waxman’s committee on climate change featured testimony from members of the U.S. Climate Action Partnership, a coalition of more than 30 businesses and nongovernmental organizations, which supports a cap-and-trade system with a 42% cut in carbon emissions from 2005 levels by 2030 and reductions of 80% by 2050. Witnesses testified that a recession is a good time to pass this legislation because clarity in the law would illuminate investment opportunities.

Energy and Environment Subcommittee Chair Ed Markey (D-MA) has said that he intends to craft a bill that draws on existing proposals, including one developed at the end of the last Congress by Dingell and former subcommittee chair Rick Boucher (D-VA). Markey’s proposal is also likely to reflect a set of principles for climate change that he announced last year, along with Waxman and Rep. Jay Inslee (D-WA). The principles are based on limiting global temperature rise to 2 degrees Celsius.

President Obama has also taken steps to address greenhouse gas emissions. He directed the EPA to reconsider whether to grant California a waiver to set more stringent automobile standards. California has been fighting the EPA’s December 2007 decision to deny its efforts to set standards that would reduce carbon dioxide emissions from automobiles by 30% by 2016. If the waiver is approved, 13 other states have pledged to adopt the standards. Obama also asked the Department of Transportation to establish higher fuel efficiency standards for carmakers’ 2011 model year.

Biological weapons threat examined

The Senate and the House held hearings in December 2008 and January 2009, respectively, to examine the findings of the report A World at Risk, by the Commission on the Prevention of Weapons of Mass Destruction, Proliferation and Terrorism. At the hearings, former Senators Bob Graham and Jim Talent, the commission chair and vice chair, warned that “a terrorist attack involving a weapon of mass destruction—nuclear, biological, chemical, or radiological—is more likely than not to occur somewhere in the world in the next five years.”

Graham and Talent argued that although the prospect of a nuclear attack is a matter of great concern, the threat of a biological attack poses the more immediate concern because of “the greater availability of the relevant dual-use materials, equipment, and know-how, which are spreading rapidly throughout the world.”

That view was supported by Senate Homeland Security and Governmental Affairs Committee chairman Joe Lieberman (I-CT) and ranking member Susan Collins (R-ME). Both recognized that although biotechnology research and innovation have created the possibility of important medical breakthroughs, the spread of the research and the technological advancements that accompany innovations have also increased the risk that such knowledge could be used to develop weapons.

Graham and Talent acknowledged that weaponizing biological agents is still difficult and stated that “government officials and outside experts believe that no terrorist group has the operational capability to carry out a mass-casualty attack.” The larger risk, they said, comes from rogue biologists, as is believed to have been the case in the 2001 anthrax incidents. Currently, more than 300 research facilities in government, academia, and the private sector in the United States, employing about 14,000 people, are authorized to handle pathogens. The research is conducted in high-containment laboratories.

The commission said it was concerned about the lack of regulation of unregistered BSL-3 research facilities in the private sector. These labs have the necessary tools to handle anthrax or synthetically engineer a more dangerous version of that agent, but whether they have implemented appropriate security measures is often not known.

For this reason, the commission recommended consolidating the regulation of registered and unregistered high-containment laboratories under a single agency, preferably the Department of Homeland Security or the Department of Health and Human Services. Currently, regulatory oversight of research involves the Department of Agriculture and the Centers for Disease Control and Prevention, with security checks performed by the Justice Department.

Collins has repeatedly stated the need for legislation to regulate biological pathogens, expressing deep concern over the “dangerous gaps” in biosecurity and the importance of drafting legislation to close them.

In the last Congress, the Select Agent Program and Biosafety Improvement Act of 2008 was introduced to reauthorize the select agent program but did not pass. The bill aimed to strengthen biosafety and security at high-containment laboratories but would not have restructured agency oversight. No new bills have been introduced in the new Congress.

Before leaving office, President Bush on January 9 signed an executive order on laboratory biosecurity that established an interagency working group, co-chaired by the Departments of Defense and Health and Human Services, to review the laws and regulations on the select agent program, personnel reliability, and the oversight of high-containment labs.

Multifaceted ocean research bill advances

The Senate on January 15, 2009, approved by a vote of 73 to 21 the Omnibus Public Lands Management Act of 2009, a package of five bills authorizing $794 million for expanded ocean research through FY 2015, including $104 million authorized for FY 2009, along with a slew of other wilderness conservation measures. The House is expected to take up the bill.

The first of the five bills, the Ocean Exploration and NOAA Undersea Research Act, authorizes the National Ocean Exploration Program and the National Undersea Research Program. The act prioritizes research on deep ocean areas, calling for study of hydrothermal vent communities and seamounts, documentation of shipwrecks and submerged sites, and development of undersea technology. The bill authorizes $52.8 million for these programs in FY 2009, increasing to $93.5 million in FY 2015.

The Ocean and Coastal Mapping Integration Act authorizes an integrated federal plan to improve knowledge of unmapped maritime territory, which currently comprises 90% of all U.S. waters. Calling for improved coordination, data sharing, and mapping technology development, the act authorizes $26 million for the program along with $11 million specifically for Joint Ocean and Coastal Mapping Centers in FY 2009. These amounts would increase to $45 million and $15 million, respectively, beginning in FY 2012.

The Integrated Coastal and Ocean Observation System Act (S.171) authorizes an integrated national observation system to gather and disseminate data on an array of variables from the coasts, oceans, and Great Lakes. The act promotes basic and applied research to improve observation technologies, as well as modeling systems, data management, analysis, education, and outreach through a network of federal and regional entities. Authorization levels for the program are contingent on the budget developed by the Interagency Ocean Observation Committee.

The Federal Ocean Acidification Research and Monitoring Act establishes a coordinated federal research strategy to better understand ocean acidification. In addition to contributing to climate change, increased emissions of carbon dioxide are making the ocean more acidic, with resulting effects on corals and other marine life. The act authorizes $14 million for FY 2009, increasing to $35 million in FY 2015.

The fifth research bill included in the omnibus package, the Coastal and Estuarine Land Protection Act, creates a competitive state grant program to protect threatened coastal and estuarine areas with significant conservation, ecological, or watershed protection values, or with historical, cultural, or aesthetic significance.


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science (www.aaas.org/spp) in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

Global Warming: The Hard Road Ahead

With a president committed to fighting climate change and a new Congress inclined to go along, the prospects for greenhouse gas emissions abatement legislation are bright. That’s good news. The Bush and Clinton administrations’ intransigence on this issue set back U.S. action by at least a decade. But the good news should not obscure the reality that one obstacle to a successful effort to slow global warming—securing the cooperation of the rapidly developing economies—is truly daunting. Indeed, failure to acknowledge the difficulty of herding this particular pride of cats in the right direction could cost us another lost decade.

Although there is hardly a consensus about the content of the coming legislation, a market-based system that distributes carbon emissions rights among stakeholders and encourages them to minimize costs by freely trading the permits will probably be at the core of it. The biggest unknown is whether the legislation will tie U.S. containment efforts to those of other countries and whether it will include measures to encourage their cooperation.
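
A stylized two-firm calculation, using made-up quadratic cost curves rather than anything from the pending legislation, shows why letting stakeholders trade permits minimizes the total cost of meeting a cap. Suppose firms 1 and 2 must together cut 100 tons of emissions, with the convex abatement costs shown below (firm 2 is three times as costly to clean up at the margin):

\[
\min_{a_1,\,a_2}\; a_1^{2} + 3a_2^{2}\quad\text{subject to}\quad a_1 + a_2 = 100
\;\Longrightarrow\; 2a_1 = 6a_2
\;\Longrightarrow\; a_1 = 75,\; a_2 = 25,
\]
\[
\text{total cost} = 75^{2} + 3(25)^{2} = 7{,}500,
\qquad\text{versus}\quad 50^{2} + 3(50)^{2} = 10{,}000
\;\text{under a uniform 50-ton mandate.}
\]

Trading is what moves the market to the cheaper allocation: permits change hands until marginal abatement costs are equal (here, $150 per ton), so the firm that can cut emissions cheaply ends up doing more of the cutting.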

What economists call the free rider problem is hardly a new issue here. China recently surpassed the United States as the world’s largest emitter of carbon dioxide, and relatively poor but rapidly expanding economies (think Brazil, India, Indonesia, and Russia as well as China) loom large in projections of emissions growth. Indeed, it is fair to say that any climate initiative that doesn’t engage the developing economies will, at best, deliver very little bang for the buck. Yale University economist William Nordhaus recently estimated that if emissions from half of the global economy remained uncontrolled, abatement costs would be more than twice as high as necessary. More likely, an international containment effort that failed to induce the major emerging economies to join would collapse in mutual recrimination. That fear (plus a friendly nudge from energy and automobile lobbies) explains why Washington refused to pledge support for the Kyoto accord in 2001.
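
A back-of-the-envelope illustration, deliberately simpler than Nordhaus’s actual model, shows why leaving half the world uncovered is so expensive: abatement costs rise more than proportionally with the depth of the cuts each participant must make. Treat the world as two equal halves with quadratic abatement costs and a global reduction target of A:

\[
\text{full participation: } 2k\!\left(\tfrac{A}{2}\right)^{2} = \tfrac{kA^{2}}{2},
\qquad
\text{half participation: } kA^{2}.
\]

Confining the effort to half the economy doubles the bill even in this simplest case; with steeper cost curves and emissions growth concentrated in the uncovered countries, the penalty only grows.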

Congress could encourage the emerging economic giants to get with the program in a variety of ways. It could offer government cash in return for foreigners’ agreement to emissions caps. Or it could pay subsidies for specific green commitments, anything from setting emissions standards for heavily polluting industries such as cement and steel to stopping construction of coal-fired power plants. Or the legislation could let U.S. companies meet their abatement obligations through carbon-sparing initiatives in poor countries, an idea embodied in the Kyoto agreement’s Clean Development Mechanism.

Of course, the United States could also play hardball, penalizing trade partners who refused to contain emissions. One way would be to bar imports of products made in ways that offend. Another way, and one more likely to meet U.S. treaty obligations, would be to offset the cost advantages of using polluting technology with green tariffs, much the way the country imposes “countervailing” duties on imports from countries that subsidize the production of exports.

Carrots, unfortunately, are problematic. Positive financial incentives would be hard to monitor: If no one can explain what happened to the trillions of dollars in aid sent to poor countries in the past half-century, why would it be any better this time around? If a sizable chunk of the cash dispensed over the years to build infrastructure has ended up in Swiss bank accounts or superhighways to nowhere, why is there reason to be optimistic that accounting for “green” aid would be any better?

Equally to the point, emissions are fungible, and the measurement of reductions is subject to manipulation: How would we know whether saving the rainforest in one part of a country didn’t lead to accelerated logging in another? How would we know whether the closing of an old coal-fired power plant in China or India would not have taken place even without a cash inducement from abroad?

Note, by the way, that the difficulties of measuring emissions reductions would likely be complicated by the interests of those doing the measuring. Under the Clean Development Mechanism, industries in Kyoto treaty countries can earn credits against their own abatement obligations by financing emissions reductions in poor countries, say by planting trees in Chad or replacing inefficient irrigation pumps in Pakistan. But once the money is spent and credits awarded, the sponsoring industries have little incentive to monitor their progress. Indeed, if this approach is to work on a scale large enough to make a real difference, it will take an enormous commitment on the part of governments to set the rules and make sure they are followed.

But the proverbial sticks would come with their own problems. The gradual opening of international trade has been critical to bringing billions of people out of poverty. Dare we risk losing the fruits of freer trade by encouraging the development of powerful political alliances between environmentalists and domestic industries happy to shut out foreign competitors in the name of saving the planet?

Past experience provides very good reasons to be pessimistic about the capacity of Washington (or any other Western government) to fairly judge compliance on the part of exporting countries. The Commerce Department’s administration of policies designed to enforce U.S. “fair trade” laws has long been a disgrace, a charade in which domestic industries facing foreign competition have effectively called the shots.

All this suggests that the task of limiting climate change will take longer and be more difficult in both political and economic terms than is generally understood. But that’s not a reason to keep fiddling while Gaia burns. Rather, it should change expectations of what U.S. involvement will be able to achieve in the short run, and in the process strengthen national resolve to set the pace in the long run.

First, symbols matter: A key goal of legislation should be to restore U.S. credibility as the leader in organizing carbon abatement efforts. “Do as I say, not as I do” isn’t cutting any ice with China or India, both of which have ample grounds for rationalizing their own reluctance to pay attention to emissions before they grow their way out of poverty. A good-faith effort at reducing emissions at home, in contrast, just might.

Second, the fact that inducing cooperation from developing countries is sure to be difficult is a poor excuse for not trying. For starters, the United States should certainly underwrite R&D aimed at making carbon abatement cheaper, which is bound to be a part of the new legislation in any case, and then subsidize the technology to all comers. The United States should also experiment with cash subsidies to induce targeted change (rainforest preservation in Brazil and Indonesia is probably the most promising), provided it is understood that much of the money could be wasted. By the same token, U.S. business should be given the chance to learn by doing in purchasing emissions offsets in other countries.

Third, policymakers should keep an open mind about what works and what doesn’t. That certainly includes the use of nuclear energy as an alternative to fossil fuels in slaking developing countries’ voracious appetites for energy. And it may include “geoengineering,” using as yet untested means for modifying Earth’s climate system to offset what now seem almost inevitable increases in atmospheric warming.

The world’s largest, richest economy, which is also the world’s second-largest carbon emitter, can’t afford to stay aloof from efforts to limit climate change. And, happily, it probably won’t. Once it does enter the fray, however, it can’t afford to allow the best of intentions to be lost to impatience or indignation over the frailty of international cooperation.

Growth Without Planetary Blight

The Bridge at the Edge of the World by James Gustave Speth. New Haven, CT: Yale University Press, 2008, 295 pp.

In November 2008, Al Gore authored an op-ed essay in the New York Times titled “The Climate for Change” in which he offered a five-part plan for saving the economy and the environment at the same time. For half a century, political economists and environmentalists of all stripes have been seeking a world order in which economics is in balance with ecology. With global ecosystems in what appears to be inexorable decline, national economies recently brought to the brink of collapse, and no appreciable abatement of greenhouse gas emissions, the problems Gore addresses have reached a precarious state.

Many sectarian solutions have been proposed to resolve the deadlock between economy and ecology. Free marketers contend that it is a matter of getting the prices right and internalizing externalities. We are all free riders who are not paying for the despoliation of nature from our pollution. Regulation proponents argue that we need better laws and stronger enforcement. Neo-anarchists and bioregionalists believe that we cannot achieve sustainable societies until we plan our communities according to proper scale and principles of self-sufficiency. Eco-radicals maintain that population and economic growth must be limited by the ecological constraints of the planet and that the protection of ecosystems that sustain our lives must receive the highest priority. Egalitarian social ecologists remind us that the responsibility for protecting the biosphere falls heavily on the nations that have contributed the most to the planet’s environmental threats and those economies that have already benefited the most from their exploitation of natural resources.

In The Bridge at the Edge of the World, James Gustave (Gus) Speth has given us a fresh look at the question of reconciling economy with ecology. Speth is not shy about probing deep into the structural conditions that lie at the heart of the problem. His contribution is unique because Speth has for years worked as an environmental reformer who has taken for granted that good laws, sound stewardship, honest environmental accounting, and strong federal leadership will make a difference. Currently the dean at the Yale School of Forestry and Environmental Studies, Speth was cofounder of the Natural Resources Defense Council, chairman of the U.S. Council on Environmental Quality, founder and president of the World Resources Institute, and administrator of the UN Development Program.

When Speth addressed the “root causes” of environmental problems in his earlier book, Red Sky at Morning, the focus was on honest prices, ecological technologies, sustainable consumption, and environmental education. He hasn’t abandoned these ideas, but he has reached the conclusion that they do not go deep enough into the problem.

In seeking to understand the structural conditions associated with industrial societies’ uncontrollable appetite for natural resources and unsustainable growth, Speth has dared to raise the “C” word, long viewed as the province of radical economists, Marxist sociologists, and communitarian utopians. For Speth, capitalism is both the source of our success as a post-industrial economy and the obstacle to realizing environmental sustainability.

The book is divided into three parts. Part 1 addresses the global environmental threat and its economic drivers, including the growth imperative. Part 2 provides an analysis of failed efforts to remedy the problem within the framework of neoclassical economics. Part 3 explores the opportunities for the transformation of the current system of market capitalism into a “post-growth” economy.

Speth’s analysis is not caught up in a choice among historical “isms”; rather, he says, “I myself have no interest in socialism or economic planning or other paradigms of the past.” What he does propose is a reoriented market system that embodies the values of a post-growth society because “the planet cannot sustain capitalism as we know it.” His book explores the values and constraints that a transformed political economy must embody.

Every market system functions at two levels. First, there is the legal, ethical, and regulatory matrix on which all business and human activities are expected to operate. This matrix includes the system of government incentives, research investments, taxes, and environmental laws. Second, there are the free market functions (business transactions, consumer choices) that are overlaid on the matrix. To use a computer metaphor, the political economy has an operating system (the underlying matrix) and a system of programs (markets) whose deep structure conforms to the operating system. In Speth’s view, the underlying matrix on which the market system operates needs reconfiguration if the economy and ecology are to become harmonized. In his words, “Today’s dominant world view is simply too biased toward anthropocentrism, materialism, egocentrism, contempocentrism, reductionism, rationalism, and nationalism to sustain the changes needed.”

More than a quarter century ago, the environmental sociologist Allan Schnaiberg introduced the concept of the “treadmill of production.” According to Schnaiberg, industrial capitalism is driven by higher rates of material throughput, which eventually creates so much waste (additions) and extracts so much of the earth’s natural resources (withdrawals) that it overwhelms the biosphere. Both Schnaiberg and Speth reach the same conclusion. There is only so much that can be done to slow down the biophysical throughput by recycling, green consumerism, and green technologies.

Growth without blight

The resolution of the tension between a capitalism of unfettered growth and a planet with limited natural resources and assimilative capacity can be found in the composition of the gross domestic product, and Speth zeroes in on this economic construct. He doesn’t argue that we have to eliminate growth; rather, we have to change its character. We need a new, evolved form of capitalism that creates incentives for non-material growth, an economy that reverses the tendency to produce too many useless, redundant, and ecologically damaging consumer goods that effectively turn too many people into polluters.

If we were to draw two columns, the first listing what our current economy produces in abundance and a second indicating the scarcities we face, we might have a clue to what post-market capitalism would look like. Observes Speth, “Basically, the economic system does not work when it comes to protecting environmental resources, and the political system does not work when it comes to protecting the economic system.” His optimism is also evident. “As it has in the past, capitalism will evolve, and it may evolve into a new species altogether.” Speth reaches his conclusions judiciously, after navigating through a thorny intellectual landscape that includes all the major inspirational voices, the environmental sages of our age. His careful examination of a variety of solutions reveals how each is either insufficient or leads to a dead end.

Returning to Schnaiberg: in his 1980 book, The Environment: From Surplus to Scarcity, he concludes, “If the treadmill is to be slowed and reversed, the central social agency that will have to bring this about is the state, acting to rechannel production surplus in non-treadmill directions. But the state can only do so when there is both a sufficient crisis of faith in the treadmill, and sufficient political support for production apart from the treadmill.” Speth reaches a similar conclusion. In his formulation, “The transformation of contemporary capitalism requires far reaching and effective government action. How else can the market be made to work for the environment rather than against it? How else can corporate behavior be altered or programs built that meet real human and social needs?”

When the collapse of major financial institutions and banks occurred in the fall of 2008, the American people were advised by their president to consume to save the economy from cascading into free fall. But there are many ways for people and their government to consume that do not involve speeding up material throughput. We can consume education, provide support for nonprofit organizations that make our neighborhoods and regions better places to live, consume services for the elderly and for our own self-awareness through the arts, and invest in the research and infrastructure that enhances the quality of our lives. Speth applauds those growth scenarios leading to improving “non-material dimensions of fulfillment.” He also shows us that beyond a certain level of material wealth, increasing material consumption does not correlate with human well-being.

This is a book of hope and inspiration. It tells us that we are not locked by default into a particular form of market capitalism that is in its deepest structure unfit for a sustainable world. There are signs that our society is already pregnant with change. What Speth has done, like a good Zen master, is to open our minds to the possibilities of aspiring to human self-realization, societal transformation, and a livable planet without setting limits on economic growth.

Danger: Bell curve ahead

Real Education: Four Simple Truths for Bringing America’s Schools Back to Reality by Charles Murray. New York: Crown Forum, 2008.

Michael J. Feuer

When I was in my junior high-school play, one of the parents in the audience was overheard saying that there were only two things wrong with our performance: The curtain went up, and the seats faced the stage. Similarly, there are only two things wrong with Charles Murray’s latest book: The logic is flawed, and the evidence is thin. Were it not for his claim that his earlier work (Losing Ground, 1984) changed the way the nation thought about welfare, there would be little reason to dignify the current polemic with a review in a magazine of the National Academy of Sciences. But on the off chance that Murray’s ideas might influence the way the nation thinks about education, it is worth a response even at the risk of affording him undeserved attention.

Here is his basic argument, presented in the form of “four simple truths” and an equally simplistic proposal: Ability varies, half the children are below average, too many people go to college, our future depends on how we educate the academically gifted, and privatization will fix the schools. Space constraints prevent me from undoing the errors of omission and commission in each of these claims, so I’ll concentrate on one or two and ask readers to extrapolate from there.

A good place to start is with ability, a complex concept that Murray chooses to simplify by focusing on IQ, which for him captures most of what matters to academic achievement and, for that matter, success in life. IQ is certainly a component of academic ability and a predictor of future performance; on those facts the science is well established. But Murray seems unable or unwilling to acknowledge the preponderance of evidence showing that IQ is only one measure of ability, that it covers only a small subset of what we now understand intelligence to encompass, and that it is neither the sole nor the most important correlate of adult success. The observation (simple truth no. 2) that ability varies in the population is utterly banal, but Murray unabashedly uses it as a building block for his core argument: Let’s stop wasting our time with children at the low end of the ability continuum, concentrate our resources on those whose IQ scores suggest they can handle rigorous intellectual material, encourage the remaining 80 to 90% to become electricians and plumbers, and stop clogging our colleges and universities with people who don’t have (and will almost certainly never develop) what it takes to benefit from a liberal education.

Murray correctly anticipates that this radical proposal might invite criticism, so he launches an early preemptive strike: “As soon as I move beyond that simplest two-word expression, ability varies, controversy begins.” Well, not quite. Everyone knows that ability varies, and thanks to Garrison Keillor (and introductory statistics courses), almost everyone knows that half the children have to be below average. The controversy begins when Murray moves from that truism to this mischievous accusation: “Educators who proceed on the assumption that they can find some ability in which every child is above average [sic] are kidding themselves.” This statement warrants some unpacking.

First, where is the evidence that this is what educators assume? When teachers work with children to improve their reading and mathematical skills, that doesn’t signify an attempt to make every child “above average,” any more than when physicians strive to improve their patients’ health they are motivated by a naïve desire to make them all “above average” (whatever that might mean). Murray ridicules a completely defensible goal—improving the academic skills of children even at the lower end of the ability distribution—by intentionally confounding improvement with the end of variability. If we accept Murray’s strangely nihilistic logic that raising the average reading or math performance of low-achieving students is futile because some of them will still be below average, then yes, we should stop wasting our time and money. But the premise is flatly wrong—the goal of education is not to undo the basic laws of statistics and make every student above average—and therefore so is the conclusion.

Second, the smug allegation that educators are “kidding themselves” suggests that ability (as measured by IQ) is mainly a fixed trait and kids either have it or they don’t and that we know how to measure it accurately enough so that we can decide ex ante which kids are worth investing in. Here too Murray is on thin ice. Although he concedes that “environment plays a major role in the way that all of the abilities develop, [and] genes are not even close to being everything,” he seems either unaware of or unimpressed by a substantial and growing body of research on the plasticity of brain structure and cognitive functioning over the lifespan. Eight years ago, in its landmark report From Neurons to Neighborhoods, the National Research Council established that “gene-environment interactions of the earliest years set an important initial course for all of the adaptive variations that follow, [but] this early trajectory is by no means chiseled in stone” (italics added). With the advent of magnetic resonance imaging technologies and advanced computational methods, cognitive neuroscience now affords greater appreciation of the interactions between nature and nurture and of the ways in which exposure to education, training, and other stimuli can be associated with changes in the parts of the brain responsible for various cognitive and behavioral tasks.

And even if all that mattered was IQ (a claim now discredited by the scientific community), research evidence should give Murray something to be more hopeful about. As James Flynn has reported, based on his extensive analyses of IQ test data in 20 nations, “there is not a single exception to the finding of massive IQ gains over time.” Clearly something must be contributing to this trend, and though we don’t have enough evidence to support specific causal claims, rejecting the possibility that education makes a difference is a bit premature. As Flynn notes, “every one of the 20 nations evidencing IQ gains shows larger numbers of people spending longer periods of their life being schooled and examined on academic subject matter.” Flynn is a careful scientist and doesn’t allow that finding to obscure counterfactual evidence suggesting that some educational reforms might actually impede IQ gains. Nonetheless, he cautions against the overly deterministic view: “The fact that education cannot explain IQ gains as an international phenomenon does not, of course, disqualify it as a dominant cause at a certain place and time.” This nuance is glaringly absent in Murray’s simplified model of the brain, mind, and cognition.

What about Murray’s willingness to rely on IQ tests to figure out which kids have, for lack of a better metaphor, “the right stuff”? Here Murray challenges the overwhelming consensus in the measurement community concerning the limited validity and reliability of conventional intelligence measures. The bottom line in a vast and easily accessible literature is that almost no one in the testing profession, regardless of political predisposition, is as convinced as Murray of the utility of IQ scores for the kind of lockstep sorting and selection that he envisions.

Moving beyond IQ, is there evidence that investments in human capital can yield significant and sustainable gains in other valued outcomes? Murray sees a glass much less than half full, and based on his cursory summary of evaluations of programs such as Head Start, he sinks to yet another dismal bottom line: “Maybe we can move children from far below average intellectually to somewhat less below average. Nobody claims that any project anywhere has proved anything more than that.” Really? Apparently Murray does not know about, or chooses not to cite, the work by James Heckman and others, which supports a more hopeful conclusion. In a recent interview, Heckman (a Nobel laureate in economics) noted that “the [Perry Preschool] program had substantial effects on the earnings, employment, [involvement in] crime, and other social aspects of participants, compared to non-participants. But what we also find is that the main mechanism through which the program operates is non-cognitive skills” (italics added). This is an important point, as it argues for a broader definition of ability than what is encompassed by IQ, and it emphasizes again the potential value of school-based programs in the development of a wide range of skills that correlate with academic achievement and longer-term success. Heckman is hardly an “educational romantic” (Murray’s label for people who disagree with him about the futility of education) and cautions that “an under-funded, low-quality early childhood program can actually cause harm. But a high-quality program can do a great deal of good—especially one that is trying to cultivate the full person, trying to develop in every aspect the structure of cognition and non-cognitive skills.”

Evidence from other programs, such as Success for All, New York’s District 2, and the famous Tennessee class-size reduction experiment, along with data from the National Assessment of Educational Progress that indicate upward trends (especially in mathematics), clearly undermine Murray’s bleak forecast. One wonders what motivates someone supposedly trained as a social scientist to so willfully ignore large quantities of evidence and to declare categorically that the dream of uplifting children from impoverished intellectual and economic environments is just a lot of romantic nonsense. I leave that question to psychologists better equipped to address it. Meanwhile, I expect that Murray’s book will spur debate and cause people to focus on real research, for which I suppose we should be grateful. It’s the minimum we should demand for enduring Murray’s mean-spirited rhetoric and faulty science.


Michael J. Feuer is executive director of the National Research Council’s Division of Behavioral and Social Sciences and Education.


Follow the money

Science for Sale: The Perils, Rewards, and Delusions of Campus Capitalism by Daniel S. Greenberg. Chicago: The University of Chicago Press, 2007, 324 pp.

Melissa S. Anderson

On the one hand, it appears that the sky is falling yet again. Science is caught up in a competitive arms race for funding, universities are driven by internal and external forces to enter into questionable relationships with the for-profit sector, scientists’ integrity buckles under pressure and, in short, as Daniel S. Greenberg puts it, “much is amiss in the house of science.” On the other hand, despite this general mayhem, scientists as a group demonstrate altruism, work with the best intentions toward scientific progress, and maintain a collective sense of ethical responsibility. Such is the two-handed perspective that dominates Greenberg’s Science for Sale.

The book’s strength lies in Greenberg’s skill as an interviewer of scientists and interpreter of complex developments. The first seven chapters of the book address a range of troublesome issues in science, including financial strain, federal and corporate funding, varieties of academy/industry relations, consequences of the Bayh-Dole Act, academic capitalism and entrepreneurship, breaches of human-subjects and conflict-of-interest regulations, and the regulatory environment. Greenberg covers these through detailed stories and analyses drawn from over 200 interviews with researchers, administrators, regulators, and others, as well as press reports and relevant literature. The strongest of these chapters is a lively review of interactions among federal regulators, institutions, and academic associations concerning human-subjects protection during the past 10 years. Here is Greenberg at his best, revealing the drama and personalities behind federal shutdowns of research programs that violated human-subjects regulations.

Each of the next six chapters is based primarily on Greenberg’s interviews with a single informant. Here we meet Robert Holton, known for his involvement in the development of the drug Taxol, who expresses his disgust with university/industry collaboration, and William Wold, who does his best to explain the financial arrangements that support his work as a professor at Saint Louis University and his role as president of the biotechnology company VirRx. Lisa Bero from the University of California, San Francisco, talks about deliberations and decisions in conflict-of-interest committees; and Drummond Rennie, who has held editorial positions at the New England Journal of Medicine and the Journal of the American Medical Association, expresses frustration at the limited capacity of journals to catch and correct fraudulent research. Greenberg’s interview with Timothy Mulcahy, who was then at the University of Wisconsin–Madison, concerns technology transfer but actually reveals more about how universities protect their students and postdoctoral fellows in the context of university/industry relationships.

It comes as no surprise that Greenberg, a longtime science journalist, takes a balanced, skeptical stance toward the issues he covers. He gives detailed accounts of things gone wrong but never loses sight of a certain nobleness of character underlying science generally. He documents major cases of malfeasance in academy/industry relationships but also argues that “a lot of steam has gone out of the belief that the linkage of universities and business is fundamentally unholy.” He balances the need for stronger regulation against the justifiable resistance of scientific associations to inept or excessive regulatory control. He contrasts universities’ enthusiastic expectation of financial windfalls from entrepreneurship with the realities of unlikely jackpots. Universities and the National Institutes of Health have great wealth, but it is never enough to satisfy the demands of scientific potential, leading Greenberg to question the willingness of researchers and research institutions to make tough decisions about financial priorities. Scientists acknowledge a need to be attentive and responsive to public demands for accountability but repeatedly stumble on the gap between rhetoric and reality.

The interview transcripts show Greenberg’s skepticism in action, as he plays both devil’s and angel’s advocate, countering both optimism and pessimism. Such evenhandedness makes this volume a useful counterweight to those who think they have U.S. science figured out. Readers will come away with a realistic sense of the calamity-prone, headache-inducing complexity of the research enterprise. Not all of the issues Greenberg addresses are new; some have been covered in other volumes, and many will be familiar to those who have been reasonably attentive to the scientific press. Not all of the topics fall under the rubric of “science for sale”; some have to do with the organization of scientific work (the federal grant system, peer review, the postdoctoral system) or with broad issues of research integrity, quite apart from involvement with the for-profit sector or marketlike behavior on the part of academic institutions.

Greenberg is less interested in proposing courses of action than in reviewing the effectiveness of current “correctives” of scientific behavior. Institutional policies and regulations, journals’ editorial diligence, academic associations’ influence, oversight by federal agencies such as the Office of Research Integrity and the Office of Human Research Protections, and attention by the press and Congress all play important roles in ensuring the integrity of science, roles that Greenberg argues are imperfectly executed. Academic commercialism raises the stakes: “The sins arising from scientific commercialism pose a far more challenging problem: keeping science honest while potent forces push it hard to make money.”

There will always be some whose behavior is determinedly or heedlessly deviant, with or without commercial involvement. There will always be many who conduct their research according to the highest ethical standards. There will always be a middle group who occasionally yield to temptation or misbehave in ways that escape attention. The question is whether the temptations that commercial forces present to the middle group are offset by counterpressures. Skeptical of the adequacy of institutional correctives, Greenberg nonetheless concludes that “for protecting the integrity of science and reaping its benefits for society, wholesome developments now outweigh egregious failings—though not by a wide margin.”

To bolster these wholesome developments, Greenberg repeatedly endorses an oddly old-fashioned idea: the power of shame as a behavioral corrective in science. In one paragraph of the final chapter, he invokes the terms shame, embarrassment, pride, reputations, exposure, harmful publicity, prestige, humiliation, norms, judgments of colleagues and the public, vulnerability, and ethical sensitivity. The shame weapon, as he calls it, has the capacity to keep scientists and their institutions honest because of the public’s expectations for ethical conduct in research, the critical importance of reputation to a successful career, and the severe consequences of having one’s wrongdoing exposed. Greenberg argues, “The scientific profession exalts reputation. Among scientists and journal editors, the risks of being classed as a rogue would have a wondrously beneficial effect on attention to the rules.” He notes that shaming and public humiliation at universities where research projects were shut down by federal agencies seemed to yield “salutary effects.”

It is perhaps not surprising that Greenberg, recognizing the temptations and pressures facing what he sees as a basically righteous population, proposes a journalist’s sharpest weapon, exposure, as a promising solution. What is old-fashioned about the idea is its connection to professional self-regulation. During the past 20 years, self-regulation has proved an inadequate counterweight against the surge of federal and institutional regulations, rules, oversight, accountability, formal assurances, and training mandates. Greenberg does not, of course, recommend abandoning all of these in favor of exposure and humiliation, but he does seem to argue that shame could pick up much of the slack from these other mechanisms. This argument may not hold up. First, the very importance of a scientist’s reputation, which is critical to shame’s effectiveness, makes potential whistleblowers reluctant to come forward, lest they erroneously inflict damage merely by accusation. Second, both academic and government research institutions depend on the public’s high regard for continued funding, a fact that heightens their susceptibility to shame but also substantially increases their incentive to hide emergent embarrassments. Third, recent research published in Science suggests that a finding of misconduct does not necessarily derail a career in science.

Shame may also be limited as a deterrent by the considerable control that individuals and institutions have over the terms of disclosure of their own activities. Rationalization is not necessarily a weak defense when one’s public questioners have little understanding of the details of scientific research or funding. Institutions can and do defend questionable actions as justifiable or even prudent in the context of changing and challenging environments. Both academic and public attitudes adjust when universities convincingly interpret their arrangements with industry as normative in the current academic economy.

Greenberg brackets his analysis with consideration of universities’ obsession with growth, competitiveness, and prestige. These focal points are relentless pressures that shape the context of many of the problems he addresses in the rest of the book. As he points out, “Risk and disappointment are built into the financial system of science, feeding a mood of adversity among university administrators, research managers, scientists, and graduate students.” Such systemic fault lines should not be ignored; they give rise to competitive pressures and a sense of injustice that my colleagues and I have found to be strongly linked to scientific misconduct. I challenge Greenberg to turn his estimable analytic skill and spot-on questions toward an investigation of these perverse and fundamental problems in the organization and funding of science.


Melissa S. Anderson is professor of higher education and director of the Postsecondary Education Research Institute at the University of Minnesota.

Archives – Winter 2009

SIDNEY NAGEL, Two-Fluid Snap Off, Ink jet print, 52.5 × 34.25 inches, 1999.

Two-Fluid Snap Off

A drop falling from a faucet is a common example of a liquid fissioning into two or more pieces. The cascade of structure that is produced in this process is of uncommon beauty. As the drop falls, a long neck, connecting two masses of fluid, stretches out and then breaks. What is the shape of the drop at the instant of breaking apart?

National Academy of Sciences member Sidney Nagel is the Stein-Freiler Distinguished Service Professor in Physics at the University of Chicago. Nagel’s work has drawn attention to phenomena that scientists have regarded as outside the realm of physics, such as the science of drops, granular materials, and jamming. Using photographic techniques, as illustrated by this image in the National Academy of Sciences collection, Nagel and his team study such transitions to understand how these phenomena can be tamed and understood.

Restoring and Protecting Coastal Louisiana

The challenges facing the Gulf Coast reflect a national inability to come to grips with the need to deal with neglected infrastructure, both natural and built.

The sustainability of coastal Louisiana is critical to the nation. It is the location of a large part of the nation’s oil and gas industry and its largest port complex. It provides vital habitat for economically important fisheries and threatened and endangered species. Yet this region is under siege. The catastrophic effects of Hurricane Katrina in 2005 and recent storms in 2008 brought to the nation’s attention the fragility of the region’s hurricane defenses and a loss of wetlands and ecosystems that has continued for more than a century with little or no abatement. Slowly, the flood protection system in New Orleans is being restored; even more slowly, attention is shifting to restoring the coastal deltaic system. But support for these two linked efforts, protection and restoration, remains weak: funding is scarce, and there is no prioritization system at the federal level for allocating funds to critical water resources infrastructure. The challenges facing the Gulf Coast reflect a national inability to come to grips with the need to deal with neglected infrastructure, both natural and built, and the realization that both provide security to coastal communities. It will not be possible to protect and restore coastal Louisiana without significant changes in the way federal and state governments deal with these issues.

According to the American Society of Civil Engineers (ASCE), in its frequent report cards on the status of the nation’s infrastructure, the United States is not maintaining and upgrading its infrastructure and is especially neglecting its natural and built water resources infrastructure. The ASCE indicates that the cost of all needed infrastructure work in the United States exceeds $1.5 trillion. Funding for water and wastewater treatment facilities is falling behind at a rate of more than $20 billion each year. Funding for flood-risk management, navigation, hydropower, and ecosystem restoration (wetland and aquatic), not including the short-term levee repair efforts in New Orleans, also continues to decline. With so many clear and pressing needs, it is vital that the United States devise more rational approaches to the funding and prioritization of infrastructure projects, including critical water resource projects such as those in coastal Louisiana.

The 2005 disaster in New Orleans awakened the nation to the serious vulnerabilities in flood protection that exist across the country and to the fact that the nation lacks a realistic assessment of the infrastructure, both built and natural, it takes to reduce these vulnerabilities. The failures of levees and other infrastructure that have occurred since Katrina, including those that occurred during the Midwest floods of 2008, have more clearly defined this issue as national in scope. At the same time, the need for national priorities in ecosystem restoration has lacked attention. The loss of coastal wetlands along the Gulf had been well known for decades, and environmental groups had been campaigning for action to restore this deltaic coast. Resources were going to projects in other parts of the country such as the $7.8 billion federal initiative to restore the Florida Everglades and the joint federal/state efforts to reduce pollution in the Chesapeake Bay. Other regions also deserve attention. The need for ecosystem restoration has been recognized in the Missouri River, the upper Mississippi River, the California Bay Delta, the Great Lakes, and numerous smaller areas across the country. There is an urgent need to assess investments in natural and built environments to reduce vulnerabilities to increased flooding risks.

Coastal Louisiana sits at the end of a natural funnel that drains 41% of the coterminous United States and parts of two provinces of Canada. This watershed, the Mississippi River basin, delivers water to the Gulf of Mexico through the mouths of the Mississippi and Atchafalaya Rivers. Extending more than 11,400 square miles, this coastal area was built during the past 6,000 years by a series of deltaic lobes formed as the Mississippi River switched east and west between Lafayette and Slidell, creating an extensive system of distributaries and diverse wetland landscapes as freshwater and silt mixed with the coastal processes of the Gulf of Mexico. Periodic river flooding through breaches in natural levee ridges (crevasses) along the numerous distributaries across the deltaic landscape out to the barrier islands limited saltwater intrusion and added sediments to coastal basins. These river and coastal processes built and sustained an extensive wetland ecosystem, the eighth-largest delta in the world. In addition to providing nurseries for fish and other marine life and habitat for one of the largest bird migration routes in North America, these wetlands serve as green infrastructure, providing natural buffers that reduce flood risks to the vast energy production and port facilities of the Gulf area as well as human settlements inland from the coast. Early settlers in New Orleans were more concerned by flooding from the Mississippi than by the threat of Gulf storms, which were buffered by the extensive coastal forests that stood between the city and the Gulf of Mexico.

Long before Katrina, coastal wetlands were disappearing because of considerable human influence and disruption of the natural processes of a deltaic coast. Levees were built along the banks of the Mississippi to keep the river from overflowing into floodplains and coastal environments, protecting lands that had been converted to agriculture, industry, and human settlement. The sediment that once breached natural levees and nourished the wetlands was instead channeled out into the Gulf of Mexico, in essence starving the delta and causing it to recede rather than grow. The effect of levees was exacerbated by the construction of channels and pipeline corridors that crisscrossed the wetland landscape to provide access for extracting much-needed domestic oil and gas resources and to create reliable navigation channels that could be connected to Mississippi River commerce. During the 1960s and 1970s, coastal land, mostly wetlands, disappeared at the rate of 39 square miles per year.

The potential conflict between human activities and the processes necessary for a sustainable deltaic coast was identified after the 1927 flood. But pressure for protection and economic development overrode the call for more prudent management of river resources that would integrate both protection and restoration policies. By the mid-1980s, coastal scientists had brought the loss of wetlands and the degradation of the Mississippi River delta to the public’s attention. Very little was done to address the enormous problem because the environmental consequences were not deemed sufficient to justify the expense of restoration and mitigation. In 1992, the Mississippi River Commission, recognizing the problem of increased salinity that threatened deltaic habitats along the coast, opened a diversion structure through a Mississippi River levee at Caernarvon, south of New Orleans. This diversion structure simulates a levee breach by allowing Mississippi River water to flow by gravity (flood gates are opened during elevated river levels) into the wetlands behind the levees during certain periods of the year. This became the first significant step in what may become a series of such structures to the south of New Orleans.

New Orleans and the surrounding region have been protected in various ways from potential Mississippi River floods since the city was settled in 1717. After the disastrous 1927 flood, the Army Corps of Engineers instituted a massive river levee-rebuilding program that was accompanied by floodways and channel modification. This river-protection system has performed as expected since that time.

Coastal protection became an additional responsibility of the Corps in 1965, when Hurricane Betsy flooded parts of New Orleans. Until the arrival of Katrina, federal and local efforts had focused on providing protection against a storm defined by the National Oceanic and Atmospheric Administration (NOAA) as the standard project hurricane. Shortly after construction began in earnest, NOAA increased the estimated size of the standard project hurricane. In contrast to the river-protection system, funding for the coastal-protection system came through individual projects in dribs and drabs, limiting the ability of the Corps to change its design to accommodate the new, larger target hurricane. Instead, the Corps decided to move ahead and first complete all the work at the original level of protection. But as individual construction projects took place, ever-present subsidence was diminishing the level of protection provided by the newly constructed levees. When Katrina hit, the degree of completion of the major components of the protection system varied from 65 to 98% of the original design standards, not taking into account datum errors, subsidence, and sea-level rise that had taken place since the original design. The failure during Katrina of several components of the protection system, together with the massive size of the hurricane itself and the loss of coastal habitat, resulted in a loss of more than 1,400 lives, the devastation of major housing districts within the city, and other damage throughout the region.

Finding solutions

Postmortems on the impact of the hurricane flooding recognized the longstanding relationship between extensive coastal wetlands and community protection, resulting in a great deal of debate about who or what was to blame for failing to implement integrated protection and restoration. Now, however, it is more important that we devote our attention to finding solutions that will leave this important region with reduced risks from hurricanes, a navigation system that will support the substantial foreign trade through the Port of New Orleans, support for the area as a viable energy producer for the nation, and a rich and vibrant coastal wetland ecosystem.

Although there are now cooperative efforts to deal with the problems of coastal Louisiana, the picture is far from rosy. Two parallel efforts, one led by the state of Louisiana and the other by the Corps, have been under way since Katrina to determine the appropriate combination of structural measures (levees, flood walls, gates, and so forth), non-structural measures (for example, building codes and evacuation planning), and wetland restoration needed to protect urban areas and distributed assets across the coastal landscape. The state plan has been approved by the Louisiana legislature, but the Corps plan has yet to be completed and submitted to Congress. Both plans call for restoration of the wetlands through diversions of the Mississippi River, and both would rely on adaptive management of the process to address the substantial design uncertainties in such a large dynamic deltaic system. A coastal ecosystem restoration program, much like that for the Everglades, was authorized by Congress in the Water Resources Development Act of 2007. Only a few preliminary projects were authorized, however, and funding has not yet been provided. This authorization establishes a structure to oversee this work but does not identify methods to be used to determine priorities among the various components of the overall program, nor does it provide an effective means for competent project authorization and funding. The state has recently announced plans to spend nearly $1.2 billion over the next three years on protection and restoration projects that are consistent with the state master plan. Although this is an impressive investment, it is an order of magnitude less than even some of the conservative estimates of system-level project costs for both coastal ecosystem restoration and storm risk reduction.

The specter of climate change is adding to the water and coastal management challenges. Climate change will bring about changes in weather patterns and the potential for increased flooding, drought, and sea-level rise. Existing projects will have to be modified to accomplish the purposes for which they were originally designed, and additional attention will be required to deal with the already significant strain on recovering ecosystems. The vulnerabilities of coastal landscapes to projected environmental changes depend on the capacity of ecosystems to adapt. The present rate of wetland loss in this region suggests that these adaptive mechanisms are insufficient to deal with present rates of sea-level rise and subsidence.

Those working on coastal Louisiana restoration and protection have attempted to deal with the program on a comprehensive (watershed) basis, recognizing that the problems of southern Louisiana are not solely those of that state. The sediment required to replenish the wetlands will come from lands scattered throughout the basin and will be affected by the activities in the basin states. Much of the original sediment load of the Mississippi is trapped behind major dams on the Missouri River system. A major dead zone (an area where marine life is stressed because of lack of oxygen) now exists in the Gulf of Mexico along Louisiana and parts of Texas as a result of excessive nutrients traveling down the Mississippi from the farmland of the Midwest. The flux of nitrate has increased threefold since the 1960s. Although sediments are critical to rebuilding the wetlands of the Mississippi River Delta, additional nutrients flowing through river diversion structures could potentially impair inland waters of the state. Two strategies have been suggested to limit the potential water quality problems along coastal Louisiana. The upstream strategy is a significant reduction in the application of chemicals to the farmland of the Midwest, along with restoring wetland buffer strips on the edges of fields to reduce nutrient loading in river waters. Downstream in the coastal delta, wetland restoration is considered another mechanism for reducing the nutrients reaching coastal waters. Both strategies face uncertainties about their capacity to reduce nutrients and about the political will to implement them. As a result, diverting river water and sediment for wetland restoration may be constrained by the risk that the added nutrients will worsen hypoxia in coastal waters.

Funding limitations

Even though the nation’s largest port and energy complex, a metropolitan area of nearly a million residents, and coastal wetlands of immense value are at risk, funds to support the restoration and protection of coastal Louisiana have been slow in coming. The Corps has been provided with about $8 billion to restore the levee system around New Orleans to the level of a 100-year flood, an event with a 1% chance of being equaled or exceeded in any given year. This level of protection is below that of a 400-year storm such as Katrina, but it will relieve New Orleans residents of the requirement to buy flood insurance against a potential hurricane. Congress has directed the Corps to study and report on the costs of providing New Orleans with protection against a category 5 hurricane. Early estimates indicate that the costs of such a project would exceed $10 billion. The cost of coastal restoration has been estimated at as much as $20 billion. Even in these days of mega-bailouts, those are big numbers.
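
Return-period language can understate the risk that these protection levels describe. The short sketch below assumes independent years and annual exceedance probabilities of 1 in 100 and 1 in 400; these are illustrative assumptions for the reader, not figures from the Corps’ analyses.

# A minimal sketch of return-period arithmetic (illustrative assumptions only).
def chance_of_at_least_one(return_period_years, horizon_years):
    """Probability of at least one exceedance over the horizon, assuming
    independent years with annual probability 1 / return_period_years."""
    annual_probability = 1.0 / return_period_years
    return 1.0 - (1.0 - annual_probability) ** horizon_years

for return_period in (100, 400):
    for horizon in (30, 50):
        p = chance_of_at_least_one(return_period, horizon)
        print(f"{return_period}-year event over {horizon} years: {p:.0%}")

# Prints roughly 26% and 40% for the 100-year event, and about 7% and 12%
# for the 400-year event, over 30- and 50-year horizons respectively.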

The ability to move ahead with the protection and restoration of coastal Louisiana will require substantial funding. The Bush administration’s budgets have kept funding for the water sector flat except for periods when disasters required immediate attention. In constant-dollar terms, the funds available for these projects are going down each year. In the tight funding environment of recent years, budget decisions have been driven largely by the historical record of funding, not an evaluation of the nation’s risks and needs. The current fiscal crisis will only increase the pressure on the limited dollars that are available.

The largest source of funds for dealing with major water projects is found in the budget of the Corps. But the restoration and protection of coastal Louisiana is but one of many flood and hurricane protection, navigation, ecosystem restoration, and other projects that demand Corps and related federal water dollars. Major flood problems in the central valley of California, the reconstruction of levees in the Midwest, and the repair and upgrade of other structures identified in recent levee system inspections provide competition for New Orleans and coastal Louisiana. The aggregate projected costs of restoration projects in the Everglades (now $10.9 billion), upper Mississippi, Chesapeake Bay, Great Lakes, and California Bay Delta exceed $50 billion. Costs for other programs, such as the Missouri River basin, remain to be calculated.

Unfortunately, priority setting is tied to a rudderless system for allocating federal funds and assessing national needs. It is difficult to justify a national priority when objectives at the national level are not clear. Developing a needs assessment is dependent on having national policies that appropriately define national goals for water use. Whom do we protect from flooding? What infrastructure is at risk? What losses and risks will have national consequences? What ecosystems need to be restored or are the most valuable to the economic, ecological, and social well-being of the nation? How important are ports to the economy of the country? Recent National Research Council studies of the Corps’ planning processes and projects have indicated that the Corps is faced with conflicting laws and regulations that make prioritization and description of needs difficult to achieve.

Within the federal government, requests for funds are initiated by the departments and are based on guidance from the Office of Management and Budget, which establishes prioritization criteria for items to be included in the president’s budget. But these priorities are only tangentially related to actual needs, being driven by economic cost/benefit criteria rather than by an assessment of national needs. In making decisions on the budget, Congress, as was noted at a recent hearing on watershed planning, tends to deal with the authorizations and appropriations for specific projects with little consideration of the relationship of the projects to the greater needs of the nation or even the watershed in which the projects are to be built. With some exceptions, Congress supports projects on the basis of the political weight they carry.

Prioritizing funding on a watershed basis would not be new to the United States. In 1927, Congress directed the Corps to conduct studies of all U.S. river basins in order to plan for integrated development of the water resources of these basins. These “308 reports” (named for the section of the law that authorized the studies) became the basis for the development of the Tennessee Valley and Columbia River basins, among many others. In cases in which such basin/watershed planning has taken place in a collaborative manner, the results have been outstanding. The Delaware River Basin Commission brings together the states of New York, Pennsylvania, and New Jersey for cooperative management of that important river basin.

In recent years, members of the House and Senate have tried to establish a needs-based approach for allocating funds, but the efforts failed because too few members were interested in giving up the benefits of selecting projects on their political merit. During a 2007 debate on an amendment to a bill to create a bipartisan water resources commission to establish priorities for water project funding, Sen. John McCain (R-AZ) noted that, “We can best ensure safety of our nation’s water resources system by establishing a process that helps us to dedicate funding to the most critical projects. The current system allows more of the same, where members demand projects that are in the members’ interests, but not always in the public’s.” The amendment went nowhere.

Looking for other approaches

Is there a substitute for federal money to support water resource projects? Because of the massive costs of major restoration efforts, doing without Congress doesn’t seem to be a reasonable approach. States are already participating in the funding of major projects. Louisiana has announced its intention to allocate substantial funding to coastal restoration and protection activities (more than $1 billion in the next three years). California recently passed a $5 billion bond issue to repair levees. With federal appropriations slow in coming, Florida has contributed more funding for restoring the Everglades and acquiring critical lands. But states are also in a funding squeeze and cannot provide all that is needed to support projects that are in the national interest.

Several alternative ways of financing infrastructure projects have been proposed and should be seriously considered. Former senator Warren Rudman and New York investment banker Felix Rohatyn have proposed the establishment of a National Investment Corporation (NIC) with the authority to issue bonds with maturities of up to 50 years to finance infrastructure projects. The bonds would be guaranteed by the federal government and, as long-lived instruments, would align the financing of infrastructure investments with the benefits they create. Bond repayment would allow the NIC to be self-financing. In a similar approach begun after Katrina, a working group commissioned by the Corps proposed the creation of a congressionally chartered coastal investment corporation to support needed development projects. In 2007, Louisiana established the Coastal Protection and Restoration Financing Corporation that “will be responsible for selling bonds based on the expected revenue from future oil and gas royalty payments” and that will allow funding of projects over the next 10 years “instead of having to wait until a steady revenue stream arrives from the federal government in 2017.” In the face of the current fiscal crisis and the need for a long-term approach, the NIC offers the most realistic path to a sustainable funding stream.
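
To see why maturity matters, consider the level annual debt service on a hypothetical bond issue. The principal, interest rate, and maturities in the sketch below are assumptions chosen for illustration; they are not drawn from the Rudman-Rohatyn proposal or from Louisiana’s financing plans.

# Level annual payment that retires a bond issue over its maturity (standard annuity formula).
def annual_debt_service(principal_billions, annual_rate, years):
    if annual_rate == 0:
        return principal_billions / years
    return principal_billions * annual_rate / (1.0 - (1.0 + annual_rate) ** -years)

principal = 20.0   # hypothetical coastal restoration bond issue, in billions of dollars
rate = 0.05        # assumed long-term borrowing cost
for maturity in (10, 30, 50):
    payment = annual_debt_service(principal, rate, maturity)
    print(f"{maturity}-year maturity: about ${payment:.2f} billion per year")

# Prints roughly $2.59, $1.30, and $1.10 billion per year, showing how longer
# maturities spread the cost over the decades in which the benefits accrue.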

Another challenge is coordinating federal funding and establishing regional priorities. In the past, the United States successfully established processes to develop priorities and funding for water issues of national significance. In 1879, Congress established the Mississippi River Commission with the mission of providing a navigable Mississippi and reducing the ravages of frequent floods. After the 1927 flood, Congress passed the Flood Control Act of 1928, which created the comprehensive Mississippi River and Tributaries (MR&T) project. This permitted the commission to deal with the lower valley as a whole: one mission, one entity, working cooperatively with all interested parties to integrate the resources needed to meet the challenge. Although the operations and size of government have changed since 1879 and 1928, the need to deal with work in the lower Mississippi Valley in a comprehensive manner remains. The continuous funding of work on the lower Mississippi River for nearly 80 years and the comprehensiveness of the effort show the utility of developing a separate federal project, similar to the MR&T, for restoring and protecting coastal Louisiana.

Protection and restoration of coastal Louisiana should be a major priority for the United States. The nation cannot live without its water resources and deltaic coast. It cannot continue to watch coastal Louisiana disappear. Sooner or later, it will have to address the problem. The longer we wait, the more difficult the problem will become, and the more money the eventual solution will cost.

Recommended reading

J.W. Day Jr., D.F. Boesch, E.J. Clairain, G.P. Kemp, S.B. Laska, W.J. Mitsch, K. Orth, H. Mashriqui, D.R. Reed, L. Shabman, C.A. Simenstad, B.J. Streever, R.R. Twilley, C.C. Watson, J.T. Wells, and D.F. Whigham, “Restoration of the Mississippi Delta: Lessons from Hurricanes Katrina and Rita,” Science 315 (2007): 1679–1684.

Everett Ehrlich, Public Works, Public Wealth: New Directions for America’s Infrastructure (Washington, DC: Center for Strategic and International Studies, 2005).

Committee on Environment and Natural Resources, Integrated Assessment of Hypoxia in the Northern Gulf of Mexico (Washington, DC: National Science and Technology Council, 2000), (available at http://oceanservice.noaa.gov/products/pubs_hypox.html#Intro).

National Research Council, U.S. Army Corps of Engineers Water Resources Planning: A New Opportunity for Service (Washington, DC: National Academies Press, 2004).

National Research Council, Drawing Louisiana’s New Map: Addressing Land Loss in Coastal Louisiana (Washington, DC: National Academies Press, 2005).

National Research Council, Regional Cooperation for Water Quality Improvement in Southwestern Pennsylvania (Washington, DC: National Academy Press, 2005).

Felix G. Rohatyn and Warren Rudman, “It’s Time to Rebuild America. A Plan for Spending More—and Wisely—on Our Decaying Infrastructure,” Washington Post, December 13, 2005, p. A27.

Working Group for Post-Hurricane Planning for the Louisiana Coast, A New Framework for Planning the Future of Coastal Louisiana after the Hurricanes of 2005 (Cambridge, MD: University of Maryland Center for Environmental Science, 2006).


Gerald E. Galloway is the Glenn L. Martin Professor of Engineering at the University of Maryland, a former chief of the U.S. Army Corps of Engineers, and a former member of the Mississippi River Commission. He was recently appointed to the Louisiana Governor’s Advisory Commission on Coastal Protection, Restoration and Conservation. Donald F. Boesch is professor of marine science and president of the University of Maryland Center for Environmental Science. He serves as chair of the Science Board for the Louisiana Coastal Area Ecosystem Restoration Program. Robert R. Twilley is professor of oceanography and coastal sciences and associate vice chancellor of the Coastal Sustainability Agenda at Louisiana State University, Baton Rouge.

Forum – Winter 2009

Budget doubling defended

Richard Freeman and John Van Reenen (“Be Careful What You Wish For: A Cautionary Tale about Budget Doubling,” Issues, Fall 2008) provided a thought-provoking analysis of the budget doubling for the National Institutes of Health (NIH). They raised an important point that we must view future research funding increases in terms of their impact on increasing educational opportunities and financial support for young researchers. However, the NIH doubling was needed because chronic underinvestment in scientific R&D created a situation in which many of our federal scientific agencies were in need of significant short-term increases in funding. We must learn to avoid the complacency that led to the current funding deficiencies. Unfortunately, the flat funding of NIH since the doubling ended has effectively caused its budget to decline by 13% due to inflation, creating a whipsaw effect after a decade of growth. Research policy is long-term, and we must commit to a sustainable funding model for federal science agencies.
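
The arithmetic behind a figure like that 13% decline is straightforward. The sketch below assumes a flat nominal budget and roughly 2.8% annual inflation in research costs over the five years since the doubling ended; both numbers are illustrative assumptions, not figures from the letter.

# Purchasing power of a flat nominal budget after several years of inflation.
def real_value(nominal, annual_inflation, years):
    return nominal / (1.0 + annual_inflation) ** years

nominal_budget = 29.0   # hypothetical flat agency budget, in billions of dollars
inflation = 0.028       # assumed annual inflation in research costs
years = 5               # roughly the period since the doubling ended
decline = 1.0 - real_value(nominal_budget, inflation, years) / nominal_budget
print(f"Real decline after {years} years: {decline:.0%}")   # about 13%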

A dramatic increase in the NIH budget was necessary, but focusing on a single agency ignored the interconnectedness of the scientific endeavor. Much of the innovative instrumentation, methodology, and workforce needed for advancing biomedical research comes through programs funded by the National Science Foundation and other agencies. For instance, the principles underlying magnetic resonance imaging (MRI) were first discovered by physicists and further refined by physicists and chemists; MRI is now a fundamental imaging tool for medical care. From MRI to laser surgery, biomedical advances often rely on knowledge and tools generated by other scientific fields. The recognition of these interconnections drove the passage of the America COMPETES Act, a law that I was proud to support, which authorized balancing the national research portfolio by improving funding for physical sciences and engineering.

The current economic situation has constricted our financial resources. However, a sustained investment in science and our scientific workforce will contribute to our nation’s long-term economic growth and ensure a stronger economy in the future.

REPRESENTATIVE RUSH HOLT

Democrat of New Jersey

www.holt.house.gov


Innovating for innovation

In “Creating a National Innovation Foundation” (Issues, Fall 2008), Robert Atkinson and Howard Wial make a compelling case for public policies that address how research discoveries become innovations, creating economic activity, jobs, and new capabilities. This line of discussion is too often ignored when the case for public support for science is made. The transition from research to innovation to commercial success is far from automatic. That is why this process is called crossing a “valley of death.”

The United States has indeed been losing ground in the race for global leadership in high-tech innovation. The question is not whether we need a national innovation policy but how it should be constructed. The authors propose a new federal entity called the National Innovation Foundation (NIF) that would include the National Institute of Standards and Technology’s (NIST’s) Technology Innovation Program, the Department of Labor’s WIRED program, and perhaps the National Science Foundation’s (NSF’s) innovation programs. This is a substantial challenge for three reasons. First, remedying deficiencies in government capability and priority by creating yet another box on the government organization chart takes years, is fraught with failures, and always arouses strong opposition in Congress. Second, as the authors point out, conservatives in Congress are on record as opposing even modest forms of the NIF idea; witness their determination (successful in 2007) to zero out the budget of NIST’s Advanced Technology Program. Third, removing existing programs from their current homes and assembling them into a new agency is more often a way to kill their effectiveness than to enhance it; witness the problems of the Department of Homeland Security.

Even if the skepticism of conservatives about a government role in private markets were not a problem, the scope of issues that any unified technology policy agency must encompass is too broad to be brought together in one place. For example, it must embrace not only technology issues but tax, trade, intellectual property, securities regulation, and antitrust policies as well.

How then might a much stronger and better coordinated federal focus on innovation be established in a new administration more convinced of the need for it?

Atkinson and Wial suggest several alternative forms for their NIF. It could be part of the Department of Commerce, a government-related nonprofit organization, an independent agency (such as NSF), or an arm of the Executive Office of the President. Because NIF would be an operating agency, it cannot be in the Executive Office of the President.

I would suggest a more modest approach, built on a greatly strengthened Technology Administration in the Department of Commerce. (It, too, was recently abolished by the Bush administration.) If it were restored (no new legislation is needed) and a secretary of commerce qualified to lead the restoration of an innovation-intensive economy were appointed, NIST could remain a core and important capability for the NIF function. Both the technical and economic dimensions of a NIF are well within the existing authority of Commerce. The Office of Science and Technology Policy should be responsible for helping the president to integrate all the key functions of an innovation policy, including the major technology departments and agencies. Thus, although this arrangement would not be as glamorous as a new agency, it could be created quickly, with only a marginal need for amendments to current legislation.

LEWIS M. BRANSCOMB

Adjunct Professor

School of International Relations and Pacific Studies

University of California, San Diego


Robert Atkinson and Howard Wial are worried about the current and future state of U.S. innovation. They point to the gradual decline in America’s standing in everything from R&D funding to the publishing of scientific papers. People in the innovation community in the United States, from the research universities to the companies bringing products to market, share their concern. So do I.

Atkinson and Wial do more than worry. They propose a timely set of policies and a new National Innovation Foundation (NIF). They start by praising and urging full funding for the America COMPETES Act but also call for a more expansive focus on the entire innovation system, from R&D to the introduction of new commercial products, processes, and services.

Their NIF would be designed to take policy several steps farther by bringing coherence to separate innovation-related federal programs, providing support for state-based and regional initiatives, and strengthening efforts to diffuse as well as develop ideas. Atkinson and Wial urge specific funding to promote collaboration among firms, research institutions, and universities.

They provide enough detail to give the reader a good sense of how the NIF could function, but remain agnostic about where it might best fit in a new administration.

Their proposal opens several doors for action:

Fully fund the America COMPETES Act. Congress should fully fund the act when it turns again to consideration of the fiscal year 2009 budget.

Think systems. The country needs to think about the innovation system as a whole and develop an innovation strategy to build on it. I particularly liked their call for an annual Innovation Report similar to the annual Economic Report of the President. In that same spirit, I would call on the president to make an Annual State of American Innovation address and require a quadrennial articulation (just as the military does) of the nation’s innovation strategy.

Increase support for current innovation programs. We should broaden and increase funding for the National Institute of Standards and Technology’s Manufacturing Extension Partnership, the Technology Innovation Program, and similar innovation-related programs. Whether or not the new administration establishes a new institution, Congress should establish programs to support state and regional innovation initiatives as well as collaborative ventures.

Start institutional change. Make one of the associate directors in the Office of Science and Technology Policy responsible for innovation policy. Restore and update the Office of Technology Assessment in Congress with a specific mandate to consider the innovation system.

Looking ahead to 2009, as we respond to the financial crisis and expected recession, we need to think about the impact of new policies on our innovation system—the long-term driver of higher wages, the foundation for economic strength, and a key element in national security. Too often, innovation, and the national system that supports it, is not even an afterthought, let alone a forethought.

KENT HUGHES

Woodrow Wilson Center

Washington, DC

Kent Hughes is the author of Building the Next American Century: The Past and Future of American Economic Competitiveness (Wilson Center Press, 2005).


Better environmental treaties

Lawrence Susskind has identified some key problems with the very structure of environmental treaty formulation (“Strengthening the Global Environmental Treaty System,” Issues, Fall 2008). Some of the remedies he proposes, however, are already being tried, with mixed results. For example, one of the solutions presented is the involvement of civil society groups as part of the treaty-making process. This is already happening with many environmental agreements, because civil society groups play an essential role at most Conferences of the Parties, where treaty implementation is worked out.

Secretariats of environmental agreements such as the Ramsar Convention on Wetland Protection are housed at the International Union for the Conservation of Nature, which boasts over 700 national nongovernmental organizations as its members. Hence, even if voting rights remain with nation-states, civil society groups have considerable influence through such organizational channels. What often happens is that many of these civil society groups are co-opted by the protracted treaty process as well and are thus not as effective as one may expect them to be.

There are also two seemingly contradictory trends in the politics of international treaties. On the one hand, nationalism is gaining strength along linguistic and religio-cultural divides, as exemplified by the emergence of new states within the past few years such as East Timor and Kosovo. On the other hand, the legitimacy of national jurisdiction is gently being eroded by institutions such as the World Trade Organization and the International Criminal Court.

In this regard, Susskind’s critique is most valid regarding the asymmetry of action caused by powerful recalcitrant states and the delinking of environmental issues from security imperatives. Within the United Nations (UN) system, the only institution with a clear mandate for international regulatory action is the Security Council. However, seldom are environmental issues brought to its attention as a cause for intervention. The inertia within the UN system to reform the structure of the Security Council filters down to all levels of international treaty-making.

The economic power of certain nation-states such as India and Brazil is beginning to provide an antidote to the hegemony of the old guard in the Security Council, as exemplified by the recent failure of the Doha round of trade negotiations. Yet environmental negotiations are still largely decoupled from these more powerful international negotiation forums and are thus not affected by this new locus of influence.

As Susskind notes, the role of science in international treaties can often be diluted by the need to have global representation, as exemplified by the Intergovernmental Panel on Climate Change. However, such pluralism is essential despite its drawback: purely meritocratic research output is diminished in order to gain acceptance across all member states.

Some efforts to reconcile these contradictory trends in environmental policymaking are beginning to emerge, and in them Susskind’s concerns may already have been adequately addressed. The European Union’s environmental laws exemplify a process by which national sovereignty can be recognized at a fundamental level while acknowledging ecological salience across states with large economic inequalities. Ultimately, if we are to have an efficacious environmental treaty system, a similar approach with clear targets and penalties for noncompliance will be needed to ensure that policy responses can keep up with ecological impact.

SALEEM H. ALI

Associate Professor of Environmental Policy and Planning

Rubenstein School of Environment and Natural Resources

University of Vermont

Burlington, Vermont


International environmental law has been greatly expanded during the past 40 years. Although some success can be noted with respect to, for example, phasing out ozone-depleting substances, many environmental problems remain unabated. Lawrence Susskind correctly notes that the current system “is not working very well.” Based on his assessment of the system’s weaknesses, Susskind offers several practical suggestions for improving the effectiveness of international environmental governance.

Susskind’s suggestions focus on specific ways in which the environmental treaty-making system can be improved without requiring major changes to basic structures of international law and cooperation. Some may criticize this approach as being too modest given the severity of the environmental challenges we face, but it has the advantage of being more realistic in the short to medium term than any call for fundamentally altering the roles and responsibilities of international organizations and states in international lawmaking and implementation of environmental treaties.

Of Susskind’s many constructive proposals, a few stand out as being both important and relatively achievable. These include setting more explicit targets and timetables for mitigation, establishing more comprehensive and authoritative mechanisms for monitoring and enforcement, and developing new structures for formulating scientific advice. None of these issues are unproblematic—if they were easy, they would already have been addressed—but discussions around several of them are advancing under multiple environmental treaties (albeit painstakingly slowly).

At the heart of many difficult discussions lies the fact that states remain reluctant to surrender sovereignty and decisionmaking rights under environmental treaties. This draws attention to the importance of norms and principles guiding collective behavior. Susskind touches on this in his discussion of the United States’ rejection of the principle of common but differentiated responsibilities, which is intended to help industrialized and developing countries move forward on specific issues while recognizing that there are fundamental differences between them in their ability to lead and act.

Political science and negotiations analysis tell us that a shared understanding of the cause, scope, and severity of a problem is critical for successful communal problem-solving involving the redistribution of costs and benefits. It is unlikely that global environmental governance will be significantly improved until there is much greater acceptance among leading industrialized and developing countries about the character and drivers of environmental problems and about shared norms and principles for how they are best addressed (including the generation of funds for mitigation and adaptation).

In other words, many of the practical suggestions for improving global governance put forward by Susskind should be debated and pursued across issue areas, because they would help us address specific environmental problems more effectively. At the same time, the magnitude of collective change ultimately needed to tackle the deepening environmental crisis is unlikely to come about without more widespread global acceptance of common norms and principles guiding political and economic action and policymaking.

HENRIK SELIN

Assistant Professor

Department of International Relations

Boston University

Boston, Massachusetts


Managing military reform

In “Restructuring the Military” (Issues, Fall 2008), Lawrence J. Korb and Max A. Bergmann call the Pentagon “the world’s largest bureaucracy,” implying that it can be managed much like other very large organizations. They then go on to discuss policies that they believe should be put in place, skirting the question of how such policies would be received in the many semiautonomous centers of power within the Department of Defense (DOD), which in reality is more a loose confederation of tribes than a bureaucracy. “Bureaucracy,” after all, signifies hierarchy. Well-defined hierarchies do exist within the DOD, but they are found within the four services and among the civilian employees who answer ultimately to the Secretary of Defense. Otherwise, lines of authority are ambiguous and contested, more so than in any other part of our famously fragmented government. To a considerable extent, policy in the DOD is what happens, not what is supposed to happen.

Each of the services has its own vision of warfighting. Before World War II, this made little difference. Since then it has, and the stark contrast between chains of command within the services and the tangled arrangements for coordination among them affects almost everything the DOD attempts. Civilians find it hard simply to discern, much less unravel, the conflicts within and among the services from which decisions and priorities emerge concerning, for example, acquisition (R&D and procurement). And if civilian decisionmakers cannot understand what is going on, except grossly, they cannot exert much influence over outcomes.

Korb and Bergmann laud the 1986 Goldwater-Nichols reforms for enhancing “coordination” and “cohesion” and call for extension of this “model” to “the broader bureaucracy that oversees the nation’s warfighting, diplomatic, and aid agencies.” That seems wishful thinking. Indeed, many of the examples they adduce suggest that Goldwater-Nichols changed relatively little. As I argue in Trillions for Military Technology: How the Pentagon Innovates and Why It Costs So Much, many of the difficulties so evident in acquisition can be traced as far back as World War I. Why, after all, is it that “soldiers in an Army vehicle have been unable to communicate with Marines in a vehicle just yards away” when such problems have existed literally since the invention of radio?

As president, Dwight Eisenhower, uniquely able to see inside the Pentagon, exerted personal oversight over many aspects of military policy. Other presidents have had to rely on their defense secretaries. A few of those, notably Robert McNamara, tried actively to manage the Pentagon. Unfortunately, the organizational learning that began under McNamara was tarred by his part in the Vietnam debacle, and too many of his successors, such as Caspar Weinberger, were content to be figureheads. As a result, and notwithstanding Goldwater-Nichols, the services have nearly as much autonomy today as in the past.

The reasons should not be misunderstood. They stem from professionalism. The aversion of military leaders to civilian “interference” is little different from the aversion of physicians to being told how to practice medicine. The difference is that as consumers we can always switch physicians, whereas military force is not, and let us hope never will be, a commodity that can be purchased in the global marketplace.

It may be that Korb and Bergmann hope that calling for the armed forces to cooperate more effectively with other parts of the government will, if enough money is forthcoming, lead to real change. Any review of the relationship between the DOD and the State Department and Atomic Energy Commission after World War II would indicate how forlorn such a hope must be. Reform must start inside the service hierarchies. That is a precondition for the sorts of steps Korb and Bergmann recommend.

JOHN ALIC

Avon, North Carolina


Not just for kids

Brian Bosworth’s assessment in “The Crisis in Adult Education” (Issues, Summer 2008) is right on target. At a time when our country faces unparalleled economic uncertainty and unprecedented competition, the United States must dramatically increase the number of individuals with postsecondary credentials and college-level skills if we are to maintain our economy’s vitality. We agree with Bosworth that strong state and federal policies can help millions of adult learners reap the benefits of innovative approaches to improving postsecondary learning and adult skill development.

For 25 years, Jobs for the Future and its partners have been at the forefront of innovations in education for low-income and low-skill Americans. We work side by side with practitioners and policymakers in 159 communities in 36 states in programs that provide evidence for the importance of Bosworth’s proposals and models for their application on the ground. In Oregon, for example, Portland Community College offers a wide range of services specifically designed to meet the needs of academically unprepared adult students. As a participant in the 18-state Breaking Through initiative, a collaboration of Jobs for the Future and the National Council for Workforce Education, the college is redesigning developmental education programs to serve as a bridge between adult basic education and credit-bearing courses. Adult students who very likely struggled in high school or didn’t receive a diploma at all are provided with mentors, tutors, and other supports that help them navigate the complex and often intimidating college environment to shore up their academic achievement. The result: More students are staying in school and working toward the credentials they need to succeed.

Portland Community College is aided in these efforts through Oregon state policy, which encourages enrollment in both adult basic education and developmental education programs as paths to better jobs. One way state policy does this is by providing a match of 80% of the roughly $5.6 million the federal government has invested in Oregon’s adult basic education program. Oregon also reimburses colleges at the same per-student rate for adult basic education, developmental education, and credit-level students. This uniform rate raises the academic standing of adult basic education and indicates that it is just as important as other programs.

In Maryland, the Community College of Baltimore County is giving occupational training to frontline entry-level workers in health care—not in classrooms but right at the hospitals where they work. Workers do not have to commute, and they pay little or nothing for their training and college credit. The program is part of Jobs to Careers, a $15.8 million national initiative of the Robert Wood Johnson Foundation, in collaboration with The Hitachi Foundation and the U.S. Department of Labor. A national initiative with 17 sites, Jobs to Careers uses a “work-based learning” model. It embeds learning into workers’ day-to-day tasks, learning that is developed and taught by both employers and the educational institution. This way, employees can move up career ladders, employers benefit from higher retention rates, patients receive better care, and the college has a new way to deliver its services and strengthen its local economy. Bringing college to the work site is not only a groundbreaking strategy; it’s a common-sense solution to the skills gap affecting local economies across the country. Jobs to Careers is matching the jobs that need to be done with the individuals who need them most.

Innovative and cost-effective investments in a skilled workforce are key to keeping high-paying jobs in America. These human capital investments, particularly for low-skill and low-income youth and adults, address two pressing national challenges: greater equity and stronger economic performance. Our thanks to Bosworth for putting a spotlight on these issues.

MARLENE B. SELTZER

President and Chief Executive Officer

Jobs for the Future

Boston, Massachusetts

www.jff.org


Science and foreign policy

Gerald Hane has a valuable piece in the Fall 2008 Issues (“Science, Technology and Global Reengagement”) arguing that the new administration must recognize the critical role of science and technology (S&T) in the conduct of the nation’s foreign policy and that that role must be reflected in the structure of the White House and State Department. It is not a new argument but one that has bedeviled many administrations and many Secretaries of State, with results that have varied but almost always have fallen short of what is needed. Today, it is of even greater importance as the consequences of inadequate response become steadily more damaging in light of the rapid upgrading of scientific and technological competence throughout the world and the emergence of global-scale S&T–rich issues as major elements of foreign affairs. The threat to America’s competitive economic position as well as its national security is real and growing.

Hane asserts the critical importance of the new president’s having in his immediate entourage a science adviser able to participate in formulating policy to deal with the flood of these issues. He calls for an upgrading of the Office of Science and Technology Policy and for the director of that office to also be a deputy assistant to the president for science, technology, and global affairs. Whatever the title, he is quite right that it is not enough only to be at the table; the science adviser must have the clout and the personal drive to frame the discussion and influence decisions. In this setting, the power that comes from proximity to the president is essential in order to be able to cut through often contentious agency debates and congressional opposition.

But Hane’s arguments, absolutely sound, arrive in the new president’s inbox along with those of many others, all clamoring for immediate attention to their needs. In this maelstrom, it is all too possible that science will be seen as just another self-pleading interest.

What the result will be depends on whether the new president believes in and understands the need for this kind of close scientific advice, asserts the leadership required to create the necessary White House climate, and is prepared to include a senior science adviser in White House policy deliberations across a wide swath of subjects. During his campaign, President-elect Barack Obama expressed the intention to deal urgently with science-rich international issues such as climate change and signaled that he will appoint an individual to oversee technology implementation across government agencies. Early in his campaign, he formed a science advisory committee, led by a successful science administrator and Nobel laureate, Harold Varmus, which apparently was in communication almost daily with the campaign’s policy leaders. Thus, there is reason to hope that Obama appreciates why many of the policies he is most concerned about will require scientists of quality to be centrally involved in the many policy choices he will have to make.

EUGENE B. SKOLNIKOFF

Professor of Political Science Emeritus

Massachusetts Institute of Technology

Cambridge, Massachusetts


Gerald Hane has rung a bell with his article on the need for the United States to pay more attention to international S&T. It reminded me of a phrase we coined in the 1970s when U.S. diplomats were leveraging scientific cooperation to improve relations with China and the Soviet Union: Science and technology is the new international currency. As national commitments to research and innovation strengthen in all parts of the world, this mantra has become ever more relevant.

Hane knows from his years of tending the international portfolio at the Office of Science and Technology Policy that his ambitious vision for an enhanced role for science and technology can be realized only with the president’s direct involvement. Thus, he recommends that the president establish the position of deputy assistant to the president for science, technology, and global affairs and make his science adviser a member of the National Security Council and the National Economic Council.

The president could implement these suggestions immediately. Doing so would be consistent with the new administration’s expressed desire to pay more attention to the soft-power dimensions of U.S. foreign policy, and that is where science cooperation can play an especially fruitful role—if there is adequate funding.

Earlier attempts to find funding for cooperative international science projects fell short. In the Carter administration a proposal to establish an Institute for Scientific and Technological Cooperation (ISTC) for this purpose made it through three of the four mandatory wickets in Congress before it failed for lack of appropriations in the Senate. That or a similar federal approach could be tried again.

One thorny question is where to house the effort. The ISTC was intended to be a semi-autonomous body inside the U.S. Agency for International Development. Another option would be the State Department, but Congress is not readily inclined to support science projects through State, even though precedents exist in the Cooperative Threat Reduction (CTR) programs (funded as a nonproliferation measure) and the Support for Eastern European Democracy Act to assist with economic recovery after the breakup of the Soviet Union.

The activity could also be managed through an existing private organization or a nongovernmental organization created specifically for this purpose. For example, the Civilian Research and Development Foundation was created for similar work as part of the CTR initiative and could be expanded to fulfill this larger role. The American Association for the Advancement of Science, which has just created a Center for Science Diplomacy, also has the prestige and the commitment to take on this responsibility.

Other than the government, the only sources of funding would be private foundations. However, such funding would likely be limited in scope and duration. Federal support is clearly the preferred route, though it could be complemented by private sources. One could even conceive of something modeled on the Small Business Innovation Research program, whereby the technical agencies would be encouraged or required to spend a certain percent of their total scientific funding on projects with an international dimension.

To doubters this may sound like more international charity, but the reality is that as scientific capability and research excellence continue to develop abroad, it is certain that the United States can reap great scientific and political benefits from these relationships. The potential for such a double return on these investments in cooperation is very large. It is a concept whose time has surely come and that deserves a serious attempt to make it work.

NORMAN NEUREITER

Director

Center for Science, Technology, and Security Policy

American Association for the Advancement of Science

Washington, DC


Gerald Hane has done a superb job of laying out the steps involved in strengthening the role of science and engineering in the international arena. His concerns are not new and have been documented over many decades. Eugene B. Skolnikoff of the Massachusetts Institute of Technology comes to mind as one of the more thoughtful and eloquent students of interactions of science and technology (S&T) with international affairs. Emilio Q. Daddario, chair of the House Science subcommittee in the 1960s and later the first director of the Office of Technology Assessment, championed closer interaction of S&T with foreign policy in Congress.

Science and engineering interact with foreign policy in two very distinct ways. The first (and easy one) relates to policies that bring international partners together to do science or to provide a policy framework for international cooperation in research. This is perhaps best seen in successful “big science” projects where remarkable international partnerships have been established in diverse fields such as high-energy physics and global change. Permanent or semipermanent international cooperation institutions have been established, some governmental and some nongovernmental.

The second is more difficult and complex: the role of science and engineering in the development and implementation of foreign policy. Forward thinking is difficult, especially in a government bureaucracy. But there have been thoughtful efforts to reorganize the U.S. government (especially the Department of State) to better inject S&T into the foreign policy process. The most successful of these (at least institutionally) probably was P.L. 95-426, the fiscal year 1979 authorization for the State Department. With the strong backing of Congress, it included new high-level positions, reporting requirements, and mandatory agency cooperation. It appeared to be the be-all and end-all for science and diplomacy. The only problem was that under administrations of both parties it was mostly ignored by the federal agencies and the White House. Even today, the federal government lacks an agency with funding and a personnel system that will support a world-class analytic capability in science and foreign policy.

The current (fall 2008) global financial crisis underscores the crucial role science plays in international relations. The complex financial instruments at the root of the problem, fundamentally the creation of scientists and mathematicians, were not understood by the financial community. This state of affairs has had unintended consequences, among them that the U.S. financial system has become more like the Chinese system, reversing a multidecade flow in the opposite direction.

J. THOMAS RATCHFORD

Distinguished Visiting Professor

George Mason University School of Law

Director, Science and Trade Policy Program

George Mason University

Fairfax, Virginia


Gerald Hane argues thoughtfully for greater U.S. leadership in making international science collaboration a foreign policy priority. Hane exhorts the next administration to act quickly and decisively. He calls for the creation of a Global Priorities S&T (Science and Technology) Fund “to support grants to encourage international S&T activities that support U.S. foreign policy priorities.” In these days of global financial turmoil, rising U.S. deficits, an array of competing demands for taxpayer dollars, and an already significant U.S. investment in R&D, is such a fund really critical? The answer is unequivocally yes.

S&T solutions are needed to address many of today’s global challenges—in energy, food security, public health, and environmental protection. The United States cannot tackle these challenges alone. Today’s most vexing problems are global in nature and require global expertise and experience to solve. Many nations, such as Saudi Arabia, China, the United Kingdom, India, and Australia, are investing in science infrastructure and are partnering globally to advance their own competitiveness and national security interests. To remain competitive, the United States must demonstrate leadership in engaging the world’s best scientists and engineers to find common solutions through collaborative research activities. This is good for U.S. science because it gives our scientists and engineers access to unique facilities and research sites and exposes them to new approaches. It is economically sound because it leverages U.S. resources and provides a means to benchmark U.S. capabilities. As a diplomatic tool, we know that scientists and engineers can work together in ways that transcend cultural and political differences. International collaboration helps to build relationships of trust and establish pathways of communication and collaboration even when formal government connections are strained. In short, S&T must be a central component of U.S. foreign policy.

Hane is correct to note that making progress in these areas requires new policy approaches and resources that spur government agencies and non-governmental organizations to take action. During congressional testimony this past summer, the U.S. Civilian Research & Development Foundation (CRDF), a nongovernmental organization created by Congress that supports international science collaboration with more than 30 countries, called on the U.S. government to launch a strategic global initiative to catalyze and amplify S&T cooperation for the benefit of the United States and its partners around the world. The Global Science Fund would be a public/private partnership, with the U.S. government taking the lead in challenging private donors and other governments to match the U.S. contribution. This new initiative would provide funding for grants and other activities that would engage scientists internationally to address energy alternatives, food security, vanishing ecosystems, or other global challenges. It would seek to reach young scientists and support a robust R&D infrastructure while building mutually beneficial economic partnerships.

The new U.S. administration will face many challenges. Advancing U.S. economic, security, and diplomatic interests by drawing on one of America’s greatest assets—its scientists and engineers—must be one of them.

CATHLEEN A. CAMPBELL

President and Chief Executive Officer

U.S. Civilian Research & Development Foundation

Arlington, Virginia


Biotech regulation

In “Third-Generation Biotechnology: A First Look” (Issues, Fall 2008), Mark Sagoff raises a number of useful and interesting points regarding ethical, legal, and social concerns about third-generation biotechnology. In my view, however, some of his criticisms of regulatory agencies, particularly the U.S. Department of Agriculture (USDA), are somewhat overstated. It is certainly arguable that we lack a truly coherent regulatory framework for genetically engineered organisms in the United States, with responsibility for regulation and registration divided somewhat arbitrarily among different agencies (the Environmental Protection Agency, the Food and Drug Administration, and the USDA). Even within the latter agency, the Biotechnology Regulatory Services unit that Sagoff references is part of the Animal and Plant Health Inspection Service (APHIS), a regulatory branch that is distinct from the more research-focused (and research-funding) Agricultural Research Service. But rather than “systematically” avoiding sponsorship of inquiry into the ethical, legal, and social implications of biotechnology, I would suggest that the USDA and the other agencies have struggled gamely with a fairly minuscule amount of funding that must be apportioned among extensive regulatory and research needs. It is hardly surprising that the USDA’s priority for limited funds has been ecologically based research rather than social or ethical inquiries, given the agency’s mission and expertise.

The comparison with the National Institutes of Health’s efforts to foster public acceptance of the Human Genome Project is valid, but should be tempered with an understanding of the disparity in funding for these programs. I would also suggest that obtaining public acceptance of the need to unravel the human genome, with its very demonstrable medical implications, is probably easier than fostering an informed democratic debate about the ecology of genetically engineered microorganisms in the environment. Sagoff provides several excellent examples illustrating one reason why this is so: The scientific research community has the ability to engineer and release some truly scary recombinant organisms into the environment, such as entomopathogenic fungi expressing scorpion toxin or animal viruses engineered for immunosuppressive capabilities.

Sagoff is correct that the old “process/product” distinction, which has been plaguing regulators for more than two decades, remains a conundrum. The question of how much we should focus on the process of genetic engineering versus the resulting products has been a recurrent theme in both regulatory and academic discussions about the potential release of genetically modified organisms. To a large extent, the dichotomy is a misleading one. The process of genetic engineering inherently produces non-native variants of organisms, and I would contend that there is merit in evaluating the environmental introduction of these organisms with scrutiny similar to that used for the introduction of wild-type non-native microbes. Although a particular recombinant “product” may be well characterized in terms of its genotype as well as a variety of phenotypical attributes, the new ecological niche (also a “product”) that it will fill is always a matter for prediction. Enhanced scrutiny of new introductions based on process (that is, based on their recombinant nature) is admittedly a blunt tool, analogous in some ways to the passenger profiling that might take place at an airline check-in counter. But we don’t afford microbes any equal protection rights, and we might just manage to ward off a few bad actors.

GUY KNUDSEN

Professor of Microbial Ecology and Plant Pathology

Soil and Land Resources Division

University of Idaho

Moscow, Idaho


Research on patents

I support the policy direction proposed by Robert Hunt and Brian Kahin in “Reexamining the Patent System” (Issues, Fall 2008), but does their analysis go far enough? Arguably, innovation is as important to the long-run health of the economy as are interest rates. To set interest rates, our country has the autonomous institution of the Federal Reserve, which excels at gathering and analyzing data in support of its financial decisions. An institution with a similar data-driven orientation for the patent system only seems logical.

The kind of data that Hunt and Kahin talk about gathering would, as they propose, help us evaluate the performance of the patent system in different technologies and industries. However, I think such data are important for another reason (perhaps Hunt and Kahin had this in mind): They can guide the refinement of patent institutions. Indeed, some of the most successful applications of economic analysis to policymaking, such as the programs for tradable pollution permits, began with extensive data analysis, but then applied this analysis to improving the structure and effectiveness of these programs. Similarly, extensive patent data and economic analysis can help improve the functioning of the Patent and Trademark Office (PTO) and of the courts by providing crucial feedback. How well do PTO programs to improve patent quality work? What fee structures can improve patent quality, reduce litigation, and also reduce the huge PTO backlog? Do certain court decisions increase the uncertainty of the patent grant, as some have charged, or not?

These questions can be answered and the answers can be used to improve patent performance. The patent system is, after all, an unusual beast; it is a set of legal institutions charged with carrying out an economic policy. But until now, the tools of economic analysis and economic policymaking have been missing.

JAMES BESSEN

Lecturer

Boston University School of Law

Boston, Massachusetts


Yes to an RPS

“A National Renewable Portfolio Standard? Not Practical” (Issues, Fall 2008), by Jay Apt, Lester B. Lave, and Sompop Pattanariyankool, correctly asserts that the United States needs a comprehensive strategy to address climate change and that energy efficiency is a critical component. But the rest of the article, which maintains that a national renewable portfolio standard (RPS) is impractical, is off the mark.

First, numerous studies contradict the authors’ claim that a national RPS would be too expensive for ratepayers. More than 20 comprehensive economic analyses completed during the past decade found that a strong national standard is achievable and affordable. For example, a 2007 Union of Concerned Scientists (UCS) study, using the Energy Information Administration’s (EIA’s) National Energy Modeling System, found that establishing a 15% national RPS by 2020 would lower electricity and natural gas bills in all 50 states by reducing demand for fossil fuels and increasing competition. Cumulative national savings would reach $28 billion to $32 billion by 2030. An EIA study arrived at similar conclusions despite its more pessimistic assumptions about renewable technologies. That study projected that a 25% RPS by 2025 would slightly lower natural gas bills, more than offsetting slightly higher (0.4%) electricity bills, saving consumers $2 billion cumulatively through 2030.

Second, the authors incorrectly allege that a national RPS would undermine U.S. electricity system reliability by increasing reliance on wind and solar power. EIA and UCS analyses project that baseload technologies, such as biomass, geothermal, landfill gas, and incremental hydroelectric plants, would generate 33 to 66% of the renewable electricity under a national standard. Regional electricity systems could easily integrate the remaining power produced by wind and solar at a very modest cost and without storage. Studies by U.S. and European utilities have found that wind penetrations of as much as 25% would add no more than $5 per megawatt-hour in grid integration costs, less than 10% of the wholesale cost of wind.

Third, the need for new transmission lines and upgrades to deliver power to urban areas is not unique to renewable energy. Additional capacity would be necessary for many proposed coal and nuclear plants, which are often sited at considerable distances from load centers. A 2007 analysis by Black & Veatch, a leading power plant engineering firm, found that 142 new coal unit proposals at 116 plants were located on average 109 miles from the nearest large U.S. city, with some located 400 to 500 miles away.

For these reasons and others, we disagree with the conclusion that a national RPS would be ineffective in reducing global warming emissions and meeting other national goals. In fact, EIA’s study showed that a 25% national RPS could reduce global warming emissions from coal and natural gas plants by 20% below business as usual by 2025. Because scientists have called on the United States to reduce global warming emissions by at least 80% below current levels by 2050, we need to dramatically increase both efficiency and renewable energy use. Therefore, efficiency measures and RPSs are key complements to federal cap-and-trade legislation.

STEVE CLEMMER

Research Director, Clean Energy Program

Union of Concerned Scientists

Cambridge, Massachusetts


Practical Pieces of the Energy Puzzle: Getting More Miles per Gallon

The answer may require looking beyond CAFE standards and implementing other consumer-oriented policy options to wean drivers away from past habits.

In December 2007, concerns over energy security and human-induced climate change prompted Congress to increase Corporate Average Fuel Economy (CAFE) standards for the first time in 20 years. The new standards aim to reduce petroleum consumption and greenhouse gas (GHG) emissions in the United States by regulating the fuel economy of new cars and light trucks, including pickups, SUVs, and minivans. The standards will require these vehicles to achieve a combined average of 35 miles per gallon (mpg) by 2020, up 40% from the current new-vehicle average of 25 mpg.

Since Congress acted, the nation has witnessed a dramatic rise in the prices of petroleum and gasoline, which reached record levels during the summer of 2008, increasing pressure on policymakers to reduce transportation’s dependence on petroleum. Prices have since fallen markedly with the arrival of an economic crisis. But few observers expect prices to stay low when the economy recovers, and many see a future of steadily rising prices, driven by global economic expansion. Thus, reducing the nation’s thirst for gasoline remains an important goal. And although striving to meet the CAFE standards will be an important part of the mix, other policy initiatives will be necessary to make timely progress.

Although the nation’s collective gas-pump shock has lessened, the lessons from recent experience are telling. In June 2008, the average price of crude oil was double that of a year earlier, and gasoline prices were up by one-third. High fuel costs sharpened the public’s awareness of fuel use in light-duty vehicles, causing drivers to seek alternatives to gas-guzzling private vehicles. Sales of light trucks during the first half of 2008 were down by 18% relative to the previous year, and total light-duty vehicle sales dropped by 10%. The total distance traveled by motor vehicles fell by 2.1% in the first quarter of 2008 relative to the same period in 2007. At the same time, ridership on public transportation systems showed rapid growth in the first quarter of 2008, with light-rail ridership increasing by 7 to 16% over 2007 in Minneapolis-St. Paul, Miami, and Denver.

These shifts marked major departures from the trends of the past two decades, when fuel prices were low and relatively stable. During that period, fuel economy standards remained unchanged for cars and largely constant for light trucks. Proponents of more demanding CAFE requirements argue that the standards stagnated during this period, allowing automakers to direct efficiency improvements toward offsetting increases in vehicle size, power, and performance rather than improving fuel economy. Critics of CAFE standards, on the other hand, contend that mandated fuel economy requirements impose costs disproportionately across manufacturers, with no guarantee that consumers will be willing to pay for increased fuel economy over the longer term.

Now that renewed CAFE standards have passed and more stringent targets may be on the way, the discourse over CAFE must shift to the critical issues of the changes that will be necessary to achieve the mandated improvements in fuel economy, the costs of these changes relative to their benefits in fuel savings and reductions in GHG emissions, and the implementation of other policy options to help achieve ambitious fuel economy targets.

We have assessed the magnitude and cost of vehicle design and sales-mix changes required to double the fuel economy of new vehicles by 2035—a longer-term target similar in stringency to the new CAFE legislation. Both targets require the fuel economy of new vehicles to increase at a compounded rate of about 3% per year. We argue that the necessary shifts in vehicle technology and market response will need a concerted policy effort to alter the current trends of increasing vehicle size, weight, and performance. In addition to tougher CAFE standards, coordinated policy measures that stimulate consumer demand for fuel economy will likely be needed to pull energy-efficient technologies toward reducing the fuel consumption of vehicles. This coordinated policy approach can ease the burden on domestic auto manufacturers and improve the effectiveness of regulations designed to increase the fuel economy of cars and light trucks in the United States.
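
As a rough check on that rate (a back-of-the-envelope calculation, assuming roughly 27 model years between now and 2035), the required compound annual improvement r satisfies

$(1 + r)^{27} = 2 \quad\Longrightarrow\quad r = 2^{1/27} - 1 \approx 0.026,$

or about 3% per year, consistent with the figure cited above.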

Although the term fuel economy (the number of miles traveled per gallon of fuel consumed) is widely used in the United States, it is the rate of fuel consumption (the number of gallons of fuel consumed per mile traveled) that is more useful in evaluating fuel use and GHG emissions. For example, consider improving the fuel economy of a large, gas-guzzling SUV from 10 to 15 mpg; this reduces the SUV’s fuel consumption from one gallon per 10 miles to two-thirds of a gallon per 10 miles, which saves a third of a gallon of gasoline every 10 miles. If, however, a decent gas-sipping small car that gets 30 mpg is replaced with a hybrid that achieves an impressive 45 mpg—the same proportional improvement in fuel economy as the SUV—this corresponds to a fuel savings of only about one-tenth of a gallon every 10 miles. Both improvements are important and worthwhile, but because of the inverse relationship between these two terms, a given increase in fuel economy does not translate into a fixed proportional decrease in fuel consumption. So even as most people probably will continue to talk about fuel economy, it is important to keep the distinction between fuel economy and fuel consumption in mind.
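
The arithmetic behind that comparison is easy to check. Here is a minimal sketch in Python, using the illustrative mpg figures above (the function name and structure are ours, not drawn from any standard tool):

def gallons_per_10_miles(mpg):
    # Fuel consumed, in gallons, to travel 10 miles at a given fuel economy.
    return 10.0 / mpg

# SUV improved from 10 to 15 mpg (a 50% gain in fuel economy)
suv_savings = gallons_per_10_miles(10) - gallons_per_10_miles(15)

# Small car improved from 30 to 45 mpg (also a 50% gain in fuel economy)
car_savings = gallons_per_10_miles(30) - gallons_per_10_miles(45)

print(round(suv_savings, 2), round(car_savings, 2))  # 0.33 and 0.11 gallons saved per 10 miles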

Leverage points

There are three primary ways in which vehicle fuel economy may be improved: ensuring that the efficiency gains from vehicle technology improvements are directed toward increasing fuel economy, rather than continuing the historical trend of emphasizing larger, heavier, and more powerful vehicles; increasing the market share of alternative powertrains that are more efficient than conventional gasoline engines; and reducing the weight and size of vehicles.

Efficiency. Even though sales-weighted average fuel economy has not improved since the mid-1980s, the efficiency (a measure of the energy output per unit of energy input) of automobiles has consistently increased, at the rate of about 1 to 2% per year. This trend of steadily increasing efficiency in conventional vehicles is expected to continue during the next few decades. Lightweight materials and new technologies such as gasoline direct injection, variable valve lift and timing, and cylinder deactivation are making inroads into today’s vehicles and individually achieve efficiency improvements of 3 to 10%. Between now and 2035, gasoline vehicles can realize a 35% efficiency gain through expected technology improvements and moderate reductions in weight.

Unfortunately, efficiency gains in the past 20 years have been used to offset improvements in the weight and power of new vehicles, rather than improving fuel economy. Compared to 1987, the average new vehicle today is 90% more powerful, 33% heavier, and 25% faster. With the help of lightweight materials and efficiency improvements, all of this improvement has been accomplished with only a 5% penalty in fuel economy. Had performance and weight instead remained at 1987 levels, however, fuel economy could have been increased by more than 20% in new 2007 light-duty vehicles.

TABLE 1 Illustrative strategies that double the fuel economy of new vehicles in 2035

The first three strategies maximize different combinations of two of the options and set the remaining option to the level necessary to double new vehicle fuel economy. The market shares of alternative powertrains are arbitrarily fixed at a ratio of 5 to 5 to 7 for turbocharged gasoline, diesel, and hybrid gasoline vehicles, respectively. The fourth strategy puts heavy emphasis on hybrid powertrains, which improve vehicle performance slightly and reduce the level of weight reduction required.

For each strategy, the table lists the share of efficiency gains from expected technology improvements that is directed to improving fuel economy (FE); the percentage reduction in vehicle weight from current weight by 2035; and the new-vehicle market share by powertrain, given as conventional gasoline / turbocharged gasoline / diesel / hybrid gasoline. The average car’s 0-100 km/hr acceleration time and curb weight are shown in parentheses.

Current fleet in 2006 (9.5 secs; 1,620 kg): market share 95% / 1% / 2% / 2%

1. Maximize conventional vehicle improvements and weight reduction (9.4 secs; 1,050 kg): 100% of efficiency to FE; 35% weight reduction; market share 66% / 10% / 10% / 14%

2. Maximize conventional vehicle improvements and alternative powertrains (9.2 secs; 1,320 kg): 96% of efficiency to FE; 19% weight reduction; market share 15% / 25% / 25% / 35%

3. Maximize alternative powertrains and weight reduction (7.6 secs; 1,060 kg): 61% of efficiency to FE; 35% weight reduction; market share 15% / 25% / 25% / 35%

4. Emphasize aggressive hybrid penetration (8.1 secs; 1,300 kg): 75% of efficiency to FE; 20% weight reduction; market share 15% / 15% / 15% / 55%

TABLE 2 Retail price increase of conventional vehicle technology improvements and alternative powertrains in 2035

Retail price increases are in 2007 U.S. dollars, listed for cars and light trucks.

Future gasoline vehicle (includes expected engine and transmission improvements; a 20% reduction in vehicle weight; a more streamlined body; and reduced tire rolling friction): cars $2,000; light trucks $2,400

Additional price increase from shifting to alternative powertrains:

Future turbocharged gasoline vehicle (includes a turbocharged gasoline engine): cars $700; light trucks $800

Future diesel vehicle (includes a high-speed, turbocharged diesel engine compliant with future emissions standards): cars $1,700; light trucks $2,100

Future hybrid gasoline vehicle (includes an electric motor, battery, and control system that supplements a downsized gasoline engine): cars $2,500; light trucks $3,200

Powertrains. In addition to steady improvements in conventional vehicle technology, alternative technologies such as turbocharged gasoline and diesel engines and gasoline hybrid-electric systems could realize a 10 to 45% reduction in fuel consumption relative to gasoline vehicles by 2035. These are proven alternatives that are already present in the light-duty vehicle fleet and do not require significant changes in the nation’s fueling infrastructure. Turbocharged gasoline and diesel-powered vehicles are already popular in Europe, and several vehicle manufacturers have plans to introduce them in a wide range of vehicle classes in the U.S. market. More than 1 million hybrid electric vehicles such as the Toyota Prius and Ford Escape have been sold cumulatively in the United States during the past 10 years.

The role that alternative powertrains can play in improving fuel economy, however, depends on how successfully they can capture a sizeable share of new vehicle sales. Diesel and hybrid powertrains currently account for approximately 5% of the U.S. market. In the past, new powertrain and other vehicle technologies have, at best, sustained average market share growth rates of around 10% per year, suggesting that aggressive penetration into the market might see alternative powertrains account for some 85% of all new vehicle sales by 2035.
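
A simple compounding sketch, in Python, illustrates the scale of that assumption. This is an illustrative calculation only; the 5% base, 10% annual growth rate, roughly 30-year horizon, and 85% saturation cap are taken from the figures above, and the function name is ours.

def projected_share(base_share=0.05, annual_growth=0.10, years=30, cap=0.85):
    # Compound a market share forward at a constant annual growth rate,
    # capping it at an assumed saturation level.
    share = base_share * (1 + annual_growth) ** years
    return min(share, cap)

print(projected_share())  # about 0.85 of new-vehicle sales under these assumptions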

Size and weight. Reducing a vehicle’s weight reduces the overall energy required to move it, thus enabling the downsizing of the powertrain and other components. These changes provide fuel efficiency gains that can be directed toward improving fuel economy. Reductions in vehicle weight can be achieved by a combination of substituting lightweight materials, such as aluminum, high-strength steel, or plastics and polymer composites, for iron and steel; redesigning and downsizing the powertrain and other components; and shifting sales away from the largest, heaviest vehicles to smaller, lighter models.

With aggressive use of aluminum, high-strength steel, and some plastics and polymer composites, a 20% reduction in vehicle weight is possible through material substitution and associated component downsizing by 2035. Additional redesign and component downsizing could account for another 10% reduction in vehicle weight. Further, reducing the size of the heaviest vehicles could achieve an additional 10% reduction in average vehicle weight. For instance, downsizing from a large SUV, such as a Ford Expedition, to a mid-sized SUV, such as a Ford Explorer, cuts weight by 15%. Combining these reductions multiplicatively indicates that a 35% reduction in the average weight of new vehicles is possible by 2035.
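The roughly 35% figure follows from compounding the three reductions rather than adding them; a minimal arithmetic sketch, using the percentages cited above:

```python
# Back-of-envelope check of how the weight-reduction steps combine.
# Each step keeps a fraction of the remaining weight, so the reductions
# compound multiplicatively rather than add up.
material_substitution = 0.20  # lightweight materials plus associated component downsizing
redesign = 0.10               # additional redesign and component downsizing
downsizing_mix = 0.10         # shifting sales from the heaviest vehicles to smaller models

remaining_weight = (1 - material_substitution) * (1 - redesign) * (1 - downsizing_mix)
print(f"Combined reduction: {1 - remaining_weight:.1%}")  # ~35.2%, the ~35% cited above
```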

Increasing costs

Combining these three options to double the fuel economy of new vehicles in 2035 reveals a series of trade-offs among attributes of the light-duty vehicle fleet (see Table 1). No one or two options can reach the target on their own; doubling fuel economy in 2035 requires a major contribution from all of the available options, regardless of the strategy employed. The options with the greatest leverage are directing expected efficiency improvements toward reducing fuel consumption and reducing vehicle weight. These changes can affect all new vehicles entering the fleet, yet during the past two decades these powerful levers for increasing fuel economy have been applied in the opposite direction.

Implementing these improvements will increase the cost of manufacturing vehicles (see Table 2). By 2035, new engine and transmission technologies, a 20% reduction in weight, body streamlining, and reductions in the rolling friction of tires could increase the cost to manufacture a car by $1,400 and a light truck by $1,600 (in current dollars relative to the same vehicles today). These figures do not include the cost of distributing vehicles to retailers or the profit margins of manufacturers and auto dealers. Adding roughly 40% to these costs gives a reasonable estimate of the expected retail price increase, although the price arrived at in a competitive auto market would be subject to various pricing strategies that may raise or lower the final price tag. With a strong emphasis on reducing fuel consumption over the next 25 years, the average price of a conventional gasoline vehicle could increase by around 10% relative to today’s mid-sized sedan, such as the Toyota Camry, or light truck, such as the Ford F-150.
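As a rough check of how the retail price estimates follow from the manufacturing cost estimates, the sketch below simply applies the 40% markup described above; the Table 2 figures also reflect rounding and other assumptions in the underlying study.

```python
# Sketch of the manufacturing-cost-to-retail-price rule of thumb described above.
MARKUP = 0.40  # rough allowance for distribution costs and manufacturer/dealer margins

def retail_increase(extra_manufacturing_cost: float) -> float:
    """Estimated retail price increase implied by an added manufacturing cost."""
    return extra_manufacturing_cost * (1 + MARKUP)

print(round(retail_increase(1_400)))  # car: ~$1,960, in line with the ~$2,000 in Table 2
print(round(retail_increase(1_600)))  # light truck: ~$2,240, in the neighborhood of Table 2's $2,400
```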

Shifting from a conventional gasoline engine to an alternative powertrain would further increase the cost of manufacturing a vehicle. In 2035, the retail price of a vehicle could increase by $700 to $800 for a turbocharged gasoline engine and by $1,700 to $2,100 for a diesel engine. A future hybrid-electric powertrain could add $2,500 to $3,200 to the retail price of a conventional gasoline vehicle in 2035. These costs correspond to a retail price increase of 5 to 15% above the price of today’s gasoline vehicle. Achieving a 35% reduction in vehicle weight by 2035 would add roughly $2 to the cost of manufacturing a vehicle for every kilogram of weight removed. This would increase the retail price of a conventional gasoline vehicle in 2035 by roughly 10% compared with today.

Not accounting for fuel savings, the total extra manufacturing cost of doubling the fuel economy of new vehicles by 2035 would be between $55 billion and $65 billion in constant 2007 dollars in the 2035 model year alone, or an additional 15 to 20% of the estimated baseline manufacturing cost in 2035 if fuel economy were to remain unchanged from today. Spread over 15 years of vehicle operation, this corresponds to a cost of $65 to $75 per ton of greenhouse gas emissions avoided.

For the average consumer, this translates into a retail price increase of $3,400 for a car with doubled fuel economy in 2035, and an increase of $4,000 for a light truck. If the fuel savings provided by doubling fuel economy are taken into account, the undiscounted payback period (that is, the length of time required for the extra cost to pay for itself) is roughly five years for both cars and light trucks at the Energy Information Administration’s long-term gasoline price forecast of $2.50 per gallon. At $4.50 per gallon—a price that didn’t seem out of the question in mid-2008—the undiscounted payback period shortens to only three years.
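A sketch of the payback arithmetic follows; the annual mileage and today's average fuel economy are illustrative assumptions rather than figures from the study.

```python
# Illustrative payback calculation for a car whose fuel economy is doubled.
# ANNUAL_MILES and BASELINE_MPG are assumptions chosen for illustration only.
ANNUAL_MILES = 15_000
BASELINE_MPG = 25.0
DOUBLED_MPG = 2 * BASELINE_MPG
PRICE_PREMIUM = 3_400  # extra retail price for a doubled-fuel-economy car (from the text)

def payback_years(gasoline_price: float) -> float:
    gallons_saved = ANNUAL_MILES / BASELINE_MPG - ANNUAL_MILES / DOUBLED_MPG
    return PRICE_PREMIUM / (gallons_saved * gasoline_price)

print(f"{payback_years(2.50):.1f} years at $2.50/gallon")  # ~4.5 years, near the five cited
print(f"{payback_years(4.50):.1f} years at $4.50/gallon")  # ~2.5 years, near the three cited
```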

Engaging the policy gear

Although it is technically possible to double the fuel economy of new vehicles by 2035, major changes would be required from the status quo. Tough trade-offs will need to be made among improvements in vehicle performance, cost, and fuel economy. Although CAFE is a powerful policy tool, it is also a blunt instrument for grappling with the magnitude and cost of these required changes for two reasons: It has to overcome the market forces of the past two decades that have shown a strong preference for larger, heavier, and more powerful vehicles; and in attempting to reverse this trend, CAFE places the burden of improving fuel economy solely on the auto industry.

As buyers have grown accustomed to current levels of vehicle size and performance, domestic manufacturers have profited from providing such vehicles. In contrast, increasing CAFE standards may require abrupt changes in vehicle attributes from automakers whose ability to comply is constrained by the high cost of rapid changes in technology. More consistent signals that buyers are willing to pay for improved fuel economy would justify the investments needed for compliance.

Such signals can be provided by policy measures that influence consumer behavior and purchase decisions. First, providing financial incentives for vehicles based on their fuel economy would strengthen the market forces pulling efficiency improvements toward improving fuel economy. Second, raising the cost of driving with a predictable long-term price signal would further reduce the popularity of gas-guzzlers, encouraging the adoption of fuel-sipping vehicles over time. These complementary measures can sharpen the bluntness of CAFE by providing clear incentives to consumers that directly influence market demand for fuel economy.

Feebates are one such reinforcing policy that would reward buyers for choosing improved fuel economy when they purchase a new vehicle. Under a feebate system, cars or trucks that achieve better than average fuel economy would receive a rebate against their retail price. Cars or trucks that achieve worse than average fuel economy would pay an extra fee. Effectively, sales of gas-guzzling vehicles subsidize the purchases of models with high fuel economy.

Feebates have several advantages. They can be designed in a revenue-neutral manner so that the amount paid in rebates is equal to the revenue collected from fines. They do not discriminate between vehicles that employ different technologies but focus on improving fuel economy in a technology-neutral manner. And they provide a consistent price incentive that encourages manufacturers to adopt technologies in ways that improve vehicle fuel economy. A drawback is that feebates require administrative oversight in defining how the fees and rebates will be calculated and in setting an increasingly stringent schedule in order to balance revenue against disbursements.


Feebates have already been tried in France and Canada. France’s scheme is aimed at achieving the European Commission’s objective of reducing new vehicle carbon dioxide emissions. Canada introduced a national feebate system in the spring of 2007, but the government has since decided to phase out the system in 2009 because of complaints about how the fees and rebates were structured and a lack of consultation with industry.

Measures that influence the cost of driving are another reinforcing lever for improving fuel economy. As petroleum prices rise, which many observers expect over the longer term, politicians and consumers alike typically show increased interest in improving fuel economy. In a similar way, increasing the federal fuel tax over a number of years would encourage consumers to adopt vehicles that get more miles to the gallon, even if fuel prices themselves do not go back up dramatically.

Historical data indicate that over the short term, the immediate response to high gasoline prices is small. If higher prices are sustained for several years, however, the reduction in demand for gasoline is estimated by econometric studies to be four to seven times larger as consumers retire existing vehicles and replace them with newer fuel-sipping models. Although the actual response to changes in price is uncertain, recent studies suggest that a 10% increase in gasoline prices would reduce consumption by 2 to 4% over 10 to 15 years. This consumer-driven reduction would be achieved almost entirely through the purchase of vehicles with improved fuel economy.

Although higher fuel taxes would stimulate demand for fuel economy over the long term, substantial increases have proven politically infeasible to date. Gasoline taxes affect all consumers, and some observers argue that higher taxes will hit people with low incomes the hardest. Fuel tax increases are also met with cynicism because they generate significant revenue for the government. Any policy proposal advocating an increase in federal or state fuel taxes must clearly outline how the revenues generated from tax increases will be used to benefit consumers or rebated.

One compelling rationale for substantial increases in fuel taxes is the need for greater investment in the nation’s surface transportation infrastructure. In January 2008, the National Surface Transportation Policy and Revenue Study Commission, a blue ribbon panel that examined the future needs of national surface transportation, supported as much as a 40 cent increase in the federal fuel tax over five years. In justifying the increase, the commission noted that the Highway Account of the Highway Trust Fund will have a negative balance of $4 billion to $5 billion by the end of the 2009 fiscal year and is in desperate need of the revenue that would be generated from increased taxes on transportation fuel.

Alternatively, various revenue-neutral arrangements have been proposed that would see the funds collected from tax increases returned to consumers in the form of income or payroll tax rebates. A “pay at the pump” system would offer a separate revenue-neutral approach. This system would roll registration, licensing, and insurance charges into the price of gasoline paid at the pump. Annual or semiannual costs of vehicle ownership would become a variable cost per gallon of fuel consumed, encouraging the purchase of vehicles with higher fuel economy without requiring the average driver to pay more. California is considering similar “pay as you drive” legislation that would allow insurers to offer premiums based on the actual annual mileage driven by an individual. A study by the Brookings Institution found that this measure could result in an 8% reduction in light-duty vehicle travel and $10 billion to $20 billion in benefits, primarily among low-income drivers.

Boosting miles per gallon

To see the possible benefits of such policy actions, it is useful to consider the combined effect of two of these policies alongside the mandated 35 mpg CAFE target by 2020. The two policies are a feebate system that adjusts a vehicle’s retail price by $1,000 for every one-hundredth of a gallon per mile shaved off its fuel consumption (roughly ranging from a maximum rebate of $1,200 to a maximum fee of $3,000 per vehicle), and an annual 10-cent-per-gallon increase in federal fuel taxes, sustained for 5 to 10 years.
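To make the mechanics concrete, here is a minimal sketch of such a feebate schedule; the pivot point (the fuel-consumption level at which neither a fee nor a rebate applies) is an assumed parameter, not one specified in the text.

```python
# Minimal sketch of the feebate schedule described above: $1,000 for every
# 0.01 gallon per mile of fuel consumption below (or above) a pivot point.
RATE_PER_GPM = 100_000  # $1,000 per 0.01 gallon/mile = $100,000 per gallon/mile
PIVOT_GPM = 0.035       # assumed pivot (~29 mpg), set so fees and rebates roughly balance

def feebate(miles_per_gallon: float) -> float:
    """Positive result is a rebate; negative result is a fee."""
    return RATE_PER_GPM * (PIVOT_GPM - 1.0 / miles_per_gallon)

print(round(feebate(45)))  # fuel-efficient vehicle: rebate of about $1,280
print(round(feebate(15)))  # gas-guzzler: negative value, i.e., a fee of about $3,170
```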

Based on our cost assessment, the feebate measure would be strong enough to neutralize the retail price increase of conventional gasoline engine enhancements that improve fuel economy and most of the added price of a more fuel-efficient turbocharged gasoline engine. It would offset roughly half of the price increase of a diesel engine and more than a third of the price increase of a hybrid-electric powertrain. By effectively subsidizing manufacturers to adopt technologies in ways that improve fuel economy, such feebates would ease the internal pricing strategies of automakers while sending consumers a clear price signal at the time of vehicle purchase.

The second measure, increased fuel taxes, would send a continuous signal to consumers each time they fill up at the pump. Under our suggested policy package, the federal government would increase its fuel tax by roughly 10 cents a gallon annually over five or more years. This would provide a moderate but consistent signal to consumers over the longer term. Such a policy alone could stimulate a 4 to 8% reduction in annual gasoline consumption over 10 to 15 years, given recent estimates of the sensitivity of gasoline demand to changes in price. Alongside CAFE, sustained fuel tax increases could match the public’s desire for more miles per gallon to fuel economy regulations that the public might not otherwise prefer.
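The 4 to 8% range can be reproduced from the price-response estimates cited earlier; a sketch, assuming a pump price near the $2.50-per-gallon forecast and the long-run elasticities implied by a 2 to 4% demand drop per 10% price increase:

```python
# Rough link between the proposed fuel-tax schedule and long-run gasoline demand.
# The base price and elasticity range are assumptions drawn from the estimates above.
BASE_PRICE = 2.50        # $/gallon, near the EIA forecast cited earlier
TAX_INCREASE = 0.10 * 5  # 10 cents per gallon per year, sustained for five years

price_change = TAX_INCREASE / BASE_PRICE  # a 20% increase in the pump price
for elasticity in (-0.2, -0.4):           # implied by a 2-4% drop per 10% price rise
    print(f"Long-run demand reduction: {-elasticity * price_change:.0%}")
# Prints roughly 4% and 8%, matching the range cited above.
```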

The combined effect of these two policies is consistent and reinforcing: Consumers respond to feebates and fuel prices in a way that aligns their desire for fuel economy with the requirements placed on manufacturers. These demand-side measures would encourage consumers to choose vehicles that deliver more miles per gallon, an approach that harnesses market forces to pull efficiency gains in vehicles toward improved fuel economy alongside the regulatory push provided by CAFE. A sustained demand for better fuel economy from consumers would also assuage automakers’ fears that they will be stuck with CAFE’s price tag.

Just as there is no silver bullet among the various technology options now available or just over the horizon, the controversy that has surrounded CAFE for two decades suggests that no single strategy is likely to satisfy the necessary political and economic constraints while sustaining long-term reductions in petroleum consumption and GHG emissions. Broadening the policy debate to include measures such as feebates and fuel taxes, which stimulate consumer demand for fuel economy through price signals, will enhance the prospects of achieving CAFE’s goal of 35 mpg by 2020 and further targets beyond. A coordinated set of fiscal and regulatory measures offers a promising way to align the interests of government, consumers, and industry. Achieving Congress’s aggressive target will not be easy, but overcoming these barriers is essential if the nation is to deliver on the worthy goal of reducing the fuel use and greenhouse gas emissions of cars and light trucks.

Recommended reading

J. Bordoff, P. J. Noel, “The Impact of Pay As You Drive Auto Insurance in California”, The Hamilton Project, The Brookings Institution, 2008. http://www.brookings.edu/papers/2008/07_payd_california_bordoffnoel.aspx.

Congressional Budget Office, “Effects of Gasoline Prices on Driving Behavior and Vehicle Markets,” Congress of the United States, January 2008.

U.S. Environmental Protection Agency, “Light-Duty Automotive Technology and Fuel Economy Trends: 1995 through 2007,” Office of Transportation and Air Quality, U.S. Environmental Protection Agency, 2007, http://www.epa.gov/otaq/fetrends.htm.

G. E. Metcalf, “A Green Employment Tax Swap: Using a Carbon Tax to Finance Payroll Tax Relief,” The Brookings Institution and World Resources Institute Policy Brief, June 2007, http://pdf.wri.org/Brookings-WRI_GreenTaxSwap.pdf.

U.S. Government Accountability Office, “Reforming Fuel Economy Standards Could Help Reduce Oil Consumption by Cars and Light Trucks, and Other Options Could Complement These Standards,” U.S. Government Accountability Office, GAO-07-921, 2007, http://www.gao.gov/new.items/d07921.pdf.

K. A. Small and K. Van Dender, “If Cars Were More Efficient, Would We Use Less Fuel?,” Access, Issue 31, University of California Transportation Center, 2007.

L. Cheah, C. Evans, A. Bandivadekar, J. Heywood, “Factor of Two: Halving the Fuel Consumption of New U.S. Automobiles by 2035,” Laboratory for Energy and Environment report, 2007, http://web.mit.edu/sloan-auto-lab/research/beforeh2/files/cheah_factorTwo.pdf.

National Surface Transportation Policy and Revenue Study Commission, “Transportation for Tomorrow: Report of the National Surface Transportation Policy and Revenue Study Commission,” 2008, http://www.transportationfortomorrow.org/final_report/.


Christopher Evans is a recent master’s graduate, Lynette Cheah is a Ph.D. student, Anup Bandivadekar is a recent Ph.D. graduate, and John Heywood is Sun Jae Professor of Mechanical Engineering at the Massachusetts Institute of Technology.

The High Road for U.S. Manufacturing

Manufacturing employment could be stabilized with more widespread use of advanced production methods. Government policy can play a key role.

The United States has been losing manufacturing jobs at a stunning rate: 16% of the jobs disappeared in just the three years between 2000 and 2003, with a further decline of almost 4% since 2003. In all, the nation has lost 4 million manufacturing jobs in a little more than eight years. This was some of the best-paying work in the country: The average manufacturing worker earns a weekly wage of $725, about 20% higher than the national average. Although manufacturing still pays more than average, wages have fallen relative to the rest of the economy, especially for non-college workers. Manufacturing also employs significant numbers of white-collar workers: One in five manufacturing employees is an engineer or manager.

Continued hemorrhaging is not inevitable. The United States could build a high-productivity, high-wage manufacturing sector that also contributes to meeting national goals such as combating climate change and rebuilding sagging infrastructure. The country can do this by adopting a “high-road” production system that harnesses everyone’s knowledge—from production workers to top executives—to produce high-quality innovative products.

Promoting high-road strategies will strengthen manufacturing and the U.S. economy as a whole. Through coordination with highly skilled workers and suppliers, firms achieve high rates of innovation, quality, and fast response to unexpected situations. The resulting high productivity allows firms to pay fair wages to workers and fair prices to suppliers while still making fair profits.


How can this be done? Start with more investment in education, training, and R&D. But education alone will not allow firms to overcome the market failures that block the adoption of efficient high-road practices. Nor will it reinvigorate income growth; even for college-educated men, median income has grown only 0.5% annually since 1973. Similarly, increased R&D spending by itself won’t get innovative products to market.

More is needed. Competing with low-wage nations is not as daunting as one might think. Research by the Michigan Manufacturing Technology Center suggests that most manufacturers have costs within 20% of their Chinese competitors. Reducing costs by this magnitude is well within the range achievable by high-road programs, and a key institution that can help bridge this gap is already in operation. The federal Manufacturing Extension Partnership (MEP) program teaches companies to develop new products, find new markets, and operate more efficiently—and it pays for itself in increased tax revenue from the firms it helps. This program will not save all the manufacturing at risk, but it will increase the viability of much of it, while increasing the productivity and wages of those who perform this important work.

The low-wage fallacy

Two main forces have caused U.S. manufacturing employment to fall: the growth of productivity during a period of stagnant demand and the offshoring of work to other nations, especially China. Economists differ as to the relative contribution of the two forces, but as Nobel Laureate Paul Krugman argued in the Brookings Papers on Economic Activity, there is growing consensus that both are important.

Two groups of policy analysts argue that nothing should be done about the stunning fall in manufacturing employment, but for opposite reasons. One group, exemplified by a 2007 study by Daniel Ikenson of the Cato Institute, argues that the employment decline is a sign of soaring productivity, and that manufacturing is actually “thriving.” Another view, exemplified by New York Times columnist Thomas Friedman, says it is simply impossible to compete with countries whose wages are so much lower than ours. It is inevitable, he argues, that manufacturing will go the way of agriculture, employing a tiny fraction of the workforce.

Neither of these views is correct. Although U.S. manufacturing is not thriving, with appropriate policies it could be. First, there are problems with the Cato study’s statistical analysis. Second, a significant number of firms are holding their own, and more could do so with appropriate policies.

The Cato study says that U.S. manufacturing output reached an all-time high in 2006, but it fails to subtract the value of imported inputs. When one looks at manufacturing value added, even Cato’s data show that output has fallen since 2000. And even these data, drawn from U.S. government sources, paint far too rosy a picture because U.S. statistical agencies do not track what happens to goods outside U.S. borders. This limitation (along with some complex statistical interactions) means that official statistics could be substantially overestimating growth in manufacturing output.

U.S. firms can and do compete with China and other low-wage countries, in part because direct labor costs are only 5 to 15% of total costs in most manufacturing. Many U.S. firms have costs not so different from those of Chinese firms. Therefore, it is not naïve to think that manufacturing can and should play an important role in the U.S. economy during the next several decades.

A 2006 study by the Performance Benchmarking Service (PBS) suggests that most small U.S. manufacturers are competitive with Chinese firms or could become so. Similarly, a 2004 McKinsey study found that in many segments of the automotive parts industry, the “China price” is only 20 to 30% lower than the U.S. price for a similar component. Note that neither this study nor the PBS study takes into account most of the hidden costs discussed below. Thus, low-wage countries are not necessarily low-cost countries. U.S. companies can continue to pay higher wages for direct labor and offset the added cost with greater capabilities—capabilities that lead to outcomes such as higher productivity, fewer quality problems, and fewer logistical problems.

Unfortunately, firms are handicapped in deciding where they should locate production because they often do not take into account the hidden costs of offshoring. A number of studies have found that most firms, even large multinationals, use standard accounting spreadsheets to make sourcing decisions. These techniques focus on accounting for direct labor costs, even though these are a small percentage of total cost, and ignore many other important costs.

Consider some of the hidden costs of having suppliers far away. First, top management is distracted. Setting up a supply chain in China and learning to communicate with suppliers requires many long trips and much time, time that could have been spent on introducing new products or processes at home. Second, there is increased risk from a long supply chain, especially with just-in-time inventory policies. Third, there are increased coordination and “handoff costs” between U.S. and foreign operations. More difficult communication among product design, engineering, and production hinders serendipitous discovery of new products and processes. Quality problems may be harder to solve because of geographic and cultural distance. Time to market may increase.

These costs can be substantial: One study by Fanuc, a robotics manufacturer, found that these hidden costs added 24% to the estimated costs of offshoring. The challenges of dealing with a far-flung supply base make it difficult for firms to innovate in ways that require linked design and production processes. For example, one Ohio firm had based its competitive advantage on its ability to quickly add features to its products (cup holders in riding mowers, to take a nonautomotive example). But when the firm shifted its sourcing to China, these last-minute changes wreaked havoc with suppliers, and the firm was forced to freeze its designs much earlier in the product development process.

Why would firms systematically ignore these costs? One reason is to convince outside investors that the company is serious about reducing costs by taking actions that are publicly observable, such as shutting factories in the United States and moving to countries with demonstrably lower wages. However, as the U.S.-China price differential shrinks because of exchange rate revaluations, higher Chinese wages, and increased transportation costs, firms (such as Caterpillar) are turning more to suppliers closer to home.

Many U.S. firms can close the remaining cost gap with low-wage competitors. Some firms are already doing so, and there is evidence that a few widely applicable and teachable policies account for much of their success.

For example, in the metal stamping industry, a firm at the 90th percentile has a value added per worker of $125,000—a large enough pie to pay workers well, invest in modern equipment and training, and earn a fair profit. In contrast, the median firm has a value added per worker of about $74,000 per year. This is barely enough to pay the typical compensation for a worker in this industry (about $40,000) and still have money left for equipment and profit. This differential in performance is typical: The PBS consistently finds that the top 10% of firms have one and a half times the productivity of the median firms, even within narrowly defined industries. Moreover, the same practices (designing new products, having low defect rates, and limiting employee turnover) explain much of the differential in productivity across a variety of industries.
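A back-of-the-envelope comparison of the surplus left after typical compensation, using the metal-stamping figures cited above:

```python
# Surplus per worker after typical compensation, using the figures cited above.
COMPENSATION = 40_000  # typical annual compensation in metal stamping

for label, value_added in (("90th-percentile firm", 125_000), ("median firm", 74_000)):
    surplus = value_added - COMPENSATION
    print(f"{label}: ${surplus:,} per worker for equipment, training, and profit")
# 90th-percentile firm: $85,000; median firm: $34,000
```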

Building high-productivity firms

U.S. firms cannot compete by imitating China by cutting wages and benefits. Instead, they should build on their strengths by drawing on the knowledge and skills of all workers. Many of this country’s high-productivity firms prospered by adopting a high-road production recipe in which firms, their employees, and suppliers work together to generate high productivity. Successful adoption of these policies requires that everyone in the value chain be willing and able to share knowledge. Involving workers and suppliers and using information technology (IT) are key ways of doing this.

Workers, particularly low-level workers, have much to contribute because they are close to the process: They interact with a machine all day, or they observe directly what frustrates consumers. For example, a study of steel-finishing lines by Casey Ichniowski, Kathryn Shaw, and Giovanna Prennushi found that firms with high-road practices had 6.7% more uptime (generating $2 million annually in net profits for a small plant) than did lines without them. The increase in uptime is due to communication and knowledge overlap. In a firm that does not use high-road practices, all communication may go through one person. In contrast, in high-road facilities, such as the one run by members of the United Steelworkers at Mittal Steel in Cleveland, workers solve problems more quickly because they communicate with each other directly in a structured way.

Involving suppliers is also important. Take, for example, the small supplier to Honda that had problems with some plastic parts. On an irregular basis, parts would emerge from molding machines with white spots along the edge of the product or with molds not completely filled in. These problems, which had long plagued the company, were not solved until Honda organized problem-solving groups that pooled the diverse capacities and experiences of people in the supplier’s plant. They quickly solved the problem: Molding machine operators noticed condensation dripping into the resin container from an exhaust fan in the ceiling; quality control technicians then saw that the condensation was creating cold particles in the resin; and skilled tradespeople designed a solution.

The continuing use of IT will be critical in improving manufacturing practice, but it will not necessarily boost productivity unless it is accompanied by a decentralization of production, a key element of high-road production. For example, a study by Ann Bartel, Casey Ichniowski, and Kathryn Shaw of valve producers found that more-efficient firms adopted advanced IT-enhanced equipment while also changing their product strategy (to produce more customized valves), their operations strategy (using their new IT capability to reduce setup times, run times, and inspection times), and human resource policies (employing workers with more problem-solving skills and using more teamwork). The success of the changes in one area depended on success in other areas. For example, customizing products would not have been profitable without the reduced time required to change over to making a new product, a reduction made possible both by the improved information from the IT and the improved use of the information by the empowered workers. Conversely, the investments in IT and training were less likely to pay off in firms that did not adopt the more complex product.

A key reason why the high road’s linked information flow is so powerful is that real production rarely takes place exactly according to plan. A manufacturing worker may be stereotyped as someone who pushes the same button every 20 seconds, day after day, year after year, but even in mature industries, this situation rarely occurs. For example, temperatures change, sending machines out of adjustment; customers change their orders; a supplier delivers defective parts; a new product is introduced. All of these contingencies mean that the perfect separation of brain work and hand work envisioned by efficiency guru Frederick Taylor does not occur.

In mass production, managers have often tried to minimize these contingencies as well as worker discretion to deal with them. In contrast, the Toyota production system, which accepts that the very local information that workers have is crucial to running and improving the process, sets up methods for the sustained and organized exploration of that information. Although these methods require substantial overlap of knowledge and expertise that may seem redundant, they produce substantial benefits.

For example, at Denso, a Japanese-owned supplier in Battle Creek, Michigan, someone approved a suggestion that a supplier be able to deliver parts in standard-size boxes, thus reducing packaging costs. Although these boxes were only two inches deeper than the previous boxes, the difference created a significant problem. Denso’s practice (following the just-in-time philosophy) was that a worker would deliver the boxes from the delivery truck directly to a rack above the line. The worker who assembled these parts had to reach up and over and down into the box an extra two inches 2,000 times per shift, which proved quite painful. The situation was corrected quickly, because of an overlap of knowledge. Denso had a policy that managers worked on the line once per quarter, and the purchasing manager had done that job in the past. Thus, the worker knew whom to contact about the problem (since she had worked next to him for a day), and the purchasing manager understood immediately why the extra two inches was a problem. He directed the supplier to go back to the previous containers. In a world of perfect information, Denso’s rotation policy would be a waste of managerial talent; but in a world in which much knowledge is tacit and things change quickly, the knowledge overlap allowed quick problem identification and resolution.

This high-road model of production provides an alternative to the current winner-take-all model, with corporate executive “stars” at the top supported by workers considered to be disposable at the bottom. In this view, there are no jobs that are inherently low-skill or dead-end.

Diffusing high-road practices

The practices discussed above are not new. In response to the Japanese competitive onslaught of the 1980s and 1990s, some U.S. manufacturers began to use them. But they have not been as widely adopted as they could be.

Markets alone fail to provide the proper incentives for firms to adopt high-road policies for two main reasons. First, the high road works only if a company adopts several practices at the same time. It must improve communication skills at all levels, create mechanisms for communicating new ideas across a supply chain’s levels and functions, and provide incentives to use them. Merely getting the prices right (adding taxes or subsidies to correct for market failures) is not sufficient to build these capabilities. Instead, it makes sense to provide technical assistance services to firms directly.

Second, many of the benefits of the high-road strategy accrue to workers, suppliers, and communities in the form of higher wages and more stable employment. Profit-maximizing firms do not take these benefits into account when deciding, for example, how much to invest in training. Many firms will provide less than the socially optimal amount of general training because they fear trained employees will be hired away by other firms.

For these reasons, there is a theoretical case that government services could outperform competitive markets in promoting high-road production. There is also practical evidence that this potential has in many cases been realized.


MEP has had significant success in helping manufacturers overcome many of these problems. Established in 1989 as part of the National Institute of Standards and Technology, the program was loosely modeled on the agricultural extension program. There are manufacturing extension centers in every state, providing technical and business assistance to small and medium-sized manufacturers. The centers help plants adopt advanced manufacturing technologies and quality-control programs, as well as develop new products. For example, the Wisconsin Manufacturing Extension Program has provided classes and consultants to help firms dramatically reduce their lead times (the time from order to delivery). A study by Joshua Whitford and Jonathan Zeitlin found that participants have cut their lead times by 50% and their inventory by 70%, improving their profit margins substantially while also improving performance for their customers.

Several in-depth studies have found that MEP pays for itself in increased tax revenue generated by the firms it serves. However, MEP remains a tiny program; its budget for fiscal year 2008 is only $90 million, less than $7 per manufacturing worker. This low level of funding makes it difficult for MEP to subsidize its services enough to capture their true social benefit. Currently, only marketing and facility costs are subsidized; this (very approximately) works out to be about a 33% rate of subsidy for first-time clients. Firms pay market rates for services actually delivered, meaning that they often buy services piecemeal when they have some extra cash. An increased rate of subsidy would allow the MEP to reach out with an integrated program to small firms that lack the capability to plan a coherent change effort. Such a program would enable MEP to teach skills such as brainstorming and problem-solving to a wider audience.

A market for private consultancy services to teach lean production has developed, but a 2004 study by Janet Kiehl and myself found that these consultants do not obviate the need for MEP. First, consultants tend to focus on areas that provide a quick cash return (such as one-time inventory reductions) rather than longer-term capability development (whose payoff would be harder for consultants to capture). Second, consultants are in practice complements to MEP, not substitutes, because MEP centers expand the market for the outside provision of expertise by providing evaluations of firms, exposing firms to new ideas, and providing referrals to vetted consultants.

MEP could be even more effective if its scope were expanded so that it could link together the disparate skills that firms must learn to master high-road production. Some of these programs are already under way, but only on a pilot basis. Below are some key priorities:

Organize training by value chain in addition to focusing on individual firms. In the Wisconsin example above, the training was developed and candidate firms identified in conjunction with six large customer firms, including John Deere and Harley Davidson. This supply chain modernization consortium trains supplier firms in general (rather than firm-specific) competencies and promotes mutual learning by harmonizing supplier certification and encouraging cross-supplier communication. This framework meets diverse supplier needs through multiple institutional supports. For example, having customers agree on training priorities and encouraging suppliers to apply what they learn in class helps suppliers retain a focus on long-term improvement rather than short-term firefighting.

Include training on manufacturing services, because a key part of what high-productivity manufacturing firms offer is not just production itself but also preproduction work (learning what customers want and designing the products) and postproduction work (delivering goods just in time and handling warranty issues efficiently). These additional activities are often more tied to the location of consumers, who (at least for now) are usually in the United States. These activities also benefit from close linkages within and between plants. For example, skilled production workers and tradespeople can ramp up the production of high-quality products more quickly, produce more variety on the same lines, reduce lead times for customized products, reduce defects, and so forth.


Develop new products and find new markets. This is an especially important type of manufacturing service. These skills help high-road firms avoid competing with low-wage commodity producers. They also enable firms to make use of the additional capacity freed up by “lean” initiatives. MAGNET (the MEP center in Northern Ohio) has had significant success in this area. It employs a staff of 15 (plus four subcontractors) that can take a small company through all steps of the product development process. The MAGNET staff draws on ideas from several industries and technologies to help develop a diverse array of products, such as a light fixture that can be easily removed from the ceiling to enable bulb-changing without a ladder and a HUMVEE engine that can be replaced in one hour, rather than the previous standard of two days.

Other possibilities include creating a national standard for evaluating the total cost of acquisition for components and teaching firms how to use energy more efficiently.

Creating discussion forums

High-road production techniques have been codified and shown to work. But this process of codification takes a long time. How will the next generation of programs be developed? In addition, the exact ingredients of the high-road recipe vary by industry and over time. Thus, it is useful to have forums for discussion so that industry participants can make coordinated investments, both subsidized and on their own. The forums could elicit the detailed information necessary to design good policies, thus avoiding government failure. However, organizing the forums is subject to market failures, because the benefits of coordinated investment are diffuse and thus hard for a profit-making entity to capture.

Federal and state governments could establish competitive grant programs in which industries compete for funding to establish such forums. Also, MEP should encourage cities and regions to apply to create such forums. A large literature, including case studies and statistical work, has found that firms concentrated in the same geographical area (including customers, suppliers, rivals, and even firms in unrelated industries) are more productive. The advantages of geographical proximity include the ability to pool trained workers and the ease of sharing new ideas. These advantages can be magnified if institutions are created that organize these exchanges, facilitating the communication and development of trust.

Several prototypes of these discussion forums already exist in a number of stages of the value chain, including innovation (Sematech), upstream supply [the Program for Automotive Renaissance in Tooling (PART) in Michigan], component supply (Accelerate in Wisconsin), and integrated skills training (the Wisconsin Regional Training Partnership and the Manufacturing Skills Standards Council).

PART includes communities, large automakers, first-tier suppliers, and small tool and die shops. Its membership reflects how much of manufacturing is organized today, with large firms outsourcing work to smaller suppliers, who remain geographically concentrated. The program, funded by the Mott Foundation, coordinates joint research among members and provides benchmarking and leadership development for small firms. It helps organize “coalitions” of small tooling firms that do joint marketing and develop standardized processes. The state of Michigan offers significant tax breaks for firms located in a Tooling Recovery Zone.

A bill to encourage the formation of discussion forums was introduced by U.S. Senators Sherrod Brown (D-OH) and Olympia Snowe (R-ME) in the summer of 2008. Called the Strengthening Employment Clusters to Organize Regional Success (SECTORS) Act, the legislation would provide grants of up to $2.5 million each for “partnerships that lead to collaborative planning, resource alignment, and training efforts across multiple firms” within an industry cluster.

Expanding MEP and creating discussion forums would cost about $300 million. I have calculated that if just half of the firms served increase their productivity by 20% as a result (the low estimate from Ronald Jarmin’s study of MEP’s effectiveness) and can therefore compete with China, the United States would save 50,000 jobs at a cost of only $6,000 per job, a cost that would be offset by increased tax revenue. In context, $300 million is a small sum: State and local governments currently spend $20 to $30 billion on tax abatements to lure firms to their jurisdictions, spending that generally does not improve productivity. Moreover, it is much cheaper to act now to preserve the manufacturing capacity we have than to try to reconstruct it once it is gone.

This $300 million expenditure can also be compared with that for agricultural extension: $430 million in 2006 for an industry that employs 1.9% of the workforce and produces 0.7% of gross domestic product (GDP). In contrast, manufacturing is 10% of the workforce and 14% of GDP.

Paving the high road

A number of observers have noted the fragility of high-road production in the United States. Cooperation, especially between labor and management, may flourish for a while but then collapse, or cooperation may be limited because management wants to keep its options open regarding the future of the facility. Low-road options (either in the United States or in low-wage nations overseas) remain attractive to firms, even if they impose costs on society. After a few failures, unions often become reluctant to trust again. Similar problems plague customer/supplier relations.

Therefore, we must look at broader economic policies that affect the stability of the high road in manufacturing and in other sectors. These policies can be divided into those that “pave the high road” (reduce costs for firms that choose this path) and those that “block the low road” (increase costs for firms that choose the low road, thus reducing their ability to undercut more socially responsible competitors).

Some policies that would pave the high road are universal health care, increased funding for innovation, and investments in training. Policies that would block the low road include adding protections for workers and the environment to trade agreements and strengthening safety regulations for workplaces and consumer products. Implementing these policies would require large investments but would benefit the entire economy, not just manufacturing.

Coordinated public effort to develop productive capabilities in the United States is an effective way of confronting the twin problems of shrinking manufacturing and stagnant income for most U.S. workers. With the right policies, the United States can have a revitalized manufacturing sector that brings with it good jobs, rapid innovation, and the capacity to pursue national goals.

Rather than abandon manufacturing, the nation can transform it into an example for the rest of the economy. The rationale for high-road policies is applicable to most industries in the United States. The policies outlined here could ensure that all parts of the economy remain strong and that all Americans participate in a productive way and reap the rewards of their efforts.

Recommended reading

AFL-CIO, Manufacturing Matters to the U.S. (Washington, DC: Working for America Institute, AFL-CIO, 2007).

Susan Helper and Janet Kiehl, “Developing Supplier Capabilities: Market and Non-Market Approaches,” Industry & Innovation 11, no. 1-2 (2004): 89–107.

Susan Helper, Renewing U.S. Manufacturing: Promoting a High-Road Strategy (Washington, DC: Economic Policy Institute, 2008) http://www.sharedprosperity.org/bp212/bp212.pdf)

Casey Ichniowski and Kathryn Shaw, “Beyond Incentive Pay: Insiders’ Estimates of the Value of Complementary Human Resource Management Practices,” Journal of Economic Perspectives 17, no. 1 (2003): 155–180.

Ronald S. Jarmin, “Evaluating the Impact of Manufacturing Extension on Productivity Growth,” Journal of Policy Analysis and Management 18, no. 1 (1999): 99–119.

Daniel Luria, Matt Vidal, and Howard Wial with Joel Rogers, “Full-Utilization Learning Lean” in Component Manufacturing: A New Industrial Model for Mature Regions, and Labor’s Stake in Its Success (Sloan Industry Studies Working Paper WP-2006-3, 2006) (http://www.cows.org/pdf/rp-amp_wai_final.pdf).

The Manufacturing Institute, the National Association of Manufacturers, and Deloitte Consulting LLP, 2005 Skills Gap Report – A Survey of the American Manufacturing Workforce (November 2005).

John Paul MacDuffie and Susan Helper, Collaboration in Supply Chains With and Without Trust (New York: Oxford University Press, 2005).

Rajan Suri, “Manufacturers Can Compete vs. Low-Wage Countries,” The Business Journal, February 13, 2004.

Josh Whitford and Jonathan Zeitlin, “Governing Decentralized Production: Institutions, Public Policy, and the Prospects for Inter-firm Collaboration in U.S. Manufacturing,” Industry & Innovation 11, no. 1 (2004): 11–14.

James Womack, Daniel Jones, and Daniel Roos, The Machine That Changed the World: The Story of Lean Production (New York: Harper Perennial, 1991).


Susan Helper is the AT&T Professor of Economics at the Weatherhead School of Management, Case Western Reserve University.

From the Hill – Winter 2009

Research funding flat in 2009 as budget stalls

Fiscal year (FY) 2009 began on October 1 with final budget decisions for most federal agencies postponed until at least January 2009. To keep the government operating, lawmakers combined three final appropriations bills into a continuing resolution (CR) that extends funding for all programs in the remaining unsigned 2009 appropriations bills at 2008 funding levels through March 6. President Bush signed the measure into law on September 30.

The CR contains final FY 2009 appropriations for the Departments of Defense (DOD), Homeland Security (DHS), and Veterans Affairs (VA); all three will receive substantial increases in their R&D portfolios. Other federal agencies covered by the remaining appropriations bills will be operating temporarily at or below 2008 funding levels for several months. The CR excludes from its FY 2008 base most supplemental appropriations. Thus, agencies that received additional funds in the mid-year supplemental funding bill, including the National Institutes of Health, the National Aeronautics and Space Administration (NASA), the National Science Foundation (NSF), and the Department of Energy’s (DOE’s) Office of Science, will see a decrease under the CR. The CR, however, does allow the Food and Drug Administration to count the $150 million FY 2008 supplemental it received as part of its base.

The CR provides $2.5 billion for the Pell Grant program, which gives aid to college students, and $5.1 billion for low-income heating assistance. A $25 billion loan program for the auto industry is also part of the CR, as is $22.9 billion in disaster relief funding.

Overall, the federal government enters FY 2009 with an R&D portfolio of $147.3 billion, an increase of $2.9 billion or 2%, due entirely to an increase for DOD’s R&D, which will rise by $3 billion or 3.6% to $86.1 billion in 2009. The flat-funding formula of the CR results in a $61.2 billion total for non-defense R&D at the start of FY 2009, a cut of 0.1% as compared to 2008.

Excluding development funds, the federal investment in basic and applied research could decline for the fifth year in a row in 2009, after adjusting for inflation, if the CR’s funding levels hold for the entire year.

The flat funding levels of the CR put requested increases for the three agencies in the Bush administration’s American Competitiveness Initiative on hold. Although congressional appropriators had endorsed and even added to large requested increases for NSF, DOE’s Office of Science, and the Department of Commerce’s National Institute of Standards and Technology laboratories in early versions of the 2009 appropriations bills, the next Congress may have to start over again. In the meantime, the three key physical sciences agencies begin FY 2009 with funding levels at or slightly below those of 2008.

NASA funding boost authorized

Emphasizing the important role that a balanced and adequately funded science program at NASA plays in the nation’s innovation agenda, Congress in October approved a bill with broad bipartisan support that could significantly increase NASA’s funding. The National Aeronautics and Space Administration Authorization Act of 2008 authorizes $20.2 billion for FY 2009, far more than the $17.1 billion appropriated to the agency in the FY 2009 continuing resolution.

The bill authorizes an 11% increase above the president’s request in scientific research and strengthens NASA’s earth science, space science, and aeronautics programs. It contains provisions on scientific integrity, expressing “the sense of Congress that NASA should not dilute, distort, suppress, or impede scientific research or the dissemination thereof.” It also includes a plan for the continuation of the Landsat remote-sensing satellite program and reauthorizes the Glory mission to examine the effects of aerosols and solar energy on Earth’s climate.

Congressional Action on R&D in the FY 2009 Budget as of September 30, 2008 (budget authority in millions of dollars)

Columns (left to right): FY 2008 estimate; FY 2009 request; FY 2009 action by Congress; change from the request (amount, percent); change from FY 2008 (amount, percent).

Defense (military) * 79,347 81,067 82,379 1,311 1.6% 3,032 3.8%

(“S&T” 6.1,6.2,6.3 + Medical) * 13,456 11,669 14,338 2,669 22.9% 882 6.6%

(All Other DOD R&D) * 65,891 69,398 68,040 -1,358 -2.0% 2,149 3.3%

National Aeronautics & Space Admin. 12,251 12,780 12,188 -592 -4.6% -63 -0.5%

Energy 9,724 10,519 9,661 -858 -8.2% -63 -0.6%

(Office of Science) 3,637 4,314 3,574 -740 -17.1% -63 -1.7%

(Energy R&D) 2,369 2,380 2,369 -11 -0.5% 0 0.0%

(Atomic Energy Defense R&D) 3,718 3,825 3,718 -107 -2.8% 0 0.0%

Health and Human Services 29,966 29,973 29,816 -157 -0.5% -150 -0.5%

(National Institutes of Health) 28,826 28,666 28,676 10 0.0% -150 -0.5%

(All Other HHS R&D) 1,140 1,307 1,140 -167 -12.8% 0 0.0%

National Science Foundation 4,501 5,175 4,479 -696 -13.5% -23 -0.5%

Agriculture 2,359 1,955 2,412 457 23.4% 53 2.2%

Homeland Security * 992 1,033 1,085 52 5.0% 93 9.4%

Interior 676 618 676 59 9.5% 0 0.0%

(U.S. Geological Survey) 586 546 586 41 7.5% 0 0.0%

Transportation 820 902 820 -81 -9.0% 0 0.0%

Environmental Protection Agency 548 541 548 7 1.3% 0 0.0%

Commerce 1,138 1,152 1,138 -14 -1.2% 0 0.0%

(NOAA) 581 576 581 5 0.9% 0 0.0%

(NIST) 521 546 521 -25 -4.5% 0 0.0%

Education 321 324 321 -3 -0.9% 0 0.0%

Agency for Int’l Development 223 223 223 0 0.0% 0 0.0%

Department of Veterans Affairs * 891 884 952 68 7.7% 61 6.8%

Nuclear Regulatory Commission 71 77 71 -6 -7.8% 0 0.0%

Smithsonian 203 222 203 -19 -8.6% 0 0.0%

All Other 322 299 322 23 7.7% 0 0.0%

TOTAL R&D * 144,354 147,743 147,295 -449 -0.3% 2,941 2.0%

Defense R&D * 83,065 84,892 86,097 1,204 1.4% 3,032 3.6%

Nondefense R&D * 61,288 62,851 61,198 -1,653 -2.6% -91 -0.1%

Basic Research * 28,846 29,656 28,952 -704 -2.4% 106 0.4%

Applied Research * 29,218 27,626 29,281 1,655 6.0% 63 0.2%

Total Research * 58,064 57,282 58,233 951 1.7% 169 0.3%

Development * 81,814 85,745 84,605 -1,140 -1.3% 2,791 3.4%

R&D Facilities and Capital Equipment * 4,476 4,716 4,457 -260 -5.5% -19 -0.4%

AAAS estimates of R&D in FY 2009 appropriations bills. Includes conduct of R&D and R&D facilities. All figures are rounded to the nearest million. Changes calculated from unrounded figures. FY 2008 figures have been adjusted to reflect supplementals enacted in Public Law 110-252 and contained in the FY 2009 CR. These figures have been revised since the publication of AAAS Report XXXIII: R&D FY 2009.

The bill calls for continuing NASA’s approach toward completing the International Space Station and making the transition from the Space Shuttle to the new Constellation launch system. The legislation authorizes the agency to fly two additional Shuttle missions to service the space station and a third flight to launch a DOE experiment to study charged particles in cosmic rays.

The Senate version of the bill added language that directs NASA to suspend, until April 30, 2009, any activities that could preclude operation of the Space Shuttle after 2010, giving the incoming administration an opportunity to reassess the shuttle’s planned retirement and, if it chooses, redirect the agency.

Climate change proposals multiply

As the 110th Congress wrapped up, legislators were already looking ahead to the next session, releasing drafts of climate change proposals they hope to advance. The measures reflect growing interest in Congress in addressing the broad spectrum of concerns about climate change legislation so that a successful compromise can be reached.

On October 7, House Committee on Energy and Commerce Chair John Dingell (D-MI) and Subcommittee on Energy and Air Quality Chair Rick Boucher (D-VA) released a proposal for a cap-and-trade system to control U.S. greenhouse emissions. The bill would cap emissions at 80% below 2005 levels by 2050, which is more aggressive than the bill proposed by Sens. Joe Lieberman (I-CT) and John Warner (R-VA) that garnered a great deal of attention earlier in 2008.

Many of the bill’s provisions are similar to those of the Lieberman-Warner bill in terms of the mechanisms by which emissions would be controlled. This includes the creation of a market-based system of emissions permits that can be traded from one firm to another in order to remain within a cap set by the government. The cap would decline each year until reaching its ultimate reduction goal in 2050. The bill would give control of the carbon-permit allocation process to the Environmental Protection Agency (EPA), although the bill does not settle on a means of determining permit price. Instead, it offers four possible scenarios, ranging from initially offering the permits for free to limit burdens on covered firms to a proposal to use the allowance values entirely as rebates for consumers. Regardless of the option chosen, the proposal would invest in energy efficiency and clean energy technology, return value from permits back to low-income consumers, and auction all permits after 2026.

The bill would permit the purchasing of EPA-approved domestic and international carbon offset credits, although firms would be limited to offsetting only 5% of their emissions in the first five years, eventually increasing to 35% in 2024. The Lieberman-Warner bill would have permitted 15% of emissions to be offset with domestic carbon offset credits and up to 5% with international credits, but not until after 2012.

In addition to the Dingell-Boucher bill, an outline of principles for climate change legislation, based on adhering to greenhouse gas reductions that will limit global temperature rise to two degrees Celsius, was sent to House Speaker Nancy Pelosi (D-CA) by a group of 152 representatives, led by Reps. Henry Waxman (D-CA), Jay Inslee (D-WA), and Edward Markey (D-MA). These recommendations include a cap on carbon emissions of 80% below 1990 levels by 2050, more rapid policy responses to climate science, better international cooperation, investment in clean energy technology, and economic measures to protect consumers and domestic industries.

Markey, the chairman of the Select Committee on Energy Independence and Global Warming, released his own climate change bill earlier in 2008. In an October 7 press release welcoming the Dingell-Boucher bill, he said, “The draft legislation lays out a range of options for structuring a cap-and-trade system that are likely to trigger a vigorous and healthy debate about how best to reduce global warming pollution.”

In the Senate, a group of 16 Democrats known as the “Gang of 16” is attempting to craft a new bill that will address concerns that arose during the debate on the Lieberman-Warner bill, namely the distribution process for emissions allowances and the desire to use low-carbon energy development to stimulate job creation.

The proposal would focus on critical areas of concern, namely examining carbon offsets, containing the costs of abatement, and protecting consumers. Provisions addressing these issues include creating incentives for farmers to produce marketable offset credits, investing in low-carbon energy technology such as clean coal, providing flexibility for businesses if new technology is not available or is too expensive, and providing energy assistance to low-income families in order to offset any rise in energy costs that results from the legislation.

Big boost in energy R&D funding supported

Although most of the energy policy debate in Congress during the fall of 2008 centered on expanding offshore drilling and renewable tax incentives, both of which were approved in the waning days of the 110th Congress, legislators also examined the role of energy R&D in advancing the nation’s energy independence.

At a September 10 hearing of the House Select Committee on Energy Independence and Global Warming, witnesses testified about the importance of having a broad portfolio of energy R&D programs to meet the challenges of energy security, climate change, and U.S. competitiveness. Committee Chairman Ed Markey (D-MA) noted that during the past 25 years, energy R&D has fallen from 10% of total R&D spending to only 2%, an amount Rep. Jay Inslee (D-WA) called “pathetic.” The witnesses all agreed on the need for additional funds for energy R&D, with estimates ranging from 3 to 10 times as much as current funding.

Susan Hockfield, president of the Massachusetts Institute of Technology, testified about the importance of energy investments in motivating students. “The students’ interest is absolutely deafening,” she said, “and one of my fears is that if we don’t fund the kind of research that will fuel innovation, these very brilliant students will see that a bright future actually lies elsewhere.”

University of Michigan Vice President of Research Stephen Forrest discussed the willingness of the university community to join industry and government to discover solutions to address energy security. He called on Congress to fully fund the Advanced Research Projects Agency–Energy, a program targeting high-risk energy research authorized in the America COMPETES Act. Daniel Kammen of the University of California at Berkeley explained the role of federal R&D in sparking private-sector investment, stating that government funding is necessary to “prime the pump” before industry will increase R&D.

In addition to hearing testimony, legislators have received input from a variety of sources. The Council on Competitiveness released a “100 Day Action Plan” for the next administration. It calls for increased R&D investment and the creation of a $200 billion National Clean Energy Bank. More than 70 universities and scientific societies under the umbrella of the Energy Science Coalition released a petition to presidential candidates highlighting the importance of basic energy research in addressing energy issues. At a press conference hosted by the Science Coalition and the Task Force on the Future of American Innovation, leaders from universities, industry, and national labs described the role energy R&D can play in achieving U.S. energy independence.

During the presidential campaign, president-elect Barack Obama released his New Energy for America plan that calls for a $150 billion federal investment during the next 10 years in clean energy research, development, and deployment. The plan includes basic research to develop alternative fuels and chemicals, new vehicle technology, and next-generation nuclear facilities.

Despite calls from both parties and chambers of Congress, increased levels of funding for energy research are not included in the continuing resolution approved in October. However, Congress did address several energy issues in provisions in the Emergency Economic Stabilization Act of 2008. It includes extensions of the investment tax credit for solar energy and production tax credits for wind, solar, biomass, and hydropower, and expands the residential energy-efficient property credit. It also includes tax credits for oil shale, tar sands, and coal-to-liquid fuels, areas that may advance energy security and economic competitiveness but are at odds with addressing climate change, illustrating some of the difficulties in meeting these intertwined challenges.


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science (www.aaas.org/spp) in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

Practical Pieces of the Energy Puzzle: Low Carbon Fuel Standards

The most direct and effective policy for transitioning to low-carbon alternative transportation fuels is to spur innovation with a comprehensive performance standard for upstream fuel producers.

When it comes to energy security and climate change concerns, transportation is the principal culprit. It consumes half the oil used in the world and accounts for almost one-fourth of all greenhouse gas (GHG) emissions. In the United States, it plays an even larger role, consuming two-thirds of the oil and causing about one-third of the GHG emissions. Vehicles, planes, and ships remain almost entirely dependent on petroleum. Efforts to replace petroleum—usually for energy security reasons but also to reduce local air pollution—have recurred through history, with little success.

The United States and the world have caromed from one alternative to another, some gaining more attention than others, but each one faltering. These included methanol, compressed and liquefied natural gas, battery electric vehicles, coal liquids, and hydrogen. In the United States, the fuel du jour four years ago was hydrogen; two years ago it was corn ethanol; now it is electricity for use in plug-in hybrid electric vehicles. Worldwide, the only non-petroleum fuels that have gained significant market share are sugar ethanol in Brazil and corn ethanol in the United States. With the exception of sugar ethanol in Brazil, petroleum’s dominance has never been seriously threatened anywhere since taking root nearly a century ago.

The fuel du jour phenomenon has much to do with oil market failures, overblown promises, the power of incumbents, and the short attention spans of government, the mass media, and the public. Alternatives emerge when oil prices are high but wither when prices fall. They emerge when public attention is focused on the environmental shortcomings of petroleum fuels but dissipate when oil and auto companies marshal their considerable resources to improve their environmental performance. When President George H. W. Bush advocated methanol fuel in 1989 as a way of reducing vehicular pollution, oil companies responded with cleaner-burning reformulated gasoline and then with cleaner diesel fuel. And when state air regulators in California and federal officials in Washington adopted aggressive emission standards for gasoline and diesel engines, vehicle manufacturers diverted resources to improve engine combustion and emission-control technologies.

The fuel du jour phenomenon also has much to do with the ad hoc approach of governments to petroleum substitution. The federal government provided loan and purchase guarantees for coal and oil shale “synfuels” in the early 1980s when oil prices were high, passed a law in 1988 offering fuel-economy credits for flexible-fuel cars, launched the Advanced Battery Consortium and the Partnership for a New Generation of Vehicles in the early 1990s to accelerate development of advanced vehicles, promoted hydrogen cars in the early years of this decade, provided tens of billions of dollars in federal and state subsidies for corn ethanol, and now is providing incentives for plug-in hybrids.

State governments also pursued a variety of options, including California’s purchases of methanol cars in the 1980s and imposition of a zero-emission vehicle requirement in 1990. These many alternative-fuel initiatives failed to move the country away from petroleum-based transportation. The explanation has much to do with government prescribing specific solutions and not anticipating shifts in fuel markets. More durable policies are needed that do not depend on government picking winners. The needed policies should be performance-based, stimulate innovation, and reduce consumer and industry risk and uncertainty. A more coherent and effective approach is needed to orchestrate the transition away from oil.

Policy strategy

The path to reducing oil dependence and decarbonizing transportation involves three related initiatives: improving vehicle efficiency, reducing vehicle use, and decarbonizing fuels. Here we focus on decarbonizing fuels, which has the additional benefit of reducing oil use.

To succeed, any policy approach must adhere to three principles: It must inspire industry to pursue innovation aggressively; it must be flexible and performance-based so that industry, not government, picks the winners; and it should take into account all GHG emissions associated with the production, distribution, and use of the fuel, from the source to the vehicle.

We believe that the low carbon fuel standard (LCFS) approach that is being implemented in California provides a model for a national policy that can have a significant near-term effect on carbon emissions and petroleum use. The LCFS is a performance standard that is based on the total amount of carbon emitted per unit of fuel energy. Critically, the standard includes all the carbon emitted in the production, transportation, and use of the fuel. Although upstream emissions account for only about 20% of total GHG emissions from petroleum, they represent almost the total lifecycle emissions for fuels such as biofuels, electricity, and hydrogen. Upstream emissions from extraction, production, and refining also comprise a large percentage of total emissions for the very heavy oils and tar sands that oil companies are using to supplement dwindling sources of conventional crude oil. The LCFS is the first major public initiative to codify lifecycle concepts into law, an innovation that must increasingly be part of emission-reduction policies if we are to control the total carbon concentration in the atmosphere.

To simplify implementation, the LCFS focuses as far upstream as possible, on the relatively small number of oil refiners and importers. Each company is assigned a maximum level of GHG emissions per unit of fuel energy it produces. The level declines each year to put the country on a path to reducing total emissions. To maximize flexibility and innovation, the LCFS allows for the trading of emission credits among fuel suppliers. Oil refiners could, for instance, sell biofuels or buy credits from biofuel producers, or they could buy credits from an electric utility that sells power to electric vehicles. Those companies that are most innovative and best able to produce low-cost, low-carbon alternative fuels would thrive. The result is that overall emissions are lowered at the lowest cost for everyone.
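To make the credit arithmetic concrete, the short Python sketch below shows one way such accounting could work. The carbon-intensity values, units, and function names are illustrative assumptions of our own, not the figures in California’s rule; the point is simply that credits scale with how far a fuel’s lifecycle intensity falls below the declining standard.

# Illustrative sketch of LCFS-style credit accounting (not the official rule).
# Carbon intensity (CI) is expressed here in grams of CO2-equivalent per
# megajoule of fuel energy; the values used below are hypothetical.

def lcfs_credits(ci_standard: float, ci_fuel: float, energy_mj: float) -> float:
    """Credits (positive) or deficits (negative) in metric tons of CO2e.

    A fuel cleaner than the standard generates credits proportional to the
    energy sold; a dirtier fuel generates deficits that must be covered by
    purchased or banked credits.
    """
    grams = (ci_standard - ci_fuel) * energy_mj
    return grams / 1_000_000  # grams -> metric tons

# Hypothetical example: a refiner sells 1 billion MJ of gasoline at a CI of
# 96 gCO2e/MJ against a standard of 90, plus 100 million MJ of a low-carbon
# biofuel at 30 gCO2e/MJ.
deficit = lcfs_credits(90.0, 96.0, 1e9)  # about -6,000 tons
credit = lcfs_credits(90.0, 30.0, 1e8)   # about +6,000 tons
print(deficit + credit)                  # near zero: the biofuel sales offset the deficit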

A clear advantage of this approach is that it does not have to be revised every time a new alternative appears. Any cost-effective energy source that moves vehicles with lower GHG emissions can benefit from the LCFS. The combination of regulatory and market mechanisms makes the LCFS more politically acceptable and more durable than a strictly regulatory approach.


The California Air Resources Board adopted the LCFS in concept in June 2007 and began a rulemaking process, with the final rule scheduled for adoption in March 2009 and implementation in January 2010. California’s LCFS proposal calls for at least a 10% reduction in emissions per unit of energy by 2020.

The European Union has, in parallel, unveiled a proposal similar to California’s LCFS, and the Canadian provinces of British Columbia and Ontario, as well as several states in the Northeast, are considering similar approaches. The proposed 2007 Lieberman-Warner Climate Security Act (S. 2191) included an LCFS program.

Why not a renewable fuel standard?

To appreciate the wisdom of the LCFS approach, compare it to the alternatives. Congress adopted a renewable fuels standard (RFS) in 2005 and strengthened it in December 2007 as part of the Energy Independence and Security Act (EISA). It requires that 36 billion gallons of biofuels be sold annually by 2022, of which 21 billion gallons must be “advanced” biofuels and the other 15 billion gallons can be corn ethanol. The advanced biofuels are required to achieve at least a 50% reduction from baseline lifecycle GHG emissions, with a subcategory required to meet a 60% reduction target. These reduction targets are based on lifecycle emissions, including emissions from indirect land use.

Although the RFS is a step in the right direction, its volumetric mandate has three shortcomings. First, it targets only biofuels and not other alternatives. Second, setting targets of 50% and 60% GHG reductions is an admirable but clumsy approach. It forces biofuels into a small number of fixed categories and thereby stifles innovation. Third, it exempts existing and planned corn ethanol production plants from the GHG requirements, essentially endorsing a massive expansion of corn ethanol. This rapid expansion not only stresses food markets and requires massive amounts of water, but also pulls large quantities of land into corn production. The ultimate effect of increasing corn ethanol production will be the diversion of prairie lands, pastures, rainforests, and other lands into intensive agricultural production, likely resulting in higher overall GHG emissions than from an equivalent amount of gasoline and diesel fuels.

Other strategies that have won attention are a carbon tax and a cap and trade program. Economists argue that carbon taxes would be the more economically efficient way to introduce low-carbon alternative fuels. Former Federal Reserve chairman Alan Greenspan, car companies, and economists on the left and the right all have supported carbon and fuel taxes as the principal cure for both oil insecurity and climate change. But carbon taxes have shortcomings. Not only do they attract political opposition and public ire, but they are also of limited effectiveness. Taxing energy sources according to how much carbon dioxide (CO2) they emit certainly sounds sensible and straightforward, but this strategy is not effective in all situations. A carbon tax could work well with electricity generation because electricity suppliers can choose among a wide variety of commercially available low-carbon energy sources, such as nuclear power, wind energy, natural gas, or even coal with carbon capture and sequestration. A tax of as little as $25 per ton of CO2 would increase the retail price of electricity made from coal by 17%, which would be enough to motivate electricity producers to seek lower-carbon alternatives. The result would be innovation, change, and decarbonization. Carbon taxes promise to be effective in transforming the electricity industry.

But transportation is a different story. Producers and consumers would barely respond to even a $50-a-ton tax, which is well above what U.S. politicians have been considering. Oil producers wouldn’t respond because they have become almost completely dependent on petroleum to supply transportation fuels and can’t easily or quickly find or develop low-carbon alternatives. Equally important, a transition away from oil depends on automakers and drivers also changing their behavior. A carbon tax of $50 per ton would raise the price of gasoline by only about 45 cents a gallon. This wouldn’t induce drivers to switch to low-carbon alternative fuels. In fact, it would barely reduce their consumption, especially when price swings of more than this amount have become a routine occurrence.
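The arithmetic behind that figure is easy to check. Assuming roughly 8.9 kilograms of CO2 released per gallon of gasoline burned, a commonly cited approximation, a $50-per-ton tax works out to about 45 cents per gallon:

# Rough check of the gasoline figure cited above (assumed emission factor).
CO2_PER_GALLON_KG = 8.9  # approximate kg of CO2 from burning one gallon of gasoline
TAX_PER_TON = 50.0       # dollars per metric ton of CO2

tax_per_gallon = TAX_PER_TON * CO2_PER_GALLON_KG / 1000.0  # kg -> metric tons
print(f"${tax_per_gallon:.2f} per gallon")                 # about $0.45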

Carbon cap and trade programs suffer the same shortcomings as carbon taxes. This policy, as usually conceived, involves placing a cap on the CO2 emissions of large industrial sources and granting or selling emission allowances to individual companies for use in meeting their capped requirements. Emission allowances, once awarded, can be bought and sold. In the transportation sector, the cap would be placed on oil refineries and would require them to reduce CO2 emissions associated with the fuels. The refineries would be able to trade credits among themselves and with others. As the cap is tightened over time, pressure would build to improve the efficiency of refineries and introduce low-carbon fuels. Refiners are likely to increase the prices of gasoline and diesel fuel to subsidize low-carbon fuels, creating a market signal for consumers to drive less and for the auto companies to offer more energy-efficient vehicles. But unless the cap was very stringent, this signal would be relatively weak for the transportation sector.

Economists might characterize the LCFS approach as second best because it is not as efficient as a carbon tax or a cap and trade program. But given the huge barriers to alternative fuels and the limited impact of increased taxes and prices on transportation fuel demand, the LCFS is the most practical way to begin the transition to alternative fuels. Someday, when advanced biofuels and electric and hydrogen vehicles are commercially viable options, cap and trade and carbon taxes will become effective policies for the transport sector. But until then, more direct forcing mechanisms, such as an LCFS for refiners, are needed to stimulate innovation and overcome the many barriers to change.

The LCFS cannot stand alone, however. It must be coupled with other policies, including efficiency and GHG emission standards for new cars, infrastructure to support alternative fuel penetration, and incentives to reduce driving and promote transportation alternatives. That is California’s approach, and it would be an effective national policy for the United States and other countries as well.

Designing an LCFS

In the California case, the proposed 10% reduction in life-cycle GHG emissions by 2020 is imposed on all transport fuel providers, including refiners, blenders, producers, and importers. Aviation and certain maritime fuels are excluded because California either does not have authority over them or including these fuels presents logistical challenges.

There are several ways that regulated parties can comply with the LCFS. In the California model, three compliance strategies are available. First, refiners can blend low-GHG fuels such as biofuels made from cellulose or wastes into gasoline and diesel. Second, refiners can buy low-GHG fuels such as natural gas, biofuels, electricity, and hydrogen. Third, they can buy credits from other refiners or use banked credits from previous years. In the EU’s design, producers may also gain credit by improving energy efficiency at oil refineries or by reducing upstream CO2 emissions from petroleum and natural gas production, for instance by eliminating flaring.

LCFS is simple in concept, but implementation involves many details. The LCFS requires a system to record and verify the GHG emissions for each step of fuel production and distribution. California is using a “default and opt-in” approach, borrowed from a voluntary system developed in the United Kingdom, whereby fuels are assigned a conservative default value. In other words, the regulations estimate the carbon emissions associated with each fuel. The fuel producer can accept that estimate or provide evidence that its production system results in significantly lower emissions. This places the burden of measuring and certifying GHG emissions on the oil distributors, biofuel producers, and electricity generators.
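A rough sketch of how such a default-and-opt-in lookup might be implemented is shown below; the fuel names and default intensity values are placeholders of our own choosing, not regulatory numbers.

# Illustrative sketch of a "default and opt-in" carbon-intensity lookup.
from typing import Optional

DEFAULT_CI = {                 # conservative default values, gCO2e per MJ (hypothetical)
    "gasoline": 96.0,
    "corn_ethanol": 75.0,
    "cellulosic_ethanol": 30.0,
}

def carbon_intensity(fuel: str, certified_ci: Optional[float] = None) -> float:
    # Producers accept the conservative default unless they opt in with a
    # verified, certified value showing their pathway emits less.
    default = DEFAULT_CI[fuel]
    if certified_ci is not None and certified_ci < default:
        return certified_ci
    return default

print(carbon_intensity("corn_ethanol"))        # 75.0, the default
print(carbon_intensity("corn_ethanol", 62.0))  # 62.0, after opting in with certified data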

A major challenge for the LCFS is avoidance of “shuffling” or “leakage.” Companies will seek the easiest way of responding to the new LCFS requirements. That might involve shuffling production and sales in ways that meet the requirements of the LCFS but do not actually result in any net change. For instance, a producer of low-GHG cellulosic biofuels in Iowa could divert its fuel to California markets and send its high-carbon corn ethanol elsewhere. The same could happen with gasoline made from tar sands and conventional oil. Environmental regulators will need to account for this shuffling in their rule making. This problem is mitigated and eventually disappears as more states and nations adopt the same regulatory standards and requirements.


Perhaps the most controversial and challenging issue is indirect land-use changes. When biofuel production increases, land is diverted from agriculture to energy production. The displaced agricultural production is replaced elsewhere, bringing new land into intensive agricultural production. By definition, this newly farmed land was previously used for less-intensive purposes. It might have been pasture, wetlands, or perhaps even rainforest. Because these lands sequester a vast amount of carbon in the form of underground and aboveground roots and vegetation—effectively storing more than twice the carbon contained in the entire atmosphere—any change in land use can have a large effect on carbon releases.

If biofuel production does not result in land-use changes—for instance when fuel is made from crop and forestry residues—then the indirect land-use effects are small or even zero. But if rainforests are destroyed or vegetation burned, then the carbon releases are huge. In the more extreme cases, these land-use shifts can result in each new gallon of biofuel releasing several times as much carbon as the petroleum diesel fuel it is replacing. In the case of corn ethanol, preliminary analyses suggest that ramping up to meet federal RFS targets will add about 40% more GHG emissions per unit of energy. Cellulosic fuels would have a much smaller effect, and waste biomass, such as crop and forestry residues and urban waste, would have no effect.

The problem is that scientific studies have not yet adequately quantified the indirect land-use effect. One could ignore the carbon and other GHG releases associated with land diversion in calculating lifecycle GHG emissions, but doing so imputes a value of zero to this effect. That is clearly wrong and inappropriate. The prudent approach for regulators is to use the available science to assign an initial conservative value and then provide a mechanism to update these assigned values as the science improves. Meanwhile, companies are advised to focus on biofuels with low GHG emissions and minimal indirect land-use effects, fuels created from wastes and residues or from degraded land, or biofuels produced from algae and renewable hydrocarbons. These feedstock materials and lands, not intensively farmed food crops, should be the heart of a future biofuels industry.

A broader concern is the environmental and social sustainability of biofuels. Many biofuel programs, such as those in the Netherlands, UK, and Germany, have adopted or are adopting sustainability standards for biofuels. These sustainability standards typically address issues of biodiversity, soil, air, and water quality, as well as social and economic conditions of local communities and workers. They require reporting and documentation but lack real enforcement teeth. And none address effects on land and food prices and the market-mediated diversion of land to less sustainable uses. The effectiveness of these standards remains uncertain. New and better approaches are needed.

Those more concerned with energy security than with climate change might be skeptical of the LCFS. They might fear that the LCFS disadvantages high-carbon alternatives such as tar sands and coal liquids. That concern is valid, but disadvantaging does not mean banning. Tar sands and coal liquids could still be introduced on a large scale with an LCFS. That would require producers of high-carbon alternatives to be more energy efficient and to reduce carbon emissions associated with production and refining. They could do so by using low-carbon energy sources for process energy and by capturing and sequestering carbon emissions. They could also opt for ways of converting tar sands and coal resources into fuels that facilitate carbon capture and sequestration. For instance, gasifying the coal to acquire hydrogen allows for the capture of almost all the carbon, because none remains in the fuel itself. In this way, coal could be essentially a zero-carbon option.

In a larger sense, the LCFS encourages energy producers to focus on efficiency and methods for reducing carbon. It stimulates innovation in ways that are in the public interest. Even with an LCFS policy in place, a region or nation might still produce significant quantities of fossil alternatives but those fuels would be lower carbon than otherwise, and they would be balanced by increasing quantities of other non-fossil fuels.

Going global

The principle of performance-based standards lends itself to adoption of a national or even international LCFS. The California program is being designed to be compatible with a broader program. Indeed, it will be much more effective if the United States and other countries also adopt it. Although some countries have already adopted volumetric biofuel requirements, these could be readily converted into an LCFS. It would require converting the volumetric requirements into GHG requirements. In the United States that would not be difficult because GHG requirements are already imposed on each category of required biofuels. For the EU programs, efforts are under way to complement their biofuel directive with an LCFS-like fuel-quality directive that would require a 10% reduction in GHG intensity by 2020 for transport fuels.

An important innovation of the California LCFS is its embrace of all transportation fuels. The U.S. and European RFS programs include only biofuels, including biogas. Although it is desirable to cast the net as wide as possible, there is no reason why all states and nations must target the same fuels. Indeed, the northeastern U.S. states are exploring the inclusion of heating oil in their LCFS.

Broader-based LCFS programs are attractive for three reasons. First, it would be easier to include fuels used in international transport modes, especially fuels used in jets and ships. Second, a broader LCFS would facilitate standardization of measurement protocol. At present, California is working with fuel-exporting nations to develop common methods for specifying GHG emissions of fuels produced in those countries. The fuels of most relevance at this time are ethanol and biodiesel from Brazil, but tar sands from Canada will also be of interest. Third, the broader the pool, the greater the options available to regulated entities, and more choice means lower overall cost, because there will be a greater chance of finding low-cost options to meet the targets.

The ad hoc policy approach to alternative fuels has largely failed. A more durable and comprehensive approach is needed that encourages innovation and lets industry pick winners. The LCFS does that. It provides a single GHG performance standard for all transport-fuel providers, and it uses credit trading to ensure that the transition is accomplished in an economically efficient manner.

Although one might prefer more theoretically elegant policies such as carbon taxes and cap and trade, those instruments are not likely to be effective in the foreseeable future with transport fuels. They would not be sufficient to induce large investments in electric vehicles, plug-in hybrids, hydrogen fuel cell vehicles, and advanced biofuels.

The LCFS is amenable to some variation across states and nations, but standardization of the measurement protocol is necessary for the LCFS performance standard to be implemented and enforced fairly and reliably. The LCFS not only encourages investments in low-carbon fuels, but it also accommodates high-carbon fossil fuels, with strong incentives to produce them more energy efficiently and with low-carbon energy inputs. The enormity of the threat of global climate change demands a policy response that encompasses all viable options.

Recommended reading

California Air Resources Board, Low Carbon Fuel Standard Program, 2008. http://www.arb.ca.gov/fuels/lcfs/lcfs.htm

Alexander E. Farrell and Daniel Sperling, A Low-Carbon Fuel Standard for California, Part 1: Technical Analysis. Institute of Transportation Studies, University of California, Davis, Research Report UCD-ITS-RR-07-07, 2007.

Alexander E. Farrell and Daniel Sperling, A Low-Carbon Fuel Standard for California, Part 2: Policy Analysis. Institute of Transportation Studies, University of California, Davis, Research Report UCD-ITS-RR-07-08, 2007.

Timothy Searchinger, Ralph Heimlich, R. A. Houghton, Fengxia Dong, Amani Elobeid, Jacinto Fabiosa, Simla Tokgoz, Dermot Hayes, and Tun-Hsiang Yu, “Use of U.S. Croplands for Biofuels Increases Greenhouse Gases Through Emissions from Land Use Change,” Science 319, no. 5867 (2008): 1238–1240.


Daniel Sperling, a professor of civil engineering and environmental science and policy and founding director of the Institute of Transportation Studies (ITS) at the University of California, Davis, is co-author of Two Billion Cars: Driving Toward Sustainability (Oxford University Press, 2009). Sonia Yeh is a research engineer at ITS.

Science on the Campaign Trail

In November 2007, a group of six citizens decided to do something to elevate science and technology in the national dialogue. They created Science Debate 2008, an initiative calling for a presidential debate on science policy. They put up a Web site, and began encouraging friends and colleagues to sign a petition calling for the debate. Within weeks 38,000 scientists, engineers, and other concerned citizens had signed on. The American Association for the Advancement of Science (AAAS), the National Academies, and the Council on Competitiveness (CoC) joined as cosponsors, although Science Debate 2008 remained independent, financed by individual contributions and volunteer labor. Within months it grew to represent virtually all of U.S. science, including almost every major science organization, the presidents of over 100 universities, prominent corporate leaders, Nobel laureates, and members of Congress. All told, the signatory organizations represented over 125 million Americans, making it arguably the largest political initiative in the history of U.S. science.

The need could not have been clearer. Science and technology dominate every aspect of our lives and thus heavily influence all of our policy considerations. Yet although nearly every major challenge facing the nation revolves around science policy, and at a time when the United States is falling behind in several key measures, the candidates and the news media virtually ignored these issues.

Others noted this problem as well. The League of Conservation Voters analyzed the questions asked of the then-candidates for president by five top prime-time journalists—CNN’s Wolf Blitzer, ABC’s George Stephanopoulos, NBC’s Tim Russert, Fox News’ Chris Wallace, and CBS’s Bob Schieffer—who among them had conducted 171 interviews with the candidates by January 25, 2008. Of the 2,975 questions they asked, only six mentioned the words “climate change” or “global warming,” arguably the largest policy challenge facing the nation. To put that in perspective, three questions mentioned UFOs.

Armed with their list of supporters, the Science Debate team pitched the story to hundreds of news outlets around the country. The blogosphere buzzed over the initiative and ScienceDebate2008.com eventually rose to the top one-quarter of 1% of most visited Web sites worldwide. By any measure, coming off the Bush administration’s fractured relationship with U.S. science, the tremendous number of prominent individuals publicly calling for a presidential science debate was news, at least to some news outlets. But while “netroots” coverage exploded and foreign press picked up the story, not a single U.S. political news page and very few political blogs covered it. The idea of a science debate was being effectively shut out of the discussion by the mainstream press. The question was why.

The team investigated and identified a problem in U.S. news that goes beyond the fact that many news outlets are cutting their science sections. Even in outlets that still have one, editors generally do not assign political reporters to cover science stories, and science reporters don’t have access to the political pages. The business and economics beat and the religion and ethics beat have long since crossed this barrier onto the political page. But the science and technology beat remains ghettoized. Today, in an era when many of the biggest policy stories revolve around science, the U.S. press seems to be largely indifferent to science policy.

This situation tends to have an echo-chamber effect on candidates. Science Debate organizers secured broadcast partners in PBS’s NOW and NOVA and a venue at Philadelphia’s Franklin Institute. But the candidates responded that it wouldn’t work for their schedules. Tellingly, it did work for Barack Obama and Hillary Clinton to attend a “Compassion Forum” at Harrisburg’s Messiah College just days before the cancelled science debate, where, ironically, they answered questions about science. John McCain ignored both events.

Probing further, the Science Debate team learned that science was seen as a niche topic by the campaigns, and a presidential debate dedicated to science policy issues such as climate change, innovation, research, health care, energy, ocean health, stem cells, and the like was viewed as requiring extensive preparation and posing high risk for a limited return.

The tide turns

Science Debate 2008 wanted to test this assumption, so it partnered with Research!America and hired Harris to conduct a national poll. The results were astounding: Fully 85% of U.S. adults said the presidential candidates should participate in a debate to discuss key policy problems facing the United States, such as health care, climate change, and energy, and how science can help tackle them. There was virtually no difference across party lines. Contrary to the candidates’ assumptions, science is of broad concern to the public.

Next, Science Debate worked to reassure the campaigns that it was not out to sandbag one or another candidate by showing the candidates the questions in advance. The team culled the roughly 3,400 questions that had been submitted by supporters online into general categories and, bringing in the AAAS, the Academies, CoC, Scientists and Engineers for America, and several other organizations, developed “the 14 top science questions facing America.”

Armed with the results of the national polling, the continuing stream of prominent new supporters, and the 14 questions, the Science Debate team went back to the two remaining candidates and asked them to answer the questions in writing and to attend a televised forum.

Although the candidates still refused to debate, instead attending yet another faith forum at Saddleback Church in California, Science Debate 2008 was able to obtain written answers from both candidates. The Obama campaign tapped the expertise of his impressive campaign science advisory team to help him answer. The McCain campaign relied on its brilliant and multitasking senior domestic policy advisor, the economist and former Congressional Budget Office director Douglas Holtz-Eakin.

Once the answers were in hand, the Science Debate initiative was finally “news” from a political editor’s perspective. It was providing the candidates’ positions in their own words on a wide variety of substantive issues, and suddenly the floodgates opened. In the final month of the campaigns, reporters were looking for ways to differentiate the candidates, and political reporters started taking apart the nuances in the answers’ rhetoric. Obama, for example, expressly talked about a variety of international approaches to addressing climate change, and reporters noted that McCain remained silent on international issues and steered far away from the Kyoto Protocol.

The responses highlighted other, broader differences between the candidates. Senator Obama stressed his plans to double the federal agency research budgets, whereas Senator McCain stressed further corporate deregulation and tax credits to stimulate more corporate R&D, coupled with big money prizes to reward targeted breakthroughs. This philosophical difference carried through in answers on energy policy, education, innovation, and other areas. Senator Obama’s team further refined his answers into his official science policy platform. Senator McCain’s answer to the stem cell question came briefly into play in the race when his running mate, Governor Sarah Palin, contradicted it in an interview with James Dobson and was subsequently described as “going rogue.” In another answer and followup interview, Senator McCain claimed to have been responsible for the development of wi-fi and Blackberry-like devices, which caused a minor tempest. Senator Obama made news when 61 Nobel laureates, led by Obama science advisory team leader Harold Varmus, signed a letter in support of his campaign, and the answers of both candidates to the questions of Science Debate 2008 served as the basis for a letter signed by 178 organizations urging the winner to appoint a science advisor by January 20 and elevate the post to cabinet level.

References to the candidates’ science policy views eventually appeared in almost every major U.S. paper and in a wide variety of periodical and broadcast outlets across the country and around the world. All told, Science Debate 2008 generated over 800 million media impressions and was credited with elevating the level of discourse. No matter which candidate one supported, this level of discussion is healthy, some might even say critical, for a 21st-century United States.

Looking forward, much work remains to be done to repair America’s fractured relationship with science, and the Science Debate initiative and others like it should continue. Scientists must participate in the national dialogue, which requires a plurality of voices to be successful. President-elect Obama has laid out an ambitious science policy focused on some of the greatest challenges facing the nation, but harsh economic times and continued ideological opposition to science may make implementing that policy difficult. To succeed, the president will need the support of Congress, and members of Congress, in turn, the support of their constituents. In such an environment, the public’s understanding and appreciation of science policy will be important to the nation’s success, and the involvement of scientists will be critical in that process.


Shawn Lawrence Otto is a cofounder and chief executive officer of Science Debate 2008. Sheril Kirshenbaum is a cofounder of Science Debate 2008 and a marine biologist at Duke University.

Overcoming Stone Age Logic

Through a remarkable manipulation of limited knowledge, brute force, and an overwhelming arrogance, humans have shaped a world that in all likelihood cannot sustain the standard of living and quality of life we have come to take for granted. Our approach to energy, to look at only one sector, epitomizes our limitations. We remain fixated on short-term goals and a simplistic model governed by what I call “Stone Age logic”: We continue to dig deep holes in the ground, extract dark substances that are the remains of prehistoric plants and animals, and deliver this treasure to primitive machines for combustion to maintain the energy system on which we base our entire civilization. We invest immense scientific and technological effort to find it more efficiently, burn it more cleanly, and bury it somewhere we will never have to see it again within a time horizon that might concern us. Find it, burn it, bury it. Our dependency on fossil fuels would be worthy of cavemen.

Fortunately, we seem to be slowly moving out of the final decades of the Stone Age, and discussions about whether our planet will be able to continue to sustain human societies at our present scale are no longer limited to environmentalists and apocalyptic religious groups. Prominent corporate, government, academic, and environmental leaders gathered during September 2008 in Washington to consider some of the most serious challenges facing humanity in a summit convened by Arizona State University. Among the host of concerned leaders were Minnesota governor Tim Pawlenty; Ford Motor Company executive chairman Bill Ford Jr.; Wal-Mart chairman Rob Walton; John Hofmeister, former president of Shell Oil and now president of Citizens for Affordable Energy; Massachusetts congressman Edward Markey, chair of the U.S. House Select Committee on Energy Independence and Global Warming; Michigan congressman Fred Upton, a member of the House Energy and Commerce Committee; and Frances Beinecke, president of the Natural Resources Defense Council.

Although there was broad agreement at the summit that Washington has abandoned its traditional environmental leadership role, leaving us reliant on a patchwork quilt of local or regional-scale solutions from cities and states, there was nevertheless a recognition that informed and carefully considered federal efforts will be essential if we are to meet our societal needs within the limits of our environment. However well-intentioned the motivation for immediate action may be, I would argue that without some grounding of public policy in the discourse of sustainability, we are likely to dig ourselves deeper into the holes we have already dug.


Sometimes mistakenly equated with an exclusive focus on the environment, the term “sustainability” tends to be used so casually that we risk diluting its power as a concept. Its implications are far broader than the environment, embracing economic development, health care, urbanization, energy, materials, agriculture, business practices, social services, and government. Sustainable development, for example, means balancing wealth generation with continuously enhanced environmental quality and social well-being. Sustainability is a concept of a complexity, richness, and significance comparable to other guiding principles of modern societies, such as human rights, justice, liberty, and equality. Yet, as is obvious from our failure to embrace the concept in our national deliberations, sustainability is clearly not yet a core value in our society or any other.

Although the general public and especially our younger generations have begun to think in terms of sustainability, the task remains to improve our capacity to implement advances in knowledge through sound policy decisions. We have yet to coordinate transnational responses commensurate with the scale of looming problems such as global terrorism, climate change, or possible ecosystem disruption. Our approach to the maddening complexity of the challenges that confront us must be transformative rather than incremental and will demand major investment from concerned stakeholders. Progress toward sustainability will require the reconceptualization and reorganization of our ossified knowledge enterprises. Our universities remain disproportionately focused on perpetuating disciplinary boundaries and developing increasingly specialized new knowledge at the expense of collaborative endeavors targeting real-world problems. If we in the academic sector hope to spearhead the effort, we will need to drive innovation at the same time as we forge much closer ties to the private sector and government alike.

The summit in Washington is heartening evidence that such collaboration is possible. The involvement of corporate visionaries such as Bill Ford and Rob Walton as well as government leaders from both sides of the aisle represents an expanded franchise not only of individuals but of institutional capabilities for response. But more flexibility, resilience, and responsiveness will be required of all institutions and organizations. Society will never be able to control the large-scale consequences of its actions, but the realization of the imperative for sustainability positions us at a critical juncture in our evolutionary history. Progress will occur when new advances in our understanding converge with our evolving social, cultural, economic, and historical circumstances and practices to allow us to glimpse and pursue new opportunities. To realize the potential of this moment will require both a focused collective commitment and the realization that sustainability, like democracy, is not a problem to be solved but rather a challenge that requires constant vigilance.


Michael M. Crow is president of Arizona State University, where he also serves as professor of public affairs and Foundation Leadership Chair. He is chair of the American College and University Presidents Climate Commitment.

Practical Pieces of the Energy Puzzle: Reduce Greenhouse Gases Profitably

A regulatory system that rewards energy companies for innovations that boost efficiency can appeal to environmentalists and industry alike.

After the Senate’s failed effort to pass the Lieberman-Warner climate change bill, Congress could conclude that reducing greenhouse pollution is a political impossibility—the costs too high, the benefits too uncertain, the opposition too entrenched. But that would ignore a convenient truth: Technology already exists to slash carbon emissions and energy costs simultaneously. With a little political imagination Congress could move beyond Lieberman-Warner and develop an energy plan that satisfies both pro-business and pro-environment advocates.

The Lieberman-Warner bill would create a cap-and-trade system to govern carbon emissions from power plants and major industrial facilities. What the bill does well is to limit greenhouse gas emissions to 19% below the 2005 level by the year 2020. The bill further demands a 71% reduction by 2050. Some argue that these goals should be stricter or looser, but the legislation does at least set clear targets and timetables.

Then the legislation becomes needlessly complicated. In order to provide “transaction assistance” (or what might be described as bribes for the unwilling), the bill offers massive subsidies to utilities, petrochemical refiners, natural gas distributors, carbon dioxide (CO2) sequesterers, state governments updating their building codes, and even Forest Service firefighters wanting to prepare for climate changes that spark more blazes. The gifts may garner political support from key constituencies, but they induce little clean energy generation. The same criticism can be levied against a carbon tax, even one that returns some of the receipts to taxpayers and spends the rest researching emissions-mitigating technologies.

Nearly 70% of U.S. greenhouse emissions comes from generating electricity and heat, whereas only 19% comes from automobiles. Electricity generation is a particular problem because only one-third of the energy in the fuel used to produce electricity is converted to useful electric power. Enhancing the efficiency of electric generation is essential in the battle against global warming, and making use of the wasted thermal energy produced in power plants is the key to improving efficiency. The technology to capture and use that excess thermal energy already exists. The nation needs a policy that encourages every electricity generator and every industrial user of thermal energy to follow this approach. An elegant market-oriented approach that avoids the quicksand of government picking technology winners would be a system founded on output-based allocations of carbon emissions.

First, each producer of electricity and thermal energy would obtain initial allowances equal to the previous year’s national average output of CO2 emissions per delivered megawatt-hour of electricity and Btu of thermal energy.

Second, every plant that generates heat and/or power would be required to obtain total allowances equal to its CO2 emissions. As with the trading system in the Lieberman-Warner bill, high-carbon facilities would need to purchase extra allowances from clean plants at market prices.

Third, these allowances would be cut every year to ensure total emission reductions. Under this output allocation system, companies using clean energy such as wind turbines or industrial waste-energy–recovery plants can sell their pollution allowances, thus improving their economic position. Combined heat and power units, by earning allowances for both electric and thermal output, would have spare allowances to sell, increasing their financial attractiveness. Improving efficiency at any energy plant would lower emissions (and fuel costs) without lowering output, thereby saving allowance purchases or creating allowances to sell. In contrast, a dirty power plant that did not increase its efficiency would have to buy allowances.

Output-based allocations create carrots and sticks: additional income for low-carbon facilities that sell allowances, and additional costs for high-carbon facilities that must purchase allowances. Lieberman-Warner or a carbon tax, in contrast, imposes a cost on polluters but provides no direct incentive for the use of clean energy sources or to companies like mine that boost energy output and efficiency by merging electric and thermal energy production.

Establishing such a system is relatively simple. Measurement and verification for electric and thermal output and CO2 are easy, since all plants have fuel bills and electric meters, and thermal output can be calculated. Continuous emission meters, moreover, are now affordable and proven. Regulators simply need to require energy plants to submit annual audited records, along with allowances covering actual emissions of each pollutant.

How it works

Each electric producer would receive initial allowances of 0.62 metric ton of CO2 emissions per delivered megawatt-hour of electricity, which are the 2007 average emissions. Each thermal energy producer would obtain initial allowances of 0.44 metric ton of CO2 emissions per delivered megawatt-hour of thermal energy, roughly the 2007 average emissions.

At the end of each year, a plant’s owner must turn in allowances for each pollutant equal to actual output. Consider CO2. Every producer of thermal energy and/or electricity would keep track of all fossil fuel burned in the prior year and calculate the total CO2 released. Each plant also would record the megawatt-hours of electricity produced, subtracting the amount for line losses, and record each unit of useful thermal energy produced and delivered. The plant would automatically earn the scheduled allowance of CO2 per megawatt-hour and per unit of thermal energy, but it must turn in allowances for every ton of carbon dioxide actually emitted in the prior year.

The allowance credits would be fully tradable and interchangeable between heat and power. Note that efficiency improvements reduce the burning of fossil fuel and thus reduce carbon emissions, but they do not decrease the plant’s output, and thus would not decrease total output allowances. Any production of heat or power without burning additional fossil fuel would earn an emission credit but produce no added emissions, which enables the producer to sell the allowance and improve the profitability of cleaner energy.
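The accounting reduces to a simple balance, sketched below in Python using the allocation rates of 0.62 and 0.44 tons per delivered megawatt-hour given in this article; the function and variable names are our own, and the dollar figure is the $20-per-ton value used in the examples that follow.

# Minimal sketch of the output-based allowance accounting described above.
ELEC_ALLOWANCE = 0.62     # tons of CO2 allowed per delivered MWh of electricity
THERMAL_ALLOWANCE = 0.44  # tons of CO2 allowed per delivered MWh of useful thermal energy

def net_allowance_position(mwh_electric: float, mwh_thermal: float,
                           tons_co2_emitted: float) -> float:
    """Tons of allowances left over to sell (positive) or to buy (negative)."""
    earned = ELEC_ALLOWANCE * mwh_electric + THERMAL_ALLOWANCE * mwh_thermal
    return earned - tons_co2_emitted

# Usage, mirroring the coal-plant example discussed below: a plant emitting
# 1.15 tons per delivered MWh is 0.53 tons short per MWh, which at $20 per
# ton costs about $10.60 per MWh.
shortfall = net_allowance_position(1.0, 0.0, 1.15)  # -0.53 tons per MWh
print(shortfall * 20.0)                             # about -$10.60 per MWh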

Heat and power producers, of course, have many options. By increasing efficiency, a company can reduce CO2 emissions, save fuel, reduce purchases of allowances, or add revenue from sold allowances. By installing a combined heat and power unit sized to the facility’s thermal load, it would earn additional allowances, providing revenues above the value of the saved fuel.

Consider a typical carbon black plant that produces the raw material for tires and inks. It currently burns off its tail gas, producing no useful energy service. If the owner built a waste energy recycling plant to convert the flare gas into electricity, it would earn 0.62 ton of CO2 allowance for every delivered megawatt-hour. A typical carbon black plant could produce about 160,000 megawatt-hours per year of clean energy. At a value of $20 per ton of CO2, the plant would earn $3.2 million per year from the output allowance system.

Now consider the options for a coal-fired electric-only generator that emits 1.15 tons of CO2 per delivered megawatt-hour. It receives only 0.62 tons of CO2 allowance and must purchase an additional 0.53 tons, costing $10.60 per delivered megawatt-hour (with $20-per-ton CO2). To reduce carbon emissions and save money, it could invest in devices to improve the plant’s efficiency and lower the amount of coal burned per megawatt-hour. Second, it could entice a thermal-using factory or commercial building to locate near the power plant and sell some of its presently wasted thermal energy, earning revenue from that sale and added CO2 allowances for the useful thermal energy. Third, it could invest in a wind farm or other renewable energy production facility and earn CO2 credits. Fourth, it could pay for an energy recycling plant to earn added allowances. Fifth, it could purchase allowances. Or, sixth, it could consider operating the plant for intermediate instead of base load. All of these options reduce total U.S. CO2 emissions.

Rather than collect and distribute trillions of dollars, Congress would have only two key tasks: to set fair rules for calculating useful output and to establish the decline rate for the allowances per unit of useful output. Current scientific thinking suggests that we must reduce total carbon emissions by 70% or more over the next 50 years. If initial output allowances are set equal to average emissions in 2006 for each megawatt-hour of electricity and useful thermal energy, allowances would need to decline by 2.38% per year for the next 50 years in order to reach the 70% reduction. If there were no increase in the amount of useful energy consumed for the next 50 years, this reduction would cause CO2 emissions to drop to 30% of 2006 emissions. Of course, if the nation’s total energy use increases, allowances would have to decline more rapidly to reach the 2050 goal.
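The 2.38% figure follows from simple compounding: a constant annual decline r that leaves 30% of the initial allowance after 50 years must satisfy (1 - r) raised to the 50th power equals 0.30. A quick check:

```python
# Verifying the decline rate cited above: (1 - r)**50 = 0.30 for a 70% cut.
target_fraction = 0.30   # emissions remaining after 50 years
years = 50
r = 1 - target_fraction ** (1 / years)
print(f"Required annual decline: {r:.2%}")                           # about 2.38%
print(f"Remaining after 50 years at that rate: {(1 - r) ** years:.0%}")  # 30%
```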

Advantages

Output-based allowances are simple, keep government from picking technology (which is always a bad bet), allow maximum flexibility for the market to lower fossil-fuel use, and encourage profitable greenhouse gas reduction. As with cap-and-trade systems, output-based allowances can be ratcheted down to ensure greenhouse gas reductions. Consider the faults of other approaches.

A carbon tax requires legislators to determine the precise price per ton of CO2 emissions that would cause the desired reduction of fossil-fuel consumption. Congress then must decide how to spend the collected money, creating an atmosphere ripe for mischief.

A cap-and-trade system that allocates initial allowances to existing emitters, as was done with sulfur emissions in 1990, rewards pollution rather than clean energy. A new combined heat and power facility, although emitting half as much CO2 per megawatt-hour as do older plants, would receive no baseline allowances, be required to purchase carbon allowances for all CO2 emissions, and then would compete with an old plant that was granted sufficient allowances to cover all emissions. Such an allocation approach is favored by owners of existing plants, for obvious reasons, but it retards efficiency.

A system of allowances per unit of input fuel, such as the Clean Air Act’s approach toward criteria pollutant emissions, pays no attention to energy productivity and gives no credit for energy efficiency. In contrast, an output-based allowance system rewards every approach that emits less CO2 per megawatt-hour, regardless of technology, fuel, location, or age of plant. Thus, the output allowance approach will produce the lowest-possible-cost CO2 reductions.

An output allowance system is quintessentially American, solidly based on market forces and rewarding power entrepreneurs for “doing the right thing.” It leverages the U.S. innovative and creative spirit by encouraging all actions that lower greenhouse gas emissions per unit of useful output and penalizing above-average pollution per unit of output. The Lieberman-Warner approach, in contrast, has government picking winners and distributing up to $5.6 trillion to a hodgepodge of political interests.

The output-allowance system, moreover, sends powerful signals to every producer of heat as well as every producer of power. The total money paid for allowances exactly matches the total money received from the sale of allowances, so the average consumer pays no added cost for electricity. Impacts on individual consumers will vary, however, and will be higher for those with few current alternatives to dirty fossil-fuel plants. The market decides the clearing price of the allowances, and every producer, regardless of technology, fuel, age of plant, or location, receives the same price signals.

Output-based allocations could also improve several provisions of the Clean Air Act, which has achieved impressive results but has blocked investments in energy productivity. The current approach, crafted in 1970 when global warming was not yet a concern, gives existing energy plants the right to continue dirty operations but forces new facilities to achieve significantly lower emissions. By forcing any plant that undergoes significant upgrading to become subject to stricter emission standards, the law’s New Source Review has effectively blocked investments to increase efficiency.

A transition away from a carbon-intensive economy will doubtless hurt some businesses, particularly big polluters. But others will prosper. Rather than having environmentalists focusing on the moral need to reduce pollution and industrialists responding that change will hurt the economy, a better way to structure the climate change debate is to ask how the nation can profitably reduce greenhouse gas emissions. On this point, environmentalists and industrialists should be able to find common ground. Output-based allocations, by unleashing market forces and sending clear signals, can muster such a political agreement as well as stimulate an investment boom in increased energy productivity.


Richard Munson is senior vice president of Recycled Energy Development (www.recycled-energy.com) and author of From Edison to Enron: The Business of Power and What It Means for the Future of Electricity (Praeger, 2005).

Climate Change: Think Globally, Assess Regionally, Act Locally

Climate change is here to stay. No matter how effectively governments and the private sector limit greenhouse gas emissions, average global temperatures will rise during the next several decades. Scientists know far less about how the effects of climate change will be manifested regionally, yet this information is critical because each region will have to decide how to adapt to change.

The evidence that global warming is already here and that its effects vary by region is strikingly apparent at the poles. Average temperature in the Arctic increased at nearly twice the global rate during the past 100 years, summer sea-ice area has decreased by 7.4% since satellite observations began in 1978, and buildings and highways are threatened as the permafrost beneath them melts. The Greenland and Antarctic land ice sheets are changing rapidly, which is contributing to sea-level rise.

Change, somewhat less dramatic, is taking place across the globe. Mountain glaciers are retreating, glacial lakes are warming and growing, and spring runoff is occurring earlier. Spring events such as bird arrival and leaf unfolding are occurring earlier, and the summer growing season is lengthening. Plant and animal species are moving poleward or to higher elevations. Forests have increased in many areas but decreased in parts of North America and the Mediterranean basin. Oceanic primary production, the base of the marine food chain, has declined by about 6% in the past three decades, and the acidification of the oceans due to increased capture of carbon dioxide is making the fates of corals and other shelled creatures more precarious.

A sea change in public opinion is also in progress. People no longer focus exclusively on whether humans are responsible for climate change. The more pressing and practical question is how the world can adapt to the inevitable consequences of climate change and mitigate the most undesirable ones. The answers to those questions depend on where one is living.

Not only will climate change affect each community differently, but each community has a unique combination of environmental, economic, and social factors and its own ways of reaching decisions. Each community will have to decide how it can respond, so each needs information about how, when, and where climate change will affect the specific things it cares about. How will citizens know when they need to make decisions, or if they do?

Many of the responses to climate change will be local, and the variety of items that need attention is daunting. Infrastructure resilient to single stresses has been known to fail in a “perfect storm,” where vulnerability and multiple stresses combine. By analogy, localities are subject to social and environmental stresses that change simultaneously at different rates. These effects are often not simply additive; they can interact and reinforce one another in unexpected ways that can lead to potentially disastrous threshold responses or tipping points.

Not only do different multiple stresses interact differently in different places, but the ways in which people make decisions differ as well. Key decisions are made locally about land use, transportation, the built environment, fire management, water quality and availability, and pollution. For perfectly good reasons, local officials focus on the most concrete local trends and most visible social forces, and many of them perceive global warming as distant and relatively abstract.

All too often, local social, economic, political, legal, and cultural forces overshadow the warnings of the international scientific community. Besides, local officials understandably see global warming as an international issue that should be addressed by national and world leaders. And even if local leaders were motivated to act, the effects of climate change do not respect jurisdictional boundaries, so they would find it difficult to marshal the necessary information and expertise to craft and harmonize their responses.

For these and other reasons, decision processes become dangerously long and complex. But time has run out for ponderous decisionmaking when every generation will have to adapt to a different climate. The scientific community needs to help by providing local leaders with the specific regional climate information they need to motivate and inform coordinated action.

Challenge of regional assessment

Regional climate differs in complexity and character from global climate. The factors that combine to drive global climate may have a different balance regionally. Today’s global models clearly delineate differences between the responses of oceans and continents and of high-latitude and tropical zones to climate change. A true regional assessment, however, differs from a regionalized global assessment in its spatial specificity; topography and coastal proximity create local climatic and ecological zones that cannot be resolved by contemporary global models, yet must be evaluated to make a regional impact assessment meaningful. Increasing global models’ spatial resolution is helpful but not sufficient; new analytic tools are needed to provide useful regional climate forecasts. Scientists must develop truly regional climate impact models that will help local leaders see what the future holds and understand how actions they can take will make a difference in their region.

Understanding how climate changes at the regional level is only the beginning of the evaluation of the ensuing ecological, economic, and social impacts. The next question to be answered is how climate change affects key natural systems such as watersheds, ecosystems, and coastal zones. Assessing the effect on natural systems is the starting point for assessing impacts on regionally important socioeconomic sectors such as health, agriculture, and infrastructure. For example, agriculture, a managed ecosystem, is subject to multiple environmental stresses: human practices, changes in water availability and quality, and the lengthening of the growing season.

And these human activities then influence local climate. Deforestation, irrigation, and agriculture affect local moisture concentrations and rainfall. The burning of fossil fuels plays a particularly complex role, only one dimension of which is its contribution to overall global warming. Inefficient combustion in poor diesel engines, open cooking fires, and the burning of coal and biomass produce aerosols with organic soots, or “black carbon,” as well as atmospheric brown clouds.

It is vital that scientists understand the complex and varied effects that such pollutant products will have on regional and global climates. Atmospheric brown clouds intercept sunlight in the atmosphere by both absorbing and reflecting it, thus cooling the surface and heating the atmosphere. The reduction in solar radiation at the surface, called dimming, strengthens in the presence of atmospheric moisture because aerosols nucleate more cloud drops, which also reflect radiation back to space. Because dimming cools ocean surface temperatures as well as land, Asian pollution has contributed to the decrease of monsoon rainfall in India and the Sahel. In addition, aerosols are carried from their local sources across entire ocean basins in a few days, and thus they have a global effect; the cooling due to dimming may have counteracted as much as 50% of the surface temperature increase expected from greenhouse warming.

Another powerful reason to undertake regional climate assessments is the impact of climate change on water availability. Global climate models predict that even if the total fresh water circulating in the hydrological system remains the same or even increases, there will be a redistribution of rainfall and snow, with more precipitation at high and equatorial latitudes and drying at mid-latitudes. If only because of redistribution, the study of changing water availability must be regional.

Topography, coastal and mountain proximity, land cover, prevailing storm tracks, and other factors all make regional water climate distinctive. These issues are best addressed on a watershed-by-watershed basis. At mountain altitudes, black carbon heats the air and turns white snow gray, which absorbs more sunlight. These effects are contributing to the melting of the Himalayan snowpack and glaciers, and this melting is, in turn, affecting the river water supply of more than 2 billion people in Asia.

The regional impacts of a change in water availability will depend on factors such as the number and types of ecological provinces, the balance of irrigated and nonirrigated agriculture, the urban/rural population balance, the state of water distribution infrastructure, and regulatory policy.

The decisions to be made will be locally conditioned. How should managed irrigation systems adjust to changes in the timing and volume of spring runoff? Which farmers and crops will be affected? Should farmers change their crop mix? How and when should investments be made in water delivery capacity, agricultural biotechnology, or monitoring systems? Rigorous and detailed regional climate change impact assessments are necessary to answer these questions.

Leading the way

The state of California has been a national and global leader in modeling, assessing, and monitoring potential climate change effects at its regional scale. California, home to extensive scientific expertise and resources, began to study the issues 20 years ago and two years ago committed to biennial formal assessments to identify and quantify effects on its massive water-supply systems, agriculture, health, forestry, electricity demand, and many other aspects of life. The accompanying illustration details findings of the first California assessment, Our Changing Climate, published in 2006. The complete results were published in March 2008 in California at a Crossroads: Climate Science Informing Policy, a special supplement to the journal Climatic Change.

The prediction in Our Changing Climate that the snow cover in the northern Sierra Nevada will decline by 50 to 90% by mid-century is particularly compelling, because California’s Central Valley, the nation’s most productive agricultural region, derives most of its water from rivers with headwaters in these mountains. In addition, northern Sierra water is a major source for the 20 million people living in arid southern California.

Our Changing Climate motivated California’s leaders to enact a series of climate-related measures and to forge cooperative programs with neighboring states and even some countries. This example shows how powerful regional assessments can be, because Californians learned how they will be affected, and this, in turn, motivated political action. Until people can answer the question, “What does it mean for me?,” they are unlikely to develop their own strategies for adaptation.

Since 2000, a multiagency team has monitored a suite of factors, including pollution and management practices as well as climate change, that affect the Sacramento River Delta and its interaction with San Francisco Bay. The Bay-Delta system transports water southward to the Central Valley and southern California. The team’s report, The State of Bay-Delta Science 2008, challenges many conventional assumptions about integrated ecosystem management, argues that the desire to maintain a steady state is misplaced, and suggests that present practices should be replaced by adaptive management based on comprehensive monitoring.

California and other well-equipped regions should translate their knowledge and techniques to other parts of the world. California’s experience in modeling, monitoring, and assessment could be useful to others. And California can continue to blaze the trail by expanding its efforts. For example, extending California’s assessment to include aerosols and black carbon would enable a more rigorous comparison with similar issues in Asia, Africa, and elsewhere. Such efforts should begin by monitoring the effects of black carbon and other aerosols on California’s climate and snowpacks. It also will be important to simulate regional climate change with and without aerosols and to connect the simulations to existing models that link climate projections to snow and watershed responses, and those in turn to reservoir operations and water-supply outcomes.

Although California has for many years actively managed all but one of its major rivers, it is not yet making adequate use of recent scientific research to inform its management decisions. The state’s water policies and practices are based on the experience of the 20th century and need to be adapted to the changing water climate of the 21st century. This process is beginning. The observational and modeling infrastructure that supported Our Changing Climate has already been applied to adaptive management of the state’s water supply and is now being extended to cope with the challenges ahead. California has learned that the capacity to assess and the capacity to manage are intimately related.

Building a mosaic

Global climate models have met the highest standards of scientific rigor, but there is a new need to extend that effort to create a worldwide mosaic of regional impact assessments that link the global assessment process to local decisionmaking.

The effects of climate change will be felt most severely in the developing world. Although developing nations may not always have the capacity to assess regional climate change by themselves, they understand their social, economic, and political environment better than do outsiders. Thus, developing nations should take the lead by inviting developed-world scientists to collaborate with them in conducting regional assessments that can influence local actions.

The world needs a new international framework that encourages and coordinates participatory regional forecasts and links them to the global assessments. Such a framework for collaboration will not only help build assessment capacity in the nations and regions that need it, but will also generate the local knowledge that is a prerequisite for making the response to climate change genuinely global.

Of course, the globe cannot be subdivided neatly into nonoverlapping regions with sharp boundaries, nor will regions be able to restrict themselves to the same geographical area for the different kinds of sectoral assessments they need. Each physical, biological, and human system has a natural spatial configuration that must be respected. The focus therefore should be on developing a complex hierarchical network of loosely connected, self-assembled regional assessments rather than a unitary project.

In moving toward a suitable international framework for regional assessments, it will be useful to examine a number of questions. What lessons can be learned from the regional assessments done to date? How should global and regional assessments relate to one another? Should regional assessment panels be connected to the Intergovernmental Panel on Climate Change, and if so, how? What are good ways for the international community to incubate regionally led assessments? Are there best practices that promote interaction between scientists and decisionmakers? Do these differ regionally? What are good ways to encourage coordination among regional assessments? What standards should regional assessments adhere to? Who should define them? Who should certify compliance? How should assessment technologies be transferred? How should assessment results be disseminated and archived? How can assessments be designed so that assessment infrastructure can be used later in decision support?

Before formal framework discussions can take place, these and other issues will have to be debated in a variety of international and regional forums. These will certainly include the World Meteorological Organization, the Group on Earth Observations, and the United Nations Environment Programme. It is equally critical that discussions be organized in every region of the world and that partnerships among groups from industrialized and developing regions be struck.

It is clear, however, that the world must not wait for the creation of the perfect framework. It is by far preferable to learn by doing. Ideally, the framework will be an emergent property of a network of already active regional assessments that connect global assessments to local decisionmaking.

A good place to start is the critical issue of water. The effects of climate on water must be understood before turning to agriculture and ecosystems. The capacity to model and monitor exists, and it can be translated relatively easily. The path from assessment to decision support to adaptive management has been reasonably well charted. All parties now need to do all they can to launch assessments of the climate/water interface in every region of the world.

Practical Pieces of the Energy Puzzle: Energy Security for American Families

Helping moderate-income households invest in energy-efficient cars, appliances, and home retrofits would benefit financially struggling families as well as the U.S. economy.

In July 2008, Americans were paying $4.11 per gallon of gasoline—nearly three times the price six years earlier, according to the Energy Information Administration (EIA). Most people have felt the pinch of higher energy prices, but those hurt the most have been moderate-income families who struggle to buy gas for their cars and to heat and cool their homes. Although prices have now fallen because of the current global economic problems, the longer-term trend of rising energy prices will continue. Many of the 70 million U.S. households making less than $60,000 a year will find it increasingly difficult to cope.

The United States needs a strategy to help moderate-income households adjust to the new reality of higher energy prices. A proposal I call the Energy Security for American Families (ESAF) initiative would give moderate-income households the power to control their long-term energy costs, largely by improving household energy efficiency. Specifically, the initiative would offer a combination of vouchers, low-interest loans, and market-based incentives to help families invest in energy-efficient cars, homes, and commutes. These investments would allow workers to save money, year after year, gaining economic security.

Energy costs are a drain on the economy, leading to increases in prices and unemployment, and most of the money spent on oil leaves the U.S. economy. Channeling money toward investments in energy efficiency will not only help families cut costs but also create jobs and reduce energy demand, pollution, and greenhouse gases. The ESAF initiative represents a long-term investment in the health and resilience of the U.S. economy.


After enjoying more than two decades of relatively cheap energy, U.S. consumers have struggled to pay monthly gasoline bills that rose (in constant dollars) from $21 billion in July 2003 to $50 billion in July 2008, according to the Oil Price Information Service. Increased energy prices have hurt the economy as a whole, squeezing the credit and housing markets, depressing auto sales, and raising unemployment. They have had a negative multiplier effect on the economy, increasing inflationary pressures and shifting spending, so that money once spent on consumer goods is now going to pay mostly non-U.S.-based oil producers. Growing global demand for energy, coupled with a cramped supply infrastructure, means that volatile energy prices are here to stay, and they require a thoughtful policy response.

Hit hardest by high energy prices are U.S. households making less than $60,000 a year. These people spend a higher percentage of their income on energy than do wealthier Americans, and a lack of capital limits their ability to reduce the amount of energy they consume. Transportation, including vehicle costs, eats up about a fifth of household budgets for all U.S. households, but for those making $20,000 to $50,000 a year, the total cost of transportation may top 30%, according to a 2006 study of 28 metropolitan areas by the Center for Neighborhood Technology. Part of the reason for this disparity is that many moderate-income people find cheaper housing in exurban areas far from their workplaces.

The rapid increase in gasoline prices hit this group disproportionately hard. In 2006, households making $15,000-$40,000 a year spent 9% of their income on gasoline (more than double the national average of 4%); by the summer of 2008, they were spending between 10 and 14% of their income on gasoline alone, according to the Bureau of Labor Statistics’ Consumer Expenditure Survey and price figures from the EIA. For rural families, who drive nearly 10,000 miles a year more than do urban households, the cost is even higher.

On the home front, low- and moderate-income families are again at an energy disadvantage. Poor insulation and old equipment cause lower-income families to spend more per square foot to heat their homes than middle-income families, according to the EIA. In the 7% of U.S. homes heated with oil, moderate-income families can be at a disadvantage when purchasing fuel. Higher-income families are often able to hedge their spending on heating oil by locking in prices in advance, whereas struggling families are at the mercy of the market, buying when their tanks are empty.

Higher fuel prices are reducing the standard of living for these U.S. families. A 2008 survey by the National Energy Assistance Directors Association (NEADA) found that 70% of low- and moderate-income families said that energy prices had caused them to change their food-buying habits, and another 30% said that they had cut back on medicine. Utilities have recently become more aggressive in collecting unpaid bills. Between May 2007 and May 2008, an unprecedented 8% of U.S. households had their utilities cut off for nonpayment, according to NEADA.

Increasingly, moderate-income families find themselves in a Catch-22: Despite being squeezed by high energy costs, they are unable to reduce the amount of energy they use. Often living paycheck to paycheck, they lack the capital to invest in a more efficient vehicle or furnace, or in a home closer to their work, even when they know it would ease their monthly budget.

The credit crisis has added to their troubles by further limiting their ability to borrow money. For example, the sub-prime auto lending market is now experiencing the highest default rates in 19 years, and lenders are cutting back on auto loans. People who can’t qualify for loans are increasingly resorting to buying cars at “buy here, pay here” lots, where some 10,000 dealers nationwide charge interest of 25% or more and may impose additional finance charges that push the total much higher.

Another component of the crisis is the change in the pricing of fuel-efficient vehicles. In the past, used economy cars were relatively inexpensive. But the increasing size of U.S. vehicles during the past decade, combined with high gasoline prices, has changed the used car market. Used fuel-efficient cars are now relatively expensive compared to gas guzzlers, which may be the most affordable cars for lower-income buyers. The National Automobile Dealers Association estimates that every $1 increase in the price of gas deflates the resale value of large pickups by $2,200 and increases the resale value of smaller cars by $980. This cruel trick of the market means that lower-income families, unable to spend much on their vehicles, may be forced to spend even more of their income on gasoline. Although economically rational decisions regarding the purchase of an automobile, commute length, and home energy efficiency may be options for those in higher-income brackets, moderate-income households do not have the same range of choices or access to capital.

For most, going without a car is not an option. Nine out of ten U.S. workers have cars, and for low-wage workers, owning a car can determine their economic prospects. Owning a reliable vehicle has been shown to be more important for high school dropouts than earning a GED in getting and keeping a job, and on average, those with cars made $1,100 more per month than those without, according to a 2003 study by Kerry Sullivan for the National Center for the Study of Adult Learning and Literacy.

The problem with conventional fixes

The energy crisis facing moderate-income families has three components: These families are more dependent on energy than are wealthier families; increased energy costs eat up a higher percentage of their income; and high energy costs threaten their economic stability and standard of living. Market forces have exacerbated the first two problems, neither of which the government has addressed. The government has attempted to address the third problem through direct or indirect emergency energy payments, but existing government programs are stretched beyond their capacity to deliver emergency funding.

The Low Income Home Energy Assistance Program is the main federal program providing emergency funds for heating for low-income families. It has been funded at $5.1 billion for FY 2009, but this will not be enough to meet all requests. Around the country, needs have risen dramatically with rising energy prices and the economic downturn. In Nevada, for example, applications for assistance were up 79% in 2007.

Proposed solutions to alleviate the pain of high energy costs have fallen short. Republicans have suggested gas tax holidays, whereas Democrats have favored $1,000 subsidy checks. Neither addresses the underlying problems facing the families disproportionately affected by volatile energy prices. Gas tax holidays encourage more gasoline use and have been shown to create larger profits for gasoline marketers and minimal price reductions for buyers. Stimulus checks temporarily ease family finances, but they don’t help families change their consumption or spending habits. Early studies of how families spent the $600 tax rebate in 2008 reveal that they spent more than half on gasoline, food, and paying down credit card debt. These short-term measures also strike many voters as gimmicky, election-year ploys. At the very least, they are effectively government overrides of market forces that may actually delay the kind of investment and behavioral changes necessary to cope with higher energy costs in the long run.

The only way to overcome the unique energy disadvantages moderate-income families face is to help them invest in energy-efficient cars, appliances, and home retrofits. Reducing energy consumption pays for itself in energy savings and by making homes more comfortable. For about $2,800, the Department of Energy’s (DOE’s) Weatherization Assistance Program seals air leaks, adds insulation, and tunes and repairs heating and cooling equipment to reduce household heating energy consumption by an average of 23%, for a savings of $413 in heating and cooling costs the first year. Despite the program’s success, it has been poorly funded during its 30-year lifetime, reaching about 5 million homes out of the 35 million the DOE estimates are eligible. Of these, DOE estimates that 15 million homes owned by low-income families would benefit from this retrofit.

Investing in more efficient appliances offers further savings. For example, a programmable thermostat, which costs about $150, has a payback period of less than a year. A $900 flame-retention burner saves about $300 per year. Replacing a pre-1980 refrigerator with an Energy Star model can save more than $200 in electricity annually. Nationwide, pilot energy-efficiency programs have decades of experience, reducing energy bills by more than 20%. In California and New York, efficiency programs save families an average of $1,000 and $600 a year, respectively. These savings act like a stimulus program, year in and year out.
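As a rough illustration, the simple-payback arithmetic implied by these figures can be laid out as follows. The costs and savings are those cited above, except for the thermostat’s annual savings, which is an assumed value consistent with the stated sub-one-year payback; financing costs and future energy-price changes are ignored.

```python
# Simple-payback sketch for the efficiency measures cited above.
measures = {
    # name: (upfront cost in dollars, annual savings in dollars)
    "Weatherization package":     (2_800, 413),
    "Programmable thermostat":    (150, 160),   # savings assumed; text says payback < 1 year
    "Flame-retention oil burner": (900, 300),
}
for name, (cost, savings) in measures.items():
    print(f"{name}: roughly {cost / savings:.1f}-year payback")
# An Energy Star refrigerator swap saves about $200 a year; its payback
# depends on the purchase price of the replacement unit.
```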

Whereas energy spending is a drain on the economy, yielding fewer jobs than other types of spending, every dollar spent on energy efficiency returns two dollars in benefits to the state, according to the California Public Utilities Commission. According to estimates by the DOE’s weatherization program, every dollar invested produces $2.72 in savings and benefits, and every $1 million invested creates 52 direct jobs and 23 indirect jobs. Residents also see other, less-tangible returns, including cleaner air and less demand on the power grid, leading to fewer brownouts.

Helping working families reduce their dependence on fossil fuels is a good investment strategy for the United States. Moreover, moderate-income families appear to be willing to adopt energy-efficient and energy-saving habits: They take public transit at two to four times the rate of more affluent families. They also report closing off parts of their homes and keeping their living spaces either hotter or cooler than they feel is safe, according to a survey by NEADA. Thus, targeting this group of households for energy-efficiency investment may yield large financial and social dividends, as well as immediate and significant reductions in energy use and carbon dioxide emissions.

Policy specifics

The centerpiece of the ESAF initiative is a federal government-guaranteed loan program that would enable qualified lenders to make low-interest loans to moderate-income families for the purchase of energy-efficient autos, appliances, and home renovations. In addition, a system of vouchers and state-based incentives would be used to influence purchasing decisions. To create flexible transportation options beyond private cars, the initiative would reward those who don’t drive their cars to work with a yearly voucher and would provide seed money to the public and private sectors to develop alternative transit programs.

The target of these programs should be families or multiple-person households earning $60,000 per year or less. However, in the interests of geographic fairness, because the costs of living are higher in some parts of the country than others, the cutoff point could be raised to $75,000. Vouchers and state-based “nudges” could be tailored to reach certain income levels such as households of two or more people earning less than $60,000.

Automobile vouchers and loans. Private cars and trucks consume 18% of the energy used in the United States and the majority of the petroleum that is burned. The average fuel economy for new cars and trucks is now just 20 miles per gallon (mpg). The fastest and easiest way to reduce the consumption of petroleum right now is to remove the vehicles with the worst gas mileage from the road and replace them with more efficient cars. Toward that end, the ESAF initiative would offer a $1,000 voucher, low-interest auto loans, and state-run “clunker credit” programs to help families buy a car that achieves 30 mpg or more. This sort of government investment in private cars is far from unprecedented. The $3,150 tax rebates offered to buyers of Toyota Prius hybrids were essentially rewards to well-off buyers, many with incomes of $100,000 or above.

The cornerstone of the proposed auto program is very low-interest loans, backed by a government guarantee but provided by private lenders, for cars that achieve 30 mpg or more. Loans would be similar to the Small Business Administration’s 7(a) loans, with the federal government offering a guarantee on most of the value of the loan, thus reducing the risk to authorized lenders. Funds could be directed to favored lenders, such as credit unions, which have a track record of making auto loans to moderate-income car buyers. At least 8 million low- and moderate-income households already receive auto loans yearly, according to an Aspen Institute study, but most pay far higher interest rates. The standard auto loan rate is about 6%, but the sub-prime rate that is often the only option for less-affluent borrowers is usually above 17%. Because loans in the program would be guaranteed by the federal government, interest rates could be as low as 2% APR for a loan of up to $15,000.
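To see why the guaranteed rate matters so much, consider a hypothetical five-year loan for the full $15,000 at the two rates mentioned above; the term length is an assumption chosen for illustration.

```python
# Monthly payment on a hypothetical $15,000, five-year auto loan at the
# program rate versus a typical sub-prime rate.
def monthly_payment(principal, annual_rate, months):
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

for label, rate in [("Guaranteed program", 0.02), ("Sub-prime", 0.17)]:
    pay = monthly_payment(15_000, rate, 60)
    print(f"{label} ({rate:.0%} APR): ${pay:,.0f}/month, "
          f"${pay * 60 - 15_000:,.0f} in total interest")
# Roughly $263 versus $373 a month: more than $6,000 of interest saved
# over the life of the loan.
```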

Qualifying for a loan would be easy. Buyers could apply for the loans online and receive a notice of financing from a local bank or credit union as well as a list of cars eligible for purchase or trusted dealers in their area. Some of the country’s 8,500 credit unions already offer similar services that could be expanded. The loans would include clear rules to discourage predatory lending or sales. For example, used cars would not be financed at more than Blue Book value. The easy availability of low-cost capital may in itself discourage some predatory lending.

The ESAF initiative would also offer money to states to administer clunker-credit programs. Many states, including Texas and California, already operate such programs, which pay owners of old cars to turn them in to salvage yards, where they are dismantled. Texas has successfully offered payments of up to $3,500 per car as part of a pollution-abatement program, and similar programs are in place in Virginia, Colorado, Delaware, and Illinois. Combined with the low-interest loan program, a clunker-credit program would be an effective way of removing less-efficient and dirtier cars from the road and leading buyers to make a leap up in fuel efficiency. The advantage of state implementation is that the states would be able to adjust to local market conditions and be creative in finding the best mix of carrots and sticks.

A new car can dramatically improve the finances and lives of working families. The Bonnie CLAC (Car Loans and Counseling) auto loan program in New Hampshire has helped a thousand drivers obtain lower-interest loans for new cars, reducing their auto payments and maintenance costs. For some households, the savings in fuel have been enormous: One couple made a daily 130-mile commute in a 1998 Ford Explorer with a fuel efficiency of 10 mpg. The higher-mileage Honda Civic they bought to replace it reduced their monthly spending on gasoline from $800 to $200.

Relatively small shifts in market behavior could have a profound effect on U.S. energy consumption. For example, the scrap rate for light trucks, sport utility vehicles, and vans is now around 5% a year. Bumping that to 8% and encouraging 75% of those 8 million households to buy a 30-mpg vehicle would reduce U.S. gasoline consumption by 3.33 billion gallons a year. On a macro level, the U.S. economy would avoid spending $10 billion on fuel (at a gasoline price of $3 a gallon). The program would also assure automakers that there would be long-term demand for fuel-efficient vehicles, creating a market incentive for them to create more vehicles with higher fuel economy than current standards require.
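One way to see where a figure of this size comes from is a back-of-the-envelope calculation. The 8 million households and 75% uptake are from the text; the annual mileage and the fuel economy of the retired vehicles are illustrative assumptions that land close to the article’s estimate.

```python
# Back-of-the-envelope version of the fleet-turnover estimate above.
households = 8_000_000
uptake = 0.75              # share of those households buying a 30-mpg vehicle
annual_miles = 15_000      # assumed miles driven per replaced vehicle per year
old_mpg, new_mpg = 15, 30  # assumed trade from a ~15-mpg truck to a 30-mpg car

saved_per_vehicle = annual_miles / old_mpg - annual_miles / new_mpg  # 500 gallons
total_gallons = households * uptake * saved_per_vehicle
print(f"Gasoline saved: {total_gallons / 1e9:.1f} billion gallons per year")
print(f"Spending avoided at $3/gallon: ${total_gallons * 3 / 1e9:.0f} billion")
# About 3 billion gallons and $9 billion under these assumptions, in the same
# range as the estimates of 3.33 billion gallons and $10 billion cited above.
```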

Home efficiency vouchers and loans. U.S. homes consume 21% of the energy used in the United States. The average household spends nearly $2,000 a year on energy and produces twice the greenhouse gases of an average car. Modest investments in energy efficiency could reduce home energy bills, and emissions, by a fifth.

Toward that end, the ESAF initiative would offer a $1,000 voucher to eligible households to spend on immediate weatherization or appliance upgrades; underwrite a home equity loan program offering low-cost loans for energy-efficiency renovations and efficient appliances; and support a state-run incentive program to encourage cooperation between utilities and homeowners.

The voucher could be issued in the form of an electronic debit card that could be used to buy energy-saving supplies, appliances, and weatherization retrofits that have been approved as cost effective by the EPA’s Energy Star program. Obviously, certain measures would need to be put in place to prevent fraud and waste, but ideally state regulators, utilities, contractors, and appliance dealers would offer packages combining energy audits, approved appliances, and cost-effective retrofits.

The ESAF initiative would require Fannie Mae or a comparable institution to provide low-interest home equity loans and mortgages for energy-efficient home improvements. In the 1990s, Fannie Mae had an effective energy-efficiency mortgage program that proved that investing in efficiency improved families’ ability to pay back their loans by lowering their bills. This time around, Fannie Mae should renew that program and make it accessible to all moderate-income borrowers. If Congress approves a mortgage rescue plan to help with the current financial crisis, energy-efficiency investments should be included in renegotiated mortgages and new ones as well. Like the auto loan program, the home efficiency loan would be backed by a government guarantee. In addition, owners of rental properties could be offered loans to upgrade the efficiency of their properties. This could be a requirement for Section 8 housing, which receives government subsidies.

As with the auto program, applying for a loan should be easy and fast. Families could initiate the process by applying online and having their request routed to nearby banks or credit unions to follow up. Once given a loan, families could purchase Energy Star appliances from approved dealers or contract with a bonded contractor to do construction work on their homes.

Utility companies are in an ideal position to help homeowners perform energy audits and make decisions about efficiency purchases. Utilities have data on all homes in the area they serve, knowledge of energy-demand patterns, and in some states already collaborate with households to reduce energy use. When proper incentives are in place, utilities profit by helping to reduce energy demand because they can avoid investing in power plants and transmission lines. The ESAF initiative would require state regulators to create incentives and rules to encourage utilities to help reduce energy demand. Ideally, utilities would establish partnerships with ratepayers, helping them figure out how to reduce energy demand by 20% and rewarding households that met the reduction targets by lowering their rates.

Innovative transit. Three-quarters of Americans commute to work alone in their cars, 5% take public transit, and 15% commute by car pool, in van pools, by bicycle, by telecommuting, and on foot. If just 3 million more Americans left their cars in the garage, the nation would have net savings of at least a billion gallons of gasoline a year. And the $3 billion those drivers would have spent on gasoline would be directed toward more productive spending. The United States needs to develop alternatives to private cars and mass transit for commuters.

Toward that end, all workers who don’t drive themselves to work would be given a $750 tax rebate every year to offset their transit costs. Drivers are already offered tax breaks of nearly $1,000 a year to offset the cost of parking, but by leaving their cars at home, non-car commuters do society several favors. They reduce road congestion and therefore commute times for everyone else, reduce pollution and greenhouse gas emissions, and reduce petroleum demand, which may make gasoline cheaper for other drivers. A recent study of the San Francisco Bay Area’s 9,000 “casual carpoolers,” who share rides during the morning commute, found that they directly and indirectly save 900,000 gallons of gasoline a year.
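The estimate above that 3 million commuters leaving their cars at home would save at least a billion gallons, and $3 billion, follows from typical commuting arithmetic, sketched here; the commute length, workdays, and fuel economy are illustrative assumptions, not figures from the text.

```python
# Rough check of the savings from 3 million commuters leaving their cars at home.
commuters = 3_000_000
round_trip_miles = 30    # assumed daily round-trip commute
work_days = 250          # assumed commuting days per year
mpg = 22                 # assumed fuel economy of the car left at home

gallons_per_commuter = round_trip_miles * work_days / mpg   # ~340 gallons a year
total_gallons = commuters * gallons_per_commuter
print(f"Gasoline saved: {total_gallons / 1e9:.1f} billion gallons per year")
print(f"Spending redirected at $3/gallon: ${total_gallons * 3 / 1e9:.1f} billion")
# Roughly 1 billion gallons and $3 billion, matching the estimates above.
```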

The transit subsidy, which could be delivered to recipients’ bank accounts as a tax refund or as a debit card, would reward non-drivers for making a decision that benefits everyone. It would replace the $1,380 tax break the federal government already offers on employer reimbursements for carpooling, a benefit rarely claimed because the rules and paperwork requirements are so cumbersome. Fraudulent claims could be discouraged by requiring written assurances from employers or other proof that applicants commuted by transit.

To encourage new ways of traveling to work, startup funds would be provided to local governments, businesses, and nonprofits to help them design innovative, self-supporting, “mini-transit” programs. Such flexible transit programs might include neighborhood car sharing, casual carpool programs, employer-based carpool programs, van pools, and jitneys. Ride sharing can be made easy, convenient, and safe through the use of mobile phones, GPS devices, and transportation affinity networks—a Facebook for carpoolers. It is even possible to pay drivers by using cell phones to transfer funds, as the program goloco.org already does with its 10,000 members. With nurturing, these programs could fill in the considerable gaps in the mass transit system.

Some city buses, which sometimes travel with only a few passengers, may actually use 25% more energy per passenger mile than a private car, according to Oak Ridge National Laboratory. A van pool, in which seats are much more likely to be filled, removes between 6 and 13 cars from the road, according to the EPA. Large employers of moderate-income workers, such as Wal-Mart, could work with other employers and local governments to create van pools to carry their workers to and from work, eliminating the need for employee parking spaces and easing scheduling problems caused by workers with transportation problems. Cities would benefit from reduced congestion, more readily accessible jobs, and less pollution. Workers would benefit because they would not need to shoulder the cost of owning a car and might be able to count on more regular working hours. Many commuters who use van pools say that they make their day less stressful. The market for these services could be significant. A 2003 study in the Puget Sound area of Washington State found that the existing fleet of 1,200 vanpools, which accounted for 1.4% of the commuter trip market, could be expanded up to six-fold with more aggressive incentives.

Making efficiency pay for itself

The ESAF initiative could assist most moderate-income households if it were funded at $45 billion a year for three years. The bulk of the funding would go toward transit tax rebates and vouchers for autos and home efficiency improvements; the low-interest loan program would cost far less. The initiative would provide 20 million transit riders with $750 rebates and offer vouchers for 6 million autos and 10 million home-efficiency projects. Over three years, the program could reach close to 70 million households by helping to upgrade 30 million homes, purchase 18 million cars, and subsidize 20 million commuters a year. The cost of these vouchers would be $31 billion per year.


The auto purchases and home-energy retrofits would be made possible by a $300-billion loan-guarantee program, which would cost approximately $3 billion over five years. Another $9 billion a year would be distributed to states to create incentives and flexible transit.
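Putting the pieces together, the annual budget arithmetic looks roughly like this; all figures are from the text, with the loan-guarantee cost annualized over its five-year life.

```python
# Sketch of the ESAF initiative's annual cost, using the figures above.
transit_rebates = 20_000_000 * 750       # $15 billion
auto_vouchers   = 6_000_000 * 1_000      # $6 billion
home_vouchers   = 10_000_000 * 1_000     # $10 billion
state_programs  = 9e9                    # incentives and flexible transit grants
loan_guarantees = 3e9 / 5                # $3 billion spread over five years

vouchers = transit_rebates + auto_vouchers + home_vouchers
print(f"Vouchers and rebates: ${vouchers / 1e9:.0f} billion per year")  # $31 billion
total = vouchers + state_programs + loan_guarantees
print(f"Approximate annual total: ${total / 1e9:.0f} billion")          # ~$41 billion
# The listed components come to roughly $41 billion, within the proposed
# $45 billion annual funding level.
```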

A typical household taking advantage of both the auto and home efficiency programs could reap annual savings of $1,235 on gasoline and nearly $400 on home energy costs. Some would save much more, either in energy costs or on auto or home financing. Members of a household commuting to work by carpool or vanpool would save at least 187 gallons of gasoline a year and receive the $750 transit rebate in addition.

At the end of three years, the auto program would have reduced U.S. gasoline consumption by 6.35 billion gallons, or 4.5% of total consumption. Likewise, by the third year of the home-efficiency program, 30 million homes would be saving more than $12 billion in energy costs.

The ESAF initiative could be funded as part of a federal stimulus program aimed at the auto and construction industries, through a carbon tax or auction, or by a windfall profits tax on energy companies. Although the public often opposes taxes on gasoline, ESAF could also be funded by a modest tax on imported oil. The United States imports 13.6 million barrels of oil a day, and Americans might be induced to accept a tax on those imports as part of a package that sends a message to foreign oil producers. A tax of $6 a barrel would yield a fund of nearly $30 billion the first year, at a cost to drivers of just 9 cents a gallon. During the course of a year, the average U.S. family would pay less than $100 toward the tax, an amount that could be entirely offset by a decline in gasoline prices.
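The revenue and household-cost figures are straightforward to check; the import volume, fee, and 9-cent pass-through are from the text, while annual household gasoline purchases are an assumed value.

```python
# Check of the oil-import fee arithmetic above.
barrels_per_day = 13.6e6
fee_per_barrel = 6.0
revenue = barrels_per_day * fee_per_barrel * 365
print(f"First-year revenue: ${revenue / 1e9:.1f} billion")    # ~$29.8 billion

pass_through = 0.09        # dollars per gallon, the figure cited in the text
household_gallons = 1_000  # assumed annual gasoline purchases for an average family
print(f"Household cost: about ${pass_through * household_gallons:.0f} per year")  # under $100
```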

The primary purpose of the tax would be to provide a stable source of funds for energy-efficiency investments, but it would have several other important effects as well. First, it would signal to the oil market and oil producers that the United States intends to overcome domestic political inertia and begin aggressively decreasing oil demand. An initiative of this scale would also send a signal to other oil-consuming countries that the United States no longer intends to support cheap-by-any-means-necessary gasoline and is moving toward containing demand through market measures. The tax would also provide an opportunity to educate the public about the loan programs and other ways to reduce gasoline consumption. Driving habits and auto maintenance influence vehicle fuel efficiency by as much as 15%. Printing notices of the tax and tips for reducing fuel consumption on gas receipts has the potential to significantly increase driver awareness and reduce demand, as previous government education projects have reduced demand for tobacco, alcohol, and even water during droughts.

The ESAF initiative represents a long-term investment in the well-being of U.S. families as the nation heads into an era of real uncertainty about energy security and climate change. By shifting spending from energy bills to investment, the initiative will stimulate the economy and encourage businesses that provide smart energy solutions. The relatively low cost will not only reduce household bills but also yield big dividends in greenhouse gas emissions reduction. This is particularly important because a number of studies indicate that carbon cap-and-trade schemes will disproportionately burden lower-income households, rural households, and those living in coal-dependent states in the South and Midwest. The advantage of this initiative is that it addresses this burden directly by enabling moderate-income families to take control of their finances and emissions. What’s more, the net effect on society could be large: If 60 million families take advantage of the program to lower their energy consumption by just 10%, the total reduction of 132 million tons of carbon dioxide would be the equivalent of the emissions of Oregon, South Dakota, Vermont, Maine, Idaho, Delaware, and Washington, D.C., combined. Empowering moderate-income households to be active agents in ensuring the nation’s energy security will strengthen the overall economy and assure a greener, more prosperous future.


Lisa Margonelli is a California-based Irvine Fellow with the New America Foundation and the author of Oil on the Brain: Petroleum’s Long Strange Trip to Your Tank (Broadway Books, 2008).

Practical Pieces of the Energy Puzzle: A Full-Court Press for Renewable Energy

Transformation of the energy system will require steady and generous government support across technological, economic, and social domains.

Any effort to move the United States away from its current fossil-fuel energy system will require the promotion of renewable energy. Of course, renewable energy alone will not solve all problems of climate change, energy security, and local pollution; policies must also stress greater energy efficiency, adaptation to existing and future changes in climate, and possibly other options. But greatly increased reliance on renewable energy will certainly be part of the mix. The nation’s vast resources of solar and wind energy imply that renewable energy could, over time, replace a large part of the fossil-fuel energy system. Policies that encourage and guide such changes need to treat energy as a technological system and include a portfolio of measures that address all of the components of that system.

For more than 20 years, economists, historians, and sociologists have been analyzing technologies as systems. Although each discipline has its own emphasis, framework, and nomenclature, they all converge on a central insight: The materials, devices, and software that usually are thought of as “technology” are created by and function within a larger system with economic, political, and social components. Policies that seek to change technological systems need also to address these nontechnological components, and moving toward the extensive use of renewable energy would constitute a major system change.

The existing energy system includes economic institutions such as banks and capital markets that know how to evaluate an energy firm’s financial status and are knowledgeable about prices. Politically, the system requires such measures as technical standards for a range of items, such as voltage and octane, as well as regulatory rules and structures for environmental protection and worker health and safety. At the social level, the system needs people with diverse skills to operate it, as well as university departments to train these workers and associations to promote their professional growth. Also needed are institutions that can interact successfully with the many populations affected by energy developments.

Along every dimension, the size of the existing energy system almost defies imagination, creating what historian Thomas P. Hughes characterizes as the system’s momentum, the extent to which it resists change. Most obviously, the system moves and processes huge quantities of various fuels and in so doing generates trillions of dollars of revenues worldwide. The many institutions in the system have created well-established norms, rules, and practices, which also resist change. The individuals in the many professions that make the system work have not only their incomes but also their identities tied to the existing system, and system change would put both at risk. Changing this large, deeply entrenched system will take time, major shifts in incentives, and considerable political and business effort.

One could describe this system as emergent: Instead of being planned from the top down, it evolved out of the fragmented efforts of innovators, firms, governments, and nonprofit organizations responding to a complex set of technological, economic, political, and social challenges and incentives. But that process of emergence was anything but smooth or easy. For all of its benefits, it also entailed wrenching economic disruptions, rampant pollution, and sometimes violent labor relations. The nation can do better.

Public policies can influence and guide these changes but cannot determine them. The energy system spans and links together all sectors of society, of which government policy is only a part. The response of businesses, social groups, and even the culture to government policies will drive their effects, as will planned or unexpected technological developments. It is impossible to predict all of the effects of policies with accuracy and precision, so unplanned and probably unwelcome results are certain to follow even the most carefully developed policies. All policies are born flawed. Therefore, governments need to design flexible policies and create institutions that can learn and change. Good policies are ones that get better over time, because no one gets it right the first time.

However, and in tension with the previous point, public policies that seek to change large systems must be long-term and consistent. Flexibility and learning do not mean lurching from one fad to the next. Whether government policy aims to create new technologies through funding for R&D, foster new cohorts of technical experts through education funding, or change the incentives that firms and consumers face through targeted financial incentives, it will need to push in the same general direction for decades. Such consistency has paid off in fields such as information technology and biotechnology. Science policy scholars also can point to the heavy costs of volatile funding, as research groups that take years to assemble will disband after one year of bad funding. No one possesses a simple formula for reconciling the need for flexibility and the need for consistency. However, studying policies that successfully do both can inform the creation of new policies and institutions.

Finally, policies that seek to change the energy system need to stay focused on policy goals beyond simple market efficiency. Not surprisingly, debates over energy often involve discussions of the prices of competing energy sources. However, the energy system entails many other important social consequences, such as environmental and social equity problems. Policy analysts Barry Bozeman and Daniel Sarewitz have proposed a framework called Public Values Mapping in an effort to articulate the nonmarket values that policies should seek. This is not to say that market efficiency is inherently a poor standard, but that market goals may not always align with other goals, and policymakers will need to negotiate those conflicts.

An integrated strategy

To address all of the parts of the energy system requires a four-part policy strategy: improving technology, improving markets, improving the workforce, and improving energy decisionmaking. Each part will entail many specific policies and programs. Many of these policies will come out of or be implemented by firms, trade and professional associations, or advocacy groups, but governments will be centrally involved in all of them.

Improving technology. The level of funding, public and private, for renewable energy R&D is abysmally low, when seen in the context of the size of the energy market. The nation cannot transform a $1 trillion industry with a $1 billion investment. To make matters worse, public and private energy R&D has been declining for decades around the world, including in the United States. Innovative industries spend upward of 10% of their revenues on R&D. Industries such as computers and pharmaceuticals also enjoy the benefits of large government R&D programs.

The volatility of federal spending on renewable energy R&D has also contributed to problems. Such volatile budgets damage any research program. When funding fluctuates, laboratories lose good research teams and, to make matters worse, it is hard to recruit the best researchers and graduate students. Moreover, it is possible for the government to spend lots of money on R&D without producing much social benefit. To succeed, R&D programs need to pay close attention to public/private linkages and to the public social values they promote.

Improving markets. Improving markets for renewable energy technologies means removing impediments to their diffusion, making them more cost-effective, and making economic institutions more sophisticated in dealing with them. A number of market conditions impede the development and diffusion of innovation, usually by increasing transaction costs or placing renewable energy at a financial disadvantage beyond the costs of the devices themselves. For example, a home with solar photovoltaic panels may generate more electricity during the day than the household needs. If so, does a utility have to buy back the excess power, and at what price? If every home or business that puts in solar panels has to individually negotiate those questions with the utility, that greatly increases the transaction costs of renewable energy. A variety of well-tested policies can overcome these and related impediments. These policies include “net metering” that provides home or business owners with retail credit for any excess power they provide to the grid, interconnection standards, building codes, technology standards, and installation certification.
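
For readers who want to see the mechanism in numbers, the following is a minimal sketch of how a simple retail-rate net-metering bill might be computed; the retail rate and the consumption figures are illustrative assumptions, not data from this article.

```python
# Illustrative net-metering arithmetic; all figures are assumptions for the example.
# Under retail-rate net metering, a customer is billed on net consumption:
# grid purchases minus solar exports, both valued at the retail rate.

RETAIL_RATE = 0.12   # $/kWh, assumed retail electricity price

def monthly_bill(grid_purchases_kwh: float, solar_exports_kwh: float) -> float:
    """Bill for one month under simple retail-rate net metering."""
    net_kwh = grid_purchases_kwh - solar_exports_kwh
    # Many programs carry a negative balance forward as a credit rather than paying cash.
    return max(net_kwh, 0.0) * RETAIL_RATE

# A household that buys 600 kWh from the grid but exports 250 kWh of midday solar
# pays for only 350 kWh, with no case-by-case negotiation with the utility.
print(f"${monthly_bill(600, 250):.2f}")   # -> $42.00
```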

Like all other new technologies, renewable energy technologies in their early stages of development can benefit from subsidies (which all other energy sources get anyway) or regulatory mandates. Thus, policies such as production tax credits, “feed-in tariffs” that obligate utilities to buy energy from any renewable energy–generating facilities at above-market prices, and renewable portfolio standards come into play. Many of these policies are problematic, but more than a decade of experience in individual U.S. states and in Europe are enabling analysts to sort out the merits of various policies. In some cases, the economically optimal policies may not be the most politically feasible.

In addition to providing subsidies, governments can use their power of procurement to simply buy renewable energy, creating a large revenue stream for the industry. Government procurement has been a huge driver for many high-tech industries, and it could be for renewable energy as well. The federal government is an immense energy consumer, perhaps the largest in the world. What it buys influences and even creates markets.

Recently, some high-profile venture capitalists have become involved in renewable energy projects. Can other economic institutions, such as banks and insurance companies, assess and finance renewable energy deployment? Part of improving markets means ensuring that, for example, mortgage lenders have the ability and incentive to properly evaluate the effect on operating costs of adding renewable energy to a home or business.

In addition, government policy will need to be deeply involved in developing the appropriate infrastructure, the most obvious part of which is the electrical grid. This is not to say that the government will in any simple sense pay for that infrastructure, but policies will influence who does and how its components are built. Public goods, from lighthouses to highways, have always posed these collective action problems, and public policies are involved in solving them.

Discussions of subsidies or other forms of government aid raise the question of whether renewable energy can compete in markets. To some extent that is a serious issue, but not in the sense of competing in a pure, unaided market. First, government has been involved in energy markets for more than a century through procurements, subsidies, regulations, tax benefits, and other means. All forms of energy have enjoyed many tens of billions of dollars of government largess. It makes no sense to say that renewable energy has to do it on its own. Second, every major technological revolution of the 20th century—and that is what changing the nation’s energy system will be—had government deeply involved. Energy will be no exception to that rule.

Improving the workforce. The development and deployment of renewable energy technologies will require an ever-growing and diverse workforce: wind-turbine installers, solar design engineers, systems analysts, Ph.D.-level researchers, and so on down a long list. Does the nation have the programs, in quantity and quality, to train, certify, and provide professional development for such a workforce? For scientists and engineers, the federal government has typically funded graduate and undergraduate education indirectly through research assistantships tied to government research grants. Will that policy work for renewable energy, and is the flow of funds large enough? This gets back to the point about volatile funding for R&D. That volatility hurts education as well as the research itself. To attract the best researchers and the best graduate students into this field, it needs relatively steady funding. The nation would neglect this part of the system at its peril. The current policy of volatility seems to be based on the Homer Simpson philosophy of education: “Our children are our future. Unless we stop them now.” Surely, our society can do better.

Improving energy decisionmaking. Improving purchasing decisions is usually thought of as a matter of consumer education. Instead, the focus should be on the many people in the economy whose decisions drive substantial energy use: vehicle fleet managers, architects, heating and cooling engineers, building and facility managers, and many others who purchase energy for institutions. Private groups are doing some of this education; one example is the U.S. Green Building Council’s Leadership in Energy and Environmental Design (LEED) certification program for environmental standards in buildings. Government policy could work with the many professional associations to help energy decisionmakers be up to date on the opportunities and problems they might encounter in adopting renewable energy. Some of this is already happening, but it needs to expand greatly in guiding energy-related decisions.

Each piece of this four-part strategy will have numerous policies within it, and developing those policies for the array of technologies involved in renewable energy will be a large and multidisciplinary task. Configuring government institutions to implement and, as need be, adapt those policies will generate another set of challenges. Some policies in this strategy will be controversial, and no doubt some people will argue that the government should stay out of the way entirely and let the process unfold as it will. After all, the energy system has changed before and will change again.

But the true bottom line is that it would be irresponsible for government to take such a hands-off position. The change in the energy system will be a huge and wrenching event, and government has an obligation to push the change in a socially desirable direction as well as to try to alleviate some of the negative fallout of the changes. In an important sense, government policy will be unavoidably involved in this change. Government regulations and mandates structure the world that businesses and social groups encounter, and therefore they play important roles in resolving the conflicts that such changes inevitably entail. Because government policy cannot help but be involved, it should push for a system that protects the public interest and reflects values that markets by themselves will neglect. The nation simply cannot afford the waste of resources and environmental damage of past system changes and should not tolerate the human costs.

Recommended reading

Barry Bozeman and Daniel Sarewitz, “Public Values and Public Failure in U.S. Science Policy,” Science and Public Policy 32, no. 2 (April 2005): 119–136.

Thomas P. Hughes, “Technological Momentum,” in Albert H. Teich, ed., Technology and the Future, 8th ed. (Boston/New York: Bedford/St. Martin’s Press): 26–35.

Richard Nelson, National Innovation Systems (New York: Oxford University Press, 1993).

Gregory F. Nemet and Daniel M. Kammen, “U.S. Energy Research and Development: Declining Investment, Increasing Need, and the Feasibility of Expansion,” Energy Policy 35 (2007): 746–755.


Frank N. Laird is an associate professor in the University of Denver’s Josef Korbel School of International Studies.

Science, Technology, and Global Reengagement

The new administration should move quickly to give science and technology (S&T) a prominent role in foreign policy. Historic shifts are under way in S&T capabilities around the globe. Those shifts create unprecedented opportunities for discovery and innovation, for responding to common challenges, and for U.S. leadership. Yet rather than being poised to lead the way, the United States is in a weak position.

The new administration will probably reformulate U.S. global policies, giving a higher priority to international engagement instead of unilateralism. International links in S&T can play a central role in this global reengagement. But to realize this potential, S&T issues related to foreign policy can no longer just be at the table. They must be in the lead.

A number of studies during the past few decades have stressed the importance of U.S.-international partnerships in S&T. But follow-up actions have been modest at best. Why haven’t past recommendations had a significant impact? What can the incoming administration do to achieve better success, leveraging global trends and U.S. S&T capabilities to more fully advance common interests?

To be meaningful, S&T policy changes must reflect power and process in the government. S&T interests must be able to define policies at the highest levels. They must be able to influence budgets, spur action throughout the federal government, and work with partners, both international and domestic.

Science, technology, and diplomacy intertwined at high levels throughout the second half of the 20th century. President Kennedy launched the first bilateral science agreement with Japan after World War II, and it led to one of the nation’s strongest international partnerships. President Nixon promoted building scientific links with China as he began normalizing relations, and Chinese universities have become a leading source of graduate students in U.S. science and engineering programs. President Clinton leveraged decades of scientific ties with the former Soviet Union to assist in the safer disposition of hundreds of tons of weapons-grade nuclear material. Today, there are many more possibilities for win-win collaboration.

Asia’s investment in R&D is on the verge of surpassing that of North America. China has exceeded Japan in its national S&T investment and now trails only the United States. The World Technology Evaluation Center recently assessed research in China in fields such as nanotechnology, catalysis, and brain-computer interfaces. In each case, China is doing research that defines the state of the art and is developing facilities second to none.

In South Korea, the government elevated the S&T minister to deputy prime minister. Economies from India to Indonesia have devised policies to advance S&T. India has passed South Korea in total R&D expenditures while launching a massive program to expand higher education. Indonesia held its first National Innovation Summit in the summer of 2006. Singapore continues to advance as Asia’s world-class biotech hub, while Malaysia remains the region’s information technology leader. Vietnam is a hot spot for new ventures.

In 2007, the 22 nations of the Arab League announced a 10-year plan to increase support for scientific research 12-fold, to an average of 2.5% of GDP. Egypt’s President Hosni Mubarak has declared 2007-2017 as Egypt’s “Decade of Science,” and Qatar—despite a population of less than 1 million—has pledged a $1.5 billion annual allocation to science. In Saudi Arabia, the King Abdullah University of Science and Technology is being launched in 2009, with an initial endowment of $10 billion. Private sources are also moving to play a major role. Sheikh Mohammed bin Rashid al Maktoum of the United Arab Emirates has created a pan-Arab educational foundation with an endowment of $10 billion.

In the African Union, nations developed a consolidated S&T action plan with the theme “Science, Technology and Scientific Research and Climate Change” for the 2007 Summit of Heads of State. In Latin America, Brazil continues to expand its investment in S&T and its global leadership in biomass renewable energy. The presidents of Chile and Argentina have launched programs to promote development of their S&T capabilities.

Accompanying this increased capability around the globe is the heightened recognition that humanity now faces many common challenges that can be addressed most effectively if nations pool and leverage their assets. In the battle against infectious diseases, working closely with nations such as Indonesia and Vietnam is critical to dealing with avian influenza. In the search for new medications, cooperation can expand exploration of tropical organisms, which are the source of 25% of Western pharmaceuticals. The United States could learn much from Europe and Japan about using energy more efficiently, and many countries are eager to find ways to capture and sequester carbon. Penrose Albright, the first assistant secretary for S&T in the Department of Homeland Security, has observed that “international cooperation in S&T must underpin any U.S. counterterrorism strategy. … the needed talent (and understanding of the threat) exists in the broader international community.”

Helping countries prepare for natural disasters can be enhanced through global monitoring and the expertise of other nations, such as Japan’s capabilities in earthquake mitigation. To improve the food supply and nutrition, cooperation will speed genome projects to decode the DNA of food staples from wheat to rice to kiwis. With emerging fields such as nanotechnology and biotechnology, cooperation would help prepare international policies from the outset rather than having to harmonize a maze of national regulations. As National Science Foundation (NSF) director Arden Bement has observed, “International cooperation in science is not a luxury. It is a necessity.”

Turgid processes

Although the science community often feels that the importance of these international issues should compel action, action does not necessarily follow. Take the example of the U.S. government’s initiative to address emerging infectious diseases. In response to a growing array of these scourges, the United States in the mid-1990s launched an initiative to better address them where they arise. But the budget of the Centers for Disease Control and Prevention (CDC) allocated to addressing global emerging infections was only about $5.6 million. (By contrast, Dustin Hoffman received a reported $8 million for his role in the movie Outbreak, which dealt with the danger of an epidemic.)

Through the National Science and Technology Council (NSTC), a U.S. government strategy on emerging diseases was developed. But this was only a first step. At an initial meeting at the Office of Management and Budget (OMB), a young OMB budget examiner initially dismissed the issue, saying he “did not hear infectious disease was a problem.” When it was noted that an emerging disease program would also address vulnerabilities in U.S. domestic and global health infrastructure that made the nation more vulnerable to bioterrorism, a senior OMB official called that argument alarmist and irresponsible. When emerging diseases were suggested as a topic for policy discussions at the Asia-Pacific Economic Cooperation (APEC) forum, the U.S. ambassador to APEC said in a White House meeting, “I just don’t get this infectious disease issue.” This attitude slowed the development of a dialogue on the disease problem. Meanwhile, congressional staff declared that they were interested in the subject but wanted to wait for the administration to define the next steps.

Momentum shifted into higher gear after a concerted effort on several fronts. The director of the CDC made the issue a top priority, and other agencies echoed the need for greater action. A presidential decision directive (comparable to an executive order) was issued, top officials at the National Security Council (NSC) took an interest in actively addressing emerging diseases as a national security issue, and ultimately the president held a White House meeting on the matter. Once the president has become engaged, no room is big enough to contain all the people who have suddenly discovered the importance of an issue.

Budget support was ultimately increased at several agencies, with CDC funding for this effort reaching $168 million by 2000. This strategy laid the foundation for the government’s post-9/11 response to bioterrorism. After 9/11, the issue also became a central topic at the APEC forum as well as in the global community.

However, it should not take half a decade of bureaucratic tussling—and a national disaster—to put in place sensible S&T-based policy. S&T needs to be in a leadership role. It is essential to define policy in a way that ensures resources and incentives are in place to spur government agencies and nongovernmental partners into action.

Yet trends have been moving in the opposite direction. At the State Department, despite the establishment in 2000 of the post of science advisor to the secretary of state, little has been done to reverse decades of decay in S&T priorities. Career incentives have not yet been reestablished since the elimination in the mid-1990s of career tracks in oceans, environment, and science and the downgrading of science counselor positions at U.S. embassies around the world. Science at State is borne on the shoulders of temporary science fellows.

The U.S. Agency for International Development (USAID) eliminated its Research and Development Bureau in 1993 and subsequently cancelled other S&T budget items, including a successful international fellowship program, which had more than 3,200 African professionals earning graduate degrees at U.S. universities. In 2003-2004, while the U.S. National Academy of Sciences (NAS) was studying and validating the value of S&T to U.S. international development priorities, USAID eliminated more of its S&T functions. The once active USAID Science Fellows program has all but disappeared.

The White House also stepped back. In 2001, the White House eliminated the management position dedicated to international S&T issues in the Office of Science and Technology Policy (OSTP) as well as the NSTC’s committee on international science, engineering, and technology—which had launched the emerging infectious diseases initiative described above.

Turnaround formula

In order to make a difference, policies must establish authority, provide resources, and align incentives. This is the leadership package that enables action. The measures should include leadership from the top, defining a position from which things can get done, influencing budgets, and incorporating incentives so that the bureaucracy wants to execute the policy. Here are some specific proposals.

Leadership. The greatest need is for clear leadership from the president. Anything less will result in muddled progress at best. A variety of agencies can respond to varying incentives, but their mixed interests have often resulted in a stalemate, handicapping both S&T and foreign policy. The best time to exert this leadership is in the first 100 days of a new administration. As policies are being redirected, agencies will look to the new president for guidance. A clear form of guidance would be an executive order on S&T in global affairs.

Decisionmaking. If S&T are to be seriously integrated into global affairs, the OSTP director must be a member of the NSC as well as the National Economic Council (NEC). No serious international work can be done without integration into the NSC. The NSC and NEC directors currently sit on each other’s councils, and both are on the NSTC.

Execution. Recent history has shown that S&T policy concerns have trouble attracting timely attention and action. The remedy is for the executive order to create a new White House position: deputy assistant to the president for science, technology, and global affairs. The seniority of this position matters. Proximity to the president is power, and a person who can deal with the crosscutting issues that involve the OSTP, NSC, and NEC can make a critical difference. Scientists are often content to have a seat at the table because they believe that their expertise will win respect. But in the rapid-fire environment of high-level policymaking, passive advice is often ignored. The S&T perspective should not merely be at the table; it should take the lead in framing the discussion and influencing decisions.

Budget. To act, agencies need resources, and securing resources for international S&T activities has been difficult. When I was the head of international issues at OSTP, numerous agency representatives noted that this issue could be a “third rail,” because the atmosphere in Congress was perceived as hostile. Foreign partners are not a strong political constituency.

International S&T cooperation is greatest in cases in which national interests are deemed most vital: national defense and health. The Department of Defense and the National Institutes of Health have extensive international efforts designed to tap expertise wherever it is found. Why other agencies have less interest in pursuing this global strategy is a mystery. Further, when budget instability in Congress affects major international commitments such as the U.S. commitment to the International Thermonuclear Experimental Reactor, the negative consequences affect the nation’s ability to secure partnerships in other arenas.

The executive order should direct the deputy assistant to the president and the OMB to review international S&T initiatives in the context of annual agency budget proposals. Without such a direct link, budgetary influence is much more ephemeral. Here, Japan’s cabinet-level Council on Science and Technology might provide a model. This council, sitting in the prime minister’s office, plays a formal role in the annual budget process, which enables it to provide meaningful support for priorities and more effective coordination of all S&T programs.

Strategy. The executive order should call for a strategy for S&T in global affairs. Part of the challenge in gaining support for international S&T is that it is not clear to many how much it benefits our national goals or advances technical knowledge. Such a strategy could validate the broad value of international engagement in S&T. It would clarify action and accountability by directing S&T agencies to define ways of supporting their missions and U.S. global priorities simultaneously. It would also mean directing foreign policy agencies to decide how to integrate S&T into their global policy missions and directing all agencies to articulate factors that would fit into agency-by-agency goals and performance plans.

This S&T strategy can also provide a framework for collaborating with nongovernmental organizations. Nongovernmental organizations such as the NAS, American Association for the Advancement of Science, and the Civilian Research and Development Foundation have extensive global networks and on-the-ground expertise. They can also work in situations where the government finds it difficult to do so, such as in our relations with Libya, Iran, and Cuba.

Define incentives for action. To act, agencies need incentives. Budget is one. The congressionally mandated Government Performance and Results Act (GPRA) is another. GPRA requires all agencies to develop regular strategic plans, performance plans, and performance assessments. This system has been effective in driving and clarifying performance in federal agencies, and it should therefore reflect policies for S&T in global engagement. Incorporating strategic and performance criteria such as the effective leveraging of international assets and expertise would help to reshape this aspect of bureaucratic culture.

Get the best ideas from the bottom up. Scientists often dislike the word strategy because it seems to imply a top-down ordering of events. Many are suspicious of policy as an intrusion rather than an enabler. Just as the United States has achieved the highest quality science using a bottom-up process of idea generation, so too can bottom-up partnerships provide excellent opportunities for global leveraging, global resources, and global impacts. The executive order should direct agencies to establish bottom-up leveraged international partnership programs.

An example is the relatively new Program for International Research and Education at NSF, which leverages capabilities globally and is extremely popular with U.S. research institutions. Projects address a diversity of research challenges, including imaging the African superplume seismic geostructure, analyzing geohazards, providing cleaner water through nanotechnology, developing better ways of interpreting meaning in languages, and advancing frontier fields such as angstrom-scale technologies, electron chemistry, and microfluidics.

Strengthen science in foreign policy. Here more muscle is needed. The executive order should establish the position of under secretary for environment, science, technology, and health in the State Department. This person would also function as science advisor to the Secretary of State, which would give the science advisor the authority, staff, and resources to shape and follow through on policy initiatives. Currently, the Bureau of Oceans and International Environmental and Scientific Affairs (OES) in the State Department is handicapped by being a secondary priority of the under secretary for democracy and global affairs. Although the science advisor has access to the Secretary of State, the position has few staff or resources and no direct influence over the OES Bureau. The advisor’s decisionmaking authority needs to be enhanced and resources aligned.

With undersecretary rank, the position would be comparable to the undersecretary for S&T at the Department of Homeland Security, undersecretary for science at the Department of Energy, and the undersecretary for oceans and atmosphere at the Department of Commerce. The science advisor would continue to participate in the broad range of S&T-intensive issues in the State Department’s foreign policy portfolio, including arms control, counterterrorism, and export controls.

Create a catalytic Global Priorities S&T Fund. Modest budgets can catalyze a lot of activity, but the State Department, despite its central role in foreign affairs, has highly limited resources. If the science advisor has no budget to even organize workshops, other agencies with S&T capabilities, international partners, and nongovernmental organizations will not come to the table. A dedicated Global Priorities S&T Fund is needed. It would also support grants to encourage international cooperative activities that advance U.S. foreign policy priorities.

Create a development S&T fund. At USAID, the executive order should establish a separate fund to support S&T global aid priorities. The NAS report on USAID documented the longstanding and counterproductive tension between the need for immediate crisis management and the desire for longer-term capacity building, with the former typically winning out over the latter for resources. A dedicated fund would mitigate the bureaucratic stalemate that has historically weakened long-term goal-setting.

The congressional role

Past studies fail to highlight the critical role played by Congress in S&T policy. Its leadership and support are essential. Members of Congress have often complained that international engagement in S&T is a handout rather than an activity of mutual benefit to the United States and other countries. This clearly deters agency actions. There are three ways to start the process of improving support from Congress: Create a congressional caucus on S&T in global affairs, develop congressional resolutions expressing support, and pass legislation to define global engagement as one tool in effectively fulfilling agency missions and serving the public.

Creating a congressional S&T caucus would help organize congressional support, identify appropriate congressional leaders, provide a forum for education and information exchange, and enable more effective policy guidance. Such congressional caucuses have long existed for national defense, health care, the environment, and S&T for competitiveness.

As an example, in 1997, the Senate S&T caucus provided active dialogue and support for doubling the NSF research budget. On the House side, Reps. Rush Holt (D-NJ) and Judy Biggert (R-IL) formed a similar congressional R&D caucus. These two caucuses have also been active in supporting the annual S&T congressional visits day, during which professional and academic organizations flock to Capitol Hill to present briefings on the need for sustained investments.

To promote science and math education, Reps. Vern Ehlers (R-MI) and Mark Udall (D-CO) launched a bipartisan education caucus for members of Congress, and Sens. Norm Coleman (R-MN) and Richard Durbin (D-IL) established a similar science and math education caucus in the Senate.

To express support, proclamations such as congressional resolutions and sense-of-the-Congress statements could be a first step. These do not have the force of law, but they provide the federal bureaucracy with confirmation that members of Congress back a policy priority. These proclamations can also be done quickly. In a bureaucracy that is often gun-shy when it comes to international S&T, signs of support from Congress would strike a positive chord.

For example, in 2004, both the House and the Senate passed resolutions that encouraged the government and public to observe the World Year of Physics and to engage in educational and research activities to strengthen awareness of the field and advance its knowledge base. The Senate and House resolutions on the International Polar Year of 2007 similarly called for certain agencies to give priority to promoting this event and directed NSF to report on how they would do so.

Legislation would make clear that federal agency missions include leveraging international partnerships in S&T. This would give positive momentum to agencies, make the priority unambiguous, and provide a stronger basis for long-term commitment should future administrations wobble. Agency reauthorization bills provide one opportunity to confirm this priority. The House Committee on Science and Technology held two hearings in 2008 on the international dimensions of S&T opportunities, which could be an important step in this direction.

For decades, U.S. policy toward the dual faces of S&T in international affairs has hobbled along. The growth of global capabilities in S&T and the rise of common global challenges increase the handicap stemming from this weak engagement. Policies to advance S&T have come to the forefront in all regions of the world, and the rise of capabilities on all continents has broadly expanded the sources of discovery and innovation. The world is advancing, but U.S. policies are standing still.

Only with leadership at the highest level, combined with appropriate resources and incentives down to the operational level, can the United States gain full advantage from these underused national and international assets. The new administration has an historic chance to leverage global opportunities in S&T. This could strengthen U.S. global leadership, more effectively meet pressing challenges, and enhance the speed of discovery and innovation. The challenge to the next administration is to see the world as it is changing and to lead.

A National Renewable Portfolio Standard? Not Practical

A discussion of renewable energy seems to addle the brains of many sensible people, leading them to propose policies that are bad engineering and science or have a foundation in yearning for utopia. For example, Michael Bloomberg, self-made billionaire and mayor of New York City, proposed putting wind turbines on the tops of skyscrapers and bridges. No need to ask the engineers whether the structures could bear the strain or whether there were good wind resources. Disagreeing with the mayor, the Alliance for Clean Energy New York said, “New York is really a solar city.” Like Mayor Bloomberg and the Alliance, 25 governors, and more than 100 members of Congress, we love renewable energy. However, even this wonderful idea requires a hard look to see what is sensible now and why some current and proposed policies are likely to be costly, anger many people, and undermine the reliability of our electricity system. Congress needs to understand some facts before voting for a national renewable portfolio standard (RPS).

We share the goals of reducing pollution and greenhouse gas emissions, enhancing energy security, maintaining electric supply reliability, and controlling costs. The mistake is to think that a blinkered emphasis on renewable energy sources is the best way to achieve these goals. Unfortunately, this mistake has swept through 25 state legislatures.

These states have indicated their dissatisfaction with the current electricity-generation system by enacting binding RPSs, which require that wind, solar, geothermal, biomass, waste, or other renewable resources be used to generate up to 30% of the electricity sold by 2025. At the federal level, H.R. 969 was introduced in the 110th Congress to require that 20% of the nation’s electric power be generated by renewable energy sources. Organizations ranging from MoveOn.org and the Union of Concerned Scientists to the American Wind Energy Association urged its passage as a way to fight global warming, promote energy independence, increase wind-lease payments to farmers, and move the country toward a clean energy economy based on solar and wind power. H.R. 969 was not enacted, but a national RPS will certainly be reconsidered after the election.

A national RPS is a bad idea for three reasons. First, “renewable” and “low greenhouse gas emissions” are not synonyms; there are several other practical and often less expensive ways to generate electricity with low CO2 emissions. Second, renewable sources such as wind, geothermal, and solar are located far from where most people live. This means that huge numbers of unpopular and expensive transmission lines would have to be built to get the power to where it could be used. Third, since we doubt that all the needed transmission lines would be built, a national RPS without sufficient transmission would force a city such as Atlanta to buy renewable credits, essentially bribing rural states such as North Dakota to use their wind power locally. However, the abundant renewable resources and low population in these areas mean that supply could exceed local demand. Although the grid can handle 20% of its power coming from an intermittent source such as wind, it is well beyond the state of the art to handle 50% or more in one area. At that percentage, supply disruptions become much more likely, and the highly interconnected electricity grid is subject to cascading blackouts when there is a disturbance, even in a remote area.

Renewable energy sources are a key part of the nation’s future, but wishful thinking does not provide an adequate foundation for public policy. The national RPS that gathered 159 cosponsors in the last Congress would be expensive and difficult to attain; it could cause a backlash that might doom renewable energy even in the areas where it is abundant and economical.

Consider the numbers. Past mandates and subsidies have increased wind’s share of generated electric energy to 0.8% of total U.S. generation and geothermal’s share to 0.4%. Generation from photovoltaic cells and ocean waves and currents totals less than 0.02%. Wood and municipal waste provide 1.3%, and conventional hydroelectric 6% (but large hydroelectric power is generally excluded from RPS calculations). The near-term potential for acquiring significant additional generation from any of the renewable sources except wind is small. Thus, a renewable portfolio standard calling for 15 to 30% of electricity from renewable sources would require that wind generation expand at least 15-fold and perhaps more than 30-fold.
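
The scale-up implied by those shares can be checked with simple arithmetic; the sketch below uses only the percentages quoted in the preceding paragraph and assumes that the non-wind renewables roughly hold their current shares.

```python
# Back-of-envelope check of the required wind expansion, using the shares cited above.
wind_share = 0.008                           # wind: 0.8% of U.S. generation
other_renewables = 0.004 + 0.0002 + 0.013    # geothermal, PV/ocean, wood and waste

for target in (0.15, 0.30):                  # RPS targets of 15% and 30%
    # Wind must cover whatever the other renewables cannot.
    wind_needed = target - other_renewables
    print(f"{target:.0%} RPS -> wind must grow ~{wind_needed / wind_share:.0f}-fold")
# Output: roughly 17-fold for a 15% target and roughly 35-fold for a 30% target,
# consistent with the "at least 15-fold and perhaps more than 30-fold" estimate.
```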

The timeframes for reaching these production goals are very short. Eighteen states require that by 2015 at least 10% of their electricity must come from renewable sources. California and New York require 25%. Satisfying the state mandates would require the production and siting of hundreds of thousands of wind turbines. Because there is little wind power near large population centers, tens of thousands of miles of new transmission lines would have to be built within the next few years. Not only can transmission costs double the cost of delivered power, but the median time to obtain permission and build long-distance transmission lines has been 7 years—when they can be built at all. A Wall Street executive responsible for financing transmission lines stated that of 35 lines he has been involved with at an advanced stage, 80% were never built.

As Massachusetts has already discovered, implementing an RPS is far more difficult than passing popular legislation. The proposed wind farm off Cape Cod is stalled, and Massachusetts is badly behind in meeting its RPS. Even beyond siting the wind farms, states and the federal government would have to expedite permitting and obtaining the land and permission to build transmission lines, as well as provide the resources to review interconnection applications quickly. Although the public supports renewable energy in the abstract, many groups object vociferously to wind farms in particular places and to transmission lines nearly everywhere.

Producing sufficient wind turbines would require a major increase in manufacturing capacity. Demand (driven by state RPSs and the federal renewable production tax credit) has already stretched supplies thin, creating an 18-month delivery delay for wind machines. It has also emboldened manufacturers to reduce wind turbine warranties from five years to two.

Many current laws mandate the use of a specific technology, apparently assuming that legislators can predict the success of future R&D. An RPS is such a law. In our judgment, laws ought to specify requirements that generation technologies must meet, such as low pollution, affordability, power quality, and domestic power sources, and leave the means of realizing the goals to technologists and the market.

Technological realities

Wind and solar generation are qualitatively different from electricity generated by fossil fuels, nuclear energy, or hydropower. Wind and solar generation are variable, do not generate power most of the time, and generally do not generate electricity when demand is highest. The cost of renewable power includes ancillary expenses such as long-distance transmission, the need to operate fossil-fueled backup facilities, and storage. Each of the renewable sources has its particular liabilities.

Wind. For the next decade or two, wind is the most practical and cost-effective renewable option and has been deployed in 27 states. Wind and geothermal are, on a percentage basis, the nation’s fastest-growing electric power sources. But even at the 2008 rate of growth (a historic high), wind will supply less than 2% of U.S. electric energy in 2020. If new policies aim to increase wind’s share to 13% of 2020 electric energy, it would mean increasing annual wind installations from 5,400 megawatts (MW) (in 2008) to between 40,000 and 70,000 MW per year by 2020. Total land area for wind farms would be 30,000 to 50,000 square miles, about the area of Ohio.

Among the disadvantages of wind systems are that they produce power only when the wind is strong and that they are most productive at night and during spring and fall, when electricity demand is low. The capacity factor (the percent of maximum generation potential actually generated) of the best sites for wind turbines is about 40%, and the average capacity factor of all the wind turbines used to generate utility power in the United States was 25% in 2007.
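
The capacity-factor figures translate directly into annual energy, as the short calculation below shows; the 100-MW wind farm is a hypothetical example.

```python
# Capacity factor: energy actually delivered divided by what the nameplate rating
# could deliver if the turbines ran flat-out all year.

HOURS_PER_YEAR = 8760

def annual_energy_mwh(nameplate_mw: float, capacity_factor: float) -> float:
    return nameplate_mw * capacity_factor * HOURS_PER_YEAR

# A hypothetical 100-MW wind farm at the 2007 U.S. fleet average (25%)
# versus an excellent site (40%):
print(annual_energy_mwh(100, 0.25))   # 219,000 MWh per year
print(annual_energy_mwh(100, 0.40))   # 350,400 MWh per year
```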

Electricity can be generated by wind turbines for an unsubsidized cost of 8 cents per kilowatt-hour (kWh) (at sites with a capacity factor of 40%) to 12 cents/kWh (at sites with the 2007 average capacity factor of 25%). Transmitting the power to market could add 1 to 8 cents/kWh, depending on the distance and the cost of acquiring land and installing the lines. Because the best wind sites are remote, the cost of delivered wind power to the populous Northeast or Southeast would be 12 to 20 cents/kWh. A new coal gasification plant with CO2 capture is estimated to produce power for 10 cents/kWh and could be located much closer to where the power is consumed. New nuclear plants might produce power for 12 cents/kWh. Energy-efficient appliances and buildings reduce energy consumption at a much lower cost.
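
The delivered-cost comparison is simple addition, as the sketch below makes explicit using the ranges quoted above.

```python
# Delivered cost of wind = generation cost at the site + transmission adder (cents/kWh),
# using the ranges quoted in the text.
wind_generation = (8, 12)   # 40% capacity-factor sites vs. 25% average sites
transmission = (1, 8)       # short, cheap lines vs. long, expensive ones

cheapest = wind_generation[0] + transmission[0]
costliest = wind_generation[1] + transmission[1]
print(f"delivered wind: {cheapest}-{costliest} cents/kWh")
# The cheapest combination rarely occurs in practice: the best wind sites are remote and
# need the longest lines, which is why delivered cost to the Northeast or Southeast lands
# at 12 to 20 cents/kWh, versus roughly 10 cents/kWh for new coal with CO2 capture.
```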

Wind power does save fossil fuel, but not as much as it might seem. For example, if wind supplied 15% of the electricity, it would save less than 15% of fuel because other generators backing up the wind must often run at idle even when the wind is blowing and because their fuel economy suffers when they have to ramp up and slow down to compensate for variability in wind.

Variability also requires constant attention, lest it threaten the reliability of the electric system. On February 26, 2008, the power system in Texas narrowly avoided a breakdown. At 3 p.m., wind power was supplying a bit more than 5% of demand. But over the course of the next 3.5 hours, an unforecast lull caused wind power to fall from 2,000 MW to 350 MW, just as evening demand was peaking. Grid operators declared an emergency and blacked out 1,100 MW of load in a successful attempt to avoid a system collapse. According to the Electric Reliability Council of Texas, “This was not the first or even the worst such incident in ERCOT’s area. Of 82 alerts in 2007, 27 were ‘strongly correlated to the drop in wind’.”

At night the wind blows strongly and demand for power is low. On Hawaii’s Big Island, wind supplies over a third of nighttime electric energy. Oil generators that are not required are shut down. On three nights during one week in June 2007 on the Big Island, the variability of the wind overwhelmed the ability of the single oil generator that remained running to compensate. While the system operators urgently tried to get a second unit warmed up, the frequency of grid power fell from its normal 60 hertz (Hz) to 58 Hz. Most grids implement emergency procedures to keep frequency from falling below 59.8 Hz in order to prevent damage to customers’ electronic equipment.

The largest system with significant wind energy is Spain, where wind supplies 9.5% of electric energy every year. System operators there cope well, helped by large hydroelectric plants (18% of all generation capacity) that can react quickly to drops in the wind and store excess electricity when the wind blows strongly at times of low demand. Spain’s large amount of excess capacity also helps to protect system reliability; it has 86 GW of generation, including 15 GW of wind, to serve a maximum load of 45 GW. In the U.S.’s largest wind area, Texas, there is 6 GW of wind capacity but only 0.5 GW of hydroelectric capacity (with no ability to store electricity). Instead of Spain’s 90% excess generation capacity, Texas has 13%.
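
The reserve-margin comparison behind that contrast is easy to reproduce from the figures just given.

```python
# Reserve margin: excess generating capacity relative to peak load.
def reserve_margin(capacity_gw: float, peak_load_gw: float) -> float:
    return (capacity_gw - peak_load_gw) / peak_load_gw

spain = reserve_margin(86, 45)                 # 86 GW installed, 45 GW peak load
print(f"Spain: {spain:.0%} excess capacity")   # ~91%, i.e., the ~90% cited above
# Texas, by contrast, is reported above to have only about a 13% margin and
# essentially no storage hydro to smooth a sudden drop in wind output.
```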

Can the United States do as well as Spain or, as mandated by 11 state RPSs, twice as well? Yes, but probably not without the $60 billion investment in new transmission lines recommended by the American Wind Energy Association. Such an interstate superhighway transmission system might allow remote generators or hydroelectric dams to pick up the slack when the wind dies down. A recent U.S. Department of Energy report relies on such a system to sketch a roadmap to 20% wind energy by 2030. Major investments in transmission lines, standby generators, and storage will be required to ensure that the lights don’t flicker if 20% of the nation’s electric energy comes from wind.

Finally, wind energy is a finite resource. At large scale, slowing down the wind by using its energy to turn turbines has environmental consequences. A group of researchers at Princeton University found that wind farms may change the mixing of air near the surface, drying the soil near the site. At planetary scales, David Keith (then at Carnegie Mellon) and coworkers found that if wind supplied 10% of expected global electricity demand in 2100, the resulting change in the atmosphere’s energy might cause some regions of the world to experience temperature changes of approximately 1°C.

Solar. The amount of solar energy that reaches the United States each year is equivalent to an impressive 4,000 times the nation’s electric power needs. Although using the Sun’s energy has captured people’s imagination, its practical near-term prospects for meeting an RPS are dim.

Electric power can be supplied by solar photovoltaic (PV) arrays and by solar thermal systems in which the Sun heats a fluid that generates steam to drive a steam turbine. PV has a nonsubsidized cost of 33 to 61 cents/kWh, almost 10 times the cost of the current electric power generation mix, and 3 to 5 times the cost of other low-carbon generators. The current cost of PV makes it more a subject for basic research than widespread deployment. Solar thermal is cheaper, but without subsidy is not competitive except in special applications.

One of the largest solar PV arrays in the United States is a 5-MW system operated by Tucson Electric Power in Arizona. Over two years of operation, the capacity factor for that generator has averaged 19%. Even in Arizona, clouds cause rapid fluctuation in the array’s power output. As with wind, large-scale solar power will require large transmission system investment to pair solar with steady power.

Solar thermal systems such as the new 64-MW Nevada Solar One installation should have smoother output power than PV systems because the thermal inertia of the oil used as a working fluid allows the plant to continue producing electricity despite fluctuations in solar input. Molten-salt energy storage will be used to store energy for a few hours in order to generate power during the evening peak load. The facility is expected to have a capacity factor of 24%. The unsubsidized cost can be about 17 cents/kWh.

Solar subsidies in Japan and Germany, as well as solar set-asides in domestic state legislation, are based on legislators’ assumption that the price for solar PV systems will decline to competitive levels as economies are achieved in manufacturing. At present, solar PV in states such as Pennsylvania (where the RPS requires 800 MW of solar PV) can produce wholesale power at 50 cents/kWh. Basic research might make solar PV competitive, but relying on large-scale orders to attain this goal with today’s technology is fantasy.

Costs for a solar PV system (solar cells, electronics, packaging, and installation) would need to fall by a factor of 3 to 5 to produce power at rates competitive with other low-emissions sources, and that does not even include the additional costs imposed by the variability of solar power. Cost reductions of this magnitude will not come quickly or easily. In fact, solar cell costs are now 10% higher than they were in 2004, and the balance-of-system components, which represent half the total cost, have not become less expensive.

Geothermal. At a good site, geothermal power can generate electricity from hydrothermal sources at about 10 cents/kWh. At present, it supplies almost as much energy as does wind, and it has the advantage of providing a fairly steady supply. The median geothermal plant averaged a 63% capacity factor, comparable to that of coal-fired generators. However, the best locations are clustered in the Southwest, so long-distance transmission may be needed.

Today’s geothermal power operates by pumping very hot subsurface water to the surface to produce steam to run a generator. Appropriate hydrothermal sources are limited, and large-scale geothermal power will require injecting surface water into very deep rock with techniques that are still in development and water that is scarce in the Southwest.

Run-of-the-river hydroelectric. Run-of-the-river hydro (a modern water wheel) can be attractive, but operates only when the river is flowing. To produce much energy, there would have to be a large, fast-flowing river. The potential power from this source is limited because many of the suitable rivers have already been dammed for hydroelectric power.

Biomass. At small scale, the use of waste biomass that would otherwise be left in fields is economically attractive. However, removing crop residue can make soil less productive and decrease its ability to store carbon. Biomass such as wood chips and switchgrass can be co-fired up to 10% with coal or can be burned in a specially designed furnace. The U.S. Department of Agriculture estimates that offering $60 per ton would produce 350 million tons of farm waste, tree trimmings, municipal solid waste, and energy crops. Increasing the price to $90 per ton would pull in an additional 80 million tons. On an energy-content basis, these prices are comparable to coal at $120 and $180 per ton, respectively. A generator burning biomass at those prices would raise the price of electricity by roughly 4 to 7 cents/kWh. Transporting biomass is expensive, so it is likely to be used only near existing coal-fired power plants or in plants especially built for biomass. Thus, biomass might provide a few percent of generation.
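
A rough fuel-cost calculation shows why those per-ton prices translate into several cents per kilowatt-hour; the heat contents and heat rate below are generic engineering assumptions, not figures from this article.

```python
# Rough fuel-cost arithmetic for burning biomass. The heat contents and heat rate
# are assumed, round-number values chosen only to illustrate the conversion.

BIOMASS_HEAT_GJ_PER_TON = 13.0   # assumed: wood chips / crop residue, as received
COAL_HEAT_GJ_PER_TON = 26.0      # assumed: roughly twice the heat per ton of biomass
HEAT_RATE_MJ_PER_KWH = 10.5      # assumed: typical steam-plant heat rate

def fuel_cost_cents_per_kwh(price_per_ton: float, heat_gj_per_ton: float) -> float:
    price_per_gj = price_per_ton / heat_gj_per_ton
    return price_per_gj * (HEAT_RATE_MJ_PER_KWH / 1000.0) * 100.0

for price in (60, 90):
    cost = fuel_cost_cents_per_kwh(price, BIOMASS_HEAT_GJ_PER_TON)
    print(f"biomass at ${price}/ton -> ~{cost:.1f} cents/kWh in fuel cost alone")
# ~4.8 and ~7.3 cents/kWh, in the same range as the 4 to 7 cents/kWh cited above.
# Because a ton of biomass holds about half the energy of a ton of coal, $60/ton biomass
# buys roughly the same heat as $120/ton coal, which is the comparison made in the text.
```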

Ocean. Systems to produce electricity from ocean tides, currents, waves, and thermal gradients are immature technologies whose costs and environmental effects are not fully known. The estimated global practical potential from tides and currents totals 70 GW, about 2% of current global electric power generation.

Storage. The variable nature of wind and solar generation requires demand response, other generation, or storage to fill the gaps when the wind calms or clouds obscure the Sun. At 38 sites in 18 states, water is pumped up into a reservoir by electric motors; when needed, the water flows back through the turbine to produce hydroelectric power. These pumped-storage facilities are expensive to build and have controversial environmental effects. The combined capacity of these pumped-storage facilities is 19,400 MW, or about 1.8% of the nation’s generation capacity. Where they have available capacity, they are good choices for storing variable power.

In many areas of the country, electricity can be stored by using it to compress air, which is injected underground into depleted gas reservoirs, abandoned mines, or salt caverns. When electricity is needed (for example, when the wind is not blowing), the compressed air is released, heated, mixed with natural gas, and burned in a turbine to produce electricity. Many areas of the country have suitable geology. A 110-MW compressed-air energy storage facility of this type that has been operating since 1991 in Alabama can help provide power for 26 hours. At current natural gas prices, these storage facilities have capital and operating costs of approximately 8 cents/kWh of electricity produced.
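
The storage figures in the last two paragraphs can be put on a common footing with two quick calculations, using only the numbers quoted above.

```python
# Quick storage arithmetic for the examples above; only figures quoted in the text are used.

pumped_storage_mw = 19_400
# The text puts this at about 1.8% of U.S. generating capacity, which implies:
implied_us_capacity_gw = pumped_storage_mw / 0.018 / 1000
print(f"implied U.S. generating capacity: ~{implied_us_capacity_gw:.0f} GW")

caes_mw, caes_hours = 110, 26   # the Alabama compressed-air plant cited above
print(f"CAES energy per full discharge: {caes_mw * caes_hours:,} MWh")
# -> roughly 1,080 GW of total capacity, and about 2,860 MWh per CAES discharge cycle.
```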

Storage batteries are often used in small-scale, off-grid solar or wind systems. For large-scale application, sodium-sulfur batteries using a high-temperature chemical reaction have been deployed in several U.S. locations. These remain expensive. Plug-in electric hybrid vehicles that can be charged at night when the wind is blowing and demand is low may provide electricity storage in the future, but considerable technical and economic problems remain to be solved.

To sum up, we estimate that the states could accommodate 10% of the electricity coming from wind (or solar, if the costs were to come down) at any one time. With some attention and adjustment, we find that the electricity system could accommodate 15% or even 20%. To accomplish this, the system would require good prediction of wind speeds (or clouds for solar) several hours in advance, as well as a great deal of spinning reserve to substitute for the wind power when there are major changes in wind speed. Dealing with the minute-to-minute variability requires battery storage, fast-ramping generators, or customers who can react in minutes to raise or lower their use.

A national system must also deal with the fact that the best wind resources are in the Great Plains, about 1,000 miles from the Southeast where the electricity is likely to be needed. Policymakers must remain mindful of the difficulty of expanding transmission infrastructure. Community opposition will be widespread, the cost will be high, and the lines themselves will be vulnerable to disruption by storms or terrorists.

Thus, although a 20% national RPS might be physically possible with a very large transmission network and large amounts of spinning reserve, the logistical barriers will be high and the costs daunting. Embarking on this path without considering alternative strategies to reach the same ultimate goal would be short-sighted.

Energy efficiency

An RPS is essentially a narrowband solution to a broadband problem. By placing an inordinate focus on a limited number of renewable energy sources, legislators are neglecting numerous other options that can make significant contributions to the larger social goal of an adequate supply of clean, low-carbon, reliable, and affordable electricity. A prime example of a strategy that deserves more attention is energy efficiency.

In comparison with other developed nations, the United States is a profligate user of energy. For example, Americans use more than twice as much energy per capita and per dollar of gross domestic product as the residents of Denmark and Japan do. Comparisons across nations and over time indicate a high potential for increased U.S. energy efficiency.

Experience in states such as California shows that aggressive policies can substantially reduce the growth of electricity demand. Aggressive efficiency standards for appliances and buildings, subsidies for efficient lighting, a five-tier electricity pricing structure with prices that start at 11.6 cents/kWh and rise to 34.9 cents/kWh for residential customers with high consumption, and incentive plans that reward utilities for lowering electricity use have held the growth of residential use per capita in California to only 4% from 1980 to 2005, while use in the rest of the United States grew 89%. Per capita demand in the commercial sector in California grew by 37% over that period, much less than the 228% growth in the rest of the country. California used 4% more electricity per dollar of gross state product in 2005 than in 1980, whereas the rest of the country used 40% more.
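
The increasing-block (tiered) rate structure described above can be illustrated with a short calculation. In the sketch below, only the lowest price (11.6 cents/kWh) and the highest price (34.9 cents/kWh) come from the text; the three intermediate prices and all of the tier boundaries are hypothetical placeholders chosen purely to show how the mechanism charges heavy users a much higher marginal price.

    # Illustrative five-tier (increasing-block) residential electricity bill.
    # Only the first and fifth prices come from the text; the intermediate prices and
    # the tier boundaries (kWh per month) are hypothetical placeholders.
    TIERS = [                  # (upper bound of tier in kWh, price in cents/kWh)
        (300, 11.6),           # baseline usage (price from the text)
        (400, 13.0),           # hypothetical
        (600, 20.0),           # hypothetical
        (900, 28.0),           # hypothetical
        (float("inf"), 34.9),  # highest tier (price from the text)
    ]

    def monthly_bill(kwh: float) -> float:
        """Bill in dollars: each successive block of use is charged at a higher rate."""
        bill_cents, lower = 0.0, 0.0
        for upper, price in TIERS:
            if kwh <= lower:
                break
            block = min(kwh, upper) - lower
            bill_cents += block * price
            lower = upper
        return bill_cents / 100

    for usage in (250, 500, 1200):
        print(f"{usage:>5} kWh -> ${monthly_bill(usage):.2f}")
    # Because each additional block costs more, the marginal price rises with consumption,
    # which is the conservation incentive the tiered structure is designed to create.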

A new approach, now in the early stages of implementation in California and elsewhere, is to move from charging the same price for electricity at all times of day to a system in which the price varies to reflect the actual cost of power at the time it is used. On hot summer afternoons, inefficient and expensive generators are turned on to satisfy the additional demand; they may run for only a few dozen hours in a year, but the cost of building and maintaining them means that the cost of that peak electricity is very high. If customers had to pay the actual price at the time they use electricity, they would be motivated to shift some of their usage to lower-price hours, which would reduce the need for some expensive peaking capacity.
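
As a rough illustration of the incentive created by time-varying prices, the sketch below compares a customer's daily cost under a flat rate with the cost under a simple two-period real-time price, before and after shifting some load off the peak. All of the prices and load figures are hypothetical and are meant only to show why such pricing rewards shifting usage; they do not come from the modeling results discussed next.

    # Hypothetical comparison: flat-rate pricing versus a simple peak/off-peak real-time price.
    # All prices (cents/kWh) and loads (kWh) are made-up illustrative numbers.
    FLAT_PRICE = 12.0      # hypothetical flat rate
    PEAK_PRICE = 30.0      # hypothetical price on a hot summer afternoon
    OFF_PEAK_PRICE = 7.0   # hypothetical overnight price

    def daily_cost(peak_kwh, off_peak_kwh, peak_price, off_peak_price):
        """Daily electricity cost in dollars for a given usage split and price pair."""
        return (peak_kwh * peak_price + off_peak_kwh * off_peak_price) / 100

    base_peak, base_off = 20.0, 10.0        # hypothetical daily usage split
    shifted_peak, shifted_off = 15.0, 15.0  # same total use, 5 kWh moved off-peak

    print(f"flat rate:              ${daily_cost(base_peak, base_off, FLAT_PRICE, FLAT_PRICE):.2f}")
    print(f"real-time, no shift:    ${daily_cost(base_peak, base_off, PEAK_PRICE, OFF_PEAK_PRICE):.2f}")
    print(f"real-time, after shift: ${daily_cost(shifted_peak, shifted_off, PEAK_PRICE, OFF_PEAK_PRICE):.2f}")
    # Under the flat rate, shifting saves nothing; under time-varying prices the same shift
    # cuts the bill, which is what reduces the need for expensive peaking capacity.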

An economic model designed to predict consumer response to real-time pricing found that in the mid-Atlantic states, peak load would be reduced by 10 to 15%. But the model also found that total demand would increase by 1 to 2% as consumers took advantage of lower rates at off-peak hours. The shift to increased nighttime electric use would be a good match for wind’s production profile but would not be a good fit for solar power. One potential downside of real-time pricing is that it may increase pollution emissions in certain regions of the country if customers switch their use from daytime, when natural gas is the predominant generation source for meeting peak demand, to the night, when coal dominates.

Policies to promote energy efficiency could clearly make a large contribution to reducing CO2 emissions from electricity generation. However, the experience of California and other energy-conserving states indicates that implementing energy efficiency takes time and resources. An effective program requires actions that take years, such as replacing appliances and installing better insulation and windows. Although aggressive energy efficiency measures might lower electricity demand in states where the population is not growing, for most of the nation population is likely to grow faster than efficiency can be improved, so that total energy demand will continue to grow.

An inclusive strategy

Electricity is essential to modern life and commerce, from computers to natural gas furnaces to telecommunications to elevators and traffic signals. The critical importance of the electric system was made painfully clear by the 2003 Northeast blackout, which stopped all economic activity and endangered the lives and well-being of 50 million people.

The United States is increasing its reliance on electric power and will have to generate 40% more electricity by 2030 if demand keeps growing as it has during the past 35 years. The North American Electric Reliability Council is warning that reserve generation capacity is becoming so low in the country (except for the Southeast) that unless generation is added or demand reduced, there will be brownouts or blackouts within a decade.

We face the additional challenge of quickly reducing CO2 and other pollutants such as mercury and soot. At the same time, the price of power has risen 25% nationally since the last presidential election and has risen much faster in cities such as Baltimore.

The recent doubling of oil prices reduced imports appreciably. High oil, natural gas, and coal prices encourage energy efficiency, conservation, and a more sustainable fuel supply. Higher electricity prices, real-time pricing, and new efficiency standards can reduce growth in electricity demand. But even if the country can reduce the growth in electricity demand substantially, it will still need new generation capacity, much of it to replace old, inefficient plants.

Rather than specifying a winning technology, Congress and state legislatures should specify the goals—reduce pollution and greenhouse gas emissions, enhance energy security, maintain electric supply reliability, and control costs—and provide incentives to reach them. Since no current technology meets all goals, legislators must allow for tradeoffs. Specifying the goals rather than the technologies will lead to a technology race that will serve society.

Instead of enacting a national RPS, Congress should:

  • Handle conventional pollution discharges through legislation and the Environmental Protection Agency.
  • Handle greenhouse gas emissions through legislation such as a carbon tax or a cap-and-trade system that addresses such emissions explicitly.
  • Handle energy security through energy efficiency programs such as equipment performance standards and consumer incentives and through maintenance of a high petroleum price.
  • Maintain reliability through close monitoring of the new Electric Reliability Organization and of generating capacity and demand.
  • Control costs through efficiency standards and by encouraging a diverse portfolio of generating fuels, but avoid mandates to deploy expensive technologies; instead, allow the market to determine the least-cost generation options.

Impatience to solve current problems has resulted in aggressive RPSs with strict deadlines. Although we agree that renewable technologies will help attain social goals, mandating rapid, massive deployment of these technologies will result in high cost, disputes over land use, and unreliable electricity, leading to a public backlash against these policies. The United States needs to focus on the goals, provide substantial incentives to meet them, and avoid policies that exclude economical ways of reaching them.

Archives – Fall 2008

DENNIS ASHBAUGH, Marlyn, Mixed media on canvas, 74 × 80 inches, 2000.

Marlyn

Dennis Ashbaugh was among the first contemporary artists to incorporate genetic imagery into his work. His large-scale paintings based on autoradiographs fuse the traditions of abstract art with cutting-edge scientific imaging technology. He is interested in technology’s ability to translate a hidden reality into a visible pattern, to reveal the inner code beneath appearances. Ashbaugh is a Guggenheim Fellowship recipient. The Institute of Modern Art in Valencia, Spain, mounted a retrospective of his work in the fall of 2007, following an exhibit of his work at the National Academy of Sciences.

Creating a National Innovation Foundation

The issue of economic growth is on the public agenda in this election year in a way that it has not been for at least 15 years. Policymakers have thus far been preoccupied with providing a short-term economic stimulus to counteract the economic downturn that has followed the collapse of the housing bubble. Yet the problem of how to restart and sustain robust growth goes well beyond short-term stimulus. The nation needs a firm foundation for long-term growth. But as of yet, there has been no serious public debate about how to create one. At best, there has been a rehash of 1990s debates about whether tax cuts or lower federal budget deficits are the better way to increase saving and (it is often assumed) stimulate growth.

A growing number of economists have come to see that innovation—not more saving—is the key to sustained long-term economic growth. Some economists have found that R&D accounts for nearly half of U.S. economic growth, and that R&D’s rate of return to the United States as a whole is as high as 30%. But R&D is not all there is to innovation. Properly conceived, innovation encompasses new products, new processes, and new ways of organizing production, along with the diffusion of new products, technologies, and organizational forms throughout the economy to firms and even entire industries that are not making effective use of leading technologies or organizational practices. Innovation is fundamentally about applying new ideas in organizations (businesses, nonprofits, and governments), not just about creating those ideas.

Innovation has returned to the federal policy agenda, most recently in the form of the America COMPETES Act signed into law in 2007. That law, unfortunately not yet fully funded, provides for much-needed increases in federal support for research and science and engineering education—key inputs into the process of innovation. But it does not go far enough. It does little to promote the demand for those inputs or to organize them in ways that lead to the commercial application of new ideas. More engineers and more R&D funding do not automatically create more innovation or, particularly, more innovation in the United States. In the mid-20th century, the nation could largely rely on leading firms to create research breakthroughs and turn them into new products, leaving the tasks of funding basic research and scientific and technological education to government. But it can no longer do so. Moreover, in the previous era, when the United States was the dominant technology-based economy, both old and new industries were domestic (for example, U.S. semiconductor firms replaced U.S. vacuum-tube firms). But a flat world means that more potential first movers will come from an increasingly large pool of technology-based economies, and that shifts in the locus of global competitive advantage across technology life cycles will occur with increasing frequency.

As a result, it is time for the federal government to make innovation a central component of its economic policy, not just a part of technology or education policy. To do so, it should create a National Innovation Foundation (NIF), which would be funded by the federal government and whose sole responsibility would be to promote innovation.

Growing innovation challenge

Since the end of World War II, the United States has been the world leader in innovation and high-value-added production. But now other nations are posing a growing challenge to the U.S. innovation economy, and increasingly, services as well as goods are subject to international competition. Because the United States cannot and should not try to maintain its standard of living by competing with poorer countries through low wages and lax regulations, it will have to compete in two other ways: by specializing in innovation-based goods and services that are less cost-sensitive, and by increasing productivity sufficiently to offset the lower wages paid in countries such as India and China. Both strategies rely on innovation: the first on product innovation and the second on process and organizational innovation. These same strategies are essential for maintaining the U.S. competitive position relative to other economically advanced countries.

However, there is disturbing evidence that the nation’s innovation lead is slipping. Companies are increasingly shifting R&D overseas. Between 1998 and 2003, investment in R&D by majority-owned affiliates of U.S. companies increased twice as fast overseas as it did at home (52% versus 26%). In the past decade, the share of U.S. corporate R&D sites located in the United States declined from 59% to 52%, while the share located in China and India increased from 8% to 18%. The United States’ shares of worldwide total domestic R&D spending, new U.S. patents, scientific publications and researchers, and bachelor’s and new doctoral degrees in science and engineering all fell between the mid-1980s and the beginning of this century. The United States ranks only 14th among countries for which the National Science Foundation (NSF) tracks the number of science and engineering articles per million inhabitants. The United States ranks only seventh among countries in the Organization for Economic Co-operation and Development in the percentage of its gross domestic product (GDP) that is devoted to R&D expenditures, behind Sweden, Finland, Japan, South Korea, Switzerland, and Iceland, and barely ahead of Germany and Denmark.

Why has the United States’ innovation lead been slipping? One reason is that the process by which R&D is financed and performed has changed. During the first four decades after World War II, large firms played a leading role in funding and carrying out all stages of the R&D process. Companies such as AT&T and Xerox did a substantial amount of generic technology research, as well as applied R&D, in house. More recently, private funders of R&D have become more risk-averse and less willing to fund long-term projects. U.S. corporations, while investing more in R&D in this country overall, have shifted the mix of that spending toward development and away from more risky, longer-term basic and applied research. Similarly, venture capitalists, who have become a leading source of funding for cutting-edge science-based small firms, have shifted their funding away from startups and early-stage companies, which are riskier investments than later-stage companies, and have even begun to shift funding to other nations. In addition, as short-term competitive pressures make it difficult for even the largest firms to support basic research and even much applied research, firms are relying more on university-based research and industry/university collaborations. Yet the divergent needs of firms and universities can hinder the coordination of R&D between these two types of institutions.

Problems with the diffusion of innovation have also become more important. Outside of relatively new science-based industries such as information technology and biotechnology, many industries, including construction and health care, lag in adopting more productive technologies. Regardless of their industry, many small and medium-sized firms lag in adopting technologies that leading firms have used for decades. This is perhaps most visible in the manufacture of durable goods, where small and medium-sized suppliers have lagged behind their larger customers in adopting waste-reducing lean production techniques. Although smaller firms have long been late adopters of new technologies, the problem was less serious in an era when large firms manufactured most of their own components and designed products for their outside suppliers. With today’s more elaborate supply chains, technological lag by suppliers is a more serious problem for the U.S. economy.

Finally, geographic clustering—the tendency of firms in the same or related industries to locate near one another—enables firms to take advantage of common resources, such as a workforce trained in particular skills, technical institutes, and a common supplier base. Clustering also facilitates better labor-market matching and knowledge sharing, thereby promoting the creation and diffusion of innovation. It exists in such diverse industries and locations as information technology in Silicon Valley, autos in Detroit, and insurance in Hartford, Connecticut. Evidence suggests that geographic clustering may have become more important for productivity growth during the past three decades. Yet because the benefits of clustering spill over beyond the boundaries of the firm, market forces produce less geographic clustering than society needs, and firms have little incentive to collaborate to meet needs they share, such as worker training to support new technologies or ways of organizing work.

Federal innovation policy should respond directly to these innovation challenges. It should help fill the financing gaps in the private R&D process, particularly for higher-risk, longer-term, and more-generic research. It should spur collaboration between firms and research institutions such as universities, colleges, and national laboratories. It should help speed the diffusion of the most productive technologies and business practices by subsidizing the training of workers and managers in the use of those technologies and practices and by giving firms (especially small and medium-sized ones) the information and assistance they need to adopt them. There is also a growing need for government to encourage the development of industry clusters, as governments such as that of China have deliberately done as a way of reducing costs and improving productivity. The federal government should do all of these things in an integrated way, taking advantage of complementarities that exist among activities to create an integrated, robust innovation policy that can make a real contribution to long-term economic growth.

Current federal policy does little to address the nation’s innovation challenges. Most fundamentally, the federal government does not have an innovation policy. It has a basic science policy (supporting basic scientific research and science and technology education). It has an intellectual property policy, carried out through the Patent and Trademark Office. It has agencies and programs that promote innovation in specific domains as a byproduct of agencies and missions that are directed at other goals (for example, national defense, small business assistance, and energy production). It even has a few small programs that are designed to promote various types of commercial innovation. But this activity does not add up to an innovation policy. Innovation-related programs are fragmented and diffuse, scattered throughout numerous cabinet departments, including Commerce, Labor, Energy, and Defense, and throughout a host of independent agencies, such as NSF and the Small Business Administration. There is no federal agency or organization that has the promotion of innovation as its sole mission. As a result, it is not surprising that innovation is rarely thought of as a component of national economic policy.

Existing federal innovation efforts are underfunded as compared with efforts in other economically advanced nations. In fiscal year 2006, the U.S. government spent at most a total of $2.7 billion, or 0.02% of GDP, on the principal programs and agencies that are most centrally concerned with commercial innovation. If the federal government were to invest the same share of GDP in these programs and agencies as many other nations do in comparable organizations, it would have to invest considerably more: $34 billion per year to match Finland, $9 billion to match Sweden, $5.4 billion to match Japan, and $3.6 billion to match South Korea. Some U.S. programs, particularly the Technology Innovation Program and its predecessor, the Advanced Technology Program, and the Manufacturing Extension Partnership Program, have had their budgets drastically reduced (from already low levels), largely because the current administration has tried to have them abolished.
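
The comparison in the preceding paragraph can be restated as shares of GDP. The short calculation below works backward from the article's own figures ($2.7 billion equal to 0.02% of GDP, implying a GDP of roughly $13.5 trillion) and then expresses each country-matching amount as the share of GDP it would represent. The percentages are derived arithmetic, not numbers reported in the text.

    # Restating the spending comparison as shares of GDP, using only figures from the text.
    # The GDP value is implied by the text's own numbers ($2.7 billion = 0.02% of GDP);
    # the resulting percentages are derived here, not reported in the text.
    us_spending_billion = 2.7
    us_share_of_gdp = 0.0002  # 0.02%
    implied_gdp_billion = us_spending_billion / us_share_of_gdp  # about 13,500 ($13.5 trillion)

    match_targets_billion = {"Finland": 34.0, "Sweden": 9.0, "Japan": 5.4, "South Korea": 3.6}

    print(f"Implied U.S. GDP: about ${implied_gdp_billion / 1000:.1f} trillion")
    for country, spending in match_targets_billion.items():
        share = spending / implied_gdp_billion * 100
        print(f"Matching {country}: ${spending} billion, about {share:.2f}% of GDP")
    # Matching Finland would mean devoting roughly a quarter of a percent of GDP to these
    # programs, more than ten times the current U.S. share of 0.02%.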

The only federal support for technology diffusion comes through the Manufacturing Extension Partnership Program, an outstanding but underfunded program whose existence has been threatened during the current administration. But although federal support for technology diffusion in manufacturing is meager, in services it is almost nonexistent, as is federal support for innovation in services generally. Yet services, which include everything other than agriculture, mining, and manufacturing, account for four out of every five civilian jobs.

Likewise, there is little federal support for regional industry clusters. In fact, few federal innovation promotion programs engage in any way with state or regional efforts to spur innovation. Yet state governments and regional partnerships of businesses, educational institutions, and other actors involved in innovation have developed many effective efforts to promote innovation. These efforts are relatively small-scale, amounting to only $1.9 billion annually, and they are, understandably, undertaken with only the interests of states and regions, not the interests of the nation as a whole, in mind. Federal support, with appropriate federal incentives, could remedy both of these defects.

Charting a new federal approach

Establishing an NIF should lie at the heart of federal efforts. NIF would be a nimble, lean, and collaborative entity devoted to supporting firms and other organizations in their innovative activities. The goal of NIF would be straightforward: to help firms in the nonfarm U.S. economy become more innovative and competitive. It would achieve this goal by assisting firms with such activities as joint industry/university research partnerships, technology transfer from laboratories to businesses, technology-based entrepreneurship, industrial modernization through the adoption of best-practice technologies and business procedures, and incumbent worker training. By making innovation its mission, funding it adequately, and focusing on the full range of firms’ innovation needs, NIF would be a natural next step in advancing the innovation agenda that Congress put in place when it passed the America COMPETES Act.

Because flexibility should be one of NIF’s key characteristics, it would be counterproductive now to overspecify NIF’s operational details. NIF would determine how best to organize its activities; it would not be locked into a particular programmatic structure. Nonetheless, there are some core functions that NIF should undertake.

  • Catalyze industry/university research partnerships through national-sector research grants. To begin, NIF would offer competitive grants to national industry consortia to conduct research at universities—something the government does too little of now. These grants would enable federal R&D policy to break free of the dominant but unproductive debate over science and technology policy, which has tended to pit people who argue that the federal government should fund industry to conduct generic precompetitive R&D against those who maintain that money should be spent on curiosity-directed basic research at universities. This is a false dichotomy. There is no reason why some share of university basic research cannot be oriented toward problems and technical areas that are more likely to have economic or social payoffs for the nation. Science analyst Donald Stokes has described three kinds of research: purely basic research (work inspired by the quest for understanding, not by potential use); purely applied research (work motivated only by potential use); and strategic research (work inspired by both potential use and fundamental understanding). Moreover, there is widespread recognition in the research community that drawing a hard line between basic and applied research no longer makes sense. One way to improve the link between economic goals and scientific research is to encourage the formation of industry research alliances that fund collaborative research, often at universities.

Currently, the federal government supports a few sector-based research programs, but they are the exception rather than the rule. As a result, a key activity of NIF would be to fund sector-based research initiatives. NIF would offer competitive Industry Research Alliance Challenge Grants to match funding from consortia of businesses, businesses and universities, or businesses and national labs. These grants would resemble the National Institute of Standards and Technology’s (NIST’s) Technology Innovation Program and NSF’s innovation programs (Partnerships for Innovation, Industry-University Cooperative Research Centers, and Engineering Research Centers). However, NIF grants would have an even greater focus on broad sectoral consortia and would allow large firms as well as small and mid-sized ones to participate. Moreover, like the NIST and NSF innovation programs, NIF’s work in this area would be industry-led, with industry coming to NIF with proposals.

To be eligible for NIF matching funding, firms would have to form an industry-led research consortium of at least five firms, agree to develop a mid-term (3- to 10-year) technology roadmap that charts out generic science and technology needs that the firms share, and provide at least a dollar-for-dollar match of federal funds.

This initiative would increase the share of federally funded university and laboratory research that is commercially relevant. In so doing, it would better adjust the balance between curiosity-directed research and research more directly related to societal needs.

NIF would also establish a productivity enhancement research fund to support research into automation, technology-enabled remote service delivery, quality improvement, and other methods of improving productivity. Automation (robotics, machine vision, expert systems, voice recognition, and the like) is a key to boosting productivity in both manufacturing and services. Technology-enabled remote service delivery (for example, home health monitoring, remote diagnosis, and perhaps even remote surgery) has considerable potential to improve productivity in health care and other personal service industries. A key function of NIF would be to fund research at universities or joint business/university projects focused on increasing the efficiency of automated manufacturing or service processes. NIF would support early-stage research into processes with broad applications to a range of industries, not late-stage research focused on particular companies. It also would fund a service-sector science initiative to conduct research into productivity and innovation in the nearly 80% of the economy that is made up of service industries.

  • Expand regional innovation promotion through state-level grants to fund activities such as technology commercialization and entrepreneurial support. The design of a more robust federal innovation policy must consider, respect, and complement the plethora of energetic state and local initiatives now under way. Although the federal government has taken only very limited steps to promote innovation, state governments and state- and metropolitan-level organizations have done much more. They engage in a variety of different technology-based economic development activities to help spur economic growth. They spur the development of cutting-edge science-based industries by boosting research funding. Moreover, they try to ensure that research is commercialized and good jobs created in both cutting-edge science-based industries and industries engaging in related diversification. States have established initiatives to help firms commercialize research into new business opportunities. They also promote upgrading and project-based innovation by helping existing firms become more competitive.

Although these efforts are already impressive, the states and regional organizations behind them could do even more, and their current efforts could be made more effective. Because the benefits of innovation often cross state borders, take at least a few years to produce direct economic benefits, or both, state elected officials have less incentive to invest in technology-based economic development activities than in other types of activities, such as industrial recruitment, that lead to immediate benefits in the state.

Moreover, any effective national innovation initiative will need to find a way to assist the tens of thousands of innovation-focused small and mid-sized firms as well as larger firms that have specific regionally based innovation needs that they cannot meet on their own. Unlike small nations, the United States is too big for the federal government to play an effective direct role in helping these firms. State and local governments and regional economic development organizations are best positioned to do this.

As a result, without assistance from the federal government, states will invest less in these kinds of activities than is in the national interest. NIF would compensate for this political failure by offering state Innovation-Based Economic Development (IBED) Partnership Grants to help states expand their innovation-promotion activities. The state IBED grants would replace part of the grantmaking that the NIST and NSF innovation programs currently perform but would operate exclusively through the states.

To be eligible for NIF funding, states would need to provide at least two dollars in actual funding for every NIF dollar they receive. Rotating panels of IBED experts would review proposals. NIF staff would also work in close partnership with states to help ensure that their efforts are effective and in the national as well as the state interest.

  • Encourage technology adoption by assisting small and mid-sized firms in implementing best-practice processes and organizational forms that they do not currently use. Although NIF’s national-sector grants and state IBED grants would largely support new-to-the-world, sometimes radical product and process innovation, its technology diffusion work would focus more on the diffusion of existing processes and organizational forms to firms (mostly small and mid-sized) that do not currently use them. This effort would incorporate and build on NIST’s Manufacturing Extension Partnership (MEP) program, the only federal program whose primary purpose is to promote technology diffusion among such firms. The NIF effort would follow the MEP model of a federal/state partnership. One or more technology diffusion centers would be located in each state. Like existing MEP centers, the centers could be operated by state or private organizations. States would submit proposals to NIF for the operation of these centers, and NIF would evaluate the centers periodically. Some specific changes in the current MEP program would enable NIF to serve as a more comprehensive and more effective promoter of technology diffusion for both manufacturing and service industries. NIF would expand the scope of MEP beyond its current emphasis on applying waste-reducing, quality-improving lean production techniques to the direct production of manufactured goods. It would do so by helping improve productivity in some service activities where lean production could be applied.

In addition to supporting efforts that assist firms directly, NIF would analyze opportunities and challenges regarding technological, service-delivery, and organizational innovation in service industries, such as health care, construction, residential real estate, financial, and transportation services. It also might recommend steps that federal and state governments could take to help spur innovation, including the digital transformation of entire sectors through the widespread use of information technology and e-business processes. Such steps might include revising procurement practices, modifying regulations, and helping spur standards development.

Emphasizing accountability

To guide its own work and provide firms and government agencies with the information they need to promote innovation, NIF would create methods of measuring innovative activity and carry out research on innovation. It would be the primary entity for conceptualizing how innovation should be measured and the primary advocate within the federal government for measuring innovation. It would help the major federal statistical agencies (the Census Bureau, Bureau of Economic Analysis, and Bureau of Labor Statistics) and NSF develop operational measures of innovation that can be included in new or existing economic data sources.

NIF would also work with other agencies to improve the measurement of productivity and innovation, particularly by improving measures of output in the service sector; of total factor productivity (the most comprehensive measure of productivity, which accounts for capital, materials, energy, and purchased services, in addition to labor, as productive inputs); and of bottom-up estimates of gross product and productivity for counties and metropolitan areas.

In addition, NIF would be the federal government’s major advocate for innovation and innovation policy. As a key step, it would produce an annual Innovation Report, akin to the annual Economic Report of the President. More generally, NIF’s advocacy role in support of innovation would resemble the Small Business Administration’s role as a champion for small business. NIF would seek input into other agencies’ decisions on programs that are likely to affect innovation. However, unlike the Small Business Administration, NIF would not have any authority to intervene in other agencies’ decisions.

Compelling need, but obstacles

In the current fiscal climate, it will be difficult for the federal government to launch major new investment initiatives, especially because strong political forces on either side of the aisle oppose raising taxes or cutting other spending. Nevertheless, the compelling need to boost innovation and productivity merits a substantial investment in NIF. The federal government should fund it at an initial level of $1 billion per year, but approximately 40% of this funding would come from consolidating existing innovation programs and their budget authority into NIF. (Rolled up would be the NIST and NSF innovation programs, as well as the Department of Labor’s WIRED program. Federal expenditures on all of the programs that NIF would replace or incorporate total $344 million. In addition, the America COMPETES Act provides a total of about $88 million more in 2010 than in 2006 for the programs that will be folded in. Therefore, current and already-planned expenditures on the programs whose work would be included in NIF total $432 million.) After several years, NIF could easily be ramped up to a budget of $2 billion, a level that would make its budget approximately one-third the size of NSF’s. In addition, because of its strong leveraging requirements from the private sector and state governments, NIF would indirectly be responsible for ensuring that states and firms spent at least one dollar on innovation for every dollar that NIF spent.
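
The budget arithmetic in the parenthetical above can be tallied explicitly. The short sketch below simply reproduces the figures given in the text to show how the roughly 40% consolidation share of the proposed $1 billion budget is obtained; no new numbers are introduced.

    # Tallying the NIF budget figures given in the text.
    existing_programs_million = 344     # current spending on programs NIF would absorb
    competes_act_increase_million = 88  # additional 2010-over-2006 funding for those programs

    consolidated_million = existing_programs_million + competes_act_increase_million
    proposed_budget_million = 1_000     # initial NIF budget of $1 billion

    share = consolidated_million / proposed_budget_million
    print(f"Consolidated funding: ${consolidated_million} million")  # $432 million
    print(f"Share of proposed budget: {share:.0%}")                  # about 43%, i.e., roughly 40%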

NIF could be organized in several ways. It could be organized as part of the Department of Commerce, as a government-related nonprofit organization, as an independent federal agency, or as an arm of the Office of the President. But whatever way it is organized, it should remain relatively lean, employing a staff of approximately 250 individuals. It should recruit the best practitioners and researchers whose expertise overlaps in the areas of productivity, technology, business organization and strategy, regional economic development, and (to a lesser extent) trade. Like NSF, NIF would be set up to allow some staff members to be rotated into the agency for limited terms from outside of government and to allow some permanent NIF staff members to go on leave for limited terms to work for private employers.

Already there is legislation in the Senate to create an NIF-like organization. The National Innovation Act, introduced by Senators Hillary Clinton (D-NY) and Susan Collins (R-ME), would create a National Innovation Council, housed in the Office of the President and consolidating the government’s primary innovation programs.

Now more than ever, the U.S. standard of living depends on innovation. To be sure, companies are the engines of innovation, and the United States has an outstanding market environment to fuel those engines. Yet firms and markets do not operate in a vacuum. By themselves they do not produce the level of innovation and productivity that a perfectly functioning market would. Even indirect public support of innovation in the form of basic research funding, R&D tax credits, and a strong patenting system—important as it is—is not enough to remedy the market failures from which the nation’s innovation process suffers. At a time when the United States’ historic lead in innovation is shrinking, when more and more high-productivity industries are in play globally, and when other nations are using explicit public policies to foster innovation, the nation cannot afford to remain complacent. Relying solely on firms acting on their own will increasingly cause the United States to lose out in the global competition for high-value-added technology and knowledge-intensive production.

The proposed NIF would build on the few federal programs that already succeed in promoting innovation and borrow the best public policy ideas from other nations to spur innovation in the United States. It would do so through a combination of grants, technical assistance, information provision, and advocacy. It would address the major flaws that currently plague federal innovation policy and provide the United States with a state-of-the-art initiative for extending its increasingly critical innovation prowess.

Yet NIF would neither run a centrally directed industrial policy nor give out “corporate welfare.” Rather than taking the view that some industries are more important than others, NIF is based on the idea that innovation and productivity growth can happen in any industry and that the nation benefits regardless of the industry in which they occur. It would work cooperatively with individual firms, business and business/university consortia, and state governments to foster innovation that would benefit the nation but would not otherwise occur. In a world of growing geographic competition for innovative activities, these economic and political actors are already making choices among industries and technologies to serve their own interests. NIF would give them the resources they need to make those choices for the benefit of the nation as a whole.

Without the direct federal spur to innovation that NIF would offer, productivity growth will be slower. Wages will not rise as rapidly. U.S. companies will introduce fewer new products and services. Other nations have realized this and established highly effective national innovation-promotion agencies. It is time for the United States to do the same. By combining the nation’s world-class market environment with a world-class public policy environment, the United States can remain the world’s innovation leader in the 21st century.

Predicting the future

Predicting the future of the global human community often seems to be a fool’s errand. The track record of futurism is notoriously deficient; mid-20th century prognostications of life in the early 21st century are now used mainly for generating laughter. The failure of prophecy has many roots. Forecasters often merely extrapolate existing trends, unreasonably assuming that the underlying conditions will remain stable. Wrenching discontinuities are often difficult even to imagine, yet history has been molded by their inevitable if unpredictable occurrences. Many futurists also allow their ideological commitments, if not their underlying personalities, to shape their conclusions. Thus pessimists and environmentalists commonly see doom around the corner, whereas technophiles and optimists often envisage a coming paradise. As years go by and neither circumstance comes to pass, the time of fulfillment is merely put off to another day.

Vaclav Smil is well aware of these and other problems that confront any would-be seer. As a result, he has written a different kind of consideration of the global future, one marked by careful analysis, cautious predictions, and a restrained tone. “In sum,” he tells us in the book’s preface, “do not expect any grand forecasts or prescriptions, any deliberate support for euphoric or catastrophic views of the future, any sermons or ideologically slanted arguments.” Smil readily acknowledges that we live in a world of inherent uncertainty in which even the near-term future cannot be accurately predicted. Yet he also contends that a number of risks and trends can be quantitatively assessed, giving us a sense of the relative likelihood of certain outcomes. Such a modest approach, limited in its purview to the next 50 years, is unlikely to generate public excitement or large book sales. It can, however, provide a useful corrective for the inflated claims of other futurists as well as generate constructive guidelines for risk minimization.

Few authors are as well qualified to write about the coming half-century as Smil, a Czech-born polymath who serves as Distinguished Professor at the University of Manitoba, Canada. Smil works in an impressive array of languages; reads voraciously; skillfully engages in economic, political, and ecological analysis; and is fully global in his concerns and interests. He initially gained attention as a Sinologist, his 1984 book The Bad Earth: Environmental Degradation in China easily counting as pathbreaking if not prescient. More recently, Smil has emerged as a leading expert on global issues, his topics ranging from energy production to food provision to biospheric evolution. In general, he aims for a broad but highly educated audience. Readers of Global Catastrophes and Trends should be prepared for a good dose of unadorned scientific terminology and quantitative reasoning as well as a qualified style of argumentation in which both sides of heated debates are given due hearings.

As his current title indicates, Smil divides his consideration of the future into two parts: the first examining the possibility of catastrophic events, the second turning to the playing-out of current trends. Smil initially focuses on potential catastrophes of global scale, whether human-induced or generated by nature. He concludes that the risks of “fatal discontinuities” emerging from fully natural events are real but small. Some possible hazards, such as those posed by volcanic mega-eruptions, must be accepted as unavoidable but highly unlikely. Others, such as the cataclysmic collision of an asteroid or comet with Earth, could potentially be addressed. Threatening objects, for example, might be nudged away from Earth-intersecting trajectories by docked rockets. Smil urges NASA to reorient its mission toward gaining such capabilities.

Overall, Smil is less concerned about possible physical calamities than he is about epidemic diseases. He downplays the significance of new pathogens, such as the Ebola virus, to focus on novel strains of influenza, concluding that the likelihood of a flu pandemic in the next 50 years approaches 100%. In a similar vein, he worries more about the possibility of a “megawar” than he does about terrorism, the risks of which he considers overstated and manageable.

Major trends

After having dealt with possible catastrophes, Smil outlines the trends that he thinks will be most influential over the next 50 years. He wisely begins with energy, contending that the world’s most momentous near-term change will be its “coming epochal energy transition.” Smil will disappoint both environmentalists and high-tech enthusiasts, however, with his argument that the movement away from fossil fuels will be protracted because of the continuing economic advantages of oil, coal, and natural gas and the inherent limitations of solar, wind, and other green energy sources. He debunks alarmist concerns about the imminent exhaustion of oil, excoriates all forms of biomass-based energy as environmentally destructive, and dismisses the quest for fusion power as quixotic. Smil is guardedly supportive of nuclear fission but rejects it as any kind of panacea. In the end he calls for governmental programs to increase energy efficiency and reduce overall use.

From energy, Smil abruptly turns to geopolitics and international economics. His main goal here is to assess which parts of the world are likely to occupy positions of leadership 50 years from now. He argues that Europe, Japan, and Russia will probably see their influence diminish, largely because of their imploding populations and the resulting stresses generated by mass aging. In the case of Europe, he is also alarmed by growing immigrant populations, mostly Muslim, that are not experiencing social integration. Smil is no more sanguine about the prospects of the demographically expanding Islamic world, which he sees as producing a dangerous surfeit of unemployed young men. He is also concerned about “Muslim countries’ modernization deficit,” warning us that for “sleepless nights, think of a future nuclear Sudan or Pakistan.”

Overall, Smil contends that the two countries that will matter the most are China and the United States. Based on deeply entrenched trends, he foresees the continuing rise of China as the world’s new workshop, coupled with the gradual decline of the spendthrift, deindustrializing United States. By the end of the period in question, he thinks that the Chinese economy will outrank all others in absolute terms. Such economic prowess will translate into geopolitical clout; as early as 2020, Smil argues, China could match the United States in defense spending. He insists, however, that such trends indicate a likely rather than a preordained future. As a result, he takes care to summarize the weak points of the Chinese system that could disrupt the country’s ascent. Similarly, near the end of the book, Smil reconsiders the future position of the United States, this time stressing its economic and political resilience.

The final substantive chapter in Global Catastrophes and Trends turns to the world’s environmental predicaments, especially those posed by climate change. Although Smil accepts the reality of global warming, he emphasizes the uncertainty intrinsic to all climate forecasts. Because of the complex and poorly understood feedback mechanisms involved, he concludes that “even our most complex models are only elaborate speculations.” And although he does expect continued warming, he thinks that the overall effects will be manageable, with little damage done to crop production and a relatively small rise in sea level. Smil also cautions that excessive concern about climate distracts attention from other pressing environmental threats, including those generated by invasive species, water shortages, and the excessive use of nitrogen-based fertilizers. Basic biospheric integrity, he argues, ultimately underwrites all economic endeavors, yet is often taken for granted.

Global Catastrophes and Trends concludes by urging a calmly rational approach to crucial problems, avoiding extreme positions. Smil fears that society at large has embraced a kind of manic-depressive attitude in which “unrealistic optimism and vastly exaggerated expectations contrast with portrayals of irretrievable doom and indefensibly defeatist prospects.” He correspondingly calls for a strategy of prudent risk minimization that would emphasize “no-regrets options.” Reducing energy use and carbon-intensive commodities and services, developing new antiviral drugs, protecting biodiversity, and guarding against asteroid collisions are all, Smil argues, not only possible but economically feasible.

Prediction’s pitfalls

Smil’s moderate and rational approach to major issues of global significance has much to recommend it, as does his rejection of sensational predictions of impending collapse. He is also right to remind us that climate change is such an inordinately complex matter that we should avoid making unduly confident forecasts. But that said, global warming might prove far more damaging to both the economy and the biosphere than Smil expects, especially if the time horizon is extended beyond 50 years. On this issue, Smil seems to have adopted an attitude of optimism that many sober climatologists would find unwarranted.

Smil can also be faulted for occasionally ignoring his own warnings against extrapolating trends into the future. He thus captions a graph showing China’s economy surpassing that of the United States in 2040 with the bald assertion that China’s rapid growth will make it the world’s largest economy. Perhaps, but perhaps not, as Smil readily admits elsewhere. More problematic is Smil’s belief that some trends are so deeply embedded that they will prove highly resistant to change, leading to his assertion that low birthrates will essentially doom Europe, Russia, and Japan to relative decline. Yet in just the past two years, fertility rates in both France and Russia have significantly increased. It is not inconceivable that the birth dearth of the industrial world will eventually come to an end, as did the baby boom of the post–World War II era.

One of the more underappreciated forces of social change is that of conflict between the generations. Rising cohorts often differentiate themselves from those that came before, adopting new attitudes and embracing distinguishing behaviors. Such generational dynamics potentially pertain to a number of tendencies analyzed in Global Catastrophes and Trends. Many Muslim young people today react against their parents and grandparents by espousing a harsh form of Islam, but such an option will not necessarily be attractive to their own children 30 years from now. By the same token, can we be sure that the coming generation of Italian and Japanese youth will be as averse to reproducing themselves as were their parents’ peers? Perhaps they will respond to the impending crises of national diminution and aging by changing their behavior on this score. In any event, fertility rates will almost certainly continue to fluctuate, ensuring that any precise forecasts of future population levels will be wrong.

Another source of unpredictability is the class of so-called unknown unknowns: future events or processes of potentially world-transforming magnitude that we cannot postulate or even imagine, given our current state of knowledge. Although Smil by no means denies the possibility of such occurrences, I think he gives them inadequate attention. Here he could profitably engage with the work of Nassim Nicholas Taleb, perhaps the premier theorist of randomness and uncertainty. As Taleb shows in The Black Swan: The Impact of the Highly Improbable, the rare and unprecedented events that he calls “black swans” have repeatedly swept down to make hash out of many of the world’s most confident predictions.

But one could hardly expect Smil to deal with every form of unpredictability or with every author who has written on risk and uncertainty. The pertinent literature is vast, as is the subject matter itself. Smil has written a terse, focused work on the world’s main threats and trends, not an all-encompassing tome on futurology and its discontents. In doing so, he has digested and assessed a huge array of scientific studies, economic and political analyses, and general prognostications. For that he is to be commended, as he is for his dispassionate tone and rational mode of investigation. Readers interested in large-scale economic, political, environmental, and demographic tendencies will find Global Catastrophes and Trends a worthwhile book. Those suffering from sleepless nights as they fret about the world’s dire condition or enthuse about its coming techno-salvation, on the other hand, may find it an invaluable emollient.


Martin Lewis is a senior lecturer at Stanford University and the author of Green Delusions: An Environmentalist Critique of Radical Environmentalism.

Forum – Fall 2008

The education people need

Brian Bosworth’s “The Crisis in Adult Education” (Issues, Summer 2008) could not be timelier for U.S. community colleges. As the nation’s current economic problems intensify, increasing numbers of adults are returning to community colleges to obtain education and training for living-wage jobs. They are often unprepared to perform college-level work, and these institutions are often unprepared to handle their concerns, in large part because of the issues Bosworth discusses. His suggestions for change are so logical that the reader is left with but one question: What is preventing change from happening?

Unfortunately, Bosworth does not take up this issue, but there are at least two major stumbling blocks. The first concerns the theory and practice of adult learning at the postsecondary level. Despite all the education theory and research conducted in the United States, there are precious few empirical examinations of successful ways in which adults learn. Indeed, most of the literature on remedial or developmental postsecondary education questions the effectiveness of current practice. There is some evidence to suggest that “contextual learning”—the embedding of basic adult-education skills in job training—works, but the jury is still out on whether the promising practices of a few boutique programs can be brought to the necessary scale.

The second obstacle concerns the separation of education and economic development. As long as postsecondary education continues to be viewed as an issue of access and finance for parents and their children, not as a strategy for economic development, most elected officials will continue to focus policy on traditional students. Rarely is educational policy seen as connected to economic growth and international competitiveness. Part of the reason for this has been the relative lack of corporate concern; the private sector has essentially been silent about the need for advanced educational opportunities for adults. Companies do their own training or try to hire trained workers away from each other. This is a very inefficient, costly and, from a social viewpoint, ineffective strategy that does not produce the number of educated, highly skilled workers necessary for economic growth and prosperity. Some organizations, such as the Business Roundtable, are beginning to advance this type of strategy, but these efforts are still in their early stages.

Finally, most of Bosworth’s recommendations call for changes in federal policies. U.S. education, including postsecondary education, is primarily a local and state responsibility. Community colleges derive most of their revenue from tuition, state funds, and local property assessments, and they are governed by local boards. Adult education, its connection to workforce development, and the cost of that education to the student are generally governed by state and local policies. The connection between these policies and adult learners also needs to be examined.

Still, Bosworth’s suggested changes are important and should be discussed by all committed to furthering the post-secondary needs of working adults. In that regard, he has made a major contribution to the field.

JAMES JACOBS

President

Macomb Community College

Warren, Michigan


I am writing to elaborate on Brian Bosworth’s thoughtful essay. Once a global leader in educational attainment, the United States has taken a backseat to other industrialized countries, which have broadened their educational pipeline and now are producing more young adults with a college degree. National estimates indicate that the United States will need to produce approximately 16 million college degrees, above and beyond the current rate of degree production, to match those leading nations by 2025.

Unfortunately, our current system of adult education is ill-equipped to handle the millions of adults who need and must receive training in order to replace retiring baby boomers and allow us to meet this target. Each year this problem is compounded further by the influx of high-school dropouts (25% of each high-school class) as well as the large number of high-school students who do graduate but are unprepared for the rigors of college or the demands of work.

Significant progress can and must be made with adult students to address the educated workforce shortfall, while continuing work to improve the achievement and attainment of the traditional school-age population.

First and foremost, we must recognize that the current system of higher education, designed to serve the traditional, high-performing 18- or 19-year-old, simply does not work for the majority of our working adults. Our response has been to retrofit adult students into this model primarily through remedial instruction. Given that most adults attend part-time, this further delays and blurs the path to a college degree. It is not surprising that in a recent California study, fewer than one in six students completed a remedial class and a regular-credit class within one year. Regrettably, too few adults achieve success, and those who do persevere typically take 7 to 10 years to attain a degree.

We need bold new approaches designed specifically for adult students that provide a clear and direct path to the degree they seek. Bosworth notes the excellent completion rates at the University of Phoenix, which enrolls more than 50,000 students nationwide and strategically designs programs around the lifestyles of working adults. Indiana Wesleyan University has achieved similar success with campuses across Indiana serving the working adult population. We need more of these accelerated, convenient, technology-enhanced programs designed to guarantee degree attainment without sacrificing quality.

Equally important is addressing the pipeline of young adults aged 18 to 24 who continue to add to the percentage of our population with low educational attainment. In its recently adopted strategic plan for higher education, Indiana has proposed the development of an accelerated program wherein students earn the credits to complete an associate’s degree in 10 months. This program will appeal to students who do not want to forgo earnings for multiple years and will have positive results, including increased persistence and attainment. The primary goal is to reach students before “life gets in the way” of their educational pursuits.

Successfully educating an underskilled adult workforce is an enormous task, but promises significant returns. It will take bold and new strategies to meet the challenge.

STANLEY G. JONES

Commissioner

Indiana Commission for Higher Education

Indianapolis, Indiana


Peter Cappelli (“Schools of Dreams: More Education Is Not an Economic Elixir,” Issues, Summer 2008) introduces a well-reasoned perspective into the 25-year conversation that has driven education reform in this nation. Beginning with A Nation at Risk and most recently enshrined in the federal education law No Child Left Behind, we have increasingly assumed two “truths” about public education: (1) the nation’s schools are failing our children, and (2) without preparing all youth for college, we are dooming our economic future.

The first assumption is partly true because too many young people fail to complete high school and too many high-school graduates are poorly prepared for either college or the workplace. Narrowing the curriculum to more college-preparatory coursework and holding schools accountable may contribute to the dropout problem. Piling on more academics seems to have made little impact. National Assessment of Educational Progress (NAEP) reading scores for 17-year-olds have declined since 1984 despite a 45% increase in academic course-taking. NAEP science scores have declined substantially in the same period despite the doubling of science credits earned. NAEP math scores are relatively unchanged despite a doubling of math credits. These and other data argue for other ways of thinking about preparing tomorrow’s workforce.

Cappelli does an artful job of debunking the second assumption. This continued belief, in the face of abundant evidence to the contrary, is at the heart of school reform agendas, from the American Diploma Project to the U.S. Department of Education. The recent report of the Spellings Commission on the Future of Higher Education, for example, declared that “90% of the fastest-growing jobs in the new knowledge-driven economy will require some postsecondary education.” As Paul Barton at the Educational Testing Service notes, this false conclusion comes from a lack of understanding about basic data.

So how best to prepare our young people to succeed in the emerging labor market? The most obvious strategy is to focus on the technical and work-readiness skills employers need, especially those at the middle skill level where nearly half of all job growth is expected to occur, and to ensure access to those skills by today’s adolescents. This strategy requires that we expand, not reduce, high-school career-focused education and work-based learning, or Career and Technical Education (CTE). CTE has been shown to increase the likelihood that students will complete high school, increase the math skills of participants, and help young people focus on and complete postsecondary education and training. Yet current data from the Condition of Education (2007) show that the nation’s youth are taking substantially less CTE than in years past. An abundance of anecdotal evidence suggests that this is both a problem of access (fewer programs available in fewer schools) and of opportunity (students have less time in the school day to access sustained occupational programming, as academic requirements continue to crowd out options for rigorous CTE).

High-quality CTE can improve the academic performance of America’s youth and the quality of America’s workforce, but only if robust programs are available to all young people who may benefit.

JAMES R. STONE III

Director

National Research Center for Career and Technical Education

University of Louisville

Louisville, Kentucky


Arguments about whether the labor market needs a better-educated work force are far too general. As Peter Cappelli shows, employers have serious needs, but they are for specific vocational and soft skills and in a narrow range of jobs. Cappelli correctly notes that vocational programs in community colleges may substitute for training and development previously provided by employers. Community colleges are, in fact, well-positioned to meet employers’ needs, given that they enroll nearly half of all undergraduates.

As we wrote in Issues (Summer 2007), some two-year colleges do provide students with specific vocational and soft skills and link them to employers, although they do not do so generally or systematically. As Cappelli notes, instead of diffuse efforts at creating a “better-educated workforce,” policymakers should target their efforts at improving community colleges, focusing particularly on applied associate’s degree programs, soft skills, and problem-solving in practical contexts, and also on developing high-school career programs. Our college-for-all society and employers’ changing needs are transforming the meaning of a college education; our institutions and policies need to respond.

JAMES E. ROSENBAUM

Professor of Sociology, Education, and Social Policy

Institute for Policy Research

JENNIFER STEPHAN

Graduate student

Northwestern University

Evanston, Illinois


Peter Cappelli provides a provocative analysis questioning the economic benefits of education. Yet the article focuses inordinately on finding connections between the academic pedigree of assembly-line workers and widget production. That analysis is too narrow, too shortsighted. The economic impact of universities extends far beyond creating employees custom-made to boost profits, tax revenue, or production on their first day at work. Universities should foster economic vitality, along with, for example, sustainable environmental health; positive individual well-being; and cultural, ethnic and racial understanding and appreciation. These, too, affect the economy. Perhaps no cliché is more apt: Education is indeed an investment in the future.

For example, additional education in nutrition, hygiene, and biohazards improves individual and public health, benefiting the individual’s workplace and the country’s economy. Let’s look at smoke. A 2005 study estimated that secondhand cigarette smoke drains $10 billion from the national economy every year through medical costs and lost wages. Meanwhile, decades of anti-tobacco education efforts have been linked to fewer teens smoking and more young adults quitting. Simply put: Smoking costs billions; education reduces smoking; and when it does, the economy breathes more easily.

Although many similar threads can be followed, harder to trace are the ways in which education prepares an individual to inspire, innovate, cooperate, create, or lead. Employers want someone “who already knows how to do the job,” often an impractical hope; thus they look for someone who knows how to learn the job, a trait ultimately more valuable, as jobs change rapidly. Quality education fosters the capacity to study, to analyze, to question, to research, to discover. In short, to learn—and to accept, individually, the responsibility for learning how to learn.

As the author acknowledges, employers also want workers with conscientiousness, motivation, and social skills. Except for perhaps good families, good churches, and possibly the armed forces, no institution matches the ability of good schools to foster these qualities.

Work-based learning is also critical to ensuring a labor force sufficient in both numbers and knowledge; thus the California State University has hosted forums bringing faculty from its 23 campuses together with employers from critical economic sectors, such as agriculture, biotechnology, and engineering.

As it renders economic benefits to individuals and industries, education also transforms communities and societies. When universities view their mission through a prism of access and success, diversity and academic excellence, they foster social and economic upward mobility. Raising educational levels in East Los Angeles and other areas of high poverty and unemployment undoubtedly improves the economy by helping to break generational cycles of poverty.

Finally, the article does not address the costs of not educating an individual. In July 2008, California education officials reported that one in four high-school students in the state (and one in three in Los Angeles) drops out of school before graduating. What is the economic toll on society when it loses so many potentially brilliant contributors, as early as middle school, because of inequalities in access to quality education? Whatever it is, it is a toll our society cannot afford, economically or morally.

JAMES M. ROSSER

President

California State University, Los Angeles

Los Angeles, California


Matthew Zeidenberg’s succinct analysis of the challenges facing two-year colleges is both accurate and sobering (“Community Colleges Under Stress,” Issues, Summer 2008). Several of these issues—financial stress, poor academic preparation, and unsatisfactory persistence and graduation rates—also are common to four-year colleges that enroll large numbers of students who are first in their family to attend college, are from economically depressed neighborhoods, or are members of historically underrepresented racial and ethnic groups. Everyone agrees that K-12 schools must do a better job of making certain that all students have the academic skills and competencies to succeed at the postsecondary level. At the same time, schools cannot do this alone. Family and community support are indispensable to raising a student’s educational aspirations, becoming college-prepared, and increasing educational attainment levels across the board. So to Zeidenberg’s recommendations I add two more.

First, students and families must have adequate information about going to college, including real costs and aid availability. Too many students, especially those from historically underserved backgrounds, lack accurate information about postsecondary options. They are confused about actual tuition costs and expectations for academic work. The Lumina Foundation for Education, the Ad Council, and the American Council on Education are collaborating on KnowHow2GO, a public-awareness program to encourage low-income students in grades 8 to 10 and their families to take the necessary steps toward college. Another effort is the nonprofit National College Access Network (NCAN), a federation of state and local efforts that provide counseling, advice, and financial assistance to students and families. Local initiatives, such as College Mentors for Kids! Inc., which brings together college and elementary-age students through their participation in campus and community activities, and Indiana’s Learn More Resource Center, are models for disseminating information about college.

Second, we must expand the scale and scope of demonstrably effective college-encouragement and transition programs. Particularly effective programs are the Parent Institute for Quality Education; the Puente Project; and GEAR UP, which provides information about financial aid, family support and counseling, and tutoring, among other things. Other promising encouragement initiatives include many of the TRIO programs funded under Title IV of the Higher Education Act, such as Upward Bound, Upward Bound Math/Science, Student Support Services, Talent Search, Educational Opportunity Center, and the McNair Program. For example, students in Upward Bound programs are four times more likely to earn an undergraduate degree than those not in the programs. Students in TRIO Student Support Services programs are more than twice as likely to remain in college as students from similar backgrounds who did not participate in the program.

Preparing up to four-fifths of an age cohort for college-level work is a daunting, unprecedented task. The trajectory for academic success starts long before students enter high school. As Iowa State University professor Laura Rendon sagely observed, many students start dropping out of college in the third grade. Essential to breaking this unacceptable cycle is gaining the trust and support of parents and communities and ensuring that every student knows what is required to become college-ready and how to obtain the necessary financial resources to pursue postsecondary education.

GEORGE KUH

Chancellor’s Professor

Director, Center for Postsecondary Research

Indiana University

Bloomington, Indiana


Prison policy reform

In “Fixing the Parole System” (Issues, Summer 2008), Mark A. R. Kleiman and Angela Hawken correctly note that incarceration has become an overused and hugely expansive state activity during the past generation. Controlling for changes in population, the imprisonment rate in the United States has expanded fourfold in 35 years. Their rather modest proposal is to substitute intensive supervision and non-incarcerative sanctions for a system of parole monitoring and reincarceration in California that combines high cost and marginal public safety benefits.

There are three aspects of their program that deserve support:

  1. The shift from legalistic to harm-reduction goals for parole;
  2. The substitution of non-incarcerative for incarcerative sanctions for parole failure; and
  3. The use of rigorous experimental designs to evaluate the program they advocate.

There is clear public benefit in systematically stepping away from a practice that is simultaneously punitive, expensive, and ineffective.

Almost all responsible students of California crime and punishment support deconstruction of the state’s parole revocation juggernaut. But the brief that Kleiman and Hawken file on behalf of this penal reform is disappointing in two respects. Problem one is the rhetorical tone of their article. The authors intimate that risk monitoring and non-prison sanctions can lower crime rates, which would be very good news but is also unnecessary to the success of the program they support. If non-imprisonment parole monitoring produces no increase in serious crime at its smaller correctional cost, that will vindicate the reform. Reformers shouldn’t have to promise to cure crime to unwind the punitive excesses of 2008. And the proponents of reform should not have to sound like they are running for sheriff to sell modest reforms!

My second problem with the case that is presented for community-based intensive supervision and non-prison sanctions is its modesty. The authors suggest a non-prison program only for those already released from prison. But why not create such programs at the front end of the prison system as well, where diversion from two- and three-year imprisonment terms might be even more cost-effective than a parole reform if non-incarcerative programs have roughly equivalent outcomes? Is reducing California’s prison expenses from $9 billion to $8 billion per year the best we can hope for?

FRANKLIN E. ZIMRING

William G. Simon Professor of Law

School of Law

University of California

Berkeley, California


Mark A. R. Kleiman and Angela Hawken are certainly correct in saying that the parole and probation systems are badly broken and overwhelmed. They are also correct in concluding that if parole and probation were more effective, crime would decline, lives would improve, and the states would save barrels of money that are now being wasted on failed policies and lives.

Citing the Hawaii experiments, Kleiman and Hawken would rely heavily on the behavior change benefits of certain, swift, and consistent punishment for violations that now go undetected or are inconsistently punished. They cite the research literature on the importance of behavior change that is reinforced by rewards for appropriate behaviors but suggest that political opposition may limit the opportunities on that side of the ledger. In my view, the role of positive reinforcement and incentives must be significantly expanded in post-release supervision to really affect long-term recidivism. Released convicts have enormous needs, including housing, medical care, job training, etc. A properly resourced parole or probation officer could reinforce and promote a lot of good behavior by getting the parolee/probationer what he really needs to succeed as well as holding him accountable for his slips.

The burden of post-release supervision is made even heavier by the flood of prisoners who arrive totally unprepared to resume civil life. Their addictions—the underlying cause of most incarcerations—and other physical and mental illnesses have not been treated; they have no job experience or training; and their overcrowded prisons have created social norms of racial gangs and violence. Many, if not most, prisoners emerge in worse shape and less able to function in civil society than when they entered. The treatment of many criminals is itself criminal. I really wonder if California will be less safe if a judge orders thousands of prisoners released before they get poisoned by the prison environment and experience. I am confident that competent post-release supervision and support will produce a better result than we get now by leaving people to rot in prison, and at significantly lower cost.

The current weakness of parole systems around the country is an ironic unintended consequence of long mandatory sentences without possibility of parole. In many states, politicians thought it would be fine to let the parole systems wither because people completing mandatory sentences wouldn’t be subject to parole. The result we now see compounds the stupidity of the long mandatory sentences themselves.

DAVID L. ROSENBLOOM

Boston University School of Public Health

Boston, Massachusetts


Thinking about energy

Senator Jeff Bingaman is right (“Strategies for Today’s Energy Challenge,” Issues, Summer 2008). The key to addressing climate change and future energy supplies is technology. We’ll need new energy technologies and new ways of using traditional energy technologies to build the energy and environmental future Americans want. Government will influence what that future looks like, but consumers and private companies will also play integral roles.

U.S. oil and natural gas companies strongly support new technologies. They have invested more in carbon-mitigation technologies than either the government or the rest of the private sector—about $42 billion from 2000 to 2006, or 45% of an estimated $94 billion spent by the nation as a whole. They are involved in every significant alternative energy technology, from biofuels to wind power to solar power to geothermal energy to advanced batteries. They created the technology to capture carbon dioxide emissions and store them underground.

As demand for alternative energy increases, oil and gas companies will be among the firms that meet that demand. However, they are also prepared to provide the oil and natural gas that Americans are likely to need for decades to come. Although our energy landscape will change, oil and natural gas will still provide substantial amounts of transportation fuels, energy for power generation, and petrochemicals, lubricants, and other products. As fuels, they’ll be cleaner and used more efficiently. We’re already seeing this in new formulations of gasoline and diesel fuel, in combined heat and power technology in our refineries, in advances in internal combustion engines, and in hybrid vehicles.

The future will be as much energy evolution as energy revolution. We’ll need all forms of energy—new, traditional, and reinvented—with each finding its place according to consumer needs and environmental requirements. In the end, providing the energy we need while also advancing our environmental goals will be a formidable balancing act. Government policies that can best help achieve these objectives will be those built on a shared vision; stakeholder collaboration among government, industry, and consumers; and a reliance on free markets.

RED CAVANEY

President and Chief Executive Officer

American Petroleum Institute

Washington, DC


In his article, Senator Jeff Bingaman says, “Our past technological choices are inadequate for our future. The solutions we need can only come from new technologies.”

Look around at what we are forgetting and puzzle over what Bingaman says. We developed shoes for solar-powered walking, but few walk. We developed safe nuclear power plants, then stopped building them. We developed glass that lets in light and sun centuries ago, yet our buildings need electric lights in the middle of the day. And more methods that avoid fossil fuels are being forgotten: the bicycle, the clothesline, passive heating and cooling, and solar water heaters.

What are the “concrete goals, road maps, timelines” he is after? “The time has come for government to act,” but he has no idea what to do. Like many, Bingaman is under a spell, off balance, blind to what is around him, and seeking unborn machines and larger bank accounts.

STEVE BAER

Post Office Box 422

Corrales, New Mexico


Senator Lamar Alexander’s “A New Manhattan Project” (Issues, Summer 2008) is inspiring in its scope and scale, and I commend him for his commitment and focus on the big picture vis-à-vis energy policy. Although I disagree with some of his comments on electricity generation, I write as a transportation expert who thinks that the puzzle is missing some pieces.

First, there must be a greater focus on the deployment of new technology. Three of the seven components of the plan—commercializing plug-in hybrids and making solar power and biofuel alternatives cost-competitive—depend only in part on technological breakthroughs. Equally important, if not more so, are smart deployment strategies. We must work with entrepreneurs to develop revolutionary business models that will rapidly transform our vehicle fleets.

One initiative that aims to spur such innovation is the Freedom Prize (www.freedomprize.org). I am excited to be an adviser to this new organization, which will distribute monetary prizes to cutting-edge transformational initiatives in industry, schools, government, the military, and communities. An example of a revolutionary model is Project Better Place, launched by Israeli entrepreneur Shai Agassi. I recently had the pleasure of hearing him talk firsthand about his big idea, which Thomas L. Friedman described in a recent column in the New York Times (July 27, 2008):

“Agassi’s plan, backed by Israel’s government, is to create a complete electric car ‘system’ that will work much like a mobile-phone service ‘system,’ only customers sign up for so many monthly miles, instead of minutes. Every subscriber will get a car, a battery and access to a national network of recharging outlets all across Israel—as well as garages that will swap your dead battery for a fresh one whenever needed.”

Time will tell if it will work, in Israel or elsewhere. Regardless, it is exactly the kind of thinking we need. Technological breakthroughs are necessary but insufficient; they must be complemented by expedited deployment strategies.

The truly indispensable complements to crash research programs and big carrots for innovation are technology-neutral performance standards and mandatory programs to limit global-warming pollution. Such policy was debated by the U.S. Senate this year: the Boxer-Lieberman-Warner Climate Security Act (CSA).

An analysis commissioned by the Natural Resources Defense Council shows that the CSA would have dramatically cut pollution while slashing oil imports by 6.4 million barrels a day in 2025 (down to 1986 levels). This is in part due to Senator Alexander’s success in adding a national low-carbon fuel standard to the bill, which would lower the carbon intensity of fuels, making alternatives such as plug-in hybrids and advanced biofuels more competitive. That’s the kind of policy that would move us forward, and fast.

In sum, building the bridge to a low-carbon secure future requires an array of carrot and stick programs that expedite technological development and deployment. I look forward to working with Senator Alexander to speed us into that better world.

DERON LOVAAS

Transportation Policy Director

Natural Resources Defense Council

Washington, DC


What is science policy?

Irwin Feller and Susan Cozzens, both nationally recognized science policy scholars, have hit the nail on the head with their appropriately scathing critique of U.S. science policy entitled “It’s About More Than Money” (Issues, Summer 2008). The only thing they didn’t do was drive the nail in far enough to seal the fate of this critical area of national policy, which remains wholly unsophisticated, unchanging, and inadequate to the task of providing our nation with the tools we need to make the best use of our national R&D investment.

Here it is 2008, when we have the ability to analyze and quantify even everyday things such as the impact of soft drink advertising during the Super Bowl, but we can’t yet develop a national science and technology logic that goes beyond “we need more money.” We live in an era when the production of science-based knowledge, at ever-increasing rates, is driving changes in economic competitiveness, culture, quality of life, foreign and military affairs, and sustainability on a global scale, and yet we have a science policy that is no more robust than most families apply to their family budgets: We have so many dollars this year and we would like more next year. Feller and Cozzens attack the central sophomoric argument of U.S. science policy, which has its roots in the original designs of Vannevar Bush and his piece Science—The Endless Frontier, published in the wake of the total victory of the Allies and the unconditional surrender of their enemies in World War II. What they don’t address is why we have been unable to grow up from our simple approach of largely unguided national science planning and budgeting.

It was in fact the simplistic correlation between our very successful efforts to develop new weapons during the war and our ultimate total victory that led to the genesis of a very simplistic model for science policy. This model works something like this: Science is good; more money for science is good; if you fund it more, good things (like winning the war against two opponents at the same time) will happen. We never got past this level of logic. Simple logic always sticks around for a long time, in the same way that lots of outmoded stereotypes do, such as the notion that science should simply be left to guide itself because it cannot be guided.

This logic is so simple and so beneficial to most of the stakeholders in the science policy realm that even the president’s science advisor hasn’t been able to change the basic model after six years of effort. We fund our national science efforts on the premise that our success is measured by the investment itself and not its outcomes. This logic has actually kept us from building an outcomes-oriented national science policy, and as a result has put America’s well-being at risk.

When policy success is measured by budget inputs rather than by goal attainment, outcome achievement, or national performance, we literally have no idea what we are doing or why. Our present rhetoric is that we need to spend more on science and this will make America greater, or that we need more scientists or engineers to be stronger. Although these claims may be true, we don’t have empirical evidence of that, and more important, even if we did, we would need to be able to answer the question of whether our investments are helping us to reach the outcomes we most desire.

Most Americans seek a better life for their families; most want to have access to a safe, clean everything; and most want their children to have access to higher qualities of life. At the moment, we have very few tools in the science policy realm that could make any assessment of the relationship between science investments and these outcomes. This is very unfortunate and needs to be addressed.

Addressing it means that we must reject the notion that science policy is about money. It is about who and what we want to be and do. It is about attacking our most critical challenges and knowing where we are along the way. It is about having some dreams that we hope for and understanding that these investments are our means to achieve these dreams and holding people accountable for progress toward them.

It’s about a lot more than money, and Feller and Cozzens help us to see that.

MICHAEL M. CROW

President

Professor of Public Affairs and Foundation Leadership Chair

Arizona State University

Tempe, Arizona


Irwin Feller and Susan Cozzens note several important challenges for the new science of science policy. They point to “the perennial challenges that researchers and policymakers confront as they try to reduce the uncertainties and complexities surrounding processes of scientific discovery and technological innovations.” And they note the serious gaps that exist in the knowledge base on which new theories of science policy must be based. Most important, they assert that more effective science policy requires increased dialogue between the policy and research communities.

Although Feller and Cozzens note that one of the problems with current policy and research on policy is that it is too narrowly framed, they discuss policy research only in terms of evaluating science policy. What about the other side of the coin: research to improve science policy? As I’ve argued elsewhere, if one of the goals of our research is to improve science (and technology) policy, we must design our research with improved policy as an outcome. From a systems perspective, this research process would necessarily include key stakeholders such as policymakers in at least the design and communication phases, with feedback loops from such stakeholders to the research team. The identification of gaps in knowledge, possible consequences of success and failure of contemplated policies, and possible unintended consequences would all be part of a systems analysis framing policy-relevant research.

A systems analysis including policymakers clearly won’t solve the current dialogue gap between the policy and research communities. But we must begin a serious effort to work together for more effective policy-relevant research and policymaking. Not all researchers or policymakers would choose to be part of such an effort, but many from both groups, at the federal and state levels, have already demonstrated their interest through participation in such communication efforts, usually on specific topics.

On a more minor point, but perhaps typifying at least some of the examples used in the article, Feller and Cozzens point to one of the action outcomes identified in the National Academies report Rising Above the Gathering Storm: the call for recruiting 10,000 new science and mathematics teachers. They suggest that this call overlooks “the impressive data base of human resource surveys,” analyses of science and technology career patterns, and the government level responsible for education. In fact, this call was based on a rigorous state-level study of these factors in concrete cases such as Texas and California, and was extrapolated conservatively to states conducting similar studies at the time of the report. More states (Arizona, North Carolina, Arkansas, Iowa, and Indiana, among others) have subsequently taken up this call. The National Academies hosted national symposia in 2007 and 2008 focused on the federal/state/local relationship essential to meeting the science, technology, engineering, and mathematics education challenge.

In their four-page article, Feller and Cozzens manage to draw in most of the recent reports and commentaries relating to Presidential Science Adviser John Marburger’s call for a new science of science policy. And their overall point seems exactly right: that “a much broader approach” than has currently been taken is needed. Although I have no doubt about their capacity to map an outstanding broader approach, this brief article didn’t take us there, or even point us there.

ANNE C. PETERSEN

Deputy Director, Center for Advanced Study in the Behavioral Sciences at Stanford

Professor of Psychology

Stanford University

Stanford, California


Investments in basic scientific research and technological development have had an enormous impact on innovation, economic growth, and social well-being. Yet science policy decisions at the federal and state levels of government are typically dominated by advocates of particular scientific fields or missions. Although some fields benefit from the availability of real-time data and computational models that allow for prospective analyses, science policy does not benefit from a similar set of tools and modeling capabilities. In addition, there is a vigorous debate as to whether analytically based science policy is possible, given the uncertainty of outcomes in the scientific discovery process.

Many see the glass as half empty (not half full) when they contemplate the “knowns” that make up the evidence-based platform for science and innovation policy. This area of research and practice is not new to academics or policymakers; there are decades-old questions that we still contemplate and problem sets that experts from varied disciplines and fields continue to answer only imperfectly. In addition, the anxious call for or anticipation of better conceptualizations, models, tools, data sets, and metrics is not unique to the United States but is shared among countries at different levels of economic development. The marriage of ideas from an interdisciplinary and international community of practice is already emerging to advance this scientific basis of science policy. Diversity of thought and experience no doubt leads Irwin Feller and Susan Cozzens to encourage the cause while strongly cautioning about the process by which frontier methods are developed and utilized. An “increased dialogue between the policy and research communities”—and I would add here the business community—is paramount.

As the glass fills, the frequency and complexity of this dialogue will grow. For instance, the management of risks and expectations is common practice in business and increasingly common in designing potent science and innovation policy mechanisms. Opportunities exist, therefore, for breakthroughs in finance and economics with applications to funding portfolios of science. But it’s about more than the money. Understanding the multifunctional organism that facilitates creative invention and innovation requires the synthesis of network analysis, systems dynamics, and the social psychology of team networks. Add to that downstream linkages to outcomes data that can be harvested using modern scientometric or Web-scraping techniques. This complex research activity could add clarity to our understanding of the types of organizations that most effectively turn new ideas into commercial products.

Another question often overlooked in the literature is the management of short-term and long-term expectations. The portfolio approach to the science of science and innovation policy could yield a full spectrum of analytical tools that satisfy short-term requirements while accomplishing long-term goals. These are topics that are ripe for frontier research and yet still have practical applications in the policy arena.

Often the question is asked, what should government’s role be in science and innovation policy? Although there is much controversy about incentives that try to pick winners, returns from tax incentives, and regulatory reform, many would agree that facilitating information exchange could yield important positive social dividends. Already, public funding has been used to sponsor research on the science of science and innovation policy and workshops and forums where academics, policymakers, and representatives from the business community exchange ideas. Partnerships among these three stakeholders are expected to be productive, yet as with many scientific endeavors, time is an important variable.

KAYE HUSBANDS FEALING

Visiting Professor

Humphrey Institute of Public Affairs

University of Minnesota

Minneapolis, Minnesota


Science and democracy

In “Research Funding via Direct Democracy: Is It Good for Science?” (Issues, Summer 2008), Donna Gerardi Riordan provides a timely, cogent case study of the “be careful what you wish for” brand of risk-taking that comes with merging science funding with populist politics. The California Stem Cell Research and Cures Bond Act of 2004 (Proposition 71) is probably not, in toto, good for science. The ends (more funding) cannot justify the means (hype masquerading as hope). No good comes when science sacrifices honesty for expediency.

Beyond doubt, the language used to sell Proposition 71 promises more than science can hope to deliver. What is hard to understand is what made many of the parties involved say some of the things that were said. One can understand the anguish motivating those who have or whose loved ones have untreatable illnesses to bet on the promises of embryonic stem cell research, particularly when federal funds are limited. This new area of biology deserves to be explored, even if the ultimate payoff of such research remains unproven and unpredictable at this time. Indeed, the United States has a rich history of private dollars, dispersed by individuals, charities, and voluntary health organizations, funding controversial and unpopular research that the federal government cannot or will not support. Economic development and higher-education infrastructure are traditional investments for state coffers. But Proposition 71 seems a horse of a different color. Riordan’s analysis of it rightly focuses our attention on an important question: Is it a good thing that a deliberate decision was made to circumvent the usual processes by which science gets funded and states decide investment priorities?

Those of us who care about letting the democratic process work should ask, is Proposition 71 good for public policy? Concocting a public referendum on a complicated issue fraught with scientific, ethical, legal, and social controversies should not be celebrated (nor misinterpreted) as giving people a voice. It is, rather, an example of the few pushing an agenda on the many, bypassing the representative legislative process. Such initiatives are not intended to stimulate debate. The intent, rather, is to shut down the healthy messiness of public debate. The legislative process can be inconvenient and inefficient, and it often requires compromise. Given the forced choice of Proposition 71, a majority of the citizens of California, believing money could accelerate the alchemic process whereby basic research yields medical treatments, voted to cure diabetes and defeat Alzheimer’s. They voted for fairness—they wanted life-saving cures derived from “stem cells” (arguably two words that without other modifiers have little biologic or therapeutic meaning) to be accessible and available to all, including the economically disadvantaged. The citizens of California were not asked, at least not in the flyers, billboards, and advertisements, to decide on investing $3 billion in a life-sciences economic stimulus package primarily benefiting University of California research universities and biotechnology companies. They might have been willing to fund such an investment. But they weren’t given the option. What serious problems would California citizens choose to solve in a decade with $3 billion to spend? We don’t know. The powerful few who knew what it was that they wanted didn’t stop to ask them.

SUSAN M. FITZPATRICK

Vice President

James S. McDonnell Foundation

St. Louis, Missouri


Donna Gerardi Riordan points out some of the rotten teeth in California’s $3 billion gift horse: funding for human embryonic stem cell and related research. California’s was the biggest and one of the first such state initiatives in the wake of the Dickey-Wicker federal appropriations ban and President Bush’s August 2001 Executive Order permitting but hemming in federal funding.

California’s referendum mechanism does indeed introduce some wrinkles into the process of funding and governing science. Riordan focuses on the consequences of insulating the program from conventional state legislative and executive processes. Insulating stem cell research from mainstream politics was understandable, however, because of a foreseeable political problem. The opposition was strongly motivated and managed to delay funding for several years through court battles despite the insulation. Fighting this out in the legislature would surely have been contentious, although perhaps eventually reaching more or less the same outcome (but only perhaps).

A previous California health research program, the Tobacco-Related Disease Research Program (TRDRP), also built in insulation from legislative and gubernatorial politics. TRDRP was created by another referendum, Proposition 99, which increased cigarette taxes and dedicated some of the proceeds to research. The research program was clearly specified in the constitutional amendment but was nonetheless blocked at several turns by the governor and the speaker of the State Assembly, challenges resolved only by the California Supreme Court. TRDRP was immensely valuable to tobacco control research, for years the largest program in the country, surpassing federal funding (sound familiar?). It laid a foundation for tobacco control research nationally and internationally. It mattered, and but for its built-in protections, it clearly would have been scuttled by conventional politics.

The common element of stem cell and tobacco control research is determined opposition, and so there is a plain political explanation for why the insulating provisions were built into the propositions. That does not take away from the consequences of following the referendum route that Riordan so aptly describes.

Attention may now turn to the serious coordination problem that follows from state research programs. How will these integrate with federal funding and with other states and other nations? This may well be tested in embryonic stem cell research if the federal brakes come off next spring, regardless of which party wins the presidency. Should Congress and the National Institutes of Health (NIH) race to match California, Massachusetts, Hong Kong, Israel, Korea, and other jurisdictions that have generously funded stem cell research? NIH merit review awards funds according to scientific opportunity and health need. The need for federal funding is arguably reduced in scientific areas where states and other countries have stepped in. Or is it? The NIH has no clear mechanism to take such funding into account. Pluralism is one of the virtues of U.S. science funding. But too much uncoordinated funding can leave some fields awash in money while others starve. With several independent state-based funding programs, California’s being the largest, the coordination problem will be unprecedented in scale and intensity.

ROBERT COOK-DEEGAN

Director, Center for Genome Ethics, Law, and Policy

Institute for Genome Sciences and Policy

Duke University

Durham, North Carolina

Strengthening the Global Environmental Treaty System

The global environmental treaty-making system—the set of mechanisms by which countries fashion agreements to promote more sustainable development—is not working very well. More than 400 multilateral agreements such as the Kyoto Protocol on climate change now exist, and new treaties are continually being added that address a wide range of problems, including the loss of endangered species and habitats, increasing levels of ocean dumping, the unregulated transshipment of hazardous substances, and desertification. Yet there is no evidence to suggest that the problems these treaties are intended to address are being corrected. There is a variety of reasons why the “system” isn’t working and a number of ways it could be strengthened.

The system is actually quite undeveloped. There are few if any rules regarding the number of countries that must sign a treaty before it can come into force. The penalties for failing to meet treaty obligations are rarely made explicit, and the extent to which countries that have not signed a treaty are legally bound by the standards that the rest of the world has adopted is still a matter of speculation. Enforcement of global environmental treaties is practically nonexistent. The administrators of treaty regimes are, as Abram and Antonia Chayes point out in their book The New Sovereignty, forced to seek “compliance without enforcement.” In a strange turn of events, elected political leaders can get credit domestically for signing a global agreement even if they have no intention of seeking ratification of the agreement from their Parliament or Congress. Environmental treaty regimes are administered by a series of ad hoc secretariats, not by a single United Nations (UN) agency, and they depend entirely on funding donated by a handful of the countries they are supposed to be regulating. Finally, scientific input into the writing of each treaty and the monitoring of implementation efforts are entirely catch-as-catch-can.

There are a number of ways in which the treaty-making system could be improved. Four in particular stand out: increasing the role of “unofficials” in treaty drafting and implementation, setting more explicit adaptive management targets, offering financial incentives for treaty compliance, and organizing regional science advisory panels to enhance the level of scientific advice available to all nations.

Key environmental treaties

Treaties or multilateral environmental agreements (MEAs) are the products of negotiations among groups of countries. One of the most successful is the Vienna Convention for the Protection of the Ozone Layer (and the follow-up Montreal Protocol), which reversed the growth in emissions of stratospheric ozone–depleting chlorofluorocarbons (CFCs) by banning them. On the other hand, the Biodiversity Convention and the Climate Change Convention, which were signed by more than 150 countries at the 1992 Earth Summit, have not even begun to reverse the growing loss of biodiversity or the threat of global warming. Other hoped-for treaties, including some such as the Global Forest Protection Treaty that have been under discussion for decades, have not yet emerged.

For many treaties, the problem is that the goals set are so modest that even if implemented, they would not reverse the trend that triggered the problem-solving effort. The Convention on Wetlands of International Importance, the Convention on International Trade in Endangered Species, and the Convention on Persistent Organic Pollutants seek to slow the rate at which a resource is lost or pollution occurs, but under the best of circumstances, they won’t be sufficient to reverse or mitigate the adverse effects that have already occurred.

Other MEAs have simply not been ratified by key countries. The United States, for instance, has not ratified the Kyoto Protocol, the United Nations Convention on the Law of the Sea, or the Basel Convention on the Control of Transboundary Movements of Hazardous Wastes. Even when countries have signed and ratified treaties, they have been slow to bring their national legislation into conformance with their treaty obligations.

In quite a few instances, the responsibilities of signatory countries for meeting timetables and targets are vague. In general, we have relied on what might be called a two-step convention-protocol process. First, usually after a decade or more of talks among a limited number of countries, a convention is adopted indicating that a problem exists and exhorting countries to do something about it. That’s about all the Climate Change Convention accomplished. Once a convention is ratified, the signatories agree to meet every year or so to talk about ways of adding protocols that spell out more specific timetables and targets. Thus, the 1987 Montreal Protocol was added to the 1985 Vienna Convention, and, as amended in 1990, it called for a total phase-out of a list of CFCs by specific dates. It also scheduled interim reductions for each chemical and called on the signatory countries to reassess relevant control measures every four years. During the time that the protocol was under discussion, there was considerable disagreement regarding the scope of the problem, the level of production cuts required, and the provision of aid to developing nations to enable compliance with phase-out targets. The discovery of a hole in the ozone layer (over the South Pole), along with the availability of less-polluting aerosol alternatives, settled the scientific debate and prompted relatively quick action.

In general, financial resources have not been adequate to enable or ensure treaty compliance. There are no general funds available at the global level to help cover the cost of treaty implementation. On occasion, some of the most developed nations, with the help of multilateral institutions such as the World Bank, volunteer to contribute small amounts of money through a foundation-like entity called the Global Environment Facility (GEF) to assist developing nations in meeting their treaty obligations. Often, though, the politics of allocating these funds mean that money must be set aside for each region, regardless of overwhelming needs in one location or the scientific merit of grant proposals from particular countries.

Although most treaties require each signatory nation to submit regular progress reports, the treaty secretariats rarely, if ever, have sufficient technical staff to review the accuracy of the information submitted or assist countries that need technical support. The progress reports submitted by some countries often contain information that is questionable.

Some nations don’t take their treaty obligations seriously. They sign and even ratify treaties, but they don’t adopt national standards consistent with MEA requirements. In some instances, although they adopt appropriate legislation, they don’t or can’t enforce the standards.

Let’s look more closely at the five key reasons why global environmental treaty making has not produced more impressive results.

The system for creating and enforcing MEAs is still relatively undeveloped. The need to balance science and politics has not been elaborated. At the outset, when countries meet to talk about the possibility of taking collective action, they are often quite skeptical or suspicious of the scientific claims offered by others regarding the scope of the problem. Even if credible scientists or scientific agencies have published relevant research, the political and economic implications of having to make reforms of various kinds lead some countries, particularly the poorer nations, to question the scientific basis for action. Typically, there are no results from a global research program in hand when countries are first asked to adopt a convention. It may seem strange that dozens of countries would pass the equivalent of a global law saying that there is a serious problem requiring global attention before they have the scientific research to back them up, but it is only after a convention is enacted that there is a chance of putting together sufficient funding to undertake a worldwide inquiry. What this means, though, is that the scientific basis for taking action is often scanty at the time countries are asked to act.

Global environmental agreements will always reflect political as well as scientific considerations. This means that decisionmaking is always politicized: Some countries are bound to resent the claims of others (and of nongovernmental entities) that they see as threats to their sovereignty. In general, throughout the treaty-making process, politics dominates scientific considerations. We often see “instructed science” masquerading as detached inquiry when experts from one or more countries are forced to take stands that are contrary to their best scientific judgments. If they don’t comply, they will be replaced. Thus, there is little or no balance between science and politics in treaty making and treaty enforcement.

No single institution has responsibility for building institutional treaty-making capacity. There is no central agency, no UN Environmental Treaty-making and Enforcement body, to oversee multilateral treaties dealing with natural resources or sustainable development. The UN Development Programme, the World Bank, the UN Environment Programme, and a long list of global agencies have all weighed in at different times, but there is very little coordination among the many independent treaty secretariats.

Ongoing North-South tensions get in the way. Efforts to formulate and implement new global environmental treaties have been slowed by continuing tension between developed and developing countries. The G-77 nations have repeatedly taken the stand that the developed countries should first do all they can to address various global environmental problems (that they caused) before asking the developing nations to put off development or take costly steps to reduce emissions. The developed world, after all, has been growing in an unsustainable fashion for many decades and is disproportionately to blame for current levels of pollution and unsustainable levels of resource use. In addition, the nations of the South often assert that it is unreasonable for the North to expect the South to take action when the North is unwilling to share new technologies or help to fund Southern capacity-building efforts.

The North asserts that most of the future population growth, increasing demand for energy, and pressure for greater food production will come from the South. Thus, the South ought to be held to the same environmental standards as the North, and the North refuses to sign until the South agrees to participate. The South pleads poverty and demands that the North show good faith by taking action first, sharing technology and providing funds for capacity-building. The two sides continue to wrangle about timetables and targets. The South seeks lower targets and longer time frames to meet them.

We have lost sight of the importance of “common but differentiated” responsibilities. When the Climate Change Convention was signed, it embraced the principle of common but differentiated responsibilities; that is, it acknowledged that global environmental treaties can succeed only if all countries agree to accept a common goal (such as reducing greenhouse gas emissions to sustainable levels) while recognizing that developing countries might need more time and extra assistance than developed countries (and perhaps be assigned less ambitious goals). The Montreal Protocol, for example, gave India and China an extra decade to reach the same targets as most of the developed world.

Some developing nations, such as Brazil, India, and China, the argument goes, should be capable of meeting more ambitious targets than the poorest of the developing nations. In addition, even among highly developed countries, distinctions based on past levels of effort, differences that are a function of resource endowments, and perhaps variations in capabilities already in place might justify variations in assigned goals.

Unfortunately, the United States has backtracked on its commitment to the principle of common but differentiated responsibilities. It is using the unwillingness of the larger, industrializing countries of the G-77 to accept the same targets and timetables as the United States as a reason not to ratify the Kyoto Protocol. Differing timetables and targets for various categories of countries make a lot of sense, but unless the principle is accepted globally, it will be difficult to convince most countries to endorse the treaties they have not yet ratified.

There are few incentives for treaty compliance and few penalties for noncompliance. Most global environmental treaties emphasize sharing the pain rather than sharing the gain. Countries are eager to sign multilateral trade agreements because they want the benefits that being part of a global trading system offers. They are much less inclined to sign environmental treaties, because the presumed benefits won’t be realized for some time (if they come at all) in the form of a cleaner, safer, more sustainable environment. The costs, however, must be paid now. Politicians with limited electoral time frames are willing to sign such treaties only if they are pushed to do so.

One way to change this calculus is to reward countries in the short term for joining environmental treaty regimes. If elected national leaders could demonstrate that signing an MEA entitled their country to immediate economic benefits, they would be more inclined to do so. Unfortunately, few if any revenue streams are linked to environmental treaty regimes, so it is difficult to see how economic benefits might materialize in the short term to encourage membership and compliance.

In addition, there are no financial penalties for noncompliance with global environmental regimes. It is hard enough to get reluctant countries to voluntarily embrace timetables and targets that impose costs but offer no short-term benefits; threatening them with financial penalties for noncompliance would mean that even fewer would join. Of course, if there are no penalties, many countries might be more inclined to sign, but they certainly would have no incentive to comply. Although shaming is one means of pressuring countries to live up to their obligations, it is far from foolproof.

We have allowed the absence of scientific certainty to forestall useful action. We can’t afford to wait for scientific certainty before we take global action. For many countries and many ecosystems, it will be too late. If we think there is a chance that critical resources and habitats are about to be eliminated, precautionary steps would seem to be in order. Certainly, with regard to fisheries, we know that waiting too long will cause a fish stock to crash. Once it falls below a sustainable level, it will not recover on its own. Because the systems that global environmental treaties are addressing are so complex, the notion that we should take no action until we are certain about the causes of each problem and the efficacy of proposed solutions means that we will always be too late.

Improving the treaty-making system

Even in the face of all the difficulties described above, there are four major ways in which the treaty-making system can be improved.

First, we should involve “unofficials” more directly in treaty drafting and enforcement. Treaties are official agreements among nations. As such, only elected leaders and their appointed agency staff are invited to the negotiation table to formulate the terms of each MEA. But national leaders often have short-term political agendas that encourage them to look the other way when global environmental treaties are being discussed. Within each country, civil society groups, including environmental and scientific nongovernmental organizations (NGOs), universities, and trade associations, are more likely to take the long view and accept the responsibilities of global stewardship. We need to alter our understanding of the global treaty-making system to encourage civil society representatives to sit at the table at all times.

Bringing civil society to the negotiating table would, in fact, merely be the next step in an evolving trend. Nongovernmental actors are often included in official national delegations. Although only countries will continue to be signatories, unofficials could bring additional scientific understanding and long-term perspective to almost all treaty discussions.

The UN, through its Economic and Social Council (ECOSOC), maintains a list of thousands of qualified NGOs. If an organization such as ECOSOC were to set and maintain standards for groups seeking to participate in global environmental negotiations, it would allay the concerns of some leaders that their in-country opponents will use treaty negotiations to embarrass them. ECOSOC could also invite clusters of nongovernmental actors to caucus and choose their own ad hoc representatives to each treaty negotiation in order to keep the number of unofficials to a manageable scale.

Finally, it would probably make sense to write into all new treaties monitoring and enforcement roles for civil society, which already exist in some treaty regimes. This would relieve much of the financial and administrative burden on secretariats.

A second way to improve the treaty system would be to set longer-term timetables and adaptive management targets. A huge amount of time is spent debating timetables and targets once a convention is adopted. But it is foolish to think that negotiators can anticipate what almost 200 countries will need to accomplish decades in the future. Rather, it makes more sense to set long-term performance goals (rather than intermediate standards) in each treaty and then specify contingent adaptive management obligations for nations that sign. For example, we can say, as the Intergovernmental Panel on Climate Change has, that the global objective is to stabilize atmospheric carbon dioxide concentrations at 450 parts per million by 2040 because that is what it will take (as far as we can tell at present) to avoid the worst effects of global warming. Then, taking an adaptive management approach, we can work back from that scientifically generated goal and set reduction targets for each decade that will make it possible for us to reach the 2040 goal. For each decade, we could then suggest a range of actions each country might take to meet its fair share of the overall reduction target. Then, we could hold countries accountable during mid-decade reviews for adjusting the mix of steps they are taking to ensure that their 10-year obligations are met. The 2040 performance goal that would drive such a treaty could be adjusted each decade as scientific understanding of the systems involved increases.
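
To make the back-casting and review cycle concrete, the calculation can be sketched in a few lines of code. The sketch below is purely illustrative: the starting level, the goal, the number of decades, and the function names are assumptions chosen for the example, not figures drawn from any treaty or from the IPCC.

    # Illustrative sketch of back-casting from a long-term performance goal and
    # then re-deriving targets at a mid-decade review. All numbers are hypothetical.

    def decade_milestones(current_level, goal_level, decades):
        """Interpolate intermediate targets between today's level and the goal."""
        step = (current_level - goal_level) / decades
        return [round(current_level - step * (i + 1), 1) for i in range(decades)]

    def mid_decade_review(observed_level, decades_remaining, goal_level):
        """Re-derive the remaining targets from where emissions actually are,
        so any shortfall must be made up rather than quietly forgotten."""
        return decade_milestones(observed_level, goal_level, decades_remaining)

    # A country emitting 100 units today, aiming for 40 units three decades out.
    print(decade_milestones(100.0, 40.0, 3))   # [80.0, 60.0, 40.0]

    # The first review finds emissions at 90 rather than on track toward 80,
    # so the remaining path steepens.
    print(mid_decade_review(90.0, 3, 40.0))    # [73.3, 56.7, 40.0]

The design point, not the arithmetic, is what matters: intermediate targets are derived from the long-term goal and are recomputed as monitoring data come in, rather than being frozen at the moment of signature.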

For an adaptive management approach to work, we need to invest globally in advanced monitoring technology and assign long-term monitoring responsibility as well as leadership of global research efforts to some entity, perhaps the UN, with sufficient funding to build the staff capability required. What we can’t do is allow each country to do whatever it wants at whatever pace it prefers.

Third, we should offer financial incentives for ratification and compliance by linking environmental treaties with trade and development assistance. One way to create incentives for countries, particularly developing ones, to sign global environmental treaties and take their fair share of obligations seriously is to link a variety of trade benefits and various forms of development assistance to membership in good standing in multilateral environmental treaty regimes. For example, the World Bank or the other multilateral financial institutions might offer favorable lending rates or even loan forgiveness to countries that sign, ratify, and implement key environmental treaties. Because the development projects these agencies fund sometimes accelerate certain environmental difficulties, this would make it easier for the banks to justify their investments in environmental terms. It might even make sense to require that countries sign, ratify, and implement key global environmental agreements before they are given international assistance for large infrastructure projects.

Because energy production from fossil fuels is at the heart of many environmental problems, it might make sense to encourage developed countries to add to their current taxes a tiny additional tax on all electricity produced from fossil fuels. If developed countries are going to adopt carbon taxes in the next few years as a means of achieving their Kyoto goals, why not divert a small amount of this money into a global fund to support sustainable energy projects? This kitty could be used to encourage developing countries to sign and implement numerous MEAs. The funds would go to countries meeting their global environmental responsibilities. Some of this money could also be allocated through the GEF to cover general capacity-building efforts in the developing world. However it is done, we need some way of generating a steady flow of funds to encourage developing countries to sign multilateral environmental treaties.

Finally, another incentive to sign MEAs might involve granting favorable technology-sharing agreements to countries implementing the most important global environmental agreements. Countries in the North would still reap a financial return on the sale of “green” technologies, but MEA-complying countries could be given a break on the price. This would be applied whether the technology was being sold or licensed by a country or a company and whether the licensee was a country or a company.

A fourth way to improve the treaty system would be to create standing regional science advisory bodies for clusters of related treaties rather than organize separate committees for every treaty regime. Developing countries are sometimes hard-pressed to find qualified scientists to represent them at all these meetings, and the fragmentation of scientific effort among separate treaty regimes is counterproductive.

A great deal of political acrimony surrounds the selection of science advisory committee members. Trying to balance membership between North and South and among regions in each treaty regime rarely leads to panels that are the best-equipped to provide ongoing technical advice or oversee global research efforts required to enhance treaty implementation. A smaller number of larger regional and sectoral committees would help.

Today, there is no official body with responsibility for improving the global environmental treaty-making system. Science associations around the world should take the lead in drawing attention to the ways in which the current system is failing. They should also suggest possible improvements. It is unlikely that individual governments will advocate for systemic changes while UN organizations are busy contending with each other as they try to hold on to their bureaucratic turf and cope with funding shortages. Thus, the global scientific community will be pivotal in effecting change.

Restructuring the Military

After more than five years of war in Iraq and almost seven in Afghanistan, the U.S. military is facing a crisis not seen since the end of the Vietnam War. Equipment shortages, manpower shortfalls, recruiting and retention problems, and misplaced budget priorities have resulted in a military barely able to meet the challenges the United States faces today and dangerously ill-prepared to handle future challenges.

The 9/11 terrorist attacks demonstrated that the most immediate threat to the United States is not from a conventional nation-state adversary but from an enemy that operates without regard for national borders and aims to surprise it with deadly attacks on its homeland and its interests around the globe. These attacks, and the subsequent war in Afghanistan, have also demonstrated that a weakly governed state or region half a world away could pose a direct threat to U.S. security.

The purpose of U.S. military power must be first and foremost to ensure the safety and security of the American people. In an era of globalization, however, homeland security will sometimes be adversely affected by events far from home. The United States will therefore continue to find itself in situations in which it is compelled to use its military power. Although caution must always be exercised when deploying troops abroad, the United States must recognize that its military will continue to be a force in high demand.

Yet today the military is lopsided. Its forces are being ground down by low-tech insurgencies in Iraq and Afghanistan, and the most immediate threat confronting the United States is a terrorist network that possesses no tanks or aircraft. Meanwhile, the Pentagon—the world’s largest bureaucracy—remains fixated largely on addressing the problems and challenges of a bygone era. This focus has left the military unmatched on the conventional battlefield but less prepared to deal with the emerging irregular or nontraditional challenges that the United States is most likely to confront.

As operations in Iraq eventually draw to a close, the United States must plot a new strategic direction for its military. Three areas are critical. First, the military must restructure, reform, and invest in new priorities in order to regain strategic balance and become more adept at four kinds of irregular or nontraditional missions: counterterrorism operations that seek to deny terrorist networks havens from which to operate; stability and reconstruction missions that seek to rebuild nations and restore order to regions where chaos reigns; counterinsurgency operations that seek to eliminate a hostile force by winning the support of the public; and humanitarian missions that seek to alleviate the suffering caused by natural or human-made disasters. Second, the military must develop a more integrated approach across all government agencies and with its allies and partners. Third, it must make investing in people, not hardware, its highest defense priority.

Militaries are notoriously resistant to change and therefore difficult to reform, but the current crisis presents the United States with the real opportunity to move the military in a new and better direction. The military faced a similar crisis in the wake of Vietnam, and as a result was able to dramatically restructure itself. It abandoned the draft and created the professional all-volunteer military; it invested in the training and development of its personnel through initiatives such as the Navy’s Top Gun program that enabled the United States to have a smaller, more effective fighting force; and it adjusted its force posture. How the United States rebuilds its military after Iraq is likely to shape its future and the security of the nation for a generation.

Fighting the last war

Although the Bush administration has repeatedly claimed that 9/11 changed everything, almost nothing has changed in terms of military strategy, force structure, and spending priorities. Since 9/11, the Pentagon’s civilian leadership has canceled only two major weapons programs, both for the Army. Meanwhile, the Pentagon continues to invest billions of dollars in developing the latest high-tech weaponry, which has little relevance to the fight against the terrorist networks and the irregular forms of warfare now confronting the United States.

The failure to shift budget priorities after 9/11 was not merely a case of inept management but was more a byproduct of the administration’s ideological and strategic vision of military transformation. Despite the fact that the United States repeatedly engaged in stability and peacekeeping operations during the 1990s in places such as Somalia, Haiti, Bosnia, and Kosovo, the Pentagon and the Bush administration viewed operations in weak and failing states as a distraction from planning for “real” conventional operations.

The administration pointed to a vision of the military drawn from the rapid victory in the first Persian Gulf War, in which U.S. firepower, mixed with new high-tech precision-guided weaponry, quickly and decisively destroyed the Iraqi army. After the war, some military planners contended that advances in information technology and precision munitions would transform warfare. Through the use of unmanned aerial vehicles, satellite technology, and other advances that would connect the troops on the battlefield to each other as well as to commanders back at the base, U.S. forces would operate with perfect vision of the battlefield, seeing the enemy before the enemy could see them. This effort would virtually eliminate the “fog of war” and allow the United States to achieve “information dominance,” resulting in near-perfect decisionmaking and enabling U.S. forces to rapidly and decisively destroy enemy targets with precision-guided munitions.

In this vision, future warfare would be determined largely by the speed at which one could destroy enemy forces through high-tech weaponry. Technological firepower would serve to reduce the number of troops needed on the battlefield. Secretary of Defense Donald Rumsfeld embraced this view and was determined to create a smaller, more agile, and more lethal force.

Although the investment in advanced technology and networked forces has made the U.S. military an even more formidable conventional fighting force, the administration’s vision of warfare played only selectively to the military’s strengths. Because the administration believed that the principal challenges would come in the form of traditional conventional threats emanating from nation states, its efforts to transform the military were not tethered to any particular threat but instead directed at developing capabilities in the abstract.

This vision of warfare was similar to those of the past, in which the United States would be pitted against a peer competitor resembling the Soviet Union or would confront other highly developed nation states, against which its technological advances would be decisive. As Lt. Col. Paul Yingling, an active duty officer who served in Iraq and has been highly critical of the current military leadership, noted in a May 2007 article in Armed Forces Journal, “the military learned the wrong lessons from Operation Desert Storm. It continued to prepare for the last war, while its future enemies prepared for a new kind of war.”

Retired Marine Corps Col. T. X. Hammes, an expert on irregular warfare, wrote in a 2004 book that the administration’s vision of transformation placed too much faith in technology and “simply disregard[ed] any action taken by an intelligent, creative opponent to negate our technology. In fact, they seem to reduce the enemy to a series of inanimate targets to be serviced.” This target-centric approach focused on winning battles, not wars. Conservative military historian Frederick Kagan pointed out in a 2006 book that “The history of U.S. military transformation efforts since the end of the Cold War has been the story of a continuous movement away from the political objective of war toward a focus on killing and destroying things.” This has left the military ill-prepared to deal with delicate stability operations, which rarely depend on the destructive power of force.

The misguided assumptions behind the administration’s military transformation were exposed by its utter failure to understand the magnitude of the task involved in invading Iraq. The lack of planning for stability and reconstruction operations after the initial invasion; the marginalizing of other government agencies involved in the operation, such as the State Department; the decisions to disband many of Iraq’s institutions, such as the army; and the failure to recognize that a U.S.-led occupation would instigate a backlash all reflected an ideologically naïve approach.

A new strategic context

In previous eras, wars between states were the most common sources of conflict and instability, as countries sought to expand their territories, gain access to resources, or increase their international prestige. Wars were therefore likely to be struggles of mass and will between self-interested nation states. As a result, a state’s military was organized and structured to defeat the military of another state.

For more than 40 years, the Pentagon devoted itself to confronting the Soviet threat. U.S. military doctrine, force structure, and weaponry were developed and shaped to address this challenge. When the Cold War ended, the military lost its primary organizing principle and faced an uncertain strategic environment. As democratic movements spread across the globe and new technologies enabled greater worldwide interconnection, modern states increasingly had little incentive to engage in interstate wars. An international consensus shunning interstate conflicts emerged, helped significantly by the creation of the United Nations but also by the fact that international prominence is now determined much more by a country’s economic, political, and cultural strengths than by its military might. This has led to a precipitous decline in the number of interstate conflicts.

Although the traditional context for military force has been changing, new constraints on the use of military force have also emerged. The spread of democracy, strongly promoted by the United States, has itself become a significant constraint on U.S. military power. The awakening of national and ethnic identities in response to decolonization and the spread of democracy has greatly limited the tolerance of peoples and nations for being ruled by outsiders. The rise of instantaneous global communications, which can project images from the battlefield around the globe, has made global opinion a potent mobilizing force. These trends constrain U.S. power just as space and distance limited previous great powers. The legitimacy of an action is now as important as the military capabilities used to conduct combat operations.

Yet although the strategic environment has become more complex, there is increasing awareness that situations within sovereign states may require collective or unilateral action. A rising concern for the United States and the international community has been the increasing number of conflicts within states, the prevalence of ungoverned and unaccountable regions or territories within weak or failing states, and the emergence of powerful nonstate actors that operate in the shadows of sovereign states or find havens in weak and failing ones.

Because the United States has global responsibilities, it will at times be called on to respond to global crises, whether resulting from the emergence of terrorist safe havens in Afghanistan, Pakistan, or Somalia; the collapse or overthrow of regimes such as North Korea; genocide in Darfur; natural disasters such as the 2004 tsunami in Indonesia or the 2008 cyclone in Burma; or continued instability in weak and failing states such as Haiti and Sudan. These nontraditional missions could entail stability operations ranging from limited peacekeeping operations to more extensive nation-building missions, as well as rapid responses to places struck by natural disasters or suffering from severe humanitarian crises.

What should now be clear is that the strength of U.S. firepower means that few enemies will ever confront the United States on a conventional battlefield; they will instead seek to confront it in ways that neutralize its advantage. The wars in Iraq and Afghanistan demonstrate what the United States can expect to confront in the future: an enemy that blends in with the population and uses available technology to create crude but deadly low-tech weapons such as improvised explosive devices. During the past five years, insurgents have honed and developed their techniques and have killed and wounded thousands of U.S. military personnel. Although the United States must continue to be prepared for the full spectrum of conventional threats, the challenge after Iraq will be rebalancing the military so that it can effectively engage these asymmetric threats.

To that end, the United States should adopt a national security strategy that seeks to integrate all elements of U.S. power. This applies not only to what the United States does abroad but also to how it makes national security policy at home. Such a strategy entails matching resources to priorities; ending the artificial divisions that exist between agencies involved in defense, homeland security, diplomacy, energy, and development assistance; and leading and using global and regional alliances to increase U.S. power, rather than taking a unilateral approach.

An effective new defense strategy must address the following five issues: the deteriorating state of the ground forces, the emergence of important new missions for the military, the crisis in the defense budget, the disjointed nature of the national security bureaucracy, and the need to improve the way the United States operates in the world.

Renewed focus on ground forces

The United States must rebuild, expand, and transform its ground forces to focus more on stability and peacekeeping operations. The ground forces have borne the brunt of the wars in Iraq and Afghanistan. The Army is on the verge of breaking, and both the Army and the Marine Corps are experiencing severe equipment shortages.

After the 9/11 attacks, the Bush administration had a tremendous opportunity to increase the size of the ground forces. Unfortunately, the president and Secretary Rumsfeld pursued a policy that actually sought to cut ground forces. Now, more than six years after 9/11, the Pentagon has finally called for a permanent increase of 92,000 soldiers and Marines, which during the next five years would increase the size of the active Army to 547,400 personnel and the Marine Corps to 202,000. Although this is too late to help in Iraq and Afghanistan, it is an important step in preparing for the future.

Opponents of expanding the force argue that the main lesson of Iraq is that the United States should not engage in these sorts of operations in the future, and therefore the military does not need larger ground forces. Although the United States may become more reluctant to deploy its ground forces in the near future, this does not obviate the fact that if ground forces are deployed, the most likely missions will be stability and reconstruction operations. It is an illusion to believe that ground forces after Iraq will once again just need to focus on traditional conventional warfare.

Expanding the ground forces over the long term will allow the military to conduct manpower-intensive missions more effectively. In Iraq, the pace of deployments has greatly strained the ground forces, yet future stability and reconstruction operations might require an even larger troop presence than the United States has fielded in Iraq. This means that U.S. ground forces must become more adept at these types of operations and must be sized appropriately to conduct such missions.

An expanded ground force would enable the active Army to become less dependent on the Army National Guard, which would allow the Guard to more effectively fulfill its homeland defense tasks. It would decrease the country’s excessive reliance on private contractors to perform military functions and ensure that the soldiers receive adequate time at home between deployments.

However, because of the recruitment and retention difficulties the Army is experiencing as a result of the Iraq war, any sizeable expansion of the ground forces in the short term will be difficult. Still, the military must ensure that an expansion is not achieved by lowering standards. Although dropping the ban on gays in the military and ending the restrictions on women in combat would help, it will take some time to expand the force, and any expansion should not be rushed.

Emerging new missions

Because of its unparalleled logistical and force-projection capabilities, the U.S. military will increasingly be called on to respond to humanitarian crises. Indeed, at times, the United States will resemble a global first responder. Although the United States is reluctant to be the world’s ambulance as well as the world’s policeman, the fact is that in many cases this country has no real choice but to respond. Initial U.S. hesitancy to respond to the Indian Ocean tsunami in 2004 brought with it a global rebuke. A failure to respond to these sorts of crises could lead to massive instability and a drop in U.S. prestige in the eyes of the world.

Effective action can make a tremendous difference. After the tsunami, the United States eventually sent 15,000 troops, a carrier task force, a Marine expeditionary force, and a flotilla of ships and aircraft to respond to the disaster. Admiral Michael Mullen, the chairman of the Joint Chiefs of Staff, noted that “we literally built a city at sea for no other purpose than to serve the needs of other people.” The response went a long way toward alleviating the humanitarian crisis and assisting in the region’s recovery. It also had a powerful effect on the U.S. image.

After the disaster, 79% of Indonesians said they had a more favorable view of the United States, and the country’s overall favorability rating rose more than 20%. Such a dramatic turnaround in the largest Muslim country in the world showed, as Mullen explained, another side of “American power that wasn’t perceived as frightening, monolithic, or arrogant. We showed them American power—sea power—at its finest, and at its most noble.” Mullen described the U.S. response to the tsunami as “one of the most defining moments of this new century.”

Responding to these sorts of disasters should be a core mission of the U.S. military, especially the Navy and the Marine Corps. To be effective, the United States must invest in new types of programs and equipment. In particular, the United States must develop its “sea-basing” capability, because it is likely that it will often be involved in areas of the world with weak or failing governments and limited land-basing options. The Maritime Pre-positioning Force is a squadron of ships that the military is designing to support sea-basing operations. This would allow for the rapid transoceanic movement of expeditionary forces, as well as goods, services, and additional personnel into regions with undeveloped or destroyed infrastructure.

Sea-basing is not a new concept. After the tsunami in 2004, the Navy essentially set up bases at sea off the coast of Indonesia. It did the same off the U.S. Gulf Coast in 2005 in response to Hurricane Katrina. Using the sea as a secure base of operations along the world’s coastlines provides unmatched mobility and power projection, allows the flexibility of having a significant base close to operations, and enables the United States to deploy power without depending on unreliable land bases.

In addition, because most of the world’s population lives within 200 miles of the sea, a key priority for the future Navy will be the capability to operate effectively along the world’s coastlines. The Littoral Combat Ship, which is being developed precisely for such near-shore operations, is vital to this mission. Better policing of the world’s oceans is also necessary to counter piracy and illegal trafficking more effectively.

Matching resources to priorities

The United States now spends more on defense than the rest of the world combined. But its military remains heavily focused on capital-intensive forms of warfare rather than the labor-intensive operations it is now conducting. It devotes too many resources to purchasing weapons that are more relevant to the threats of a bygone era than to those the United States confronts today, and it invests too little in basic research to ensure that the military maintains its technological edge. The nation must adopt a more balanced approach to meet current and future challenges.

After 9/11, there was a dramatic increase in defense spending. The Pentagon’s current procurement plans incorrectly assume that the regular defense budget will continue to grow just as it has for the past six years. Current Department of Defense (DOD) plans call for increasing spending on weapons systems by 6.5% annually during the next five to seven years.

The current approach of many conservatives is simply to throw more money at the growing problems afflicting each military service. Some have even suggested giving the Pentagon a fixed share of the gross domestic product. This approach is unsustainable. It is not in the U.S. interest to waste precious resources on unnecessary and outdated weapons programs that do little to enhance security. The country must start making the difficult choices that have been deferred by the current administration.

Paying for rebuilding and expanding the ground forces as well as boosting the quality of life for military personnel will require the ground forces to receive a larger share of the defense budget. Cuts will have to be made in some programs in each service to offset these costs. Weapons systems that were developed to address outdated threats and challenges should be cut or significantly scaled back. For example, the Navy’s DDG-1000 Destroyer, a new class of surface combatant, is extremely expensive, at nearly $5 billion per ship, and merely adds to capabilities in which the Navy already enjoys overwhelming superiority. The F/A-22 is an impressive fifth-generation stealth fighter, but it is also very expensive and was designed to achieve air superiority over Soviet fighter jets that were never built. The F-35 Joint Strike Fighter, which will be used by the Air Force, Navy, and Marines, as well as by our allies, would be a better investment.

In addition, reducing the number of strategic nuclear weapons to 1,000, as well as keeping national missile defense in an R&D mode, could save $10 billion to $15 billion per year and bolster U.S. credibility on nuclear nonproliferation. This could all be achieved at no cost to U.S. security, because these systems are designed to protect the United States from extremely improbable threats.

The United States should also seek to hedge against future conventional needs by adopting a mobilization strategy, which would entail, as laid out by Richard Betts of Columbia University, “developing plans and organizing resources now so that military capabilities can be expanded quickly later if necessary.” This would entail increasing the amount of spending on basic and applied research and ensuring that the country has the ability to ramp up production of conventional weaponry if needed.

Calling for the military to set spending priorities is not just code for cutting the budget. New systems and technology must be developed to equip the modern 21st-century force, although not every system has to move into full production.

Making future vehicles and weapons programs more fuel-efficient should be one of the most urgent technological priorities, as it would save billions of dollars, reduce the burdensome supply lines needed to keep vehicles running, and help decrease U.S. reliance on imported oil. According to Scott Buchanan of the DOD Office of Force Transformation, “The Department of Defense energy burden is so significant that it may prevent the execution of new and still evolving operational concepts, which require the rapid and constant transport of resources without regard for the energy costs.” It is not simply that fuel costs are rising; the cost to transport fuel is significantly higher than the cost of the fuel itself. New R&D investments are critically needed in this area.

The goal of Future Combat Systems, the Army’s largest procurement program, is to create a more mobile ground force, one that could be deployed quickly and efficiently around the world. The program seeks to modernize 15 of the Army’s more than 40 combat brigades and to replace the durable 70-ton Abrams tank with a lighter 20-ton vehicle that would be easier to deploy. A critical technological challenge is the need to develop lighter armor.

One of the main lessons of Iraq has been that although mobile ground forces made it possible to quickly topple Saddam’s regime, vehicles that lacked adequate armor protection found themselves vulnerable in the urban combat environment. Humvees, designed without armor, later had armor bolted on, reducing maneuverability, fuel efficiency, and reliability. The Army was forced to put a bulky cage of armor around its new Stryker vehicle to protect it from rocket-propelled grenades. Yet even with these additions, the undersides of the Army’s ground vehicles remained vulnerable to improvised explosive devices. This has led to the rollout of the new Mine Resistant Ambush Protected (MRAP) vehicles. Although the MRAP is useful in the intense urban combat in Iraq, its huge size and intimidating presence make it difficult to maneuver and awkwardly suited for peacekeeping missions in less hostile environments. In short, one of the main discoveries of the past five years is that the ground forces need vehicles that are light and mobile, as well as equipped with effective armor protection.

In the absence of a new technological breakthrough in armor protection, the ground forces will be forced to develop an even more diverse fleet of vehicles, ranging from a new mobile Humvee suitable for low-threat environments, to heavy and bulky MRAPs appropriate for the most violent of urban environments, to traditional tanks. Maintaining such a diverse fleet will be difficult in itself and will create tremendous deployment and logistical challenges.

To maintain global mobility, it is essential that the Air Force build the new KC-X tanker, which is vital to sustaining the global air bridge, and the C-17 Globemaster, which can transport large numbers of personnel and large amounts of cargo without needing long runways. This capability is important for disaster and humanitarian relief operations.

Developing and distributing technologies that improve the situational awareness of troops on the ground and in the air should also be a major priority. For too long, communications among different services—Army tanks, Marine amphibious assault vehicles, and Air Force and Navy fighters—have been disjointed. In some cases, soldiers in an Army vehicle have been unable to communicate with Marines in a vehicle just yards away. The networking of forces would decrease the number of tragic friendly-fire incidents as well as provide more effective intelligence. Systems such as the Joint Tactical Radio System—a DOD-wide program currently in development that will create an all-service family of radios—should greatly streamline the various communications systems. The development of Blue Force Tracking systems, which identify and track the location and movements of friendly forces, represents an important effort to better connect U.S. forces. Advances in pilotless aerial vehicles are also essential to increasing situational awareness and acquiring intelligence.

A more integrated approach

Becoming more adept at addressing nontraditional challenges will require rebalancing priorities within the Pentagon and throughout the entire national security apparatus. Successfully executing stability, peacekeeping, and counterinsurgency operations is as much a political and economic challenge as a military one. These operations are therefore challenges for the entire U.S. government. Facilitating a consensus-based political process, maintaining and improving the administrative capacity of the government, and promoting economic development are not military tasks but tasks for diplomatic and development professionals. The government needs a new blueprint for action to address future post-conflict stability operations.

The Goldwater-Nichols Act of 1986 enhanced coordination among the services and empowered the chair of the Joint Chiefs of Staff. The legislation reworked the command structure of the military in an effort to correct the counterproductive effects of interservice rivalries. Unfortunately, the model of cohesion developed among the various branches of the military has not been extended to the broader bureaucracy that oversees the nation’s warfighting, diplomatic, and aid agencies.

The U.S. government desperately needs to coordinate its operations more effectively. Ambassador James Dobbins, who oversaw stability and reconstruction operations in the Balkans and Afghanistan, said in a 2005 book that “until recently, the U.S. government as a whole has treated each successive new nation-building operation as if it were the first ever encountered, sending new, inexperienced personnel to face what should have been familiar problems.”

The establishment of the Office of the Coordinator for Reconstruction and Stabilization in the State Department was a productive step, but the office has not yet been provided with sufficient funding. Another encouraging step was the creation of the Provincial Reconstruction Teams in Afghanistan and Iraq. These teams are made up of a civil affairs military component, personnel from aid agencies and the State Department, and personnel from other agencies, including the Centers for Disease Control and Prevention and the Department of Agriculture.

Yet increased coordination goes only so far. Truly integrating operations will require the government to invest more in civilian agencies. As Gen. Peter Pace, then chairman of the Joint Chiefs of Staff, explained in testimony before the House Armed Services Committee in February 2007, “Our civilian agencies are underresourced to meet the requirements of the 21st century.” Secretary of Defense Robert Gates has called for adding $100 billion to the State Department budget.

Establishing a unified national security budget is the first step toward creating a more balanced national security structure. Under a unified budget, the president and Congress would finally be able to make cost-effective tradeoffs across agency lines and determine whether to put a marginal dollar into deploying national missile defense interceptors or building more Coast Guard cutters. This type of tradeoff cannot be made now because missile defense is funded in the Pentagon budget and the Coast Guard is funded in the Department of Homeland Security budget.

Building a global security architecture

A latent U.S. strength is an extensive network of alliances. Building alliances and global partnerships can create a force multiplier for the military and defray the costs of maintaining order in the international system. Yet conservatives too often approach military strategic planning with the view that the United States will be alone. This approach is shortsighted and places a greater burden on the taxpayer and the troops on the ground.

The United States must support the growing number of United Nations (UN) operations, which relieve some of the burden on the U.S. military. More than 80,000 UN troops from more than 100 countries are now conducting peacekeeping operations in 18 countries, at a cost of just $5.5 billion a year. A 2005 Rand study argued that the UN is generally more effective than individual nations at conducting peacekeeping or nation-building operations, stating that “The United Nations provides the most suitable institutional framework for most nation-building missions, one with a comparatively low cost-structure, a comparatively high success rate, and the greatest degree of international legitimacy.” Yet UN operations too often lack resources and personnel. It is in the U.S. national interest to support these operations financially and logistically.

The United States must also ensure that its forces are capable of operating effectively with allies. The Navy is developing the concept of a 1,000-ship Navy, which would leverage the fleets of allied or friendly countries to create a network of navies to better police the world’s oceans. The term “1,000-ship Navy” is a metaphor for the strength that could be gained if countries joined to prevent threats on the high seas. The goal is not to purchase additional ships but to change the way navies interact and operate together.

Transforming the military to meet the above challenges will be difficult. It will require a willingness to change and will demand leadership from the military, Congress, and the next president. As the 9/11 attacks grow more distant, it is past time to begin such a transformation.

Be Careful What You Wish For: A Cautionary Tale about Budget Doubling

Sometime in the near future, with timing dependent on the economy, the military actions in Iraq and Afghanistan, and other competing demands for government money, Congress will substantially boost R&D spending. It will do so in response to the great challenges facing the United States and the world—global warming, the threat of a global pandemic, rising energy and natural resource prices, and so on—whose solutions depend on increased scientific understanding and technological advance. It will also do so in response to the many reports, especially the National Academy of Sciences’ 2006 Rising Above the Gathering Storm, highlighting the importance of advancements in science and technology to U.S. economic well-being and national security.

Although federal R&D spending relative to gross domestic product has been declining during the past 20 years, between 1998 and 2003 the government increased spending in the biological and life sciences at rates that could presage a future spending boom. The Clinton administration began and the Bush administration completed a doubling of the budget of the National Institutes of Health (NIH).

At first glance, the doubling appeared to be an unalloyed benefit for medical research, but a closer examination reveals that scientists need to be careful what they wish for. The doubling did not appear to produce a dramatic outpouring of high-quality research. It failed to address critical flaws in federal research funding and actually exacerbated some existing problems, especially for younger researchers.

The negative consequences of the rapid run-up in research spending began to be felt immediately after the doubling ended, when the Bush administration and Congress essentially froze the NIH budget, resulting in a sizable drop in real spending. Indeed, one of the key lessons from the doubling experience is that if the aim is to raise aggregate R&D intensity, the United States should increase spending gradually and steadily rather than undertake a one-time surge and subsequent sharp deceleration in spending.

From 1998 to 2003, the NIH budget increased from about $14 billion to $27 billion, growing twice as rapidly over those five years as it had in the previous decade. Using the Biomedical R&D Price Index, the doubling increased real spending by 66%, or about 12% per year.
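
The arithmetic behind these figures is worth making explicit. The short sketch below simply treats the quoted numbers as given and backs out the implied rise in biomedical prices and the annualized real growth rate; it is a rough cross-check, not an official NIH calculation.

    # Back-of-the-envelope check on the doubling figures quoted above.
    nominal_1998 = 14.0   # NIH budget in billions of dollars (approximate)
    nominal_2003 = 27.0
    real_growth  = 0.66   # 66% real increase using the Biomedical R&D Price Index
    years        = 5

    nominal_ratio      = nominal_2003 / nominal_1998            # about 1.93
    implied_price_rise = nominal_ratio / (1 + real_growth) - 1  # about 16% over five years

    compound_rate = (1 + real_growth) ** (1 / years) - 1        # about 10.7% per year
    simple_rate   = real_growth / years                         # about 13.2% per year

    print(f"{implied_price_rise:.1%} implied price rise; "
          f"{compound_rate:.1%} compound vs. {simple_rate:.1%} simple annual real growth")

The roughly 12% per year cited in the text sits between the compound and simple averages, consistent with the approximation being made.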

Although there are situations in which sharp spending increases are preferable to steady growth, gradual increases generally are more efficient. Gradual buildups produce smaller increases in costs (because it takes time for people and resources to expand to meet the new demand, and costs tend to rise nonlinearly in the short run) and avoid large disruptions when the increase decelerates. Although it is difficult to determine whether a more gradual increase in NIH spending would have produced greater scientific output than the five-year doubling, the data are consistent with the notion that the spending surge did less than a gradual buildup of funds might have done. In a November 19, 2007, article in The Scientist, Frederick Sachs noted that the number of biomedical publications from U.S. labs did not accelerate rapidly after 1999, although it did continue to increase steadily (as it had in the years before 1999). In addition, from 1995 to 2005, the biological and medical sciences’ share of U.S. science and engineering articles did not grow despite these fields’ increased share of the nation’s basic research budget, according to 2007 National Science Foundation (NSF) data.

But the big problem with a sharp acceleration of spending occurs when it ends. People and projects get caught in the pipeline. Our analysis here draws on lessons that economists have learned from studying increases in capital spending using the accelerator model of investment in physical capital. In the accelerator model, an increase in demand for output induces firms to seek more capital stock to meet the new demand. This increases investment spending quickly. When firms reach the desired level of capital stock, they reduce investment spending. This process helps explain the volatility of investment that underlies business cycles. The R&D equivalent of demand for output is federal R&D spending, and the equivalent of investment spending is newly hired researchers. We find that the young people who build their skills as graduate students or postdocs during the acceleration phase of spending bear much of the cost of the deceleration.
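
The accelerator analogy can be made concrete with a toy simulation, shown below. The budget path, the hiring coefficient, and the one-to-one hiring rule are all illustrative assumptions, not estimates of actual NIH-funded employment.

    # Toy accelerator model: hiring of new researchers tracks the *change* in
    # funding, not its level, so a surge followed by a freeze produces a hiring
    # boom and then a bust. All numbers are hypothetical.

    funding = [14, 16, 19, 22, 25, 27, 27, 27, 27]   # stylized budget path, billions
    hires_per_billion = 1000                          # assumed new hires per added billion

    new_hires = [
        max(0, funding[t] - funding[t - 1]) * hires_per_billion
        for t in range(1, len(funding))
    ]
    print(new_hires)   # [2000, 3000, 3000, 3000, 2000, 0, 0, 0]

The toy example captures the essential point: hiring collapses not when funding falls but as soon as it stops growing, and the people caught by that collapse are the students and postdocs trained during the surge.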

The deceleration of NIH spending in the aftermath of the budget doubling was particularly brutal. Using the Consumer Price Index deflator, real NIH spending was 6.6% lower in 2007 than in 2004. It is expected to fall 13.4% below the 2004 peak by 2009, according to a 2008 analysis by Howard Garrison and Kimberly McGuire for the Federation of American Societies for Experimental Biology. Using the Biomedical R&D Price Index deflator, real spending was down 10.9% through 2007. The drop in the real NIH budget shocked the agency and the bioscience community because it rolled back a significant share of the gains from the doubling.

The deceleration caused a career crisis for the young researchers who obtained their independent research grants during the doubling and for the principal investigators whose probability of continuing a grant or making a successful new application fell. Research labs were pressured to cut staff. NIH, the single largest employer of biomedical researchers in the country, with more than 1,000 principal investigators and 6,000 to 7,000 researchers, cut the number of principal investigators by 9%. The situation was described in the March 7, 2008, Science as “a completely new category of nightmare” by a researcher at the National Institute of Child Health and Human Development, which was especially hard hit. “The marvellous engine of American biomedical research that was constructed during the last half of the 20th century is being taken apart, piece by piece,” said Robert Weinberg, a founding member of the Whitehead Institute, in the July 2006 Cell.

In economics, the optimal path to a larger stock of capital depends on the adjustment costs. Many models of adjustment use a quadratic cost curve to reflect the fact that when you expand more rapidly, costs rise more than proportionately. If adjustment costs take any convex form, the ideal adjustment path is a gradual movement to the new desired level. Empirical studies estimate that adjustment costs for R&D are substantial compared with those for other forms of investment. Because of the way research works, with senior scientists running labs in which postdocs and graduate students perform most of the hands-on work, much of the adjustment cost falls on young researchers. An increase in R&D spending increases the number of graduate students and postdocs hired. During the deceleration phase, a large cohort of newly trained researchers competes for jobs at a time when independent research opportunities may be scarcer than they were when these researchers entered the field. In the United States, indeed, much of the adjustment fell on postdocs, whose numbers increased rapidly during the doubling, with the greatest increase among those born overseas.
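
The convexity argument can be stated in one line. Under the quadratic-cost assumption mentioned above (our gloss on the standard textbook result, not a formula from the doubling debate), adjusting by an amount $I_t$ in a given period costs $c(I_t) = \tfrac{\gamma}{2} I_t^2$, so spreading a total adjustment $K$ evenly over $T$ periods costs

    \sum_{t=1}^{T} \frac{\gamma}{2}\left(\frac{K}{T}\right)^{2} = \frac{\gamma K^{2}}{2T},

which shrinks as $T$ grows. A one-period jump ($T = 1$) is therefore the most expensive possible schedule, and the cheapest path to any new level of research capacity is a gradual one.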

Funding and researcher behavior

A second key lesson of the budget doubling is that the way agencies divide budgets between the number and size of research grants will affect researchers’ behavior and thus research output.

Funding agencies and researchers interact in the market for research grants. An agency with a given budget decides how to allocate the budget between the number of grants and the size of grants. Researchers respond to the changed dollar value and number of research awards by applying for grants or engaging in other activities.

During the doubling, NIH increased the average value and number of awards, particularly for new submissions, which include new projects by experienced researchers as well as projects by new investigators. With the success rate of awards stable at roughly 25%—the proportion the agency views as desirable on the basis of the quality of proposals—the number of awards increased in proportion to the number of submissions. From 2003 to 2006, when the budget contracted, NIH maintained the value of awards in real terms and reduced the number of new awards by 20%. But surprisingly, the number of new submissions grew, producing a large drop in the success rate. In 2007, NIH squeezed the budgets of existing projects and raised the number of new awards.
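
Two identities govern the allocation being described here: the budget equals the number of awards times their average size, and the success rate equals awards divided by submissions. The sketch below spells them out with made-up round numbers, not actual NIH figures.

    # The bookkeeping behind a grants budget, with hypothetical round numbers.
    budget         = 2_000_000_000   # dollars available for awards in a year (assumed)
    average_award  = 400_000         # dollars per award (assumed)
    target_success = 0.25            # the roughly one-in-four rate mentioned above

    awards      = budget / average_award      # 5,000 awards
    submissions = awards / target_success     # 20,000 submissions consistent with that rate
    print(int(awards), int(submissions))      # 5000 20000

Holding the average award fixed, a bigger budget raises the number of awards one for one; holding the budget fixed, the success rate can be defended only by funding smaller awards or by hoping that submissions do not grow.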

The interesting behavior is the response of researchers to changes by funding agencies in the number of awards granted and thus the chance of winning an award. An increase in the number of awards increases the number of researchers who apply, because the chance of winning increases. As might be expected, researchers responded to the NIH doubling by submitting more proposals. Given higher grant awards and increased numbers of awards (with roughly constant funding rates), the growth of submissions reflects standard supply behavior: positive responses to the incentive of more and more highly valued research awards.

But researchers also increased the number of submissions when NIH research support fell. The number of submissions per new award and per continuation award granted rose from 2003 to 2007 after changing modestly during the doubling period. By 2007, NIH awardees were submitting roughly two proposals to get an award. Fewer investigators gained awards on original proposals, which induced them to amend proposals in response to peer reviews to increase their chances of gaining a research grant. The response of researchers in submitting more than one proposal to funding agencies produced the seemingly odd short-run supply behavior: more proposals with lower expected rewards. Faced with the risk of losing support and closing or contracting their labs, principal investigators made multiple submissions, although over time, normal long-run supply behavior would be expected to lead those who do not gain awards to leave research science and to discourage young people from going on in science. Who wants to spend time writing proposal after proposal with modest probabilities of success? It may also lead to more conservative science, as researchers shy away from the big research questions in favor of manageable topics that fit with prevailing fashions and gain support from study sections.

The message is that to get the most research from their budgets, funding agencies need good knowledge of how researchers are likely to respond to different allocations of funds. NIH, burned by its experience with the doubling and the ensuing cutback in funds, will, one hopes, respond differently to future increases in R&D budgets.

Young researchers take a hit

A third lesson of the doubling experience is that increased R&D spending will not resolve the structural problems of the U.S. scientific endeavor that limit the career prospects of young researchers and arguably discourage riskier transformative projects.

At the heart of the U.S. biomedical science enterprise are the individual (R01) grants that NIH gives to fund individual scientists and their teams of postdoctoral employees and graduate students. The system of funding individual researchers on the basis of unsolicited applications for support comes close enough to an economist’s view of a decentralized market mechanism to suggest that this ought to be an efficient way to conduct research, compared, say, to some central planner mandating research topics. The individual researchers choose the most promising line of research based on local knowledge of their special field. They submit proposals to funding agencies, where expert panels provide independent peer review, ranking proposals by their perceived quality according to criteria set out by the funding agencies. Finally, the agency funds as many of the highly ranked proposals as it can within its budget.

Although there are alternative funding sources in biomedical sciences, NIH is the 800-pound gorilla. For most academic bioscientists, winning an NIH R01 grant is critical to their research careers. It gives young scientists the opportunity to run their own lab rather than work for a senior researcher or abandon research entirely. For scientists with an NIH grant, winning a continuation grant is often an implicit criterion for obtaining tenure at a research university.

It is common to refer to new R01 awardees as young researchers, but this term is a misnomer. Because R01s generally go to scientists who are assistant professors or higher in rank, and because postdoctoral stints now last longer than they once did, the average age of a new R01 recipient was 42.9 in 2005, up from 35.2 in 1970 and 37.3 in the mid-1980s. In 1980, 22% of grants went to scientists 35 and younger, but in 2005, only 3% did. In contrast, the proportion of grants going to scientists 45 and older increased from 22% to 77%, and within the 45 and older group, the largest gainers were scientists aged 55 and older.

Most of this change is due to the structure of research and research funding, which gives older investigators substantial advantages in obtaining funding and places younger researchers as postdocs in their labs. Taking account of the distribution of Ph.D. bioscientists by age, the relative odds of a younger scientist gaining an NIH grant, as compared to someone 45 and older, dropped more than 10-fold. The doubling of research money did not create this problem, which reflects a longer-run trend, but neither did it address or solve it. The result is considerable malaise among graduate students and postdocs in the life sciences as well as among senior scientists concerned with the health of their field, as a variety of studies has documented. More money is not enough.

Bolstering younger scientists

A final lesson that we derived from the doubling experience is that funding agencies should view research grants as investments in the human capital of the researcher as well as in the production of knowledge, and consequently should support proposals by younger researchers over equivalent proposals by older ones.

There are three reasons for believing that providing greater research support for younger scientists would improve research productivity.

First, scientists may be more creative and productive at younger ages and may be more likely to undertake breakthrough research when they have their own grant support rather than when they work as postdocs in the labs of senior investigators. We use the word “may” here because we have not explored the complicated issue of how productivity changes with age.

Second, supporting scientists earlier in their careers will increase the attractiveness of science and engineering to young people choosing their life’s work. It will do this because the normal discounting of future returns makes money and opportunities received earlier more valuable than money and opportunities received later. If scientists had a better chance to become independent investigators at a younger age, the number of students choosing science would be higher than it is today.

The third reason relates to the likely use of new knowledge uncovered by researchers. A research project produces research findings that are public information. But it also increases the human capital of the researcher, who knows the new results better than anyone else and probably has the best ideas about how to apply them to future research or other activities. If an older researcher and a younger researcher are equally productive and accrue the same additional knowledge and skills from a research project, the fact that the younger person will have more years to use the new knowledge implies a higher payoff from funding the younger person. Just as human capital theory says that people should invest in education when they are young, because they have more years in which to reap the returns, it would be better to award research grants to younger scientists than to otherwise comparable older scientists.
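
The logic of both the discounting argument and the longer-horizon argument can be put in present-value terms. The sketch below, in Python, uses hypothetical parameters (a 5% discount rate and remaining research careers of 30 versus 15 years) to compare the discounted value of the same stream of returns from a grant in the hands of a younger and an older investigator.

    # Hypothetical illustration of the human-capital argument. The discount
    # rate and career lengths are assumptions, not figures from the article.

    def present_value_of_annuity(years, rate):
        """Present value of one unit per year for `years` years at discount `rate`."""
        return (1 - (1 + rate) ** -years) / rate

    DISCOUNT_RATE = 0.05
    YOUNGER_CAREER_YEARS = 30
    OLDER_CAREER_YEARS = 15

    pv_younger = present_value_of_annuity(YOUNGER_CAREER_YEARS, DISCOUNT_RATE)
    pv_older = present_value_of_annuity(OLDER_CAREER_YEARS, DISCOUNT_RATE)
    print(f"relative payoff of funding the younger investigator: {pv_younger / pv_older:.2f}x")

Under these assumptions the same grant is worth roughly half again as much in the hands of the younger investigator, simply because the returns accrue over more years.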

Making adjustments

Future increases in research spending should seek to raise sustainable activity rather than meet some arbitrary target, such as doubling funding, in a short period. There are virtues to a smooth approach to changes in R&D levels, because it takes considerable time to build up human capital, which then has a potentially long period of return. Since budgets are determined annually, the question becomes how Congress can commit to a more stable spending goal or how agencies and universities can offset large changes in funding from budget to budget. One possible way of dealing with this issue is to add extra stabilization overhead funding to R&D grants, with the stipulation that universities or other research institutions place these payments into a fund to provide bridge support for researchers when R&D spending levels off.
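
One way to picture the stabilization idea is as a small surcharge collected in growth years and paid out as bridge support when budgets flatten. The toy simulation below is entirely hypothetical: the 2% surcharge, the budget path, and the payout rule are assumptions chosen only to show the mechanism, not a proposal for specific parameters.

    # Toy illustration of a stabilization overhead fund; all parameters are
    # hypothetical.

    SURCHARGE = 0.02                                     # assumed 2% set-aside on grants
    budgets = [100, 115, 130, 150, 150, 148, 148, 150]   # hypothetical agency budget path

    fund = 0.0
    prev_budget = None
    for year, budget in enumerate(budgets, start=1):
        fund += SURCHARGE * budget                       # collect the set-aside
        bridge = 0.0
        if prev_budget is not None and budget <= prev_budget:
            bridge = min(fund, 0.05 * budget)            # pay bridge support, capped at 5% of budget
            fund -= bridge
        print(f"year {year}: budget {budget}, bridge support {bridge:.1f}, fund balance {fund:.1f}")
        prev_budget = budget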

To deal with some of the structural problems in R&D funding, future increases should be tilted toward younger scientists, in line with our earlier argument that younger investigators have longer careers during which to use newly created knowledge than do equally competent older investigators. In addition, given multiple applications and the excessive burden on the peer review system, agencies should add program officers and find ways to deal more efficiently with proposals, as indeed both NIH and NSF have begun to do.

In sum, there are lessons from the NIH doubling experience that could make any new boost in research spending more efficacious and could direct funds to ameliorating the structural problem of fostering independence for the young scientists on whom future progress depends.

Reexamining the Patent System

Is the patent system working? It depends on whom you ask. Which industry, upstream or downstream firms, public companies or small inventors? Opinions are plentiful, but answers supported by data are few. The patent system is at the heart of the knowledge economy, but there is surprisingly little knowledge about its costs and benefits. If the system is to promote innovation as effectively as possible, we need to know much more about how patents are used and licensed and what effects they have on innovators and business practice. Since innovation is one of the key engines of economic growth, the cost of a dysfunctional, or merely suboptimal, patent system could be substantial.

Signs of dysfunction are spreading, stimulating interest in legislation to reform our patent system. But to date, progress toward enacting any reforms has been stymied by inter-industry disputes over key provisions. Although patent reform is often contentious because of the divergent economic interests at stake, the current political struggle between the pharmaceutical and biotech industries on one side and information technology and financial services on the other is unprecedented.

In a recent acclaimed book, Patent Failure, economists James Bessen and Michael Meurer review the literature on the private value of patents and costs of patent litigation. They conclude that the system now functions effectively only for the pharmaceutical and chemical industries. Whereas 20 years earlier the system provided a net benefit across the board, in effect it now imposes a tax on other industrial sectors.

The conflict raises a fundamental question: Is a one-size-fits-all system viable in an age when technologies and the processes of innovation are so diverse? Given the high stakes, we need to know the answer to this question. If the end result of a uniform system is to favor innovation in one field at the expense of others, then the patent system will effectively influence the allocation of capital to different economic activities, resulting in an unintentional form of industrial policy.

At least one of the underlying problems is easy to recognize. In pharmaceuticals, there is a close relationship between patents and products: Blockbuster drugs are characteristically very dependent on a single primary patent. In information technology, by contrast, there is a great distance between a patent and a product.

Computers and computer programs may contain thousands of patentable functions. Each may represent only a tiny fraction of the market value of the product, which may in fact derive primarily from product design and the integration of components. But a patent dispute over a single component function may result in an injunction against the entire product line. This creates an incentive for opportunistic patent owners to “hold up” companies that have made a very large investment in a product. In other words, a patent owner who obtains an injunction can potentially extract settlements that approach the financial and opportunity cost of withdrawing the product from the market and redesigning, remanufacturing, and remarketing it.

Although there are numerous press reports describing the tactics of such “patent trolls,” we actually know relatively little about how common or successful holdups are. What we do know is limited to just a few industries, such as semiconductors and cellular telephony, and even those examples are disputed.

Why the mystery?

The reasons why our knowledge about patents is so limited, and why we have not even begun to collect sufficient data, are numerous. Many stem from an overly simplified, idealized understanding of how patents work. Patents are commonly understood to be protections against theft by unscrupulous imitators. And although they are often seen as an affirmative right to exploit a technology, a patent is in fact a right to prevent others from using the patented invention. Thus, a patent holder can be blocked from using his or her own patented invention because of patents owned by others.

The patent system is a form of government regulation, but it regulates indirectly, and mostly out of public view. The U.S. Patent and Trademark Office (PTO) grants patents as a private right, and it is up to the patent owner whether and how to assert that right. Thus, patent issues are framed narrowly within disputes arising between two parties under particular circumstances. And the overwhelming majority of these conflicts go undocumented because they stop well short of being litigated in court.

Economic considerations play no explicit role in the patent system unless and until the ultimate train wreck occurs—that is, when litigation results in a finding of infringement and damages need to be calculated. In a rights-based legal system managed by specialized lawyers, patents tend to be seen as absolute entitlements: as ends in themselves rather than as tools for promoting innovation. The PTO focuses all its attention on the original decision to deny or grant a patent. Once patents go out the door, the PTO does not attempt to monitor their value, to document how they are used, or to uncover how they are abused.

A dearth of information on assertions, licensing contracts, settlements, and other business aspects of patents means that there is little empirical foundation on which to develop sound policy. Practically all we know about these business activities is anecdotal. This is of little help when each patent is, by definition, unique, and when the business context varies so greatly.

For all the talk of “patent quality,” there is no consensus on what the term means or how it can be measured consistently, let alone how problems with quality should be fixed. Deep confusion exists over the obligation of technology creators and users to read, assimilate, and evaluate the massive database of current patents in order to avoid infringement.

Even at the level of individual patents, there is often considerable uncertainty about where the boundaries lie, whether the patent is valid and enforceable, and whether a particular product actually infringes. Patents are intended to disclose new knowledge to the public, but they are written by lawyers for lawyers. If you want to know what the patent means and whether it is valid, you need legal assistance. If you want to know whether your product infringes, you are advised to get a legal opinion. In 2007, a legal opinion on validity cost an average of $13,000, and an opinion on whether a product infringes cost another $13,000.

But $26,000 does not buy certainty. Because the interpretation of claims at trial is reversed on appeal one-third to one-half of the time, it is difficult to see how our patent system provides adequate notice of existing or pending property rights to other inventors and the public in general. With this uncertainty and the sheer volume of patenting in some fields, inadvertent infringement becomes a nearly unavoidable hazard for innovators seeking to bring complex products and services to the marketplace.

The lack of information on the cost-effectiveness of the patent system is inexcusable for a government function that has come to play such a pervasive role in today’s knowledge-driven economy. Although patent policy is inevitably determined via the adjudication of lawsuits, judges lack an adequate framework for evaluating the efficacy of the system as a whole. Judges correctly point to Congress as the proper arbiter of policy, but Congress also is bedeviled by a lack of adequate data. Besides, Congress is burdened with more politically salient issues.

Thus, individual patent cases are decided because they must be, while meaningful policy decisions are deferred for lack of data. No institution has the responsibility to collect data on patents and how they are used, and no organized constituency demands it. There is constant pressure to keep patent application fees low to encourage more patenting, but the much higher legal and business costs of patent practice and litigation are not officially monitored or measured.

Data needs

We need a patent database that, like a land title registry, shows a chain of title and tells us who has what interests in the patent. We need unique identifiers for assignees and a database that tells us whether an assignee is independent or owned by another firm.

It would be useful to have estimates of the number of innovations firms make, what proportion they choose to patent, and with how many patents. We need an understanding of how innovators cope with the problem of inadvertent infringement, especially in areas such as software, where low barriers and/or intense competition result in prolific, widespread innovation at many levels of granularity and abstraction.

We need to know about the life of patents after they are issued and before they become a matter of public record in litigation, because many are asserted or licensed in some form but never fully litigated. It would help to know the frequency and cost of searching patents and other prior art to avoid infringement, the frequency of letters putting innovators on notice of patent claims, and the outcomes of those letters. We need information on the number and terms of settlements, patent-licensing agreements, and transfers of patents. We need to know whether these agreements are really manifestations of technology transfer or capitulation to legal bullying. Although gathering such information must respect the need for confidentiality in some aspects of business practice, acquiring and analyzing as much data as possible is essential to understanding and promoting the efficiency of our nascent markets for technology.

R&D and patent information is generally available for larger publicly held firms, but newer and smaller firms are underrepresented in the available databases. And although the Census Bureau collects data on R&D for smaller firms, accessing these data can be difficult. This is unfortunate, because understanding how intellectual property affects decisions to form and invest in new companies is essential to understanding the growth of the economy.

Most of the existing databases focus on manufacturing firms, which are traditionally the predominant users of the patent system and where there is a consensus about the definition of R&D. But with more permissive standards for patentable subject matter, service firms have increasingly turned to patents on business models and practices. The service sector, which accounts for the majority of economic output, can no longer be ignored simply because it is difficult to determine what should be considered R&D in a service firm. If our definitions of R&D require refinement, we should begin that process today.

Where should economic insight be built into the system? At one level, investors need meaningful reporting about patents as sources of value as well as potential liability. But we also need statistical reporting that helps us understand how well patents work to promote innovation in different fields. Indeed, the Department of Commerce has already launched an effort to develop metrics for innovation, and better patent data could be a key component.

Perhaps the most obvious solution is to ask the PTO to assume greater responsibility and accountability for the performance of the system. To its credit, the PTO has just announced that it is hiring a chief economist—a step that is long overdue. But will it be possible for the insights of the chief economist to counterbalance the demands of hundreds of thousands of patent applicants and their attorneys for making patents easy to get? The PTO’s commendable efforts at reforming the application process have already met a tidal wave of opposition.

To insulate economic analysis from political influence or capture by particular patent interests, an autonomous institute could be established, perhaps housed in the PTO but independent of PTO administrators. This institute could be a critical resource not only for the PTO in its advisory functions but also for Congress, the courts, and other agencies.

To ensure independence, the institute could be overseen by a council of agencies with an interest in innovation, along with an advisory board that represents the best disinterested experts as well as the “users” that make up the PTO’s present public advisory committee. This institute would craft and support a research agenda to advance our understanding of the crucial tradeoffs involved in our efforts to improve the functioning of the patent system. The institute might be funded by a very small share of patent maintenance fees, which presently bring in over $500 million a year, and most of the research would be performed through grants or contracts.

This modest step would advance knowledge about patents and their effects on knowledge, technology, and innovation—the very heart of today’s economy. It would help give credence to the rhetoric we often repeat as a matter of faith: that patents are tools for innovation and economic growth. Tools need at times to be calibrated, sharpened, and augmented. Sometimes, they may need to be traded for other tools better suited to the problem at hand and with fewer unintended side effects.

Reducing Proliferation Risk

The use of nuclear energy to produce electricity is expanding worldwide, and as it does, the danger that nuclear weapons will be developed alongside it is increasing. Historically, most of the nuclear power industry has been concentrated in the United States, Europe, and Japan. Today, however, many countries are planning reactors and making choices about their fuel supply that will determine the risk of weapons proliferation for the next generation. Although the countries that have traditionally called the tune on nuclear power policies have waning influence on who goes nuclear, they may be able to affect how it is done and thus reduce the proliferation risk. The key is rethinking the fuel cycle: the process by which nuclear fuel is supplied to reactors, recycled, and disposed of.

There is no nuclear fuel cycle that can, on technical grounds alone, be made proliferation-proof against governments bent on siphoning off materials to make weapons. Opportunities exist for the diversion of weapons-usable material at the front end of the fuel cycle, during which natural uranium is enriched to make reactor fuel. Opportunities also exist at the back end of the cycle to extract fissile material from the spent fuel removed from reactors. Although a complete siphon-proof system is impractical, one maxim can guide our thinking on lowering the odds of proliferation: The more places in which this work is done, the harder it is to monitor.

Weapons have been produced from both ends of the fuel cycle by countries as diverse in industrial capacity as India, Israel, North Korea, Pakistan, and South Africa. (South Africa abandoned its nuclear weapons in 1991. Libya started down the weapons road and gave it up. Iran’s intentions are still uncertain.) The level of technical sophistication of these countries ranges from very low to very high, yet all succeeded in building a weapon once they had the fissile material. The science behind nuclear bombs is well known, and the technology seems to be not that hard to master or acquire.

There is no shortage of good ideas for creating a better-controlled global fuel cycle based on minimizing the number of fuel-handling points. Mohamed ElBaradei, head of the International Atomic Energy Agency (IAEA), and President Bush, for example, have both suggested plans that would internationalize the fuel cycle. The problem is that such ideas, although good in theory, need to get the incentives for participation right. So far, these plans are the result of the nuclear haves talking among themselves and not talking to the nuclear have-nots. While the talking proceeds, governments that are new to the nuclear game are concluding that they may have to build their own fuel supply systems that are less dependent on suppliers with their own political agendas. That outcome must be avoided. The problem needs urgent attention because it will take a generation to build a credible international fuel cycle. If serious efforts do not begin today, then the have-nots will probably build their own fuel supply systems and the dangers of proliferation will become much greater.

Serious plans to tame proliferation by nation-states must include carrots to make any system attractive and sticks to provide effective monitoring and credible sanctions. Currently, incentives are in short supply, inspections are not as rigorous as they could be, and there is no consensus on the rigor of sanctions that should be applied.

A well-designed international fuel cycle could create many carrots. The cost to a country that is new to nuclear power of setting up its own enrichment or spent-fuel treatment facilities is enormous. Countries with a new or relatively small nuclear program will strongly favor an international approach if they come to trust the suppliers of the fuel and other needed services. Today, the only places to purchase enrichment services are the United States, Western Europe, and Russia. This group is too narrow in its political interests to constitute a credible supply system. Others must be encouraged to enter the fuel supply business. A well-managed system in China would add considerably to political diversity in the supply chain.

The back end of the fuel cycle is technically more complicated to deal with and so are the systems that need to be implemented. The spent-fuel stage cannot be made as bulletproof as enrichment, but it can be improved through the same approach that is needed at the front end: a credible international system.

The coming expansion of nuclear power can be a security as well as an environmental blessing (after all, nuclear plants emit essentially no greenhouse gases in operation and can help us deal with climate change), but only if it comes without a great increase in the risk of the proliferation of nuclear weapons. It is clear to scientists like me that we cannot fix the proliferation problem ourselves. We can tell the diplomats where the biggest holes are and suggest how they might be plugged. The plugs are not technical, but political and diplomatic. What is needed is for the haves to spend less time talking to each other and to begin more serious discussions with the have-nots on what a new internationalized fuel cycle should be.

Enrichment: Plugging leaks

Designing a fuel-supply system requires focusing on light-water power reactors (LWRs), which make up nearly all of the world’s nuclear power plants. Although many variants of advanced LWRs are being developed and marketed these days, as far as proliferation risk is concerned they are all basically the same, relying on enriched uranium for fuel and producing fissile plutonium in their spent fuel.

Natural uranium contains only a tiny fraction (about 0.7%) of the isotope (U-235) that LWRs need to make energy commercially. All LWRs need fuel that is enriched by a factor of six or seven, to a level of 4 to 5% U-235. Any enrichment plant has the potential to enrich far beyond this target, to the level needed to make a nuclear weapon. To make a reliable weapon, a potential proliferator will want 90% enriched material. This is front-end proliferation.

If a facility big enough to do the enrichment for a power plant already exists, it takes only a small increment in capacity to produce the material for a few uranium weapons. A nuclear power plant with an output of 1,000 megawatts of electricity [one gigawatt electric (1 GWe)] requires about 20,000 kilograms (kg) of new enriched fuel per year, which would come from nearly 200,000 kg of natural uranium. Diverting only about one-twentieth of this material would be sufficient to produce enough highly enriched uranium for a single weapon.
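
A back-of-the-envelope uranium balance shows why. The Python sketch below uses round-number assumptions that do not come from this article: natural uranium at about 0.7% U-235, reactor fuel at 4.5%, enrichment tails of 0.3 to 0.7%, and roughly 25 kg of 90% enriched uranium (the IAEA’s “significant quantity”) as enough for one weapon.

    # Illustrative enrichment mass balance; assay and weapon-quantity figures
    # are assumptions, not values taken from the article.

    def feed_per_unit_product(x_product, x_feed, x_tails):
        """Feed mass needed per unit of product mass, from a simple U-235 balance."""
        return (x_product - x_tails) / (x_feed - x_tails)

    # Annual reload for a 1-GWe light-water reactor: about 20,000 kg of 4.5% fuel.
    leu_reload_kg = 20_000
    natural_feed_kg = leu_reload_kg * feed_per_unit_product(0.045, 0.00711, 0.003)
    print(f"natural uranium feed: about {natural_feed_kg:,.0f} kg per year")

    # LEU needed as feed to top up to ~25 kg of 90% enriched uranium.
    heu_kg = 25
    leu_needed_kg = heu_kg * feed_per_unit_product(0.90, 0.045, 0.007)
    print(f"LEU feed for one weapon quantity: about {leu_needed_kg:,.0f} kg "
          f"({leu_needed_kg / leu_reload_kg:.0%} of one annual reload)")

The first figure reproduces the 200,000 kg of natural uranium cited above, and the second comes out at a few percent of an annual reload, comfortably within the one-twentieth figure.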

The preferred technology for enrichment today is the gas centrifuge. These are not simple devices, and the technology of the modern high-throughput centrifuge is not easy to master. The centrifuges of the Khan network, used by Pakistan for its uranium weapons, are primitive by today’s standards. Making enough fuel for Iran’s Bushehr 1-GWe reactor would, for example, require about 100,000 of the Pakistani P1 centrifuges, a very large plant. But it takes only about 1,500 more centrifuges, fed with the output of the big plant, to make enough 90%-enriched material for one uranium weapon per year.

The Iranian IR-2 centrifuge is said to be about three times more effective than the P1. Iran is clearly mastering the technology. The most modern Western centrifuges are still more efficient by another factor of 3 to 10. The proportions stay the same, however. A plant based on IR-2s would require 35,000 units to fuel a power reactor, but only about 250 more to produce a weapon.
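
The point that the proportions stay the same can be checked with the standard separative-work (SWU) formula. The sketch below uses assumed assays (4.5% fuel, 0.3 to 0.7% tails, about 25 kg of 90% material for a weapon) that are not taken from the article; whatever the capacity of an individual machine, the weapon-sized increment comes out at roughly 1% of the separative work of the fuel plant, which is why the centrifuge counts scale together.

    # Illustrative separative-work comparison; assays and the weapon quantity
    # are assumptions, not figures from the article.
    from math import log

    def separative_potential(x):
        """The standard value function used in SWU accounting."""
        return (2 * x - 1) * log(x / (1 - x))

    def swu(product_kg, x_product, x_feed, x_tails):
        """Separative work (kg-SWU) to make product_kg at x_product from feed at x_feed."""
        feed_kg = product_kg * (x_product - x_tails) / (x_feed - x_tails)
        tails_kg = feed_kg - product_kg
        return (product_kg * separative_potential(x_product)
                + tails_kg * separative_potential(x_tails)
                - feed_kg * separative_potential(x_feed))

    reactor_reload_swu = swu(20_000, 0.045, 0.00711, 0.003)  # annual 1-GWe reload
    weapon_swu = swu(25, 0.90, 0.045, 0.007)                 # topping LEU up to 90%

    print(f"annual reactor reload: about {reactor_reload_swu:,.0f} SWU")
    print(f"one weapon quantity:   about {weapon_swu:,.0f} SWU")
    print(f"increment needed for a weapon: about {weapon_swu / reactor_reload_swu:.1%} of the plant")

That roughly 1% ratio is consistent with the counts cited above: 1,500 of 100,000 P1 machines, or a few hundred of 35,000 IR-2s.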

Iran’s insistence on developing its own enrichment capacity has led to much discussion of how to do enrichment in a more proliferation-resistant fashion. The main focus has been on preventing nations new to nuclear power from developing their own enrichment capacity by creating an attractive alternative. The exemplar is South Korea, which obtains 39% of its electricity from nuclear power and, by its own choice, does no enrichment. It has saved a great deal of money by not enriching. Making this kind of arrangement acceptable to countries that are not firm allies of those with enrichment facilities requires some mechanism to guarantee the fuel supply. Without such a mechanism, it is doubtful that any sensible country interested in developing nuclear power would agree to a binding commitment to forego its own enrichment capability. The tough cases are not the South Koreas of the world, but states such as Malaysia, Indonesia, and Brazil (which has two reactors and talks of building more), with growing economies that are more suspicious of their potential suppliers.

ElBaradei, in his proposal, envisions the IAEA serving in some way as a guarantor of fuel supply to those nations willing to forego developing their own enrichment capacity. There are two issues that need to be addressed if such a scheme is to be accepted. The big issue is sufficient diversity of supply, so that countries foregoing their own enrichment facilities are reasonably assured of access to fuel when needed. A secondary issue is the control of entrants into the supply chain. If the forecasts of the expansion of nuclear power are anywhere near correct, there is a great deal of money to be made by supplying enrichment services, and it may be that new entrants into the business will want a share.

Security of supply really comes down to a diversity of suppliers, both politically and commercially. The world has been through this before with oil during the 1970s, when OPEC dominated world supply and its Arab members cut off exports to countries that supported Israel. Today, the oil supply comes from more places, and supply concerns are less about oligopoly control than about the adequacy of resources.

At present, there are only four places to purchase enrichment services: the United States Enrichment Corp. (U.S.-owned and operated), Eurodif (internationally owned but French-operated), URENCO (internationally owned and operated), and Russia (state-owned and operated). This is not much diversity of supply, either politically or commercially.

The issue in the ElBaradei model is how to make the security of supply credible to countries that purchase enrichment rather than doing it themselves. No nation can afford to put a major part of its electricity supply at the mercy of a supplier who might cut it off for political reasons.

The IAEA proposal includes the establishment of an emergency fuel bank, which would ensure that those who agreed to forswear enrichment would be guaranteed continuity of supply. The way to do this is to stockpile enriched uranium. With facilities in 16 different countries that can fabricate fuel from enriched uranium, the necessary diversity of services is ensured. It takes only about 90 days to fabricate a fuel reload for a 1-GWe reactor if the mechanical parts and assemblies are available. If they have to be ordered and newly built, the entire process can take much longer. The security of supply is ensured if each reactor has the spare component parts for a reload minus the enriched uranium, and the IAEA can supply the enriched uranium at short notice. The new fuel could then be supplied in 90 days.

An expansion of nuclear power requires an expansion of enrichment services. Even if the fraction of electricity coming from nuclear energy (now 16% worldwide) merely remains constant as energy use increases, the world will need at least a twofold expansion in enrichment services by the year 2050. Many new suppliers can enter the business. China should enter the commercial enrichment business to enhance political diversity, and other countries should be encouraged to develop internationally owned and operated enrichment services under appropriate safeguards to increase the diversity of supply. New actors such as Australia, Canada, or Mongolia, which have large supplies of uranium, may want to move up the value chain by entering the enrichment field rather than merely supplying uranium ore.

President Bush has proposed an even broader proliferation prevention scheme that looks at both the enrichment and spent-fuel treatment phases of the nuclear fuel cycle. The Bush system envisions that only the countries with existing facilities would serve as the suppliers of enrichment services. But this does not increase the diversity of suppliers and might be seen by potential users as too risky. There have been discussions with other countries as part of a program called the Global Nuclear Energy Partnership (GNEP). However, these discussions have only begun to engage some of the countries that might start a nuclear power program in the future. GNEP will not go anywhere as a concept, much less as a real program, until these talks get serious. Doing enrichment only in those countries that already do it is unlikely to work.

Spent-fuel problems

The plutonium in spent fuel is the proliferation concern at the back end of the fuel cycle. A standard 1-GWe reactor produces roughly 200 kg of plutonium per year, enough in principle for about 20 weapons. This material is called reactor-grade plutonium, to contrast it with weapons-grade plutonium. The difference is in the amount of isotopes of plutonium (Pu) other than the weapon maker’s favorite, Pu-239. Weapons-grade is about 95% Pu-239, whereas reactor-grade is about 50% Pu-239. Mixtures with considerably less Pu-239 than weapons-grade can in principle be made into a weapon, but they generate much radiation and heat, making weapons harder to build reliably. The fraction of this “bad” plutonium depends on how long the original fuel stays in the reactor—three years for reactor-grade plutonium but no more than three to four months for weapons-grade. No one has used reactor-grade material as a source of weapons, and the feasibility of making a weapon from it is still being reviewed by experts. For now, it is best to assume that it can be used and that the material is a proliferation risk.
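
The arithmetic behind these weapon counts is straightforward; the only number not in the text is the amount of plutonium assumed per weapon, taken in the sketch below as the IAEA significant quantity of 8 kg (the “about 20 weapons” above corresponds to a rounder 10 kg per weapon).

    # Rough plutonium arithmetic. The per-weapon quantity is an assumption
    # (the IAEA significant quantity is 8 kg); the 200 kg/yr figure is from the text.

    PU_PER_GWE_YEAR_KG = 200
    PU_PER_WEAPON_KG = 8

    weapons_per_year = PU_PER_GWE_YEAR_KG / PU_PER_WEAPON_KG
    print(f"1-GWe reactor: roughly {weapons_per_year:.0f} weapon quantities of plutonium per year")

    # Scaled down to a 100-MWe reactor with four years of spent fuel on hand
    # (the breakout situation discussed later in this article):
    pond_inventory_kg = PU_PER_GWE_YEAR_KG * (100 / 1000) * 4
    print(f"100-MWe reactor, four years of spent fuel: about {pond_inventory_kg:.0f} kg, "
          f"or {pond_inventory_kg / PU_PER_WEAPON_KG:.0f} weapon quantities")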

When spent fuel comes out of a reactor, the radiation is so intense that some form of cooling is required to keep the fuel rods from damage that would result in the escape of radioactive material. The rods are put in cooling ponds where water keeps their temperature at a safe level. Typically, the rods stay in the ponds for at least four years. By then, the radioactivity and the associated heat generated have decayed enough to allow the rods to be removed from the ponds and put into dry casks for storage or shipping offsite. From this point, practices diverge.

Until recently, the United States supported a “once-through” fuel cycle in which the spent fuel is kept intact, with plans to eventually ship it to a geological repository for permanent entombment. The intense radiation from the fission fragments in the spent fuel forms a natural barrier to theft or diversion. Any potential thief would receive a disabling and lethal radiation dose in a matter of minutes.

Other countries have pursued another approach. France, which has the best-developed and most carefully thought-out program, is an example. It reprocesses the spent fuel to extract the plutonium, mixes it with unenriched uranium to make a new fuel called MOX (mixed oxide), and uses this fuel in its power reactors to extract about 30% more energy from the original enriched-uranium fuel. The extraction process, called PUREX, is well known. After the MOX fuel is used and comes out of the reactor, it is stored with its radiation barrier intact for later use in a new kind of reactor that many believe will come into use in the second half of this century, when supplies of natural uranium may begin to run short.

The proliferation risk is that during one stage of reprocessing, pure separated plutonium (reactor-grade) is produced that might be vulnerable to theft (perhaps by terrorists) or diversion (by states intent on building nuclear weapons). Radiation from the separated plutonium is weak and thus is not a barrier to handling the material. Even after the plutonium is fabricated into MOX, chemically separating it from the uranium again is simpler than the PUREX process itself.

The proliferation risks involving spent fuel are different for the first four years and thereafter. Spent fuel must remain at its place of origin until its radioactivity has decayed sufficiently, but once the fuel is accessible, countries can use the “supreme national interest” clause in the Nuclear Non-Proliferation Treaty (NPT) to withdraw from the treaty, expel IAEA inspectors, and reprocess the fuel to produce plutonium for a weapon. This so-called “breakout” scenario, in which a country abides by the rules until it is ready to make weapons and then does what it wants, is the route North Korea took in developing its nuclear weapons. Although the spent fuel from North Korea’s Yongbyon reactor had been around for a long time, the same situation would have been possible if the reactor had been continuously active, because there would have been sufficient spent fuel on hand.

Older spent fuel can be stored at the reactor site, stored offsite, or shipped out of the country to some international site. Shipping the material out of the country for nonproliferation purposes is favored by many, but it would not make much of a difference to determined proliferators. A country with a power reactor would have enough material in the cooling pond to make quite an arsenal. Even reactors with outputs of only 100 MWe would have enough material for about 10 weapons from the four years of stored fuel. Technical means can keep track of the spent fuel so that breakout intentions can be identified as early as possible, but no technical system can prevent it. This is why the potential hole in the back end of the fuel cycle is more difficult to plug than that in the front end.

The case in which shipping spent fuel out of the country does make a difference is in reducing the spread of reprocessing technology. Although the PUREX process is well known, the technology for implementing it is difficult to master. If reprocessing and the use of MOX fuel for energy purposes are to become widespread, it would be best if the reprocessing plants were few, internationally owned and operated, and under tight surveillance (an approach known as advanced technical safeguards, in IAEA parlance). The French reprocessing plant at La Hague already does reprocessing for other countries. Current practice is to send the MOX and the radioactive leftovers back to the country of origin.

The U.S.-proposed GNEP program is an interesting idea that would go even further. Fuel is leased, not sold; it is delivered just in time to the reactor owner; and the spent fuel is returned to the lessor. In this scheme, both ends of the fuel cycle are handled by a small group of countries, mainly the nuclear weapons states in the original proposal. The front-end issue is the same as in the IAEA-proposed system: security of supply. At the back end, GNEP proposes that the lessors separate the plutonium and other actinides and use this material themselves as fuel in a new kind of “burner” reactor. The lessors get electric power from the burner reactors, and the lessees are spared the burden and cost of enrichment and pay less to dispose of the remaining radioactive waste. Burning the plutonium and other long-lived material reduces the required isolation time for the remaining waste, making it easier and less costly to isolate. When examined closely, GNEP adds an element to reducing the risk from the back end of the fuel cycle: limiting who uses plutonium-bearing fuel. It does nothing, however, to limit the breakout potential from fuel still in the cooling ponds.

The first steps in securing the back end are clear, though imperfect. There should be a few internationally owned and operated reprocessing facilities. MOX fuel should be fabricated at the reprocessing facility, so that plutonium is not shipped around. Delivery to customers should be just-in-time for loading in the reactor where it is to be used. Cooling ponds and spent-fuel storage facilities should have more advanced technical monitoring systems installed. It would be desirable if spent fuel were shipped to international storage facilities, but there is no need to wait for that before starting down the road to greater security.

Don’t wait for utopia

Scientists and engineers know that a major strengthening of the defenses against proliferation is a political issue, not a technical one. The politicians hope that some technical miracle will solve the problem so that they will not have to deal with political complications. Short of a distant utopia, the best steps that the nations of the world can take are to make it difficult to move from peaceful uses to weapons, to detect such activities as early as possible, and to apply appropriate sanctions when all else fails. Although there are technical improvements that can reduce proliferation risk, it is only in the political arena that real risk reduction can occur.

Article IV of the NPT gives every signatory the right to develop nuclear technology for peaceful uses. The enrichment technology required for the production of reactor fuel is the same technology that can produce the highly enriched uranium required for a weapon. The reprocessing technology required to produce MOX fuel is the same technology required to secure plutonium for a weapon. Article X of the NPT lets a signatory go to the brink, withdraw from the treaty, and go nuclear if it so desires. Breakout potential is built into the current system.

Today the talk is of somehow internationalizing the fuel cycle. Internationalization of both ends of the fuel cycle can reduce proliferation risk. Reduction is particularly needed at the front end, and a well-designed fuel supply system can allow big reductions in proliferation risk. The key, however, is ensuring political and commercial diversity in the supply chain, so that those who build their own reactors are not tempted to build their own supply chain as well. This is particularly important for the countries that pose the greatest risk: those that are starting down the nuclear power road and have concerns about the reliability of a Western-dominated supply chain. Because enrichment and reprocessing plants are expensive and uneconomical at the scale of smaller programs, a guaranteed international fuel supply is the kind of carrot that might make forgoing national facilities acceptable. A country that agrees to join receives an economic benefit that is preferable to going it alone. The core issue is to ensure a secure fuel supply to those agreeing to forego their own fuel cycle development.

Until the discussions meaningfully include the nations that are considering turning to nuclear power, they will not get anywhere. To achieve political diversity, China should be encouraged to enter the commercial enrichment business; and to achieve commercial diversity, some other countries should be encouraged to develop internationally owned and operated enrichment and reprocessing services under appropriate safeguards. It might help if the United States set an example by encouraging the owners of USEC to sell a share to the Canadians and others and to let the new partners share in operations.

The real issue is the credibility of sanctions that can be imposed on those who violate the rules and start down the road to a nuclear weapons program. There is no technical barrier to proliferators, and if the international community does not act together, no program can succeed.