From the Hill – Spring 2009

Economic stimulus bill provides major boost for R&D

The $790-billion economic stimulus bill signed by President Obama on February 17 contains $21.5 billion in federal R&D funding—$18 billion for research and $3.5 billion for facilities and large equipment. The final appropriation was more than either the $17.8 billion approved in the Senate version or the $13.2 billion approved in the House version of the bill. For a federal research portfolio that has been declining in real terms since fiscal year (FY) 2004, the final bill provides an immediate boost that allows federal research funding to see a real increase for the first time in five years.

The stimulus bill, which is technically an emergency supplemental appropriations bill, was approved before final work had been completed on funding the federal government for FY 2009. Only 3 of the 12 FY 2009 appropriations bills have been approved (for the Departments of Defense, Homeland Security, and Veterans Affairs). All other federal agencies are operating at or below FY 2008 funding levels under a continuing resolution (CR) through March 6.

Under the CR and the few completed FY 2009 appropriations, the federal research portfolio stands at $58.3 billion for FY 2009, up just 0.3% (less than inflation). With the stimulus funding, and assuming that final FY 2009 appropriations are at least at CR levels, the federal research portfolio could jump to nearly $75 billion.

Basic competitiveness-related research, biomedical research, energy R&D, and climate change programs are high priorities in the bill. The National Institutes of Health (NIH) will receive $10.4 billion, which would completely turn around an NIH budget that has been in decline since 2004 and could boost the total NIH budget to $40 billion, depending on the outcome of NIH’s regular FY 2009 appropriation.

The National Science Foundation (NSF), the Department of Energy (DOE) Office of Science, and the National Institute of Standards and Technology (NIST)—the three agencies highlighted in the America COMPETES Act of 2007 and President Bush’s American Competitiveness Initiative—would all be on track to double their budgets over 7 to 10 years. NSF will receive $3 billion, DOE’s Office of Science $1.6 billion, and NIST $600 million.

DOE’s energy programs are also winners, with $3.5 billion for R&D and related activities in renewable energy, energy conservation, and fossil energy, part of the nearly $40 billion total for DOE in weatherization, loan guarantees, clean energy demonstration, and other energy program funds. DOE will receive $400 million to start up the Advanced Research Projects Agency–Energy (ARPA-E), a new research agency authorized in the America COMPETES Act but not funded until now.

The bill will provide money for climate change–related projects in the National Aeronautics and Space Administration and the National Oceanic and Atmospheric Administration (NOAA). There is also additional money for science and technology–related programs outside of R&D, for higher education construction, and for other education spending of interest to academia.

The bill provides billions of dollars for universities to construct or renovate laboratories and to buy research equipment, as well as money for federal labs to address their infrastructure needs. The bill provides $3.5 billion for R&D facilities and capital equipment to pay for the repair, maintenance, and construction of scientific laboratories as well as large research equipment and instrumentation. Considering that R&D facilities funding totaled $4.5 billion in FY 2008, half of which went to just one laboratory (the International Space Station), the $3.5-billion supplemental will be an enormous boost in the federal government’s spending on facilities.

Obama cabinet picks vow to strengthen role of science

Key members of President Obama’s new cabinet are stressing the importance of science in developing policy as well as the need for scientific integrity and transparency in decisionmaking.

In one of his first speeches, Ken Salazar, the new Secretary of the Interior, told Interior Department staff that he would lead with “openness in decisionmaking, high ethical standards, and respect to scientific integrity.” He said decisions will be based on sound science and the public interest, not special interests.

Lisa Jackson, the new administrator of the Environmental Protection Agency (EPA), said at her confirmation hearing that “science must be the backbone of what EPA does.” Addressing recent criticism of scientific integrity at the EPA, she said that “political appointees will not compromise the integrity of EPA’s technical experts to advance particular regulatory outcomes.”

In a memo to EPA employees, Jackson noted, “I will ensure EPA’s efforts to address the environmental crises of today are rooted in three fundamental values: science-based policies and programs, adherence to the rule of law, and overwhelming transparency.” The memo outlined five priority areas: reducing greenhouse gas emissions, improving air quality, managing chemical risks, cleaning up hazardous waste sites, and protecting America’s water.

New Energy Secretary Steven Chu, a Nobel Prize–winning physicist and former head of the Lawrence Berkeley National Laboratory, emphasized the key role science will play in addressing the nation’s energy challenges. In testimony at his confirmation hearing, Chu said that “the key to America’s prosperity in the 21st century lies in our ability to nurture and grow our nation’s intellectual capital, particularly in science and technology.” He called for a comprehensive energy plan to address the challenges of climate change and threats from U.S. dependence on foreign oil.

In other science-related picks, the Senate confirmed Nancy Sutley as chair of the Council on Environmental Quality at the White House. Awaiting confirmation as this issue went to press were John Holdren, nominated to be the president’s science advisor, and Jane Lubchenco, nominated as director of NOAA.

Proposed regulatory changes under review

As one of its first acts, the Obama administration has halted all proposed regulations that were announced but not yet finalized by the Bush administration until a legal and policy review can be conducted. The decision means at least a temporary stop to certain controversial changes, including a proposal to remove gray wolves in the northern Rocky Mountains from Endangered Species Act (ESA) protection.

However, the Bush administration was able to finalize a number of other controversial changes, including a change in implementation of the ESA that allows agencies to bypass scientific reviews of their decisions by the Fish and Wildlife Service or the National Marine Fisheries Service. In addition, the Department of the Interior finalized two rules: one that allows companies to dump mining debris within a current 100-foot stream buffer and one that allows concealed and loaded guns to be carried in national parks located in states with concealed-carry laws.

Finalized regulations that a new administration wants to change must undergo a new rulemaking process, often a lengthy procedure. However, Congress can halt rules that it opposes, either by not funding implementation of the rules or by voting to overturn them. The Congressional Review Act allows Congress to vote down recent rules with a resolution of disapproval, but this technique has been used only once and would require separate votes on each regulation that Congress wishes to overturn. House Natural Resources Chairman Nick Rahall (D-WV) and Select Committee on Global Warming Chairman Ed Markey (D-MA) have introduced a measure that would use the Congressional Review Act to freeze the changes to the endangered species rules.

Members of Congress have introduced legislation to expand their options to overturn the rules. Rep. Jerrold Nadler (D-NY), chair of the House Judiciary Subcommittee on the Constitution, Civil Rights and Civil Liberties, has introduced a bill, the Midnight Rule Act, that would allow incoming cabinet secretaries to review all regulatory changes made by the White House within the last three months of an administration and reverse such rules without going through the entire rulemaking process.

Witnesses at a February 4 hearing noted, however, that every dollar that goes into defending or rewriting these regulations is money not spent advancing a new agenda, so the extent to which agencies and Congress will take on these regulatory changes remains to be seen.

Democrats press action on climate change

Amid efforts to use green technologies and jobs to stimulate the economy, Congress began work on legislation to cap greenhouse gas emissions that contribute to climate change. At a press conference on February 3, Barbara Boxer (D-CA), chair of the Senate Environment and Public Works Committee, announced a broad set of principles for climate change legislation. They include setting targets that are guided by science and establishing “a level global playing field, by providing incentives for emission reductions and effective deterrents so that countries contribute their fair share to the international effort to combat global warming.” The principles also lay out potential uses for the revenues generated by establishing a carbon market.

Also addressing climate change is the Senate Foreign Relations Committee, which on January 28 heard from former Vice President Al Gore, who pushed for domestic and international action to address climate change. Gore urged Congress to pass the stimulus bill because of its provisions on energy efficiency, renewable energy, clean cars, and a smart grid. He also called for a cap on carbon emissions to be enacted before the next round of international climate negotiations in Copenhagen in December 2009.

In the House, new Energy and Commerce Chair Henry Waxman (D-CA), who ousted longtime chair John Dingell (D-MI) and favors a far more aggressive approach to climate change legislation, said that he wants a bill through his committee by Memorial Day. Speaker Nancy Pelosi (D-CA) would like a bill through the full House by the end of the year.

A hearing of Waxman’s committee on climate change featured testimony from members of the U.S. Climate Action Partnership, a coalition of more than 30 businesses and nongovernmental organizations, which supports a cap-and-trade system with a 42% cut in carbon emissions from 2005 levels by 2030 and reductions of 80% by 2050. Witnesses testified that a recession is a good time to pass this legislation because clarity in the law would illuminate investment opportunities.

Energy and Environment Subcommittee Chair Ed Markey (D-MA) has said that he intends to craft a bill that draws on existing proposals, including one developed at the end of the last Congress by Dingell and former subcommittee chair Rick Boucher (D-VA). Markey’s proposal is also likely to reflect a set of principles for climate change that he announced last year, along with Waxman and Rep. Jay Inslee (D-WA). The principles are based on limiting global temperature rise to 2 degrees Celsius.

President Obama has also taken steps to address greenhouse gas emissions. He directed the EPA to reconsider whether to grant California a waiver to set more stringent automobile standards. California has been fighting the EPA’s December 2007 decision to deny its efforts to set standards that would reduce carbon dioxide emissions from automobiles by 30% by 2016. If the waiver is approved, 13 other states have pledged to adopt the standards. Obama also asked the Department of Transportation to establish higher fuel efficiency standards for carmakers’ 2011 model year.

Biological weapons threat examined

The Senate and the House held hearings in December 2008 and January 2009, respectively, to examine the findings of the report World at Risk, by the Commission on the Prevention of Weapons of Mass Destruction Proliferation and Terrorism. At the hearings, former Senators Bob Graham and Jim Talent, the commission chair and vice chair, warned that “a terrorist attack involving a weapon of mass destruction—nuclear, biological, chemical, or radiological—is more likely than not to occur somewhere in the world in the next five years.”

Graham and Talent argued that although the prospect of a nuclear attack is a matter of great concern, the threat of a biological attack poses the more immediate concern because of “the greater availability of the relevant dual-use materials, equipment, and know-how, which are spreading rapidly throughout the world.”

That view was supported by Senate Homeland Security and Governmental Affairs Committee chairman Joe Lieberman (I-CT) and ranking member Susan Collins (R-ME). Both recognized that although biotechnology research and innovation have created the possibility of important medical breakthroughs, the spread of the research and the technological advancements that accompany innovations have also increased the risk that such knowledge could be used to develop weapons.

Graham and Talent acknowledged that weaponizing biological agents is still difficult and stated that “government officials and outside experts believe that no terrorist group has the operational capability to carry out a mass-casualty attack.” The larger risk, they said, comes from rogue biologists, as is believed to have happened in the 2001 anthrax incidents. Currently, more than 300 research facilities in government, academia, and the private sector in the United States, employing about 14,000 people, are authorized to handle pathogens. The research is conducted in high-containment laboratories.

The commission said it was concerned about the lack of regulation of unregistered biosafety level 3 (BSL-3) research facilities in the private sector. These labs have the necessary tools to handle anthrax or to synthetically engineer a more dangerous version of that agent, but whether they have implemented appropriate security measures is often not known.

For this reason, the commission recommended consolidating the regulation of registered and unregistered high-containment laboratories under a single agency, preferably the Department of Homeland Security or the Department of Health and Human Services. Currently, regulatory oversight of research involves the Department of Agriculture and the Centers for Disease Control and Prevention, with security checks performed by the Justice Department.

Collins has repeatedly stated the need for legislation to regulate biological pathogens, expressing deep concern over the “dangerous gaps” in biosecurity and the importance of closing them.

In the last Congress, the Select Agent Program and Biosafety Improvement Act of 2008 was introduced to reauthorize the select agent program, but it did not pass. The bill aimed to strengthen biosafety and security at high-containment laboratories; it would not have restructured agency oversight. No new bills have been introduced in the new Congress.

Before leaving office, President Bush on January 9 signed an executive order on laboratory biosecurity that established an interagency working group, co-chaired by the Departments of Defense and Health and Human Services, to review the laws and regulations on the select agent program, personnel reliability, and the oversight of high-containment labs.

Multifaceted ocean research bill advances

The Senate on January 15, 2009, approved by a vote of 73 to 21 the Omnibus Public Lands Management Act of 2009, which packages five bills authorizing $794 million for expanded ocean research through FY 2015, including $104 million authorized for FY 2009, along with a slew of other wilderness conservation measures. The House is expected to take up the bill.

The first of the five bills, the Ocean Exploration and NOAA Undersea Research Act, authorizes the National Ocean Exploration Program and the National Undersea Research Program. The act prioritizes research on deep ocean areas, calling for study of hydrothermal vent communities and seamounts, documentation of shipwrecks and submerged sites, and development of undersea technology. The bill authorizes $52.8 million for these programs in FY 2009, increasing to $93.5 million in FY 2015.

The Ocean and Coastal Mapping Integration Act authorizes an integrated federal plan to improve knowledge of unmapped maritime territory, which currently comprises 90% of all U.S. waters. Calling for improved coordination, data sharing, and mapping technology development, the act authorizes $26 million for the program along with $11 million specifically for Joint Ocean and Coastal Mapping Centers in FY 2009. These quantities would increase to $45 million and $15 million, respectively, beginning in FY 2012.

The Integrated Coastal and Ocean Observation System Act (S.171) authorizes an integrated national observation system to gather and disseminate data on an array of variables from the coasts, oceans, and Great Lakes. The act promotes basic and applied research to improve observation technologies, as well as modeling systems, data management, analysis, education, and outreach through a network of federal and regional entities. Authorization levels for the program are contingent on the budget developed by the Interagency Ocean Observation Committee.

The Federal Ocean Acidification Research and Monitoring Act establishes a coordinated federal research strategy to better understand ocean acidification. In addition to contributing to climate change, increased emissions of carbon dioxide are making the ocean more acidic, with resulting effects on corals and other marine life. The act authorizes $14 million for FY 2009, increasing to $35 million in FY 2015.

The fifth research bill included in the omnibus package, the Coastal and Estuarine Land Protection Act, creates a competitive state grant program to protect threatened coastal and estuarine areas with significant conservation, ecological, or watershed protection values, or with historical, cultural, or aesthetic significance.


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science (www.aaas.org/spp) in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

U.S. Workers in a Global Job Market

Among the many changes that are part of the emergence of a global economy is a radically different relationship between U.S. high-tech companies and their employees. As late as the 1990s, a degree in science, technology, engineering, or mathematics (STEM) was a virtual guarantee of employment. Today, many good STEM jobs are moving to other countries, reducing prospects for current STEM workers and dimming the appeal of STEM studies for young people. U.S. policymakers need to learn more about these developments so that they can make the critical choices about how to nurture a key ingredient in the nation’s future economic health, the STEM workforce.

U.S. corporate leaders are not hiding the fact that globalization has fundamentally changed how they manage their human resources. Craig Barrett, then the chief executive officer (CEO) of Intel Corporation, said that his company can succeed without ever hiring another American. In an article in Foreign Affairs magazine, IBM’s CEO Sam Palmisano gave the eulogy for the multinational corporation (MNC), introducing us to the globally integrated enterprise (GIE): “Many parties to the globalization debate mistakenly project into the future a picture of corporations that is unchanged from that of today or yesterday….But businesses are changing in fundamental ways—structurally, operationally, culturally—in response to the imperatives of globalization and new technology.”

GIEs do not have to locate their high-value jobs in their home country; they can locate research, development, design, or services wherever they like without sacrificing efficiency. Ron Rittenmeyer, then the CEO of EDS, said he “is agnostic specifically about where” EDS locates its workers, choosing the place that reaps the best economic efficiency. EDS, which had virtually no employees in low-cost countries in 2002, had 43% of its workforce in low-cost countries by 2008. IBM, once known for its lifetime employment, now forces its U.S. workers to train foreign replacements as a condition of severance. In an odd twist, IBM is offering U.S. workers the opportunity to apply for jobs in its facilities in low-cost countries such as India and Brazil at local wage rates.

Policy discussions have not kept pace with changes in the job market, and little attention is being paid to the new labor market for U.S. STEM workers. In a time of GIEs, advanced tools and technology can be located anywhere, depriving U.S. workers of an advantage they once had over their counterparts in low-wage countries. And because technology workers not only create new knowledge for existing companies but are also an important source of entrepreneurship and startup firms, the workforce relocation may undermine U.S. world leadership as game-changing new companies and technologies are located in low-cost countries rather than the United States. The new corporate globalism will make innovations less geographically sticky, raising questions about how to make public R&D investments pay off locally or even nationally. Of course, scientists and engineers in other countries can generate new ideas and technologies that U.S. companies can import and put to use, but that too will require adjustments because this is not a strategy with which U.S. companies have much experience. In short, the geographic location of inputs and the flow of technology, knowledge, and people are sure to be significantly altered by these changes in firm behavior.

As Ralph Gomory, a former senior vice president for science and technology at IBM, has noted, the interests of corporations and countries are diverging. Corporate leaders, whose performance is not measured by how many U.S. workers they employ or the long-term health of the U.S. economy, will pursue their private interests with vigor even if their actions harm their U.S. employees or are bad prescriptions for the economy. Simply put, what’s good for IBM may not be good for the United States and vice versa. Although this may seem obvious, the policy and political processes have not fully adjusted to this reality. Policymakers still turn to the CEOs of GIEs for advice on what is best for the U.S. economy. Meanwhile, STEM workers have yet to figure out that they need to get together to identify and promote what is in their interest.

Most STEM workers have not embraced political activism. Consider employees in the information technology (IT) industry, one of the largest concentrations of STEM workers. They have by and large rejected efforts by unions to organize them. One might expect a professional organization such as the Institute of Electrical and Electronics Engineers (IEEE) to represent their interests, but IEEE is an international organization that sees little value in promoting one group of its members over another.

Because STEM workers lack an organized voice, their interests are usually neglected in policy discussions. There was no worker representative on the National Academies committee that drafted the influential report Rising Above the Gathering Storm. And although the Council on Competitiveness, which prepared the National Innovation Initiative, has representatives of labor unions in its leadership, they did not participate in any significant way in the initiative. Both studies had chairs who were CEOs of GIEs. It should come as no surprise, therefore, that neither of these reports includes recommendations that address the root problem of offshoring: the misalignment of corporate and national interests, in which firms compete by substituting foreign for U.S. workers. Instead, the reports diagnosed the problem as a shortage of qualified STEM workers and therefore advocated boosting R&D spending, expanding the pool of STEM workers, and recruiting more K-12 science and math teachers.

Low-cost countries attract R&D

Although everyone recognizes that globalization is remaking the R&D landscape, that U.S.-based companies are moving some of their high-value activities offshore, and that some low-income countries such as China and India are eager to enhance their capabilities, we actually have very little reliable and detailed data on what is happening. In fact, much of what we think we know is contradictory. For example, in 2006, China was by far the leading exporter of advanced technology products to the United States, surpassing all of the European Union combined. On the other hand, the number of triadic patents—those filed in Europe, the United States, and Japan—awarded to Chinese inventors in 2002 was a mere 177, versus more than 18,000 for American and more than 13,000 for Japanese inventors. A mixed picture also emerges from India. On the one hand, India’s indigenous IT services companies such as Infosys and Wipro have become the market leaders in their sector, forcing U.S.-based competitors such as IBM and HP to adopt their offshore outsourcing business model. But in 2003, India produced only 779 engineering doctorates, compared with the 5,265 produced in the United States.

The standard indicators in this area are backward-looking and often out of date by the time they are published. More timely and forward-looking information might be gleaned from surveys of business leaders and corporate announcements. A survey by the United Nations Conference on Trade and Development of the top 300 worldwide R&D spenders found that China was the top destination for future R&D expansion, followed by the United States, India, Japan, the United Kingdom, and Russia. A 2007 Economist magazine survey of 300 executives about R&D site selection found that India was the top choice, followed by the United States and China.

No comprehensive list of R&D investments by U.S. multinational corporations exists, and the firms are not required to disclose the location of R&D spending in financial filings. We must rely on the information that companies offer voluntarily. From public announcements we know that eight of the top 10 R&D-spending companies have R&D facilities in China or India (Microsoft, Pfizer, DaimlerChrysler, General Motors, Siemens, Matsushita Electric, IBM, and Johnson & Johnson) and that many of them plan to increase their innovation investments in India and China.

Although early investments were for customizing products for a local market, foreign-based facilities are now beginning to develop products for global markets. General Motors has a research presence in India and China, and in October 2007, it announced that it would build a wholly owned advanced research center to develop hybrid technology and other advanced designs in Shanghai, where it already has a 1,300-employee research center as part of a joint venture with the Shanghai Automotive Industry Corporation. Pfizer, the number two R&D spender, is outsourcing drug development services to India and already has 44 new drugs undergoing clinical trials there. The company has approximately 200 employees at its Shanghai R&D center, supporting global clinical development. Microsoft has a large and expanding R&D presence in India and China. Microsoft’s India Development Center, its largest such center outside the United States, employs 1,500 people. The Microsoft China R&D Group also employs 1,500, and in 2008, Microsoft broke ground on a new $280-million R&D campus in Beijing and announced an additional $1 billion investment for R&D in China. Intel has about 2,500 R&D workers in India and has invested approximately $1.7 billion in its Indian operations. Its Indian engineers designed the first all-India microprocessor, the Xeon 7400, which is used for high-end servers. Intel has been investing in startup companies in China, where it created a $500 million Intel Capital China Technology Fund II to be used for investments in wireless broadband, technology, media, telecommunications, and “clean tech.”

Although General Electric spends less than the above companies on R&D, it has the distinction of having the majority of its R&D personnel in low-cost countries. Jack Welch, GE’s former CEO, was an early and significant evangelizer of offshoring. The firm has four research locations worldwide, in New York, Shanghai, Munich, and Bangalore. Bangalore’s Jack Welch R&D Center employs 3,000 workers, more than the other three locations combined. Since 47% of GE’s revenue in 2008 came from the United States and only 16% from Asia, it is clear that it is not moving R&D to China and India just to be close to its market.

The fact that China and India are able to attract R&D is an indicator that they have improved their ability to attract mid-skill technology jobs in the design, development, and production stages. The true benefit of attracting R&D activities may lie in the downstream spillovers in the form of startup firms and design, development, and production facilities.

U.S. universities have been a magnet for talented young people interested in acquiring the world’s best STEM education. Many of these productive young people have remained in the United States, become citizens, and made enormous contributions to the productivity of the U.S. economy as well as its social, cultural, and political life. But these universities are beginning to think of themselves as global institutions that can deliver their services anywhere in the world.

Cornell, which already calls itself a transnational institution, operates a medical school in Qatar and sent its president to India in 2007 to explore opportunities to open a branch campus. Representatives of other top engineering schools, such as Rice, Purdue, Georgia Tech, and Virginia Tech, have made similar trips. Carnegie Mellon offers its technology degrees in India in partnership with a small private Indian college. Students take most of their courses in India, because it is less expensive, and then spend six months in Pittsburgh to complete the Carnegie Mellon degree.

If students do not have to come to the United States to receive a first-rate education, they are far less likely to seek work in the United States. More high-quality job opportunities are appearing in low-cost countries, many of them with U.S. companies. This will accelerate the migration of STEM jobs out of the United States. Even the perfectly sensible move by many U.S. engineering programs to provide their students with more international experience through study-abroad courses and other activities could contribute to the migration of STEM jobs by preparing these students to manage R&D activities across the globe.

Most of the information about university globalization is anecdotal. The trend is clearly in its early stages, but there are indications that it could grow quickly. This is another area in which more reliable data are essential. If the nation’s leaders are going to manage university activities in a way that will advance U.S. interests, they will need to know much more about what is happening and what is planned.

Uncertainty and risk

The emerging opportunities for GIEs to take advantage of high-skilled talent in low-cost countries have markedly increased both career uncertainty and risk for the U.S. STEM workforce. Many U.S. STEM workers worry about offshoring’s impact on their career prospects and are altering their career choices accordingly. For instance, according to the Computing Research Association, enrollment in bachelor’s programs in computer science dropped 50% from 2002 to 2007. The rising risk of IT job loss, caused in part by offshoring, was a major factor in students’ shying away from computer science degrees.

Offshoring concerns have been mostly concentrated on IT occupations, but many other STEM occupations may be at risk. Princeton University economist Alan Blinder analyzed all 838 Bureau of Labor Statistics standard occupation categories to estimate their vulnerability to offshoring. He estimates that nearly all (35 of 39) STEM occupations are “offshorable,” and he described many as “highly vulnerable.” By vulnerable, he is not claiming that all, or even a large share, of the jobs in those occupations will actually be lost overseas. Instead, he believes that those occupations will face significant new wage competition from low-cost countries. Further, he finds that there is no correlation between vulnerability and education level, so simply increasing U.S. education levels, as many have advocated, will not slow offshoring.

Workers need to know which jobs will be geographically sticky and which are vulnerable to being offshored so that they can make better choices about investing in their skills. But there is a great deal of uncertainty about how globalization will affect the level and mix of domestic STEM labor demand. The response of some workers appears to be to play it safe and opt for occupations, often non-STEM, that are likely to stay onshore. Further, most employers, because of political sensitivities, are very reluctant to reveal which jobs they are offshoring, sometimes going to great lengths to mask the geographic rebalancing of their workforces. The uncertainty introduced by offshoring aggravates the already volatile job market that is characteristic of the dynamic high-tech sector.

For incumbent workers, especially those in mid-career, labor market volatility creates a special dilemma. The two prior technology recessions, 1991 to 1992 and 2002 to 2004, were especially long, longer even than for the general labor force. At the same time, technology-obsolescence cycles are shortening, which means that unemployed STEM workers can find that their skills quickly become outdated. If unemployment periods are especially long, it will be even more difficult to reenter the STEM workforce when the market rebounds. An enormous amount of human capital is wasted when experienced STEM professionals are forced to move into other professions because of market vagaries.

Policy has done little to reduce risks and uncertainty for STEM workers. The government does not collect data on work that is moving offshore or real-time views of the STEM labor markets, both of which would help to reduce uncertainty. Trade Adjustment Assistance (TAA), the primary safety net for workers who lose their jobs due to international trade, has not been available for services industries, but it has been authorized as part of the recently passed stimulus legislation. This is one part of the stimulus that should be made permanent. In addition, Congress should ensure that the program is adequately funded, because it is often oversubscribed, and the Department of Labor should streamline the eligibility regulations, because bureaucratic rules often hamper the ability of displaced workers to obtain benefits. This will be especially true with services workers whose employers are reluctant to admit that workers are displaced due to offshoring.

Response to competition

One of the most important high-technology stories of the past decade has been the remarkably swift rise of the Indian IT services industry, including firms such as Wipro, Infosys, TCS, and Satyam, as well as U.S.-based firms such as Cognizant and iGate that use the same business model. There is no need to speculate about whether the Indian firms will eventually take the lead in this sector; they already have become market leaders. By introducing an innovative, disruptive business model, the Indian firms have turned the industry upside down in only four years. U.S. IT services firms such as IBM, EDS, CSC, and ACS were caught flat-footed. Not a single one of those firms would have considered Infosys, Wipro, or TCS as direct competitors as recently as 2003, but now they are chasing them by moving as fast as possible to adopt the Indian business model, which is to move as much work as possible to low-cost countries. The speed and size of the shift is breathtaking.

The Indian IT outsourcing firms have extensive U.S. operations, but they prefer to hire temporary guest workers with H-1B or L-1 visas. The companies train these workers in the United States, then send them home where they can be hired to do the same work at a lower salary. These companies rarely sponsor their H-1B and L-1 workers for U.S. legal permanent residence.

The important lesson is how the U.S. IT services firms have responded to the competitive challenge. Instead of investing in their U.S. workers with better tools and technologies, the firms chose to imitate the Indian model by outsourcing jobs to low-cost countries. IBM held a historic meeting with Wall Street analysts in Bangalore in June 2006, where its whole executive team pitched IBM’s strategy to adopt the Indian offshore-outsourcing business model, including an additional $6 billion investment to expand its Indian operations. IBM’s headcount in India has grown from 6,000 in 2003 to 73,000 in 2007 and is projected to be 110,000 by 2010. The U.S. headcount is about 120,000. And IBM is not alone. Accenture passed a historic milestone in August 2007, when its Indian headcount of 35,000 surpassed any of its other country headcounts, including the United States, where it had 30,000 workers. In a 2008 interview, EDS’s Rittenmeyer extolled the profitability of shifting tens of thousands of the company’s workers from the United States to low-cost countries such as India. He said outsourcing is “not just a passing fancy. It is a pretty major change that is going to continue. If you can find high-quality talent at a third of the price, it’s not too hard to see why you’d do this.” ACS, another IT services firm, recently told Wall Street analysts that it plans its largest increase in offshoring for 2009, when it will move many of its more complex and higher-wage jobs overseas so that nearly 35% of its workforce will be in low-cost countries.

As Alan Blinder’s analysis indicates, many other types of STEM jobs could be offshored. The initiative could come from foreign competitors or from U.S.-based GIEs.

Preserving STEM jobs

Private companies will have the final say about the offshoring of jobs, but the federal government can and should play a role in tracking what is happening in the global economy and taking steps that help the country adapt to change. Given the speed at which offshoring is increasing in scale, scope, and job sophistication, a number of immediate steps should be taken.

Collect additional, better, and timelier data. We cannot expect government or business leaders to make sound decisions in the absence of sound data. The National Science Foundation (NSF) should work with the appropriate agencies, such as the Bureau of Economic Analysis (BEA), the Bureau of Labor Statistics, and the Census Bureau, to begin collecting more detailed and timely data on the globalization of innovation and R&D.

Specifically, the NSF Division of Science Resources Statistics (SRS) should augment existing data on multinational R&D investments to include annual detailed STEM workforce data, including occupation, level of education, and experience for workers within and outside the United States. These data should track the STEM workforce for multinational companies in the United States versus other countries. The SRS should also collect detailed information on how much and what types of R&D and innovation activities are being done overseas. The NSF Directorate for Social, Behavioral, and Economic Sciences should do four things: 1) begin a research program to estimate the number of jobs that have been lost to offshoring and to identify the characteristics of jobs that make them more or less vulnerable to offshoring; 2) assess the extent of U.S. university globalization and then track trends; 3) identify the effects of university globalization on the U.S. STEM workforce and students, and launch a research program to identify and disseminate best practices in university globalization; and 4) conduct a study to identify the amount and types of U.S. government procurement that are being offshored. Finally, the BEA should implement recommendations from prior studies, such as the 2006 study by MIT’s Industrial Performance Center, to improve its collection of services data, especially trade in services.

Establish an independent institute to study the implications of globalization. Blinder has said that the economic transformation caused by offshoring could rival the changes caused by the industrial revolution. In addition to collecting data, the government needs to support an independent institute to analyze the social and economic implications of these changes and to consider policy options to address the undesirable effects. A $40 million annual program to fund intramural and extramural research would be a good start.

Facilitate worker representation in the policy process. Imagine if a major trade association, such as the Semiconductor Industry Association, were excluded from having any representative on a federal advisory committee making recommendations on trade and export control policy in the semiconductor industry. It would be unfathomable. But we have precisely this arrangement when it comes to making policies that directly affect the STEM workforce. Professional societies and labor unions should be invited to represent the views of STEM workers on federal advisory panels and in congressional hearings.

Create better career paths for STEM workers. STEM offshoring has created a pessimistic attitude about future career prospects for incumbent workers as well as students. To make STEM career paths more reliable and resilient, the government and industry should work together to create programs for continuing education, establish a sturdier safety net for displaced workers, improve information about labor markets and careers, expand the pool of potential STEM workers by making better use of workers without a college degree, and provide assistance for successful reentry into the STEM labor market after voluntary and involuntary absences. Some specific steps are:

  • The government should encourage the adoption and use of low-cost asynchronous online education targeted at incumbent STEM workers. The program would be coordinated with the appropriate scientific and engineering professional societies. A pilot program should assess the current penetration rates of online education for STEM workers and identify barriers to widespread adoption.
  • The Department of Labor should work with the appropriate scientific and engineering professional societies to create a pilot program for continuous education of STEM workers and retraining of displaced mid-career STEM workers. Unlike prior training programs, these should be targeted at jobs that require at least a bachelor’s degree. Funding could come from the H-1B visa fees that companies pay when they hire foreign workers.
  • The National Academies should form a study panel to identify on-ramps to STEM careers for students who do not go to college, recommend ways to eliminate barriers, and identify effective strategies for STEM workers to reenter the STEM workforce more easily.
  • Congress should reform immigration policy to increase the number of highly skilled people admitted as permanent residents and reduce the number of temporary H-1B and L-1 work visas. Rules for H-1B and L-1 visas should be tightened to ensure that workers receive market wages and do not displace U.S. citizens and permanent resident workers.

Improve the competitiveness of the next generation of STEM workers. As workers in other countries develop more advanced skills, U.S. STEM workers must develop new skills and opportunities to distinguish themselves. They should identify and pursue career paths that are geographically sticky, and they should acquire more entrepreneurship skills that will enable them to create their own opportunities. The National Academies could help by forming a study panel to identify necessary curriculum reforms and best practices in teaching innovation, creativity, and entrepreneurship to STEM students. NSF should encourage and help fund study-abroad programs for STEM students to improve their ability to work in global teams.

Public procurement should favor U.S. workers. The public sector—federal, state, and local government—accounts for 19% of the economy and is an important mechanism that should be used by policymakers. There is a long, strong, and positive link between government procurement and technological innovation. The federal government not only funded most of the early research in computers and the Internet but was also a major customer for those new technologies. U.S. taxpayers have a right to know that government expenditures at any level are being used appropriately to boost innovation and help U.S. workers. The first step is to do an accounting of the extent of public procurement that is being offshored. Then the government should modify regulations to keep STEM-intensive work at home.

We are at the beginning of a major structural shift in the global distribution of R&D and STEM-intensive work. Given the critical nature of STEM to economic growth and national security, the United States must begin to adapt to these changes. The responses that have been proposed and adopted so far are based on the belief that nothing has changed. Simply increasing the amount of R&D spending, the pool of STEM workers, and the number of K-12 science and math teachers is not enough. The nation needs to develop a better understanding of the new dynamics of the STEM system and to adopt policies that will advance the interests of the nation and its STEM workers.

Closing the Environmental Data Gap

The compelling evidence that the global climate is changing significantly and will continue to change for the foreseeable future means that we can expect to see similarly significant changes in a wide variety of other environmental conditions such as air and water quality; regional water supply; the health and distribution of plant and animal species; and land-use patterns for food, fiber, and energy production. Unfortunately, we are not adequately monitoring trends in many of these areas and therefore do not have the data necessary to identify emerging problems or to evaluate our efforts to respond. As threats to human health, food production, environmental quality, and ecological well-being emerge, the nation’s leaders will be handicapped by major blind spots in their efforts to design effective policies.

In a world in which global environmental stressors are increasingly interactive and human actions are having a more powerful effect, detailed, reliable, and timely information is essential. Yet environmental monitoring continues to be undervalued as an investment in environmental protection. We tolerated inadequate data in the past, when problems were relatively simple and geographically limited, such as air or water pollution from a single plant. But it is unacceptable today, as we try to grapple with far more extensive changes caused by a changing climate.

The effects of climate change will be felt across the globe, and at the regional level they are likely to present unique and hard-to-predict outcomes. For example, a small change in temperature in the Pacific Northwest has allowed bark beetles to survive the winter, breed prolifically, and devastate millions of acres of forest. Although scientists are working to improve forecasts of the future and anticipate such tipping points, observation of what is actually happening remains the cornerstone of an adequate response. Society needs consistent and reliable information to establish baselines, make projections and validate them against observed changes, and identify potential surprises as early as possible.

Fortunately, two developments are helping to facilitate the collection of more and better data. First, new technologies and techniques allow us to capture data more efficiently and effectively. Second, society is demanding greater accountability and the demonstration of true value for environmental investments. The ability to easily share large amounts of information, to combine observations from different programs by linking them to specific geographic locations, to monitor many environmental features from space or by using new microscale devices, and other innovations can greatly extend the reach and richness of our environmental baselines. At the same time, many corporations, foundations, and government entities are working to track the effects of their actions in ways that will demonstrate which approaches work and which do not. In much the same way as the medical community is embracing evidence-based medicine, managers are moving toward evidence-based environmental decisionmaking.
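To illustrate the kind of geographic linking described above, here is a minimal sketch that combines records from two invented monitoring programs by snapping each observation to a shared grid cell; the program names, column names, and values are hypothetical and are not drawn from any actual agency system.

import pandas as pd

# Illustrative records from two hypothetical monitoring programs
# (values and column names are invented for this sketch).
water = pd.DataFrame({
    "lat": [38.97, 39.12],
    "lon": [-76.48, -76.61],
    "nitrogen_mg_per_l": [1.8, 2.4],
})
land = pd.DataFrame({
    "lat": [38.99, 39.14],
    "lon": [-76.51, -76.58],
    "percent_cropland": [42.0, 17.5],
})

def snap(coord: float, cell_size: float = 0.1) -> float:
    """Round a coordinate to the nearest 0.1-degree grid value, used as a cell label."""
    return round(coord / cell_size) * cell_size

for df in (water, land):
    df["grid_lat"] = df["lat"].map(snap)
    df["grid_lon"] = df["lon"].map(snap)

# Observations from the two programs are linked through their shared grid cells.
combined = water.merge(land, on=["grid_lat", "grid_lon"])
print(combined[["grid_lat", "grid_lon", "nitrogen_mg_per_l", "percent_cropland"]])

In practice, monitoring programs rely on much richer spatial frameworks, such as watersheds, hydrologic units, or survey plots, but the underlying idea of a shared spatial key is the same.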

No responsible corporation would manage an asset as valuable and complex as the ecosystems of the United States without a better stream of information than can currently be delivered.

Recognition of the scale of environmental problems is also spurring increased collaboration among federal, state, local, and private entities. Wildlife managers recognize that species do not respect state or federal agency boundaries and that adequate response demands range-wide information. Likewise, addressing the expanding “dead zone” in the Gulf of Mexico demands collaboration and data from across the Mississippi River basin in order to understand how farmers’ actions in Missouri affect shrimpers’ livelihoods in Louisiana. Evidence of this recognition and the collaboration it demands is growing. For example, state water monitoring agencies, the Environmental Protection Agency (EPA), and the U.S. Geological Survey (USGS) have developed a new multistate data-sharing mechanism that greatly expands access to each other’s data. And public and private entities are increasingly working together in efforts such as the Heinz Center’s State of the Nation’s Ecosystems report, as well as in more local efforts such as the integrated monitoring of red-cockaded woodpeckers by private timber companies, the U.S. Fish and Wildlife Service, state agencies, and the Department of Defense.

Despite these efforts, a coherent and well-targeted environmental monitoring system will not appear without concerted action at the national level. The nation’s environmental monitoring efforts grew up in specific agencies to meet specific program needs, and the combination of a lack of funding for integration, fragmented decisionmaking, and institutional inertia cries out for a more strategic and effective approach. Without integrated environmental information, policymakers lack a broad view of how the environment is changing and risk wasting taxpayer dollars.

Since 1997, the Heinz Center’s State of the Nation’s Ecosystems project has examined the breadth of information on the condition and use of ecosystems in the United States and found that the picture is fragmented and incomplete. By publishing a suite of national ecological indicators, this project has provided one-stop access to high-quality, nonpartisan, science-based information on the state of the nation’s lands, waters, and living resources, using national data acceptable to people with widely differing policy perspectives. However, there are data gaps for many geographic areas, important ecological endpoints, and contentious management challenges as well as mismatched datasets that make it difficult to detect trends over time or to make comparisons across geographic scales.

The depth of these gaps can be seen in three case studies, two of which concern chemical elements (nitrogen and carbon) that play vital roles in global ecosystems but can also create havoc when present at the wrong times, in the wrong places, or in the wrong concentrations. The third case considers the condition of the nation’s wildlife.

Controlling nitrogen pollution

Nitrogen is a crucial nutrient for animals and plants as well as one of the most ubiquitous and problematic pollutants. Nitrogen in runoff from sewage treatment plants, farms, feedlots, and urban lawns is a prime cause of expanding dead zones in many coastal areas. Nitrogen in the air contributes to ozone formation and acidification of lakes and streams, as well as to overfertilization of coastal waters. Several nitrogen compounds are also potent greenhouse gases, and nitrogen in drinking water can cause health problems for children. In the environment, nitrogen moves readily from farmlands and forests to streams and estuaries, shifting across solid, liquid, and gas phases, and from biologically active forms to more inert forms and back again. Thus, any nitrogen release can result in multiple effects in sometimes quite-distant locations.

Controlling nitrogen pollution involves public and private action at the national, state, and local levels. We put air pollution controls on cars and power plants, invest in municipal sewage treatment, educate farmers and suburban residents on the risks of overfertilization, and design greenway strategies to cleanse runoff. Understanding how nitrogen moves through the environment is crucial to designing these controls effectively.

The delivery of nitrogen to streams and rivers, and thus to coastal waters, is highly variable by region, with very high levels originating in the upper Midwest and Northeast, and much less from other areas. However, data on nitrogen delivery from streams to coastal waters are not available in a consistent form for more than half the country—essentially all areas not drained by the Mississippi, Susquehanna, or Columbia Rivers. This includes, for example, much of Texas and North Carolina, where major animal feeding operations, a significant source of nitrogen releases, are located.

Moreover, nationally consistent monitoring is available only for limited areas, precluding more detailed tracking that would allow better understanding of the relationship between on-farm management strategies and nitrogen releases. Nitrogen in precipitation is not measured in coastal areas of the East, where it may contribute as much as one-third of the nitrogen delivered to estuaries such as the Chesapeake Bay.

Without such data, regulators cannot understand what inputs are contributing to the problem, which ones are being effectively addressed, and which ones remain as targets for future reduction. As a result, pollution control agencies are left without comprehensive feedback about baseline conditions and whether control strategies are effective, and thus are unable to fully account to the public for their success or failure.

Carbon storage

Carbon is another element that plays a critical role in ecosystems but, in excess, is now wreaking havoc in the atmosphere. Carbon dioxide and methane (a carbon compound) are the major contributors to global warming, but carbon is also vital to ensuring the productive capacity of ecosystems, including the ability to provide services such as soil fertility, water storage, and resistance to soil erosion.

Carbon dioxide in the atmosphere has increased by more than 30% as compared with preindustrial concentrations, and methane concentrations have increased by more than 150%. Moreover, the data show that, so far, efforts to reverse these increases have been overwhelmed. It is possible to help offset carbon emissions through measures designed to increase the carbon stored in plants, soils, and sediments, where it does not contribute to the greenhouse effect.
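As a rough check on what those percentages mean in concentration terms, the arithmetic below uses approximate values that are widely cited but are not given in this article (about 280 parts per million of carbon dioxide and about 700 parts per billion of methane before industrialization, versus roughly 385 ppm and 1,800 ppb in recent years):

\[
\frac{385 - 280}{280} \approx 0.38 \quad (\text{about a 38\% rise in CO}_2), \qquad
\frac{1800 - 700}{700} \approx 1.57 \quad (\text{about a 157\% rise in CH}_4).
\]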

Different ecosystem types store carbon differently. For example, forests store more carbon than many other ecosystems and store more of it above ground (in trees) than do grasslands. Data-gathering by the U.S. Forest Service’s Forest Inventory and Analysis program documented an average gain of nearly 150 million metric tons of forest carbon per year in recent years, whereas cropland and grassland soil data show more modest carbon increases. We do not yet have comprehensive data on changes in carbon storage in all U.S. ecosystems and so cannot quantify the total contribution of ecosystems to offsetting the approximately two billion tons of carbon dioxide released in the United States each year. Changing carbon levels are not yet comprehensively monitored in wetlands and peat lands, urban and suburban areas, and aquatic systems. There are also gaps in national-scale data for carbon in forest soils and aboveground carbon in croplands, grasslands, and shrublands.

As we expand our ability to track carbon in the landscape, we will increasingly be able to quantify how different land management practices help or hinder carbon sequestration by ecosystems and to project future changes and set priorities more accurately. Baseline measurements and routine monitoring are also important in determining how changing temperature and moisture conditions as well as disturbances such as invasive weeds, wildfires, and pest outbreaks affect carbon storage. As we expand and improve our carbon-monitoring capability, managers will be able to answer critical questions such as how rising temperatures are affecting northern peat lands and whether invasive weeds and wildfires are causing U.S. rangelands to lose carbon rather than store it.

As policymakers, land managers, and entrepreneurs push the frontiers of biofuel production and develop new institutions such as carbon-offset markets, greater investments will be needed to produce reliable sources of information about changes in carbon storage at relevant geographic scales. The technology exists or is being developed to gather data more rapidly, more efficiently, and at lower cost. Global agreements on mechanisms for including terrestrial carbon storage in the climate change solution can spur additional investment to refine technologies and implement monitoring systems. What is needed is a commitment to providing the necessary information and a strategic view of what data are needed and how they should be gathered and shared.

Tracking wildlife population trends

Most Americans would agree that fish and wildlife are an important part of the nation’s heritage. Each year, millions of Americans spend time hunting, fishing, or just enjoying wildlife for its intrinsic worth and beauty. Native species provide products, including food, fiber, and genetic materials, and are central components of ecosystems, determining their community structure, biomass, and ecological function. From bees that pollinate agricultural crops worth billions of dollars a year to oysters that filter coastal waters, wildlife provides a variety of services of direct benefit to humans.

During past decades, wildlife management often focused on huntable and fishable species. More recently, concern about loss of species and habitat has created a broader agenda that includes reducing the danger of extinction of other species and managing habitat to support several goals.

Simply knowing how many species are at risk of extinction is a crucial starting point. State-based Natural Heritage scientists consider how many individuals and populations exist, how large an area the species occupies (and when known, whether these numbers are decreasing or not), and any known threats. The data are compiled at a national scale by NatureServe, a nonprofit organization that also establishes standards for collecting and managing data to ensure that they are updated frequently enough to identify real trends. However, differential funding and sampling frequencies among the states have led to mixed data quality.

Information about extinction risk provides a crucial early warning to identify species in need of attention. In many cases, however, such status information is not backed up with information on how populations have changed over time, making it difficult to determine whether a population’s increased risk levels are due to a historical decline, a recent decline, or natural rarity—scenarios that can require quite different management responses. In 2006, NatureServe reported that information on short-term population trends was available for only about half of the vertebrate species at risk of extinction and only a quarter of invertebrates. The Breeding Bird Survey, managed by the USGS, has proven a consistent long-term source of population data, as have surveys of a number of charismatic species such as monarch butterflies. For many species, however, including many threatened species, population trend data are simply not available.

Our society spends significant amounts to conserve wildlife. In addition, land use and other activities can be disrupted or delayed if endangered or threatened species are present. Understanding which species are declining and which are not is crucial to maximizing the effectiveness of public spending and minimizing the effect of protections on private actions. Many recent conservation challenges have involved species not limited to small regions. As we have noted, no single state or federal agency can address the challenges facing these species alone, and consistent range-wide information is the lingua franca on which collaborative plans can be built.

Species-status information is only one of the keys to good wildlife management. Tracking phenomena such as unusual deaths and deformities provides a glimpse into overall ecosystem conditions. However, collection of these data is limited to certain species, such as marine mammals, while in other cases changes in reporting procedures make data impossible to compare.

In recent years, scientists have become increasingly aware of the threats to ecosystems from invasive species. Weeds cause crop losses, aquatic invasives clog channels and water intake pipes, and plants must be killed or animals trapped when they interfere with native species. Despite these effects, and the fact that federal spending on control and related programs exceeded $1 billion in 2006, few standardized data on invasive species exist, making a broad assessment of the threat and the effectiveness of society’s response difficult. The only group for which data are available at a national scale is fish, and even in this case the data are limited.

Managing the nation’s environment involves keeping track of many more components than nitrogen, carbon, and wildlife. These three central management challenges, however, illustrate the degree to which information limitations constrain society’s ability to understand what issues must be faced, devise interventions to address these issues, and evaluate whether those interventions work. Although the challenge is clear and urgent, and there are some promising signs of increased collaboration and information sharing, more is needed.

Building a coherent system

As the planet warms, we have begun to experience a variety of changes in ecosystems, the first signs of the environment’s own potentially bumpy road ahead. To deal with the changes, policymakers need objective, detailed, big-picture data: the type of data that decisionmakers have long relied on to understand emerging economic trends. Yet, as noted above, data gaps still abound, obscuring our understanding of the condition and use of the nation’s ecosystems. In The State of the Nation’s Ecosystems 2008, only a third of the indicators could be reported with all of the needed data, another third had only partial data, and the remaining 40 indicators were left blank, largely because there were not enough data to present a big-picture view.

No responsible corporation would manage an asset as valuable and complex as the ecosystems of the United States without a better stream of information than can currently be delivered. We certainly do not wish to throw rocks at the dedicated professionals who manage environmental monitoring programs. Unfortunately, however, their work has been accorded low priority when it comes to setting environmental budgets, and independence, rather than collaboration, has been the primary strategy for managing these programs.

Dealing with the type of gaps we have discussed will require additional investment plus a serious commitment to harnessing the resources of existing environmental monitoring programs into a coherent whole. Identifying a small suite of environmental features that need to be tracked, identifying overlapping and incomplete coverage between programs, and establishing standard methods that can allow different programs to contribute to a larger whole are the kinds of steps that a nation truly committed both to the power of information and the value of our environment would take.

Congress should consider establishing a framework by which federal, state, nongovernmental, private, and other interests can jointly decide what information the nation really needs at different geographic scales, identify what pieces already exist, and decide what new activities are needed. This might be part of upcoming climate change legislation (which might also provide a funding source), but the imperative of improving the information system should not necessarily wait for this complex legislation to pass. The Obama administration has the opportunity to build on more than 10 years of experience in identifying environmental indicators and devising ways to integrate them more effectively. Federal and state agencies can radically increase the degree to which information consistency across related programs is treated as a priority. Nascent efforts such as the National Ecological Status and Trends (NEST) effort, begun in the waning days of the Bush administration, should be energized, expanded, and formalized. (This effort is beginning work on what may eventually become a formal system of national environmental indicators.) Oversight entities such as the Office of Management and Budget and congressional appropriators and authorizers can demand answers to questions about why multiple data collection programs exist, who they are serving, and why they cannot be harmonized to meet the larger-scale needs of the 21st century. They can also pay serious attention to requests for funds to support a larger and more integrated system. For example, it might be appropriate to consider one-time infusions of funds to ensure the consistency of state water-quality monitoring, something states are inadequately funded to do and have never been expected to do.

Building such a system is not a federal-only affair but rather should be governed as a collaborative venture among data users and producers to help ensure utility and practicality. Such a system would help distinguish between truly important needs and ones that may serve only minor interests, eliminate duplicative monitoring efforts, and provide incentives for more coordinated monitoring, including increased cooperation between states and federal agencies. Perhaps most important, such a system could ensure continued, consistent, high-quality, nonpartisan reporting, so that decisionmakers from a variety of sectors can rely on the same information as they forge ahead.

A Reverse Brain Drain

Although most of the national immigration debate originates with those who want to limit immigration, U.S. policymakers should be focusing on the more important task of attracting and keeping more highly skilled foreign-born scientists and engineers. The future strength of the nation’s economy will depend on the creation of vibrant new companies and on the development of innovative products and services by well-paid workers. In recent years, immigrants have been playing a rapidly expanding role as high-tech entrepreneurs and inventors, providing an essential service to the country.

The danger is that the United States is taking this immigrant contribution for granted at a time when changes in the global economy are providing alternative career opportunities for the most talented people. In the past, the United States was clearly the best place for the most talented scientists and engineers to work, and there was no need to do anything special to attract them. Those days are gone, and the United States must begin paying more attention to what is necessary to attract foreign talent and taking steps to eliminate barriers to immigration.

Even as the immigrant contribution to U.S. high technology grew steadily from 2000 to 2008, anecdotal evidence of a countertrend began to surface in the popular media and in professional electronic networks: immigrants with technology and science skills were becoming more likely to leave the United States. Encouraged by the development of high-technology industries in their home countries and by the prospects for rapid economic expansion, they began to see their homelands as places of equal if not greater promise.

When immigrants recognized that they could pursue their career objectives outside the United States, they were able to consider other factors such as closeness to relatives, cultural appeal, and quality of life when deciding where to work. They were also able to think more about the U.S. immigration policies that keep over 500,000 highly skilled immigrant workers in limbo for years with little opportunity to advance or change jobs. With the current economic crisis darkening job prospects and evidence of growing U.S. xenophobia, it is no surprise that many immigrants who came to the United States for school and short-term jobs are heading home. President Obama even signed an economic stimulus law that includes a provision that makes it harder for some companies to hire non-U.S. citizens.

During the closing decades of the 20th century, roughly 80% of the Chinese and Indians who earned U.S. Ph.D.s in science, technology, engineering, and mathematics (STEM) fields stayed in the United States and provided a critical boost to the nation’s economy. Perversely, now that China and India are becoming formidable economic competitors, the United States seems inclined to enhance their economic productivity by supplying them with an army of U.S.-trained scientists and engineers. These returnees are spurring a technology boom in their home countries, expanding their capacity to provide outsourcing services for U.S. companies, and adding increasingly sophisticated primary R&D capability in knowledge industries such as aerospace, medical devices, pharmaceutical research, and software design.

One obvious sign of U.S. complacency about these developments is the absence of data to confirm the anecdotal evidence. In spite of all the controversy surrounding immigration policy, the government has not bothered to determine how much immigrants contribute to the economy or to assess the likelihood and consequences of a major shift in their desire to work in the United States in the future. To fill this gap, the Global Engineering and Entrepreneurship project at Duke University has attempted to quantify the role of immigrants in founding entrepreneurial companies and developing new technologies, to understand how federal policies affect decisions about working in the United States, and to assess the competing opportunities in India and China and the other factors that influence life decisions.

With financial support from the Kauffman Foundation, a research team including Gary Gereffi of Duke University, AnnaLee Saxenian of the University of California, Berkeley, Richard Freeman of Harvard University, Ben Rissing of the Massachusetts Institute of Technology, and Guillermina Jasso of New York University spent three years surveying thousands of technology and engineering startup companies, interviewing hundreds of company founders, polling more than 1,000 foreign students and more than 1,000 returnees, and making several trips to India and China to understand the on-the-ground realities in those countries.

U.S. immigration policy has been made in an information void. Although our research is far from conclusive, we believe it is fair to say that current immigration policy significantly undervalues the contributions these skilled immigrants make to high-growth segments of the U.S. economy. It appears that immigrants spur innovation in the United States and even help foment innovation by non-immigrants. At present, U.S. technological preeminence is not in question. Furthermore, some degree of intellectual dispersion and circulation is inevitable and valuable to the global economy. The United States cannot expect to maintain its previously overwhelming technological superiority in an increasingly globalized economy. But it takes a rare blend of short-sightedness and hubris to fail to investigate trends in the movement of global talent or to reconsider immigration policies that are not only economically counterproductive but also potentially damaging to U.S. national security.

Innovators with accents

AnnaLee Saxenian’s 1999 report Silicon Valley’s New Immigrant Entrepreneurs was the first comprehensive assessment of the critical role that immigrant capital and labor were playing in Silicon Valley’s regional economy. She found Chinese and Indian engineers at the helm of 24% of the Silicon Valley technology businesses started from 1980 to 1998. Even those scientists and engineers who returned to their home countries were spurring technological innovation and economic expansion for California by seeding development of companies in the Golden State.

We updated and expanded her research with a nationwide survey of engineering and technology firms founded between 1995 and 2005. By polling 2,054 companies selected randomly from the Dun & Bradstreet Million Dollar database, we found that in one of four companies, the chief executive officer or chief technologist was foreign-born. A regional survey found that immigrant entrepreneurs were prominent in New York; Chicago; San Diego; Boston; Washington, DC; Austin; Seattle; Denver; and elsewhere. We estimated that in 2005, immigrant-founded tech companies generated $52 billion in revenue and employed 450,000 workers. In some industries and regions, immigrants played a particularly critical role (see Figure 1). In the semiconductor sector, immigrants founded 35% of startups. In Silicon Valley, the proportion of startups that were immigrant-founded had increased to 52%. Immigrants from India founded 26% of the immigrant-founded startups, more than the next four groups—those from Britain, China, Taiwan, and Japan—combined.

Interviews with 144 company founders selected randomly from the original data set of immigrant-founded companies revealed that 3 of 4 had graduate degrees, primarily in STEM fields. The vast majority of these company founders didn’t come to the United States as entrepreneurs: 52% came to study, 40% came to work, and 5.5% came for family reasons. Only 1.6% came with the intention of starting a company.

Founding a company is not the only way to contribute to the economy. We also examined the World Intellectual Property Organization (WIPO) Patent Cooperation Treaty (PCT) records for the international patent filings by U.S.-resident inventors (see Figure 2). We determined that in 2006, foreign nationals residing in the United States were named as inventors or co-inventors in one quarter of WIPO patent applications filed from the United States, a stunning increase from the 7.6% of applications filed in 1998. In some cases, foreign nationals working at U.S. corporations contributed to a significant majority of such patent applications for these companies. For example, immigrant patent filings represented 72% of the total at Qualcomm, 65% at Merck, 64% at General Electric, and 60% at Cisco Systems. More than 40% of the international patent applications filed by the U.S. government had foreign-national authors. And these numbers do not even include immigrants who had become citizens at the time of filing. Our manual inspection of patents found a healthy representation of Chinese and Indian names. Even though each group accounts for less than 1% of the U.S. population, 17% of patents included a Chinese name and 14% an Indian name. Clearly, immigrants are contributing significantly to U.S. intellectual property, a key ingredient for the country’s economic success.

Immigration woes

Surprised by the more than threefold growth in foreign-national patent filings in eight years, we sought to find out whether this was the result of a surge in the number of highly skilled immigrants. When we discovered that neither the U.S. State Department nor the Citizenship and Immigration Services (USCIS) collected these data, we developed a methodology to estimate the population of skilled immigrants.

Foreign nationals who file U.S. international patents include persons who acquire legal permanent residence (LPR) on family or diversity visas, as well as persons with temporary visas such as:

  • H-1B temporary work visas, for specialty occupations requiring at least a bachelor’s degree or its equivalent;
  • L-1 visas, for intracompany transferees (foreign nationals employed by a company that has offices in the United States and abroad);
  • F-1 visas, to study or to conduct research at an accredited U.S. college or university.

Students on F-1 visas are allowed to work in the United States in occupations related to their fields of study for up to 29 months. After this, they must obtain an H-1B visa, which is valid for up to six years. To stay permanently, skilled workers need to obtain an LPR visa, which is granted only to those who have an offer of permanent employment from a U.S.-based firm. The elaborate and time-consuming approval process entails a number of steps:

  1. The employer files a labor certification request with the Department of Labor.

  2. Once the labor certification is approved, the employer files a Petition for Alien Worker (Form I-140) with the USCIS. The petition must demonstrate that the company is in a good financial position and capable of paying the salary advertised for the job.

  3. Once the I-140 is approved, the employee must wait for the State Department to provide a visa number, which indicates that an immigrant visa is available for the applicant.

  4. The employee must file for adjustment of status (I-485) for himself or herself and for family members.

We estimated that as of October 1, 2006, there were 200,000 employment-based principals waiting for labor certification. The number of pending I-140 applications stood at 50,232, more than seven times the number in 1996. The number of employment-based principals with approved I-140 applications and unfiled or pending I-485s stood at 309,823, an almost threefold increase from a decade earlier. Overall, we estimated there were 500,040 skilled workers waiting for LPR in the United States. The number including family members was 1,055,084.

The reason for the increasing backlog is that only around 120,000 visas are available each year in the key visa categories for skilled workers. Additionally, no more than 7% of the visas can be allocated to immigrants from any one country. Thus, immigrants from large countries such as India and China have the same number of visas available (8,400) as those from much smaller countries such as Iceland and Costa Rica. No one should be surprised that this long and growing queue generates anxiety and frustration among immigrants. We can easily imagine that this predicament will lead to a sizeable reverse migration of skilled workers to their home countries or to other countries, such as Canada, that welcome these workers.
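
The arithmetic behind that 8,400-per-country figure, and a rough sense of how slowly a single country’s queue would clear under it, can be sketched in a few lines. The Python below is purely illustrative: it uses only the roughly 120,000 annual visas and the 7% ceiling cited above, and the 100,000-person single-country queue is an invented number for illustration, not an estimate from our data.

    # A simplified illustration using the figures cited above; not a projection.
    annual_employment_visas = 120_000   # approximate yearly supply in the key skilled-worker categories
    per_country_ceiling = 0.07          # no single country may receive more than 7% of these visas

    per_country_visas = annual_employment_visas * per_country_ceiling
    print(f"Visas available per country per year: {per_country_visas:,.0f}")   # 8,400, as stated in the text

    # Hypothetical example: a large country with 100,000 principals already waiting
    # (an invented number) would need roughly 12 years to clear its queue at 8,400
    # visas per year, even before counting new applicants or family members.
    hypothetical_queue = 100_000
    print(f"Years to clear a 100,000-person queue: {hypothetical_queue / per_country_visas:.0f}")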

New geography of innovation

An inhospitable immigration policy environment in the United States would not be enough by itself to discourage a large number of high-skill workers. They would also need to have alternative venues for challenging and rewarding work. We therefore decided to visit a cross-section of companies that would employ skilled workers in India and China. In particular, we wanted to learn more about how technology companies in these countries were progressing up the value chain from low and medium value-added information technology services to significantly higher-value services in core R&D, product development and design, and the creation of patents and intellectual property. We met with senior executives of more than 100 local companies and multinationals operating in these countries, toured their R&D labs, and interviewed employees. Although the information we collected is obviously anecdotal, it nevertheless is noteworthy and deserving of further exploration.

We learned that India is rapidly becoming a global hub for R&D outsourcing and is doing so, in part, by leveraging the knowledge and skills of returnees. In the pharmaceutical sector, a number of Indian companies, including Aurigene, Dr. Reddy’s, and Ranbaxy, have significant product development or basic research contracts with major multinational drug companies. These three Indian companies are also recruiting top scientists from the United States for their R&D teams. Dr. Reddy’s hired approximately 100 returnee scientists in 2006 alone. We also found evidence of startup activity in the pharmaceutical industry, with Indian startups relying on research or executive teams with experience working for major U.S. drug companies. For example, Advinus Therapeutics, an early-stage drug discovery company based in Bangalore, was founded by India-born former employees of Bristol-Myers Squibb.

Technology outsourcing companies such as India’s HCL and TCS are no longer performing only system administration tasks. They are also moving into product design and core R&D in a number of areas, including semiconductor design and aerospace. For example, HCL and TCS teams are designing the interiors of luxury jets, in-flight entertainment systems, collision-control/navigation-control systems, and other key components of jetliners for U.S. and European corporations. These technology companies are also hiring U.S.-educated engineers; HCL alone hired 350 of them between 2000 and 2006. IBM, Cisco, Microsoft, and many other leading U.S. technology companies maintain sizeable operations in India. These facilities are directly competing with the United States for talent and have been successful in attracting top-notch professionals who have been trained or educated in the United States. In IBM India’s advanced research labs, half of the Ph.D.s are returnees from the United States. In General Electric’s Jack Welch Technology Center in Bangalore, where some of the company’s most advanced technologies are designed, 34% of the R&D staff are returnees.

The Chinese situation is somewhat different. China is already the world’s biggest exporter of computers, telecommunications equipment, and other high-tech electronics. Multinationals and government-backed companies are pouring billions of dollars into next-generation plants to turn China into an export power in semiconductors, passenger cars, and specialty chemicals. China is lavishly subsidizing state-of-the-art labs in biochemistry, nanotech materials, computing, and aerospace. Despite these efforts, we found that China was far behind India in the size and scope of R&D outsourcing. Rather, multinationals were using Chinese workers to perform significant customization of their technologies and to develop new products for the Chinese market.

In all of the companies we visited in China, returnees from the United States were performing the most sophisticated R&D. Returnees were usually in senior-level management and R&D positions in engineering, technology, and biotech companies. China appears to be in desperate need of Western-educated R&D and management talent and is offering substantial incentives for returnees with these skills.

Our interviews with executives and human resource managers in both countries revealed that the number of résumés they receive from the United States has increased as much as 10-fold during the past few years. Indian companies have so many applicants that they often no longer find it necessary to offer salary premiums. In China, returnees still receive wages substantially higher than local averages.

We made several attempts to quantify the reverse migration of skilled workers to India and China and to determine what factors motivated workers to return home, but the United States does not collect such information. We therefore carried out a large survey of returnees to India and China.

Repatriates

We used the LinkedIn network of professionals to identify 1,203 highly skilled Indian and Chinese workers who had worked or received education in the United States and subsequently returned to their home country. The survey was conducted over a period of six months in 2008. Although our method of identifying returnees did not produce a rigorously scientific sample, we consider it at least illustrative, and the fact that we obtained a 90% response rate adds credibility to our results. Though our findings may not generalize to all highly educated returnees, they are representative of a critically important group of young professionals who are sufficiently savvy to be part of LinkedIn.

The average age of the respondents was in the low 30s, and more than 85% had advanced degrees. Among the strongest factors bringing these immigrants to the United States initially were professional and educational development opportunities.

To our surprise, visa status was not the most important factor determining their decision to return home. Three of four indicated that considerations regarding their visa or residency permit status did not contribute to their decision to return to their home country. In fact, 27% of Indian respondents and 34% of Chinese held permanent resident status or were U.S. citizens. For this highly select group of returnees, career opportunities and quality-of-life concerns were the main reasons for returning home.

Family considerations are also strong magnets pulling immigrants back to their home countries. The ability to better care for aging parents and the desire to be closer to friends and family were strong incentives for returning home. Indians in particular perceived the social situation in their home country to be significantly superior.

The move home also appeared to be something of a career catalyst. Respondents reported that they had moved up the organization chart by returning home. Only 10% of the Indian returnees held senior management positions in the United States, but 44% found jobs at this level in India. Chinese returnees went from 9% in senior management in the United States to 36% in China. Opportunities for professional advancement were considered to be better at home than in the United States for 61% of Indians and 70% of Chinese. These groups also felt that opportunities to launch their own business were significantly better in their home countries.

Restoring U.S. appeal

One of the reasons to survey those who left the United States was to understand what they liked and disliked about the country so that we might be able to convince them to return. We found areas in which the United States enjoyed an obvious advantage. One was gross salary and compensation: 54% of Indian and 43% of Chinese respondents indicated that total salary and compensation in their previous U.S. positions were better than at home. U.S. health care benefits were also considered somewhat better by a majority of Chinese respondents.

One somewhat surprising U.S. advantage was the ease of settling into the culture. Only 17% of Chinese respondents and 13% of Indian respondents found it difficult initially to settle in the United States, whereas 34% of Indians and 35% of Chinese reported difficulty settling in when they returned home. Significant numbers cited difficulties encountered by their family members. Indians complained about traffic and congestion, lack of infrastructure, excessive bureaucracy, and pollution. Chinese complained about pollution, reverse culture shock, inferior education for children, frustration with government bureaucracy, and health care quality. Because neither China nor India can improve these conditions significantly in the near term, the United States should advertise these relative advantages to foreign-born students and temporary workers.

One encouraging characteristic of the returnees is their mobility; one in four say that they are likely to return to the United States in the future. Offering those who have left the United States better career opportunities and permanent resident status could entice a significant percentage to return. When asked how they would respond to the offer of a suitable U.S. job and a permanent resident visa, 23% of Indians and 17% of Chinese said they would return to the United States, and an additional 40% of Indians and 54% of Chinese said they would consider the offer seriously. Considering that this group of people had already made the decision to make the difficult move back home, the number willing to consider returning to the United States indicates that U.S. immigration reforms could have a significant effect.

Scanning the horizon

Having surveyed young professionals, we turned our attention to the next generation: foreign nationals currently enrolled in U.S. universities. We were curious about how they viewed the United States, how they viewed their home countries, and where they planned to work after they graduated. This cohort is of critical importance to U.S. prospects. During the 2004–2005 academic year, roughly 60% of engineering Ph.D. students and 40% of Master’s students were foreign nationals, and foreign nationals make up a significant share of the U.S. graduate student population in all STEM disciplines. In the past, the overwhelming majority of these students worked in the United States after graduation. The five-year stay rate for Chinese Ph.D.s was 92% and for Indians 85%. Our survey found evidence that this could change dramatically, with serious consequences for the United States.

We used the social networking site Facebook to find 1,224 foreign nationals who are currently studying at U.S. universities or who graduated in 2008. The respondents included 229 students from China and Hong Kong, 117 from Western Europe, and 878 from India. Again, this is not a rigorously scientific sample, but the group is large and random enough to make the results worth considering.

The overall consensus among respondents was that the United States was no longer the destination of choice for professional careers. We learned that most students in our sample want to stay, but not for very long. An encouraging 58% of Indian, 54% of Chinese, and 40% of European students said that they would stay in the United States for at least a few years after graduation if given the chance, but only 6% of Indian, 10% of Chinese, and 15% of European students said they want to stay permanently. The largest group of respondents—55% of Indian, 40% of Chinese, and 30% of European students—wants to return home within five years. More than three-fourths of these students express concern about obtaining work visas, and close to that number worry that they will not be able to find U.S. jobs in their field. With such a dim view of their legal and work prospects, it is no wonder that so few see themselves settling permanently in the United States.

Their assessment of their individual opportunities is reinforced by their view of the overall U.S. economy. Our survey found that only 7% of Chinese students, 9% of European students, and 25% of Indian students believe that the best days of the U.S. economy lie ahead. Conversely, 74% of Chinese students and 86% of Indian students believe that the best days for their home country’s economy lie ahead.

Our survey results led us to some alarming conclusions. The United States is in danger of losing large numbers of talented foreign-national students, particularly those from China and India. The students planning to leave cite the same reasons as the workers who have already done so: career and economic opportunities and the desire to be closer to family. When they look to the future, they see much more potential for economic growth in their home countries than in the United States. The pull of growing opportunities at home is reinforced by the push of immigration and visa rules that make the students uncertain about their ability to work in the United States.

Our study was not longitudinal, so all we can do is compare the stated intentions of current foreign-national students with the paths actually followed by their predecessors. If we accept the results at face value, the United States is facing a potentially disastrous exodus of young scientists and engineers who are likely to be among the world’s most productive inventors. Even if the loss of foreign nationals is only half as bad as our surveys indicate, there will still be serious economic consequences.

Turning the tide

Our research points to critical insights about U.S. history and its future. Looking back, we see that foreign nationals have played a much larger role in founding new companies and developing new technologies than was generally understood. Looking forward and having come to appreciate the importance of foreign nationals to U.S. economic strength, we now face the likelihood that a large number of these highly productive individuals will be leaving the United States and using their skills to enrich the economies of other countries, particularly the emerging global powerhouses of China and India. The United States is no longer the only place where talented people can put their skills to work. It can no longer expect them to endure the indignities and inefficiencies of an indifferent immigration system, and it must now actively compete to attract these people with good jobs, security, and other amenities.

The trends we have detected cannot be ignored. As an essential first step, the United States should fund research to confirm or overturn our findings. We acknowledge that our analysis is far from definitive. We need to know much more about the sources of U.S. economic success, the capability and potential of the Indian and Chinese tech sectors, the plans and aspirations of the many foreign nationals studying in U.S. universities, and the effect that immigration policies and procedures are having on the desirability of life in the United States. Perhaps the problem is not as grave as our studies indicate or the causes are different from what we found, but it seems highly unlikely that further research will fail to confirm the general tenor of our findings.

In the meantime, the United States can take several low-cost and low-risk actions that could have a salutary effect on the attitudes of foreign-national students and workers toward staying in the United States. To start with, it can increase the numbers of permanent resident visas available for skilled professionals and accelerate visa processing times. This will encourage the million who are already here to begin to lay down deeper roots. They may decide to start companies, buy houses, and accept the United States as their permanent home. Additionally, because concern about their extended families seems to be an important reason for returning home, the United States could also make it easier for highly valuable workers to bring their parents to the United States.

Human talent, especially in science and engineering, is becoming ever more essential to national well-being, and no country has benefited more than the United States from the influx of talent from other countries. The competition for that talent is clearly intensifying. One can hardly imagine what combination of arrogance, short-sightedness, and plain foolishness would be necessary to convince U.S. policymakers to ignore this emerging reality. At the very least, the United States should remove any barriers to talented foreign nationals who want to work in the United States. And if our findings are accurate, the nation should be taking many steps to attract top talent and to keep it.

Global Warming: The Hard Road Ahead

With a president committed to fighting climate change and a new Congress inclined to go along, the prospects for greenhouse gas emissions abatement legislation are bright. That’s good news. The Bush and Clinton administrations’ intransigence on this issue set back U.S. action by at least a decade. But it should not obscure the reality that one obstacle to a successful effort in slowing global warming—the cooperation of the rapidly developing economies—is truly daunting. Indeed, failure to acknowledge the difficulty of herding this particular pride of cats in the right direction could cost us another lost decade.

Although there is hardly a consensus about the content of the coming legislation, a market-based system that distributes carbon emissions rights among stakeholders and encourages them to minimize costs by freely trading the permits will probably be at the core of it. The biggest unknown is whether the legislation will tie U.S. containment efforts to those of other countries and whether it will include measures to encourage their cooperation.

What economists call the free rider problem is hardly a new issue here. China recently surpassed the United States as the world’s largest emitter of carbon dioxide, and relatively poor but rapidly expanding economies (think Brazil, India, Indonesia, and Russia as well as China) loom large in projections of emissions growth. Indeed, it is fair to say that any climate initiative that doesn’t engage the developing economies will, at best, deliver very little bang for a buck. Yale University economist William Nordhaus recently estimated that if emissions from half of the global economy remained uncontrolled, abatement costs would be more than twice as high as necessary. More likely, an international containment effort that failed to induce the major emerging economies to join would collapse in mutual recrimination. That fear (plus a friendly nudge from energy and automobile lobbies) explains why Washington refused to pledge support for the Kyoto accord in 2001.

Congress could encourage the emerging economic giants to get with the program in a variety of ways. It could offer government cash in return for foreigners’ agreement to emissions caps. Or it could pay subsidies for specific green commitments, anything from setting emissions standards for heavily polluting industries such as cement and steel to stopping construction of coal-fired power plants. Or the legislation could let U.S. companies meet their abatement obligations through carbon-sparing initiatives in poor countries, an idea embodied in the Kyoto agreement’s Clean Development Mechanism.

Of course, the United States could also play hardball, penalizing trade partners who refused to contain emissions. One way would be to bar imports of products made in ways that offend. Another way, and one more likely to meet U.S. treaty obligations, would be to offset the cost advantages of using polluting technology with green tariffs, much the way the country imposes “countervailing” duties on imports from countries that subsidize the production of exports.

Carrots, unfortunately, are problematic. Positive financial incentives would be hard to monitor: If no one can explain what happened to the trillions of dollars in aid sent to poor countries in the past half-century, why would it be any better this time around? If a sizable chunk of the cash dispensed over the years to build infrastructure has ended up in Swiss bank accounts or superhighways to nowhere, why is there reason to be optimistic that accounting for “green” aid would be any better?

Equally to the point, emissions are fungible, and the measurement of reductions is subject to manipulation: How would we know whether saving the rainforest in one part of a country didn’t lead to accelerated logging in another? How would we know whether the closing of an old coal-fired power plant in China or India would not have taken place even without a cash inducement from abroad?

Note, by the way, that the difficulties of measuring emissions reductions would likely be complicated by the interests of those doing the measuring. Under the Clean Development Mechanism, industries in Kyoto treaty countries can earn credits against their own abatement obligations by financing emissions reductions in poor countries, say by planting trees in Chad or replacing inefficient irrigation pumps in Pakistan. But once the money is spent and credits awarded, the sponsoring industries have little incentive to monitor their progress. Indeed, if this approach is to work on a scale large enough to make a real difference, it will take an enormous commitment on the part of governments to set the rules and make sure they are followed.

But the proverbial sticks would come with their own problems. The gradual opening of international trade has been critical to bringing billions of people out of poverty. Dare we risk losing the fruits of freer trade by encouraging the development of powerful political alliances between environmentalists and domestic industries happy to shut out foreign competitors in the name of saving the planet?

Past experience provides very good reasons to be pessimistic about the capacity of Washington (or any other Western government) to fairly judge compliance on the part of exporting countries. The Commerce Department’s administration of policies designed to enforce U.S. “fair trade” laws has long been a disgrace, a charade in which domestic industries facing foreign competition have effectively called the shots.

All this suggests that the task of limiting climate change will take longer and be more difficult in both political and economic terms than is generally understood. But that’s not a reason to keep fiddling while Gaia burns. Rather, it should change expectations of what U.S. involvement will be able to achieve in the short run, and in the process strengthen national resolve to set the pace in the long run.

First, symbols matter: A key goal of legislation should be to restore U.S. credibility as the leader in organizing carbon abatement efforts. “Do as I say, not as I do” isn’t cutting any ice with China or India, both of which have ample grounds for rationalizing their own reluctance to pay attention to emissions before they grow their way out of poverty. A good-faith effort at reducing emissions at home, in contrast, just might.

Second, the fact that inducing cooperation from developing countries is sure to be difficult is a poor excuse for not trying. For starters, the United States should certainly underwrite R&D aimed at making carbon abatement cheaper, which is bound to be a part of the new legislation in any case, and then subsidize the technology to all comers. The United States should also experiment with cash subsidies to induce targeted change (rainforest preservation in Brazil and Indonesia is probably the most promising), provided it is understood that much of the money could be wasted. By the same token, U.S. business should be given the chance to learn by doing in purchasing emissions offsets in other countries.

Third, policymakers should keep an open mind about what works and what doesn’t. That certainly includes the use of nuclear energy as an alternative to fossil fuels in slaking developing countries’ voracious appetites for energy. And it may include “geoengineering,” using as yet untested means for modifying Earth’s climate system to offset what now seem almost inevitable increases in atmospheric warming.

The world’s largest, richest economy, which is also the world’s second-largest carbon emitter, can’t afford to stay aloof from efforts to limit climate change. And, happily, it probably won’t. Once it does enter the fray, however, it can’t afford to allow the best of intentions to be lost to impatience or indignation over the frailty of international cooperation.

The Challenge for the Obama Administration Science Team

President Obama’s choices for top government science positions have made a strong statement about the importance of science and technology (S&T) in our society. In choosing Nobel Prize–winning physicist Steven Chu for Secretary of Energy, marine biologist Jane Lubchenco to run the National Oceanic and Atmospheric Administration (NOAA), and physicist and energy and arms control expert John Holdren to be his science advisor, Obama has assembled a team with not only impeccable technical credentials but considerable policy and administrative savvy as well.

Yet the ability of science policy leaders to contribute to the nation will not depend on technical expertise, or even effective advocacy on behalf of S&T in the new administration. Far more important will be the team’s capacity to ensure that our scientific enterprise improves our environment, enhances our energy security, prepares us for global health risks, and—perhaps most important—brings new insights to the complex challenges associated with maintaining and improving the quality of life across this crowded planet.

President Obama was elected on the promise of change, and in science policy, effective change means, above all, breaching the firewall between science and policy that compromises the nation’s ability to turn new knowledge into social benefit. Failure to acknowledge the critical interactions between science and policy has contributed to a scientific enterprise whose capacity to generate knowledge is matched by its inability to make that knowledge useful or usable. Consider, as but one example, that scientists have been able to deliver skillful predictions of the paths and effects of hurricanes while having virtually no impact on the nation’s hurricane preparedness, as we saw in 2005 when Hurricane Katrina forever changed our perceptions of extreme weather events. Or that 15 years and $30 billion of research on the climate system are matched by no discernible progress in preparing for or preventing climate change. Or that our marvelous biomedical research capacity, funded at $30 billion per year, is matched by a health care system whose cost, inequity, and performance rank near the bottom among affluent nations.

So even as we applaud our new national science policy leaders, we should also encourage the Obama administration to make the necessary transition from a campaign posture focused on countering political interference in science to a governing posture that connects the $150 billion U.S. public investment in S&T to our most urgent problems.

One key obstacle to strengthening this connection is a culture that values “pure” research above other types, as if some invisible hand will steer scientists’ curiosity toward socially useful inquiries. There is no such hand. We invest in the research necessary to refine hurricane forecasts, yet we neglect to develop new knowledge to support populations living in vulnerable areas. We spend 20 years refining our fundamental understanding of Earth’s climate while disinvesting in energy technology research. We spend billions each year on the molecular genetic causes of cancer while generally neglecting research on the behavior that can enhance cancer prevention. Overall, we act as if the intellectual goals of scientists are automatically and inevitably aligned with our most important goals as a society. They are not.

This is not about basic versus applied research; both are crucial, and in many cases the boundary between them is so fuzzy as to be meaningless. Rather, it is about the capacity of our research institutions to create knowledge that is as socially useful as it is scientifically meritorious, in areas as broad and complex as social justice, poverty alleviation, access to clean water, sustainable land use, and technological innovation. This challenge is therefore about institutional design; about designing knowledge-producing enterprises that understand and respond to their constituents. Any corporation that imitated our federal science effort, spewing out wonderful products without regard to consumer needs or preferences, would deservedly go bankrupt. Yet we continue to support a public scientific enterprise whose chief measures of productivity—for example, the hundreds of thousands of disciplinary peer-reviewed papers churned out each year—have little if any connection to the public values they allegedly support.

How can we steer the vast capacity of our scientific enterprise toward better meeting the goals and values that justify the confidence and investment of the public? By increasing the level and quality of interaction between our institutions of science and the diverse constituents who have a stake in the outcomes of science; by changing the ethos of research from insular to engaged, from elitist to communitarian; by giving the scientific workforce incentives to broaden the way it selects problems and defines excellence.

We do not need to start from zero. We can tap as exemplary models some promising efforts that align research with the outcomes we would most like to see. For example, the agricultural sciences have a long history of building institutions that bring scientists and users together in the service of food security, productivity, and affordability, from the extension services and experiment stations first developed in the 19th century to the distributed research centers of the Consultative Group on International Agricultural Research that helped create the Green Revolution.

We can learn from the experiences of federal agencies such as the National Institute of Standards and Technology, whose effectiveness depends on its ability to interact with and learn from its complex network of constituents, mostly in the private sector. At NOAA, several innovative (and poorly funded) programs, such as the Regional Integrated Sciences and Assessment, bring scientists together with environmental managers to craft research agendas that are relevant to the needs of decisionmakers in areas such as the management of water supplies and fisheries. A radical expansion of this participatory approach is necessary if we are to avoid endless repetitions of the Katrina debacle. In another realm, the National Nanotechnology Initiative includes a vibrant research network coordinated across 23 agencies aimed at applying social science research to signal emerging risks and help guide nanoscale research and innovation toward socially desirable outcomes. Although funded far too modestly, this effort shows that fundamental scientific research can be fully integrated with research on societal, ethical, environmental, and economic concerns from the outset, rather than assuming that the invisible hand of scientific inquiry will automatically lead to the maximal social benefit. This type of integrated approach should be implemented across all areas of frontier research.

The nation’s science policy leaders can lead the way here by tying R&D funds to institutional innovation of this sort. For example, universities, the site of much of the fundamental research sponsored by the federal government, should become much more aggressive and effective contributors to the solution of social problems. As a university president, I am only too well aware that the tenure process is still largely driven by counting grants, publications, and citations—a weak proxy for social value, and I would say even for scientific excellence. At my institution we try to encourage new modes of scientific success, but until the ability to attract federal funds is decoupled from outmoded notions of productivity and excellence, a process that must be led by the funding institutions themselves, this will be an uphill battle.

The success of President Obama’s new science team should be measured by its ability to break down the historical disconnect between science and policy. Our scientific enterprise excels at creating knowledge, but it continues to embrace the myth that new knowledge, emerging from the stubbornly disciplinary channels of today’s scientific programs, automatically and serendipitously turns into social benefit. A new administration facing a host of enormous challenges to human welfare can best unleash the power of S&T by rejecting this myth and building a government-wide knowledge-creating enterprise that strengthens the linkages between research and social need.

Biomedical Enhancements: Entering a New Era

Recently, the Food and Drug Administration (FDA) approved a drug to lengthen and darken eyelashes. Botox and other wrinkle-reducing injections have joined facelifts, tummy tucks, and vaginal reconstruction to combat the effects of aging. To gain a competitive edge, athletes use everything from steroids and blood transfusions to recombinant-DNA–manufactured hormones, Lasik surgery, and artificial atmospheres. Students supplement caffeine-containing energy drinks with Ritalin and the new alertness drug modafinil. The military spends millions of dollars every year on biological research to increase the warfighting abilities of our soldiers. Parents perform genetic tests on their children to determine whether they have a genetic predisposition to excel at explosive or endurance sports. All of these are examples of biomedical enhancements: interventions that use medical and biological technology to improve performance, appearance, or capability in addition to what is necessary to achieve, sustain, or restore health.

The use of biomedical enhancements, of course, is not new. Amphetamines were doled out to troops during World War II. Athletes at the turn of the 20th century ingested narcotics. The cognitive benefits of caffeine have been known for at least a millennium. Ancient Greek athletes swallowed herbal infusions before competitions. The Egyptians brewed a drink containing a relative of Viagra at least 1,000 years before Christ. But modern drug development and improvements in surgical technique are yielding biomedical enhancements that achieve safer, larger, and more targeted enhancement effects than their predecessors, and more extraordinary technologies are expected to emerge from ongoing discoveries in human genetics. (In addition, there are biomechanical enhancements that involve the use of computer implants and nanotechnology, which are beyond the scope of this article.)

What is also new is that biomedical enhancements have become controversial. Some commentators want to outlaw them altogether. Others are concerned about their use by athletes and children. Still others fret that only the well-off will be able to afford them, thereby exacerbating social inequality.

Banning enhancements, however, is misguided. Still, it is important to try to ensure that they are as safe and effective as possible, that vulnerable populations such as children are not forced into using them, and that they are not available only to the well-off. This will require effective government and private action.

A misguided view

Despite the long history of enhancement use, a view has recently emerged that it is wrong. The first manifestation of this hostility resulted from the use of performance enhancements in sports in the 1950s, especially steroids and amphetamines. European nations began adopting antidoping laws in the mid-1960s, and the Olympic Games began testing athletes in 1968. In 1988, Congress amended the Federal Food, Drug, and Cosmetic Act (FFDCA) to make it a felony to distribute anabolic steroids for nonmedical purposes. Two years later, Congress made steroids a Schedule III controlled substance and substituted human growth hormone for steroids in the felony provision of the FFDCA. Between 2003 and 2005, Congress held hearings lambasting professional sports for not imposing adequate testing regimens. Drug testing has also been instituted in high-school and collegiate sports.

The antipathy toward biomedical enhancements extends well beyond sports, however. Officially, at least, the National Institutes of Health (NIH) will not fund research to develop genetic technologies for human enhancement purposes, although it has funded studies in animals that the researchers tout as a step toward developing human enhancements. It is a federal crime to use steroids to increase strength even if the user is not an athlete. Human growth hormone is in a unique regulatory category in that it is a felony to prescribe it for any purpose other than a specific use approved by the FDA. (For example, the FDA has not approved it for anti-aging purposes.) There is an ongoing controversy about whether musicians, especially string players, should be allowed to use beta blockers to steady their hands. And who hasn’t heard of objections to the use of mood-altering drugs to make “normal” people happier? There’s even a campaign against caffeine.

If the critics had their way, the government would ban the use of biomedical enhancements. It might seem that this would merely entail extending the War on Drugs to a larger number of drugs. But remember that enhancements include not just drugs, but cosmetic surgery and information technologies, such as genetic testing to identify nondisease traits. So a War on Enhancements would have to extend to a broader range of technologies, and because many are delivered within the patient-physician relationship, the government would have to intrude into that relationship in significant new ways. Moreover, the FDA is likely to have approved many enhancement drugs for legitimate medical purposes, with enhancement use taking place on an “off-label” basis. So there would have to be some way for the enhancement police to identify people for whom the drugs had been legally prescribed to treat illness, but who were misusing them for enhancement purposes.

This leads to a far more profound difficulty. The War on Drugs targets only manufacture, distribution, and possession. There is virtually no effort to punish people merely for using an illegal substance. But a successful ban on biomedical enhancement would have to prevent people from obtaining benefits from enhancements that persisted after they no longer possessed the enhancements themselves, such as the muscles built with the aid of steroids or the cognitive improvement that lasts for several weeks after normal people stop taking a certain medicine that treats memory loss in Alzheimer’s patients. In short, a ban on enhancements would have to aim at use as well as possession and sale.

To imagine what this would be like, think about the campaign against doping in elite sports, where athletes must notify antidoping officials of their whereabouts at all times and are subject to unannounced, intrusive, and often indecent drug tests at any hour of the day or night. Even in the improbable event that regular citizens were willing to endure such an unprecedented loss of privacy, the economic cost of maintaining such a regime, given how widespread the use of highly effective biomedical enhancements might be, would be prohibitive.

A ban on biomedical enhancements would be not only unworkable but unjustifiable. Consider the objections to enhancement in sports. Why are enhancements against the rules? Is it because they are unsafe? Not all of them are: Anti-doping rules in sports go after many substances that pose no significant health risks, such as caffeine and Sudafed. (A Romanian gymnast forfeited her Olympic gold medal after she accidentally took a couple of Sudafed to treat a cold.) Even in the case of vilified products such as steroids, safety concerns stem largely from the fact that athletes are forced to use the drugs covertly, without medical supervision. Do enhancements give athletes an “unfair” advantage? They do so only if the enhancements are hard to obtain, so that only a few competitors obtain the edge. But the opposite seems to be true: Enhancements are everywhere. Besides, athletes are also tested for substances that have no known performance-enhancing effects, such as marijuana. Are the rewards from enhancements “unearned”? Not necessarily. Athletes still need to train hard. Indeed, the benefit from steroids comes chiefly from allowing athletes to train harder without injuring themselves. In any event, success in sports comes from factors that athletes have done nothing to deserve, such as natural talent and the good luck to have been born to encouraging parents or to avoid getting hurt. Would the use of enhancements confound recordkeeping? This doesn’t seem to have stopped the adoption of new equipment that improves performance, such as carbon-fiber vaulting poles, metal skis, and oversized tennis racquets. If one athlete used enhancements, would every athlete have to, so that the benefit would be nullified? No, there would still be the benefit of improved performance across the board—bigger lifts, faster times, higher jumps. In any case, the same thing happens whenever an advance takes place that improves performance.

The final objection to athletic enhancement, in the words of the international Olympic movement, is that it is against the “spirit of sport.” It is hard to know what this means. It certainly can’t mean that enhancements destroy an earlier idyll in which sports were enhancement-free; as we saw before, this never was the case. Nor can it stand for the proposition that a physical competition played with the aid of enhancements necessarily is not a “sport.” There are many sporting events in which the organizers do not bother to test participants, from certain types of “strong-man” and powerlifting meets to your neighborhood pickup basketball game. There are several interesting historical explanations for why athletic enhancement has gained such a bad rap, but ultimately, the objection about “the spirit of sport” boils down to the fact that some people simply don’t like the idea of athletes using enhancements. Well, not exactly. You see, many biomedical enhancements are perfectly permissible, including dietary supplements, sports psychology, carbohydrate loading, electrolyte-containing beverages, and sleeping at altitude (or in artificial environments that simulate it). Despite the labor of innumerable philosophers of sport, no one has ever come up with a rational explanation for why these things are legal and others aren’t. In the end, they are just arbitrary distinctions.

But that’s perfectly okay. Lots of rules in sports are arbitrary, like how many players are on a team or how far the boundary lines stretch. If you don’t like being all alone in the outfield, don’t play baseball. If you are bothered by midnight drug tests, don’t become an Olympian.

The problem comes when the opponents of enhancement use in sports try to impose their arbitrary dislikes on the wider world. We already have observed how intrusive and expensive this would be. Beyond that, there are strong constitutional objections to using the power of the law to enforce arbitrary rules. But most important, a ban on the use of enhancements outside of sports would sacrifice an enormous amount of societal benefit. Wouldn’t we want automobile drivers to use alertness drugs if doing so could prevent accidents? Shouldn’t surgeons be allowed to use beta blockers to steady their hands? Why not let medical researchers take cognitive enhancers if it would lead to faster cures, or let workers take them to be more productive? Why stop soldiers from achieving greater combat effectiveness, rescue workers from lifting heavier objects, and men and women from leading better sex lives? Competent adults who want to use enhancements should be permitted to. In some instances, such as in combat or when performing dangerous jobs, they should even be required to.

Protecting the vulnerable

Rejecting the idea of banning enhancements doesn’t mean that their use should be unregulated. The government has several crucial roles to play in helping to ensure that the benefits from enhancement use outweigh the costs.

In the first place, the government needs to protect people who are incapable of making rational decisions about whether to use enhancements. In the language of biomedical ethics, these are populations that are “vulnerable,” and a number of them are well recognized. One such group, of course, is people with severe mental disabilities. The law requires surrogates to make decisions for these individuals based on what is in their best interests.

Another vulnerable population is children. There can be little disagreement that kids should not be allowed to decide on their own to consume powerful, potentially dangerous enhancement substances. Not only do they lack decisionmaking capacity, but they may be much more susceptible than adults to harm. This is clearly the case with steroids, which can interfere with bone growth in children and adolescents.

The more difficult question is whether parents should be free to give enhancements to their children. Parents face powerful social pressures to help their children excel. Some parents may be willing to improve their children’s academic or athletic performance even at a substantial risk of injury to the child. There are many stories of parents who allow their adolescent daughters to have cosmetic surgery, including breast augmentation. In general, the law gives parents considerable discretion in determining how to raise their children. The basic legal constraint on parental discretion is the prohibition in state law against abuse or neglect, and this generally is interpreted to defer to parental decisionmaking so long as the child does not suffer serious net harm. There are no reported instances in which parents have been sanctioned for giving their children biomedical enhancements, and the authorities might conclude that the benefits conferred by the use of an enhancement outweighed even a fairly significant risk of injury.

Beyond the actions of parents, there remains the question of whether some biomedical enhancements are so benign that children should be allowed to purchase them themselves. At present, for instance, there is no law in the United States against children purchasing coffee, caffeinated soft drinks, and even high-caffeine–containing energy drinks. (Laws prohibiting children from buying energy drinks have been enacted in some other countries.)

At the same time, it may be a mistake to lump youngsters together with older adolescents into one category of children. Older adolescents, although still under the legal age of majority, have greater cognitive and judgmental capacities than younger children. The law recognizes this by allowing certain adolescents, deemed “mature” or “emancipated” minors, to make legally binding decisions, such as decisions to receive medical treatment. Older adolescents similarly may deserve some degree of latitude in making decisions about using biomedical enhancements.

Children may be vulnerable to pressure to use enhancements not only from their parents, but from their educators. Under programs such as No Child Left Behind, public school teachers and administrators are rewarded and punished based on student performance on standardized tests. Private schools compete with one another in terms of where their graduates are accepted for further education. There is also intense competition in school athletics, especially at the collegiate level. Students in these environments may be bulldozed into using enhancements to increase their academic and athletic abilities. Numerous anecdotes, for example, tell of parents who are informed by teachers that their children need medication to “help them focus”; the medication class in question is typically the cognition-enhancing amphetamines, and many of these children do not have diagnoses that would warrant the use of these drugs.

Beyond students, athletes in general are vulnerable to pressure from coaches, sponsors, family, and teammates to use hazardous enhancements. For example, at the 2005 congressional hearings on steroid use in baseball, a father testified that his son committed suicide after using steroids, when in fact he killed himself after his family caught him using steroids, which the boy had turned to in an effort to meet his family’s athletic aspirations.

Another group that could be vulnerable to coercion is workers. Employers might condition employment or promotion on the use of enhancements that increased productivity. For example, an employer might require its nighttime work force to take the alertness drug modafinil, which is now approved for use by sleep-deprived swing-shift workers. Current labor law does not clearly forbid this so long as the drug is relatively safe. From an era in which employees are tested to make sure they aren’t taking drugs, we might see a new approach in which employers test them to make sure they are.

Members of the military may also be forced to use enhancements. The military now conducts the largest known biomedical enhancement research project. Under battlefield conditions, superiors may order the use of enhancements, leaving soldiers no lawful option to refuse. A notorious example is the use of amphetamines by combat pilots. Technically, the pilots are required to give their consent to the use of the pep pills, but if they refuse, they are barred from flying the missions.

The ability of government regulation to protect vulnerable groups varies depending on the group. It is important that educators not be allowed to give students dangerous enhancements without parental permission and that parents not be pressured into making unreasonable decisions by fearful, overzealous, or inadequate educators. The law can mandate the former, but not easily prevent the latter. Coaches and trainers who cause injury to athletes by giving them dangerous enhancements or by unduly encouraging their use should be subject to criminal and civil liability. The same goes for employers. But the realities of military life make it extremely difficult to protect soldiers from the orders of their superiors.

Moreover, individuals may feel pressure to use enhancements not only from outside sources, but from within. Students may be driven to do well in order to satisfy parents, gain admittance to more prestigious schools, or establish better careers. Athletes take all sorts of risks to increase their chances of winning. Workers may be desperate to save their jobs or bring in a bigger paycheck, especially in economically uncertain times. Soldiers better able to complete their missions are likely to live longer.

Surprisingly, while acknowledging the need to protect people from outside pressures, bioethicists generally maintain that we do not need to protect them from harmful decisions motivated by internal pressures. This position stems, it seems, from the recognition that, with the exception of decisions that are purely random, everything we decide to do is dictated at least in part by internal pressures, and in many cases, these pressures can be so strong that the decisions may no longer appear to be voluntary. Take, for example, seriously ill cancer patients contemplating whether or not to undergo harsh chemotherapy regimens. Bioethicists worry that, if we focused on the pressures and lack of options created by the patients’ dire condition, we might not let the patients receive the treatment, or, in the guise of protecting the patients from harm, might create procedural hurdles that would rob them of their decisionmaking autonomy. Similarly, these bioethicists might object to restricting the ability of workers, say, to use biomedical enhancements merely because their choices are highly constrained by their fear of losing their jobs. But even if we accept this argument, that doesn’t mean that we must be indifferent to the dangers posed by overwhelming internal pressure. As we will see, the government still must take steps to minimize the harm that could result.

Individuals may be vulnerable to harm not only from using enhancements, but from participating in experiments to see if an enhancement is safe and effective. Research subjects are protected by a fairly elaborate set of rules, collectively known as the “Common Rule,” that are designed to ensure that the risks of the research are outweighed by the potential benefits and that the subjects have given their informed consent to their participation. But there are many weaknesses in this regulatory scheme. For one thing, these rules apply only to experiments conducted by government-funded institutions or that are submitted to the FDA in support of licensing applications, and therefore they do not cover a great deal of research performed by private industry. Moreover, the rules were written with medically oriented research in mind, and it is not clear how they should be interpreted and applied to enhancement research. For example, the rules permit children to be enrolled as experimental subjects in trials that present “more than minimal risk” if, among other things, the research offers the possibility of “direct benefit” to the subject, but the rules do not say whether an enhancement benefit can count as a direct benefit. Specific research protections extend to other vulnerable populations besides children, such as prisoners and pregnant women, but do not explicitly cover students, workers, or athletes. In reports of a project several colleagues and I recently completed for the NIH, we suggest a number of changes to current regulations that would provide better protection for these populations.

Ensuring safety and effectiveness

Beginning with the enactment of the Pure Food and Drug Act in 1906, we have turned to the government to protect us from unsafe, ineffective, and fraudulent biomedical products and services. Regardless of how much freedom individuals should have to decide whether or not to use biomedical enhancements, they cannot make good decisions without accurate information about how well enhancements work. In regard to enhancements in the form of drugs and medical devices, the FDA has the legal responsibility to make sure that this information exists.

The FDA’s ability to discharge this responsibility, however, is limited. In the first place, the FDA has tended to rely on information from highly stylized clinical trials that do not reflect the conditions under which enhancements would be used by the general public. Moreover, the deficiencies of clinical trials are becoming more apparent as we learn about pharmacogenetics—the degree to which individual responses to medical interventions vary depending on the individual’s genes. The FDA is beginning to revise its rules to require manufacturers to take pharmacogenetics into consideration in studying safety and efficacy, but it will be many years, if ever, before robust pharmacogenetic information is publicly available. The solution is to rely more on data from actual use. Recently the agency has become more adamant about monitoring real-world experience after products reach the market, but this information comes from self-reports by physicians and manufacturers who have little incentive to cooperate. The agency needs to be able to conduct its own surveillance of actual use, with the costs borne by the manufacturers.

Many biomedical enhancements fall outside the scope of FDA authority. They include dietary supplements, many of which are used for enhancement purposes rather than to promote health. You only have to turn on late-night TV to be bombarded with claims for substances to make you stronger or more virile. Occasionally the Federal Trade Commission cracks down on hucksters, but it needs far greater resources to do an effective job. The FDA needs to exert greater authority to regulate dietary supplements, including those used for enhancement.

The FDA also lacks jurisdiction over the “practice of medicine.” Consequently, it has no oversight over cosmetic surgery, except when the surgeon employs a new medical device. This limitation also complicates the agency’s efforts to exert authority over reproductive and genetic practices. This would include the genetic modification of embryos to improve their traits, which promises to be one of the most effective enhancement techniques. Because organized medicine fiercely protects this limit on the FDA, consumers will have to continue to rely on physicians and other health care professionals to provide them with the information they need to make decisions about these types of enhancements. Medical experts need to stay on top of advances in enhancement technology.

Even with regard to drugs and devices that are clearly within the FDA’s jurisdiction, its regulatory oversight only goes so far. Once the agency approves a product for a particular use, physicians are free to use it for any other purpose, subject only to liability for malpractice and, in the case of controlled substances, a requirement that the use must comprise legitimate medical practice. Only a handful of products, such as Botox, have received FDA approval for enhancement use; as noted earlier, enhancements predominantly are unapproved, off-label uses of products approved for health-related purposes. Modafinil, for example, one of the most popular drugs for enhancing cognitive performance, is approved only for the treatment of narcolepsy and sleepiness associated with obstructive sleep apnea/hypopnea syndrome and shift-work sleep disorder. Erythropoietin, which athletes use to improve performance, is approved to treat anemias. The FDA needs to be able to require manufacturers of products such as these to pay for the agency to collect and disseminate data on off-label experience. The agency also has to continue to limit the ability of manufacturers to promote drugs for off-label uses, in order to give them an incentive to obtain FDA approval for enhancement labeling.

An enhancement technology that will increase in use is testing to identify genes that are associated with nondisease characteristics. People can use this information to make lifestyle choices, such as playing sports at which they have the genes to excel, or in reproduction, such as deciding which of a number of embryos fertilized in vitro will be implanted in the uterus. An area of special concern is genetic tests that consumers can use at home without the involvement of physicians or genetic counselors to help them interpret the results. Regulatory authority over genetic testing is widely believed to be inadequate, in part because it is split among the FDA and several other federal agencies, and there are growing calls for revamping this regulatory scheme that need to be heeded.

Any attempt to regulate biomedical enhancement will be undercut by people who obtain enhancements abroad. The best hope for protecting these “enhancement tourists” against unsafe or ineffective products and services lies in international cooperation, but this is costly and subject to varying degrees of compliance.

To make intelligent decisions about enhancement use, consumers need information not only about safety and effectiveness, but about whether they are worth the money. Should they pay for Botox injections, for example, or try to get rid of facial wrinkles with cheaper creams and lotions? When the FDA approved Botox for cosmetic use, it ignored this question of cost-effectiveness because it has no statutory authority to consider it. In the case of medical care, consumers may get some help in making efficient spending decisions from their health insurers, who have an incentive to avoid paying for unnecessarily costly products or services. But insurance does not cover enhancements. The new administration is proposing to create a federal commission to conduct health care cost-effectiveness analyses, among other things, and it is important that such a body pay attention to enhancements as well as other biomedical interventions.

Subsidizing enhancement

In these times of economic distress, when we already question whether the nation can afford to increase spending on health care, infrastructure, and other basic necessities, it may seem foolish to consider whether the government has an obligation to make biomedical enhancements available to all. Yet if enhancements enable people to enjoy a significantly better life, this may not be so outlandish, and if universal access avoids a degree of inequality so great that it undermines our democratic way of life, it may be inescapable.

There is no need for everyone to have access to all available enhancements. Some may add little to an individual’s abilities. Others may be so hazardous that they offer little net benefit to the user. But imagine that a pill is discovered that substantially improves a person’s cognitive facility, not just their memory but abilities such as executive function—the highest form of problem-solving capacity—or creativity. Now imagine if this pill were available only to those who already were well-off and could afford to purchase it with personal funds. If such a pill were sufficiently effective, so that those who took it had a lock on the best schools, careers, and mates, wealth-based access could drive an insurmountable wedge between the haves and have-nots, a gap so wide and deep that we could no longer pretend that there is equality of opportunity in our society. At that point, it is doubtful that a liberal democratic state could survive.

So it may be necessary for the government to regard such a success-determining enhancement as a basic necessity, and, after driving the cost down to the lowest amount possible, subsidize access for those unable to purchase it themselves. Even if this merely maintained preexisting differences in cognitive ability, it would be justified in order to prevent further erosion of equality of opportunity.

The need for effective regulation of biomedical enhancement is only going to increase as we enter an era of increasingly sophisticated technologies. Existing schemes, such as the rules governing human subjects research, must be reviewed to determine whether additions or changes are needed to accommodate this class of interventions. Government agencies and private organizations need to be aware of both the promise and the peril of enhancements and devote an appropriate amount of resources in order to regulate, rather than stop, their use.

In Defense of Biofuels, Done Right

Biofuels have been getting bad press, not always for good reasons. Certainly important concerns have been raised, but preliminary studies have been misinterpreted as a definitive condemnation of biofuels. One recent magazine article, for example, illustrated what it called “Ethanol USA” with a photo of a car wreck in a corn field. In particular, many criticisms converge around grain-based biofuel, traditional farming practices, and claims of a causal link between U.S. land use and land-use changes elsewhere, including tropical deforestation.

Focusing only on such issues, however, distracts attention from a promising opportunity to invest in domestic energy production using biowastes, fast-growing trees, and grasses. When biofuel crops are grown in appropriate places and under sustainable conditions, they offer a host of benefits: reduced fossil fuel use; diversified fuel supplies; increased employment; decreased greenhouse gas emissions; enhanced habitat for wildlife; improved soil and water quality; and more stable global land use, thereby reducing pressure to clear new land.

Not only have many criticisms of biofuels been alarmist, many have been simply inaccurate. In 2007 and early 2008, for example, a bumper crop of media articles blamed sharply higher food prices worldwide on the production of biofuels, particularly ethanol from corn, in the United States. Subsequent studies, however, have shown that the increases in food prices were primarily due to many other interacting factors: increased demand in emerging economies, soaring energy prices, drought in food-exporting countries, cut-offs in grain exports by major suppliers, market-distorting subsidies, a tumbling U.S. dollar, and speculation in commodities markets.

Although ethanol production indeed contributes to higher corn prices, it is not a major factor in world food costs. The U.S. Department of Agriculture (USDA) calculated that biofuel production contributed only 5% of the 45% increase in global food costs that occurred between April 2007 and April 2008. A Texas A&M University study concluded that energy prices were the primary cause of food price increases, noting that between January 2006 and January 2008, the prices of fuel and fertilizer, both major inputs to agricultural production, increased by 37% and 45%, respectively. And the International Monetary Fund has documented that since their peak in July 2008, oil prices declined by 69% as of December 2008, and global food prices declined by 33% during the same period, while U.S. corn production has remained at about 12 billion bushels a year, one-third of which is still used for ethanol production.

In another line of critique, some argue that the potential benefits of biofuels might be offset by indirect effects. But large uncertainties and untested assumptions underlie the debate about the indirect land-use effects of biofuels on tropical deforestation, the critical claim being that the use of U.S. farmland for energy crops necessarily causes new land-clearing elsewhere. Concerns are particularly strong about the loss of tropical forests and natural grasslands. The basic argument is that biofuel production in the United States inevitably sets a chain of deforestation in motion in other parts of the world.

According to this argument, if U.S. farm production is used for fuel instead of food, food prices rise and farmers in developing countries respond by growing more food. This response requires clearing new land and burning native vegetation and, hence, releasing carbon. This “induced deforestation” hypothesis is based on questionable data and modeling assumptions about available land and yields, rather than on empirical evidence. The argument assumes that the supply of previously cleared land is inelastic (that is, agricultural land for expansion is unavailable without new deforestation). It also assumes that agricultural commodity prices are a major driving force behind deforestation and that yields decline with expansion. The calculations for carbon emissions assume that land in a stable, natural state is suddenly converted to agriculture as a result of biofuels. Finally, the assertions assume that it is possible to measure with some precision the areas that will be cleared in response to these price signals.

A review of the issues reveals, however, that these assumptions about the availability of land, the role of biofuels in causing deforestation, and the ability to relate crop prices to areas of land clearance are unsound. Among our findings:

First, sufficient suitably productive land is available for multiple uses, including the production of biofuels. Assertions that U.S. biofuel production will cause large indirect land-use changes rely on limited data sets and unverified assumptions about global land cover and land use. Calculations of land-use change begin by assuming that global land falls into discrete classes suitable for agriculture—cropland, pastures and grasslands, and forests—and results depend on estimates of the extent, use, and productivity of these lands, as well as presumed future interactions among land-use classes. But several major organizations, including the Food and Agriculture Organization (FAO), a primary data clearinghouse, have documented significant inconsistencies surrounding global land-cover estimates. For example, the three most recent FAO Forest Resource Assessments, for periods ending in 1990, 2000, and 2005, provide estimates of the world’s total forest cover in 1990 that vary by as much as 470 million acres, or 21% of the original estimate.

Cropland data face similar discrepancies, and even more challenging issues arise when pasture areas are considered. Estimates for land used for crop production range from 3.8 billion acres (calculated by the FAO) to 9 billion acres (calculated by the Millennium Ecosystem Assessment, an international effort spearheaded by the United Nations). In a recent study attempting to reconcile cropland use circa 2000, scientists at the University of Wisconsin-Madison and McGill University estimated that there were 3.7 billion acres of cropland, of which 3.2 billion were actively cropped or harvested. Land-use studies consistently acknowledge serious data limitations and uncertainties, noting that cultivation on much of the world’s cropland shifts location over time, leaving large areas fallow or idle at any given moment that may not be captured in statistics. Estimates of idle croplands, prone to confusion with pasture and grassland, range from 520 million acres to 4.9 billion acres globally. The differences illustrate one of many uncertainties that hamper global land-use change calculations. To put these numbers in perspective, USDA has estimated that in 2007, about 21 million acres were used worldwide to produce biofuel feedstocks, an area that would occupy somewhere between 0.4% and 4% of the world’s estimated idle cropland.
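The 0.4%-to-4% range follows directly from dividing the 21 million acres of biofuel feedstocks by the high and low estimates of idle cropland. The short Python sketch below reproduces that arithmetic using only the figures quoted in this paragraph; the variable names are ours, chosen for illustration:

    # Sanity check of the idle-cropland shares cited above,
    # using the figures quoted in the text (all areas in acres).
    biofuel_feedstock_acres_2007 = 21e6   # USDA estimate for 2007
    idle_cropland_low = 520e6             # low global estimate of idle cropland
    idle_cropland_high = 4.9e9            # high global estimate of idle cropland

    share_upper = biofuel_feedstock_acres_2007 / idle_cropland_low    # about 4%
    share_lower = biofuel_feedstock_acres_2007 / idle_cropland_high   # about 0.4%

    print(f"Biofuel feedstock area as a share of idle cropland: "
          f"{share_lower:.1%} to {share_upper:.1%}")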

Diverse studies of global land cover and potential productivity suggest that anywhere from 600 million to more than 7 billion additional acres of underutilized rural lands are available for expanding rain-fed crop production around the world, after excluding the 4 billion acres of cropland currently in use, as well as the world’s supply of closed forests, nature reserves, and urban lands. Hence, on a global scale, land per se is not an immediate limitation for agriculture and biofuels.

In the United States, the federal government, through the multiagency Biomass Research and Development Initiative (BRDI), has examined the land and market implications of reaching the nation’s biofuel target, which calls for producing 36 billion gallons by 2022. BRDI estimated that a slight net reduction in total U.S. active cropland area would result by 2022 in most scenarios, when compared with a scenario developed from USDA’s so-called “baseline” projections. BRDI also found that growing biofuel crops efficiently in the United States would require shifts in the intensity of use of about 5% of pasture lands to more intensive hay, forage, and bioenergy crops (25 million out of 456 million acres) in order to accommodate dedicated energy crops, along with using a combination of wastes, forest residues, and crop residues. BRDI’s estimate assumes that the total area allocated to USDA’s Conservation Reserve Program (CRP) remains constant at about 33 million acres but allows about 3 million acres of the CRP land on high-quality soils in the Midwest to be offset by new CRP additions in other regions. In practice, additional areas of former cropland that are now in the CRP could be managed for biofuel feedstock production in a way that maintains positive impacts on wildlife, water, and land conservation goals, but this option was not included among the scenarios considered.

Yields are important. They vary widely from place to place within the United States and around the world. USDA projects that corn yields will rise by 20 bushels per acre by 2017; this represents an increase in corn output equivalent to adding 12.5 million acres at 2006 yields, and more than triple that area if the same additional output had to come from land at the average yields of many less-developed nations. And there is the possibility that yields will increase more quickly than projected in the USDA baseline, as seed companies aim to exceed 200 bushels per acre by 2020. The potential to increase yields in developing countries offers tremendous opportunities to improve welfare and expand production while reducing or maintaining the area harvested. These improvements are consistent with U.S. trends during the past half century showing agricultural output growth averaging 2% per year while cropland use fell by an average of 0.7% per year. Even without large yield increases, cropland requirements to meet biofuel production targets may not be nearly as great as assumed.
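To see how a 20-bushel-per-acre yield gain substitutes for cropland, the Python sketch below runs the arithmetic under stated assumptions: roughly 94 million acres of U.S. corn and a 2006 baseline yield near 150 bushels per acre (the yield figure cited in the next paragraph), with yields in less-developed nations taken as about 30% of the U.S. baseline, as noted later in this article. These inputs are illustrative assumptions rather than USDA data, so the result is indicative only:

    # Illustrative arithmetic behind the acreage-equivalence claim above.
    # The corn area, baseline yield, and 30% yield ratio are assumptions for
    # illustration, not figures reported by USDA in this article.
    corn_acres = 94e6               # assumed U.S. corn area, acres
    baseline_yield = 150.0          # assumed 2006 baseline yield, bushels per acre
    yield_gain = 20.0               # projected yield gain by 2017, bushels per acre

    extra_bushels = yield_gain * corn_acres                         # added output
    equivalent_us_acres = extra_bushels / baseline_yield            # ~12.5 million acres
    equivalent_ldc_acres = extra_bushels / (0.30 * baseline_yield)  # ~42 million acres

    print(f"Added output: {extra_bushels / 1e9:.2f} billion bushels")
    print(f"Equivalent area at the U.S. baseline yield: {equivalent_us_acres / 1e6:.1f} million acres")
    print(f"Equivalent area at ~30% of U.S. yield: {equivalent_ldc_acres / 1e6:.0f} million acres")

Under these assumptions, the added output matches the production of roughly 12.5 million acres at U.S. yields and more than three times that area at the lower yields typical of less-developed nations, which is the point of the comparison above.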

Concerns over induced deforestation are based on a theory of land displacement that is not supported by data. U.S. ethanol production shot up by more than 3 billion gallons (150%) between 2001 and 2006, and corn production increased 11%, while total U.S. harvested cropland fell by about 2% in the same period. Indeed, the harvested area for “coarse grains” fell by 4% as corn, with an average yield of 150 bushels per acre, replaced other feed grains such as sorghum (averaging 60 bushels per acre). Such statistics defy modeling projections by demonstrating an ability to supply feedstock to a burgeoning ethanol industry while simultaneously maintaining exports and using substantially less land. So although models may assume that increased use of U.S. land for biofuels will lead to more land being cleared for agriculture in other parts of the world, evidence is lacking to support those claims.

Second, there is little evidence that biofuels cause deforestation, and much evidence for alternative causes. Recent scientific papers that blame biofuels for deforestation are based on models that presume that new land conversion can be simulated as a predominantly market-driven choice. The models assume that land is a privately owned asset managed in response to global price signals within a stable rule-based economy—perhaps a reasonable assumption for developed nations.

However, this scenario is far from the reality in the smoke-filled frontier zones of deforestation in less-developed countries, where the models assume biofuel-induced land conversion takes place. The regions of the world that are experiencing first-time land conversion are characterized by market isolation, lawlessness, insecurity, instability, and lack of land tenure. And nearly all of the forests are publicly owned. Indeed, land-clearing is a key step in a long process of trying to stake a claim for eventual tenure. A cycle involving incremental degradation, repeated and extensive fires, and shifting small plots for subsistence tends to occur long before any consideration of crop choices influenced by global market prices.

The causes of deforestation have been extensively studied, and it is clear from the empirical evidence that forces other than biofuel use are responsible for the trends of increasing forest loss in the tropics. Numerous case studies document that the factors driving deforestation are a complex expression of cultural, technological, biophysical, political, economic, and demographic interactions. Solutions and measures to slow deforestation have also been analyzed and tested, and the results show that it is critical to improve governance, land tenure, incomes, and security to slow the pace of new land conversion in these frontier regions.

Selected studies based on interpretations of satellite imagery have been used to support the claims that U.S. biofuels induce deforestation in the Amazon, but satellite images cannot be used to determine causes of land-use change. In practice, deforestation is a site-specific process. How it is perceived will vary greatly by site and also by the temporal and spatial lens through which it is observed. Cause-and-effect relationships are complex, and the many small changes that enable larger future conversion cannot be captured by satellite imagery. Although it is possible to classify an image to show that forest in one period changed to cropland in another, cataloguing changes in discrete classes over time does not explain why these changes occur. Most studies asserting that the production and use of biofuels cause tropical deforestation point to land cover at some point after large-scale forest degradation and clearing have taken place. But the key events leading to the primary conversion of forests often proceed for decades before they can be detected by satellite imagery. The imagery does not show how the forest was used to sustain livelihoods before conversion, nor the degrees of continual degradation that occurred over time before the classification changed. When remote sensing is supported by a ground-truth process, it typically attempts to narrow the uncertainties of land-cover classifications rather than research the history of occupation, prior and current use, and the forces behind the land-use decisions that led to the current land cover.

First-time conversion is enabled by political, as well as physical, access. Southeast Asia provides one example where forest conversion has been facilitated by political access, which can include such diverse things as government-sponsored development and colonization programs in previously undisturbed areas and the distribution of large timber and mineral concessions and land allotments to friends, families, and sponsors of people in power. Critics have raised valid concerns about high rates of deforestation in the region, and they often point an accusing finger at palm oil and biofuels.

Palm oil has been produced in the region since 1911, and plantation expansion boomed in the 1970s with growth rates of more than 20% per year. Biodiesel represents a tiny fraction of palm oil consumption. In 2008, less than 2% of crude palm oil output was processed for biofuel in Indonesia and Malaysia, the world’s largest producers and exporters. Based on land-cover statistics alone, it is impossible to determine the degree of attribution that oil palm may share with other causes of forest conversion in Southeast Asia. What is clear is that oil palm is not the only factor and that palm plantations are established after a process of degradation and deforestation has transpired. Deforestation data may offer a tool for estimating the ceiling for attribution, however. In Indonesia, for example, 28.1 million hectares were deforested between 1990 and 2005, and oil palm expansion in those areas was estimated to be between 1.7 million and 3 million hectares, or between 6% and 10% of the forest loss, during the same period.

Initial clearing in the tropics is often driven more by waves of illegitimate land speculation than by agricultural production. In many Latin American frontier zones, if there is native forest on the land, it is up for grabs, as there is no legal tenure of the land. The majority of land-clearing in the Amazon has been blamed on livestock, in part because there is no alternative for classifying the recent clearings and in part because landholders must keep the land “in production” to maintain claims and avoid invasions. The result has been frequent burning and the creation of extensive cattle ranches. For centuries, disenfranchised groups have been pushed into the forests and marginal lands, where they do what they can to survive. This settlement process often includes serving as low-cost labor to clear land for the next wave of better-connected colonists. Unless significant structural changes occur to remove or modify these enabling factors, the forest-clearing that was occurring before this decade is expected to continue along predictable paths.

Testing the hypothesis that U.S. biofuel policy causes deforestation elsewhere depends on models that can incorporate the processes underlying initial land-use change. Current models attempt to predict future land-use change based on changes in commodity prices. As conceived thus far, the computable general equilibrium models designed for economic trade do not adequately incorporate the processes of land-use change. Although crop prices may influence short-term land-use decisions, they are not a dominant factor in global patterns of first-time conversion, the land-clearing of chief concern in relating biofuels to deforestation. The highest deforestation rates observed and estimated globally occurred in the 1990s. During that period, there was a surplus of commodities on world markets and consistently depressed prices.

Third, many studies omit the larger problem of widespread global mismanagement of land. The recent arguments focusing on the possible deforestation attributable to biofuels use idealized representations of crop and land markets, omitting what may be larger issues of concern. Clearly, the causes of global deforestation are complex and are not driven merely by a single crop market. Additionally, land mismanagement, involving both initial clearing and maintaining previously cleared land, is widespread and leads to a process of soil degradation and environmental damage that is especially prevalent in the frontier zones. Reports by the FAO and the Millennium Ecosystem Assessment describe the environmental consequences of repeated fires in these areas. Estimates of global burning vary annually, ranging from 490 million to 980 million acres per year between 2000 and 2004. The vast majority of fires in the tropics occur in Africa and the Amazon in what were previously cleared, nonforest lands. In a detailed study, the Amazon Institute of Environmental Research and Woods Hole Research Center found that 73% of burned area in the Amazon was on previously cleared land, and that was during the 1990s, when overall deforestation rates were high.

Fire is the cheapest and easiest tool supporting shifting subsistence cultivation. Repeated and extensive burning is a manifestation of the lack of tenure, lack of access to markets, and severe poverty in these areas. When people or communities have few or no assets to protect from fire and no incentive to invest in more sustainable production, they also have no reason to limit the extent of burning. The repeated fires modify ecosystem structure, penetrate ever deeper into forest margins, affect large areas of understory vegetation (which is not detected by remote sensing), and take an ever greater cumulative toll on soil quality and its ability to sequester carbon. Profitable biofuel markets, by contributing to improved incentives to grow cash crops, could reduce the use of fire and the pressures on the agricultural frontier. Biofuels done right, with attention to best practices for sustained production, can make significant contributions to social and economic development as well as environmental protection.

Furthermore, the current literature calculates the impacts of an assumed agricultural expansion by attributing to biofuels the carbon emissions from clearing intact ecosystems. If emission analyses instead considered empirical data reflecting the progressive degradation that occurs (often over decades) before and independently of agricultural market signals, as well as changes in the frequency and extent of fire in areas that biofuels help bring into more stable market economies, the resulting carbon emission estimates would be worlds apart from those now being attributed to biofuels.

Brazil provides a good case in point, because it holds the globe’s largest remaining area of tropical forests, is the world’s second-largest producer of biofuel (after the United States), and is the world’s leading supplier of biofuel for global trade. Brazil also has relatively low production costs and a growing focus on environmental stewardship. As a matter of policy, the Brazilian government has supported the development of biofuels since launching a National Ethanol Program called Proálcool in 1975. Brazil’s ethanol industry began its current phase of growth after Proálcool was phased out in 1999 and the government’s role shifted from subsidies and regulations toward increased collaboration with the private sector in R&D. The government helps stabilize markets by supporting variable rates of blending ethanol with gasoline and planning for industry expansion, pipelines, ports, and logistics. The government also facilitates access to global markets; develops improved varieties of sugarcane, harvest equipment, and conversion; and supports improvements in environmental performance.

New sugarcane fields in Brazil nearly always replace pasture land or less valuable crops and are concentrated around production facilities in the developed southeastern region, far from the Amazon. Nearly all production is rain-fed and relies on low input rates of fertilizers and agrochemicals, as compared with other major crops. New projects are reviewed under the Brazilian legal framework of Environmental Impact Assessment and Environmental Licensing. Together, these policies have contributed to the restoration or protection of reserves and riparian areas and increased forest cover, in tandem with an expansion of sugarcane production in the most important producing state, São Paulo.

Yet natural forest in Brazil is being lost, with nearly 37 million acres lost between May 2000 and August 2006, and a total of 150 million acres lost since 1970. Some observers have suggested that the increase in U.S. corn production for biofuel led to reduced soybean output and higher soybean prices, and that these changes led, in turn, to new deforestation in Brazil. However, total deforestation rates in Brazil appear to fall in tandem with rising soybean prices. This co-occurrence illustrates a lack of connection between commodity prices and initial land clearing. This phenomenon has been observed around the globe and suggests an alternate hypothesis: Higher global commodity prices focus production and investment where it can be used most efficiently, in the plentiful previously cleared and underutilized lands around the world. In times of falling prices and incomes, people return to forest frontiers, with all of their characteristic tribulations, for lack of better options.

Biofuels done right

With the right policy framework, cellulosic biofuel crops based on perennials could offer an alternative that diversifies and boosts rural incomes. Such a scenario would create incentives to reduce the intentional burning that currently affects millions of acres worldwide each year. Perennial biofuel crops can help stabilize land cover, enhance soil carbon sequestration, provide habitat to support biodiversity, and improve soil and water quality. Furthermore, they can reduce pressure to clear new land by improving incomes and yields. Developing countries have huge opportunities to increase crop yield and thereby grow more food on less land, given that cereal yields in less-developed nations are about 30% of those in North America. Hence, policies supporting biofuel production may actually help stop the extensive slash-and-burn agricultural cycle that contributes to greenhouse gas emissions, deforestation, land degradation, and a lifestyle that fails to support farmers and their families.

Biofuels alone are not the solution, however. Governments in the United States and elsewhere will have to develop and support a number of programs designed to support sustainable development. The operation and rules of such programs must be transparent, so that everyone can understand them and see that fair play is ensured. Among other attributes, the programs must offer economic incentives for sustainable production, and they must provide for secure land tenure and participatory land-use planning. In this regard, pilot biofuel projects in Africa and Brazil are showing promise in addressing the vexing and difficult challenges of sustainable land use and development. Biofuels also are uniting diverse stakeholders in a global movement to develop sustainability metrics and certification methods applicable to the broader agricultural sector.

Given a priority to protect biodiversity and ecosystem services, it is important to further explore the drivers for the conversion of land at the frontier and to consider the effects, positive and negative, that U.S. biofuel policies could have in these areas. This means it is critical to distinguish between valid concerns calling for caution and alarmist criticisms that attribute complex problems solely to biofuels.

Still, based on the analyses that we and others have done, we believe that biofuels, developed in an economically and environmentally sensible way, can contribute significantly to the nation’s—indeed, the world’s—energy security while providing a host of benefits for many people in many regions.

In the Zone: Comprehensive Ocean Protection

For too long, humanity’s effects on the oceans have been out of sight and out of mind. Looking at the vast ocean from the shore or a jet’s window, it is hard to imagine that this seemingly limitless area could be vulnerable to human activities. But during the past decade, reports have highlighted the consequences of human activity on our coasts and oceans, including collapsing fisheries, invasive species, unnatural warming and acidification, and ubiquitous “dead zones” induced by nutrient runoff. These changes have been linked not to a single threat but to the combined effects of the many past and present human activities that affect marine ecosystems directly and indirectly.

The declining state of the oceans is not solely a conservation concern. Healthy oceans are vital to everyone, even those who live far from the coast. More than 1 billion people worldwide depend on fish as their primary protein source. The ocean is a key component of the climate system, absorbing solar radiation and exchanging oxygen and carbon dioxide with the atmosphere. Ocean and coastal ecosystems provide water purification and waste treatment, land protection, nutrient cycling, and pharmaceutical, energy, and mineral resources. Further, more than 89 million Americans and millions more around the world participate in marine recreation each year. As coastal populations and demand for ocean resources have grown, more and more human activities now overlap and interact in the marine environment.

Integrated management of these activities and their effects is necessary but is just beginning to emerge. In Boston Harbor, for example, a complicated mesh of navigation channels, offshore dumping sites, outflow pipes, and recreational and commercial vessels crisscrosses the bay. Massachusetts, like other states and regions, has realized the potential for conflict in this situation and is adopting a more integrated framework for managing these and future uses in the harbor and beyond. In 2007, California, Oregon, and Washington also agreed to pursue a new integrated style of ocean management that accounts for ecosystem interactions and multiple human uses.

This shift in thinking is embodied in the principles of ecosystem-based management, an integrated approach to management that considers the entire ecosystem, including humans. The goal of ecosystem-based management is to maintain an ecosystem in a healthy, productive, and resilient condition so that it can provide the services humans want and need, taking into account the cumulative effects and needs of different sectors. New York State has passed legislation aimed at achieving a sustainable balance among multiple uses of coastal ecosystems and the maintenance of ecological health and integrity. Washington State has created a regional public/private partnership to restore Puget Sound, with significant authority for coordinated ecosystem-based management.

These examples reflect a promising and growing movement toward comprehensive ecosystem-based management in the United States and internationally. However, efforts to date remain isolated and relatively small in scale, and U.S. ocean management has largely failed to address the cumulative effects of multiple human stressors.

Falling short

A close look at several policies central to the current system of ocean and coastal management reveals ways in which ecosystem-based management was presaged as well as reasons why these policies have fallen short of a comprehensive and coordinated ocean management system. The 1972 Coastal Zone Management Act (CZMA) requires coastal states, in partnership with the federal government, to protect and preserve coastal wetlands and other ecosystems, provide healthy fishery harvests, ensure recreational use, maintain and improve water quality, and allow oil and gas development in a manner compatible with long-term conservation. Federal agencies must ensure that their activities are consistent with approved state coastal zone management plans, providing for a degree of state/federal coordination. At the state level, however, management of all of these activities is typically fragmented among different agencies, generally not well coordinated, and often reactive. In addition, the CZMA does not address important stressors on the coasts, such as the runoff of fertilizers and pesticides from inland areas that eventually make their way into the ocean.

The 1970 National Environmental Policy Act (NEPA) also recognized the importance of cumulative effects. Under NEPA and related state environmental laws, agencies are required to assess cumulative effects, both direct and indirect, of proposed development projects. In addition, they must assess the cumulative effects of all other past, present, and future developments on the same resources. This is an onerous and ambiguous process, which is seldom completed, in part because cumulative effects are difficult to identify and measure. It is also a reactive process, triggered when a project is proposed, and therefore does not provide a mechanism for comprehensive planning in the marine environment.

Congress in the early 1970s also passed the Endangered Species Act, the Marine Mammal Protection Act, and the National Marine Sanctuaries Act, which require the National Marine Fisheries Service and National Oceanic and Atmospheric Administration (NOAA) to address the cumulative effects of human activities on vulnerable marine species and habitats. NOAA’s National Marine Sanctuary Program is charged with conserving, protecting, and enhancing biodiversity, ecological integrity, and cultural legacy within sanctuary boundaries while allowing uses that are compatible with resource protection. The sanctuaries are explicitly managed as ecosystems, and humans are considered to be a fundamental part of those ecosystems. However, sanctuary management has often been hampered by a lack of funding and limited authority to address many important issues. Although the endangered species and marine mammal laws provide much stronger mandates for action, the single-species approach inherent in these laws has limited their use in dealing with broader-scale ecological degradation.

Existing laws may look good on paper, but in practice they have proven inadequate. Overall, U.S. ocean policy has five major shortcomings. First, it is severely fragmented and poorly matched to the scale of the problem. Second, it lacks an overarching set of guiding principles and an effective framework for coordination and decisionmaking. Third, the tools provided in the various laws lead mostly to reactive planning. Fourth, many policies lack sufficient regulatory teeth or funding to implement their mandates. Finally, scientific information and methods for judging the nature and extent of cumulative effects have been insufficient to support integrated management.

The oceans are still largely managed piecemeal, one species, sector, or issue at a time, despite the emphasis on cumulative effects and integrated management in existing laws. A multitude of agencies with very different mandates have jurisdiction over coastal and ocean activities, each acting at different spatial scales and locations. Local and state governments control most of what happens on land. States generally have authority out to three nautical miles, and federal agencies govern activities from the three-mile limit to the exclusive economic zone (EEZ) boundary, 200 nautical miles offshore. Layered on top of these boundaries are separate jurisdictions for National Marine Sanctuaries, National Estuarine Reserves, the Minerals Management Service, regional fisheries management councils, and many others. There is often no mechanism or mandate for the diverse set of agencies that manage individual sectors to communicate or coordinate their actions, despite the fact that the effects of human activities frequently extend across boundaries (for example, land-based pollutants may be carried far from shore by currents) and the activities of one sector may affect those of another (for example, proposed offshore energy facilities may affect local or regional fisheries). The result is a de facto spatial configuration of overlapping and uncoordinated rules and regulations.

Important interconnections—between human uses and the environment and across ecological and jurisdictional realms—are being ignored. The mandate for meaningful coordination among these different realms is weak at best, with no transparent process for implementing coordinated management and little authority to address effects outside of an agency’s direct jurisdiction. The increasing number and severity of dead zones are examples of this. A dead zone is an area in which coastal waters have been depleted of oxygen, and thus of marine life, because of the effects of fertilizer runoff from land. But the sources of the problem are often so distant from the coast that proving the land/coast connection and then doing something about the problem are major challenges for coastal managers.

Ocean and coastal managers are forced to react to problems as they emerge, often with limited time and resources to address them, rather than being able to plan for them within the context of all ocean uses. For example, states and local communities around the country are grappling with plans for liquefied natural gas facilities. Most have no framework for weighing the pros and cons of multiple potential sites. Instead, they are forced to react to each proposal individually, and their ability to plan ahead for future development is curtailed. Approval of an individual project requires the involvement of multiple federal, state, and local agencies; the makeup of this group varies depending on the prospective location. This reactive and variable process results in ad hoc decisionmaking, missed opportunities, stalemates, and conflicts among uses. Nationwide, similar problems are involved in the evaluation of other emerging ocean uses, such as offshore aquaculture and energy development.

The limited recovery of endangered and threatened salmon species is another example of how our current regulatory framework can fail, while also serving as an example of the potential of the ecosystem-based approach. A suite of land- and ocean-based threats, including overharvesting, habitat degradation, changing ocean temperatures, and aquaculture, threatens the survival of salmon stocks on the West Coast. In Puget Sound, the problem is made more complex by the interaction of salmon with their predators, resident killer whales, which are also listed as endangered. Despite huge amounts of funding and the mandates of the Endangered Species Act, salmon recovery has been hampered by poor coordination among agencies with different mandates, conflicts among users, and management approaches that have failed to account for the important influence of ecosystem interactions. In response, a novel approach called the Shared Strategy for Puget Sound was developed. Based on technical input from scientists about how multiple stressors combine to affect salmon populations, local watershed groups developed creative, feasible salmon recovery plans for their own watersheds. Regional-scale action was also needed, so watershed-level efforts were merged with input from federal, county, and tribal governments as well as stakeholders to create a coordinated Puget Sound Salmon Recovery Plan. That plan is now being implemented by the Puget Sound Partnership, a groundbreaking public/private alliance aimed at coordinated management and recovery of not just salmon but the entire Puget Sound ecosystem.

As in Puget Sound, most areas of the ocean are now used or affected by humans in multiple ways, yet understanding how those various activities and stresses interact to affect marine ecosystems has proven difficult. Science has lagged behind policies aimed at addressing cumulative effects, leaving managers with few tools to weigh the relative importance and combined effects of a multitude of threats. In some cases, the stressors act synergistically, so that the combination of threats is worse than just the sum of their independent effects. Examples abound, such as when polycyclic aromatic hydrocarbon pollutants combine with increases in ultraviolet radiation to increase the mortality of some marine invertebrates. Recent work suggests that these synergistic effects are common, especially as more activities are undertaken in a particular place. The science of multiple stressors is still in its infancy, and only recently has a framework existed for mapping and quantifying the impacts of multiple human activities on a common scale. This scientific gap is a final critical limitation on active and comprehensive management of the marine environment.

Toward ocean zoning

Advocates for changing U.S. ocean policy are increasingly calling for comprehensive ocean zoning, a form of ecosystem-based management in which zones are designated for different uses in order to separate incompatible activities and reduce conflicts, protect vulnerable ecosystems from potential stressors, and plan for future uses. Zoning is already in the planning stages in a variety of places.

Creating a comprehensive, national, ecosystem-based, ocean zoning policy could address many of the ubiquitous problems with current policy by providing a set of overarching guiding principles and a standardized mechanism for the planning of ocean uses that takes into account cumulative effects. In order to be successful, the policy must mandate and streamline interagency coordination and integrated decisionmaking. It should also provide for public accountability and effective stakeholder engagement. Finally, it should be supported by scientific tools and information that allow participants in the process to understand where and why serious cumulative effects occur and how best to address them.

An integrated management approach would be a major shift in ocean policy. Managers will need not only a new governance framework but also new tools for prioritizing and coordinating management actions and measuring success. They must be able to:

  • Understand the spatial distribution of multiple human activities and the direct and indirect stresses on the ecosystem associated with those activities
  • Assess cumulative effects of multiple current and future activities, both inside and outside their jurisdictions, that affect target ecosystems and resources in the management area
  • Identify sets of interacting or overlapping activities that suggest where and when coordination between agencies is critical
  • Prioritize the most important threats to address and/or places to invest limited resources
  • Effectively monitor management performance and changing threats over time

Given the differences among the stressors—in their effects, intensity, and scale—comparing them with a common metric or combining them into a measure of cumulative impact in order to meet these needs has been difficult. Managers have lacked comprehensive data and a systematic framework for measuring cumulative effects, effectively making it impossible for them to implement ecosystem-based management or comprehensive ecosystem-based ocean zoning. Thanks to a new tool developed by a diverse group of scientists and described below, managers are now able to assess, visualize, and monitor cumulative effects. This tool is already being used to address the needs listed above and to guide the implementation of ecosystem-based management in several places. The resulting maps of cumulative human impact produced by the process show an over-taxed ocean, reinforcing the urgent need for careful zoning of multiple uses.

An assessment tool

The first application of this framework for quantifying and mapping cumulative effects evaluated the state of the oceans at a global scale. Maps of 17 different human activities, from pollution to fishing to climate change, were overlaid and combined into a cumulative impact index. The results dramatically contradict the common impression that much of the world’s ocean is too vast and remote to be heavily affected by humans (figure 1). As much as 41% of the ocean has been heavily influenced by human activity (the orange to red areas on the map), less than 4% is relatively unaffected (the blue areas), and no single square mile is unaffected by the 17 activities mapped. In coastal areas, no fewer than 9 and as many as 14 of the 17 activities co-occur in every single square mile. The consequences of this heavy use may be missed if human activities are evaluated and managed in isolation from one another, as they typically have been in the United States and elsewhere. The stunning ubiquity of multiple effects highlights the challenges and opportunities facing the United States in trying to achieve sustainable use and long-term protection of our coasts and oceans.

Figure 1. Source: Modified from Halpern et al., 2008, Science

The global map makes visible for the first time the overall impact that humans have had on the oceans. More important, the framework used to create it can help managers move beyond the current ad hoc decisionmaking process to assess the effects of multiple sectors simultaneously and to consider their separate and cumulative effects, filling a key scientific gap. The approach makes the land/sea connection explicit, linking the often segregated management concerns of these two realms. It can be applied to a wide range of management issues at any scale. Local and regional management have the most to gain by designing tailored analyses using fine-scale data on relevant activities and detailed habitat maps. Such analyses have recently been completed for the Northwestern Hawaiian Islands Marine National Monument and the California Current Large Marine Ecosystem, which stretches from the U.S.-Canada border to Mexico. Armed with this tool, policymakers and managers can more easily make complex decisions about how best to design comprehensive spatial management plans that protect vulnerable ecosystems, separate incompatible uses, minimize harmful cumulative effects, and ultimately ensure the greatest overall benefits from marine ecosystem goods and services.

Policy recommendations

Three key recommendations emerge as priorities from this work. First, systematic, repeatable integration of data on multiple combined effects provides a way to take the pulse of ocean conditions over time and evaluate ocean restoration and protection plans, as well as proposed development. Policymakers should support efforts by scientists and agencies to develop robust ways to collect these critical data.

The framework developed to map human effects globally is beginning to play an important role in cumulative-impact assessment and the implementation of ecosystem-based management and ocean zoning. This approach integrates information about a wide variety of human activities and ecosystem types into a single, comparable, and updatable impact index. This index is ecologically grounded in that it accounts for the variable responses of different ecosystems to the same activity (for example, coral reefs are more sensitive to fertilizer runoff than are kelp forests). The intensity of each activity is assessed in each square mile of ocean on a common scale and then weighted by the vulnerability of the ecosystems in that location to each activity. Weighted scores are summed and displayed in a map of cumulative effects.
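
To make the calculation concrete, the weighted sum described above can be written schematically as follows (the notation is illustrative, not the authors’ published formulation): for a given square mile of ocean, let D_i denote the normalized intensity of human activity i, E_j the presence or fractional extent of ecosystem type j, and μ_ij the vulnerability weight of ecosystem j to activity i. The cumulative impact score for that cell is then

\[
I_{\mathrm{cell}} = \sum_{i=1}^{n} \sum_{j=1}^{m} D_i \, E_j \, \mu_{i,j},
\]

so an activity contributes strongly only where it is intense and where the ecosystems present are sensitive to it.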

With this information, managers can answer questions such as: Where are areas of high and low cumulative impact? What are the most important threats to marine systems? And what are the most important data gaps that must be addressed to implement effective integrated management of coastal and marine ecosystems? They can also identify areas in which multiple, potentially incompatible activities overlap, critical information for spatial planning. For example, restoring oyster reefs for fisheries production may be ineffective if nearby activities such as farming and urban development result in pollution that impairs the fishery. Further, managers can use the framework to evaluate cumulative effects under alternative management or policy scenarios, such as the placement of new infrastructure (for example, wind or wave energy farms) or the restriction of particular activities (for example, certain kinds of fishing practices).

This approach highlights important threats to the viability of ocean resources, a key aspect of NOAA’s framework for assessing the health of marine species’ populations. In the future, it might also inform a similar framework for monitoring the condition of coastal and ocean ecosystems. In particular, this work could contribute to the development of a national report card on ocean health to monitor the condition of our oceans and highlight what is needed to sustain the goods and services they provide. NOAA’s developing ecosystem approach to management is fundamentally adaptive and therefore will depend critically on monitoring. This tool is a valuable contribution to such efforts, providing a benchmark for future assessments of ocean conditions and a simple new way to evaluate alternative management strategies.

The second key policy recommendation is that, because of the ubiquity and multitude of human uses of the ocean, cumulative effects on marine ecosystems within the U.S. EEZ can and must be addressed through comprehensive marine spatial planning. Implementing such an effort will require new policies that support ocean zoning and coordinated regional planning and management under an overarching set of ecosystem-based guiding principles.

The vast extent, patchwork pattern, and intensity of stressors on the oceans highlight the critical need for integrated planning of human activities in the coastal and marine environments. However, this patchwork pattern also represents an opportunity, because small changes in the intensity and/or location of different uses through comprehensive ocean zoning can dramatically reduce cumulative effects.

Understanding potential tradeoffs among diverse social and ecological objectives is a key principle of ecosystem-based management and will be critical to effective ocean zoning. Quantification and mapping of cumulative effects can be used to explore alternative management scenarios that seek to balance tradeoffs. By assessing how cumulative effects change as particular human activities are added, removed, or relocated within the management area, managers, policymakers, and stakeholders can compare the potential costs and benefits of different decisions. Decisionmaking by the National Marine Sanctuaries, coastal states, and regional ecosystem initiatives could all potentially benefit from revealing areas of overlap, conflict, and incompatibility among human activities. In the Papahānaumokuākea Marine National Monument in the Northwestern Hawaiian Islands, this approach is already being used to help guide decisions on where different activities should be allowed in this highly sensitive ecosystem.
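
As a simple illustration of this kind of scenario comparison, the following sketch (in Python, with hypothetical activity names, intensities, and vulnerability weights) computes a cumulative impact score for a single grid cell under two management scenarios, one with bottom trawling present and one with the trawling relocated elsewhere:

    # Illustrative sketch with hypothetical numbers: how relocating one activity
    # changes a cell's cumulative impact score under the intensity-times-
    # vulnerability weighting described in the text.
    def cumulative_impact(intensities, vulnerability):
        """Sum, over all activities, of activity intensity times the
        vulnerability of the cell's ecosystem to that activity."""
        return sum(level * vulnerability[activity]
                   for activity, level in intensities.items())

    # Hypothetical vulnerability weights for the one ecosystem type in the cell
    vulnerability = {"shipping": 0.4, "bottom_trawling": 0.9, "nutrient_runoff": 0.6}

    # Scenario A: current uses (normalized intensities on a 0-to-1 scale)
    scenario_a = {"shipping": 0.5, "bottom_trawling": 0.7, "nutrient_runoff": 0.3}

    # Scenario B: trawling relocated out of the cell; shipping lane slightly expanded
    scenario_b = {"shipping": 0.6, "bottom_trawling": 0.0, "nutrient_runoff": 0.3}

    print(round(cumulative_impact(scenario_a, vulnerability), 2))  # 1.01
    print(round(cumulative_impact(scenario_b, vulnerability), 2))  # 0.42

Comparing the two scores shows how a modest spatial reallocation of uses can substantially reduce cumulative impact in a sensitive location, which is the kind of tradeoff analysis described above.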

Other spatial management approaches, particularly marine protected areas (MPAs), can also benefit from this framework and tool. Most MPA regulations currently restrict only fishing, but the widespread overlap of multiple stressors suggests that for MPAs to be successful, managers must either expand the list of activities that are excluded from MPAs, locate them carefully to avoid negative effects from other human activities, or implement complementary regulations to limit the impact of other activities on MPAs. Comprehensive assessments and mapping of cumulative effects can be used at local or regional levels to highlight gaps in protection, select areas to protect, and help locate MPAs where their beneficial effects will be maximized. For example, the Great Barrier Reef Marine Park Authority of Australia embedded its large network of MPAs within other zoning and regulations to help address the many threats to coral reef ecosystems not mitigated by protection in MPAs.

The transition to comprehensive ocean zoning will not happen overnight. The governance transition, coordination with states and neighboring countries, engagement of diverse stakeholder groups, and scientific data collection and integration, among other challenges, will all take significant time, effort, and political will. These efforts would be galvanized by the passage of a U.S. ocean policy act, one that cements the country’s commitment to protecting ocean resources, just as the Clean Air and Clean Water Acts have done for air and fresh water. In addition, continued strengthening and funding of the CZMA would support states’ efforts to reduce the impact of coastal and upstream land use on coastal water quality and protect vulnerable ecosystems from overuse and degradation. Lawmakers could also increase the authority of the National Marine Sanctuary Program as a system of protected regions in which multiple human uses are already being managed. Finally, legislation to codify NOAA and support and increase its efforts to understand and manage marine ecosystems in an integrated way is urgently needed.

The third key policy recommendation is that protection of the few remaining relatively pristine ecosystems in U.S. waters should be a top priority. U.S. waters harbor important intact ecosystems that are currently beyond the reach of most human activities. Unfortunately, less than 2% of the U.S. EEZ is relatively unaffected. These areas are essentially national ocean wilderness areas, offering rich opportunities to understand how healthy systems work and important baselines to inform the restoration of those that have been degraded. These areas deserve immediate protection so that they can be maintained in a healthy condition for the foreseeable future. The fact that these areas are essentially unaffected by human activity means that the cost of protecting them in terms of lost productivity or suspended economic activity is minimal. The opportunity cost is small and the returns likely very large, making a strong case for action. If the nation waits to create robust marine protected areas to permanently conserve these intact places, it risks their degradation and a lost opportunity to protect the small percentage of marine systems that remains intact.

Book Review: Follow the money


Science for Sale: The Perils, Rewards, and Delusions of Campus Capitalism by Daniel S. Greenberg. Chicago: The University of Chicago Press, 2007, 324 pp.

Melissa S. Anderson

On the one hand, it appears that the sky is falling yet again. Science is caught up in a competitive arms race for funding, universities are driven by internal and external forces to enter into questionable relationships with the for-profit sector, scientists’ integrity buckles under pressure and, in short, as Daniel S. Greenberg puts it, “much is amiss in the house of science.” On the other hand, despite this general mayhem, scientists as a group demonstrate altruism, work with the best intentions toward scientific progress, and maintain a collective sense of ethical responsibility. Such is the two-handed perspective that dominates Greenberg’s Science for Sale.

The book’s strength lies in Greenberg’s skill as an interviewer of scientists and interpreter of complex developments. The first seven chapters of the book address a range of troublesome issues in science, including financial strain, federal and corporate funding, varieties of academy/industry relations, consequences of the Bayh-Dole Act, academic capitalism and entrepreneurship, breaches of human-subjects and conflict-of-interest regulations, and the regulatory environment. Greenberg covers these through detailed stories and analyses drawn from over 200 interviews with researchers, administrators, regulators, and others, as well as press reports and relevant literature. The strongest of these chapters is a lively review of interactions among federal regulators, institutions, and academic associations concerning human-subjects protection during the past 10 years. Here is Greenberg at his best, revealing the drama and personalities behind federal shutdowns of research programs that violated human-subjects regulations.

Each of the next six chapters is based primarily on Greenberg’s interviews with a single informant. Here we meet Robert Holton, known for his involvement in the development of the drug Taxol, who expresses his disgust with university/industry collaboration, and William Wold, who does his best to explain the financial arrangements that support his work as a professor at Saint Louis University and his role as president of the biotechnology company VirRx. Lisa Bero from the University of California, San Francisco, talks about deliberations and decisions in conflict-of-interest committees; and Drummond Rennie, who has held editorial positions at the New England Journal of Medicine and the Journal of the American Medical Association, expresses frustration at the limited capacity of journals to catch and correct fraudulent research. Greenberg’s interview with Timothy Mulcahy, who was then at the University of Wisconsin–Madison, concerns technology transfer but actually reveals more about how universities protect their students and postdoctoral fellows in the context of university/industry relationships.

It comes as no surprise that Greenberg, a longtime science journalist, takes a balanced, skeptical stance toward the issues he covers. He gives detailed accounts of things gone wrong but never loses sight of a certain nobleness of character underlying science generally. He documents major cases of malfeasance in academy/industry relationships but also argues that “a lot of steam has gone out of the belief that the linkage of universities and business is fundamentally unholy.” He balances the need for stronger regulation against the justifiable resistance of scientific associations to inept or excessive regulatory control. He contrasts universities’ enthusiastic expectation of financial windfalls from entrepreneurship with the realities of unlikely jackpots. Universities and the National Institutes of Health have great wealth, but it is never enough to satisfy the demands of scientific potential, leading Greenberg to question the willingness of researchers and research institutions to make tough decisions about financial priorities. Scientists acknowledge a need to be attentive and responsive to public demands for accountability but repeatedly stumble on the gap between rhetoric and reality.

The interview transcripts show Greenberg’s skepticism in action, as he plays both devil’s and angel’s advocate, countering both optimism and pessimism. Such evenhandedness makes this volume a useful counterweight to those who think they have U.S. science figured out. Readers will come away with a realistic sense of the calamity-prone, headache-inducing complexity of the research enterprise. Not all of the issues Greenberg addresses are new; some have been covered in other volumes, and many will be familiar to those who have been reasonably attentive to the scientific press. Not all of the topics fall under the rubric of “science for sale”; some have to do with the organization of scientific work (the federal grant system, peer review, the postdoctoral system) or with broad issues of research integrity, quite apart from involvement with the for-profit sector or marketlike behavior on the part of academic institutions.

Greenberg is less interested in proposing courses of action than in reviewing the effectiveness of current “correctives” of scientific behavior. Institutional policies and regulations, journals’ editorial diligence, academic associations’ influence, oversight by federal agencies such as the Office of Research Integrity and the Office of Human Research Protections, and attention by the press and Congress all play important roles in ensuring the integrity of science, roles that Greenberg argues are imperfectly executed. Academic commercialism raises the stakes: “The sins arising from scientific commercialism pose a far more challenging problem: keeping science honest while potent forces push it hard to make money.”

There will always be some whose behavior is determinedly or heedlessly deviant, with or without commercial involvement. There will always be many who conduct their research according to the highest ethical standards. There will always be a middle group who occasionally yield to temptation or misbehave in ways that escape attention. The question is whether the temptations that commercial forces present to the middle group are offset by counterpressures. Skeptical of the adequacy of institutional correctives, Greenberg nonetheless concludes that “for protecting the integrity of science and reaping its benefits for society, wholesome developments now outweigh egregious failings—though not by a wide margin.”

To bolster these wholesome developments, Greenberg repeatedly endorses an oddly old-fashioned idea: the power of shame as a behavioral corrective in science. In one paragraph of the final chapter, he invokes the terms shame, embarrassment, pride, reputations, exposure, harmful publicity, prestige, humiliation, norms, judgments of colleagues and the public, vulnerability, and ethical sensitivity. The shame weapon, as he calls it, has the capacity to keep scientists and their institutions honest because of the public’s expectations for ethical conduct in research, the critical importance of reputation to a successful career, and the severe consequences of having one’s wrongdoing exposed. Greenberg argues, “The scientific profession exalts reputation. Among scientists and journal editors, the risks of being classed as a rogue would have a wondrously beneficial effect on attention to the rules.” He notes that shaming and public humiliation at universities where research projects were shut down by federal agencies seemed to yield “salutary effects.”

It is perhaps not surprising that Greenberg, recognizing the temptations and pressures facing what he sees as a basically righteous population, proposes a journalist’s sharpest weapon, exposure, as a promising solution. What is old-fashioned about the idea is its connection to professional self-regulation. During the past 20 years, self-regulation has proved an inadequate counterweight against the surge of federal and institutional regulations, rules, oversight, accountability, formal assurances, and training mandates. Greenberg does not, of course, recommend abandoning all of these in favor of exposure and humiliation, but he does seem to argue that shame could pick up much of the slack from these other mechanisms. This argument may not hold up. First, the very importance of a scientist’s reputation, which is critical to shame’s effectiveness, makes potential whistleblowers reluctant to come forward, lest they erroneously inflict damage merely by accusation. Second, both academic and government research institutions depend on the public’s high regard for continued funding, a fact that heightens their susceptibility to shame but also substantially increases their incentive to hide emergent embarrassments. Third, recent research published in Science suggests that a finding of misconduct does not necessarily derail a career in science.

Shame may also be limited as a deterrent by the considerable control that individuals and institutions have over the terms of disclosure of their own activities. Rationalization is not necessarily a weak defense when one’s public questioners have little understanding of the details of scientific research or funding. Institutions can and do defend questionable actions as justifiable or even prudent in the context of changing and challenging environments. Both academic and public attitudes adjust when universities convincingly interpret their arrangements with industry as normative in the current academic economy.

Greenberg brackets his analysis with consideration of universities’ obsession with growth, competitiveness, and prestige. These focal points are relentless pressures that shape the context of many of the problems he addresses in the rest of the book. As he points out, “Risk and disappointment are built into the financial system of science, feeding a mood of adversity among university administrators, research managers, scientists, and graduate students.” Such systemic fault lines should not be ignored; they give rise to competitive pressures and a sense of injustice that my colleagues and I have found to be strongly linked to scientific misconduct. I challenge Greenberg to turn his estimable analytic skill and spot-on questions toward an investigation of these perverse and fundamental problems in the organization and funding of science.


Melissa S. Anderson is professor of higher education and director of the Postsecondary Education Research Institute at the University of Minnesota.

Archives – Winter 2009

SIDNEY NAGEL, Two-Fluid Snap Off, Ink jet print, 52.5 × 34.25 inches, 1999.

Two-Fluid Snap Off

A drop falling from a faucet is a common example of a liquid fissioning into two or more pieces. The cascade of structure that is produced in this process is of uncommon beauty. As the drop falls, a long neck, connecting two masses of fluid, stretches out and then breaks. What is the shape of the drop at the instant of breaking apart?

National Academy of Sciences member Sidney Nagel is the Stein-Freiler Distinguished Service Professor in Physics at the University of Chicago. Nagel’s work has drawn attention to phenomena that scientists have regarded as outside the realm of physics, such as the science of drops, granular materials, and jamming. Using photographic techniques, as illustrated by this image in the National Academy of Sciences collection, Nagel and his team study such transitions to understand how these phenomena can be tamed and understood.

Restoring and Protecting Coastal Louisiana

The challenges facing the Gulf Coast reflect a national inability to come to grips with the need to deal with neglected infrastructure, both natural and built.

The sustainability of coastal Louisiana is critical to the nation. The region is home to a large part of the nation’s oil and gas industry and its largest port complex, and it provides vital habitat for economically important fisheries and for threatened and endangered species. Yet this region is under siege. The catastrophic effects of Hurricane Katrina in 2005 and of recent storms in 2008 brought to the nation’s attention the fragility of the region’s hurricane defenses and the continuing loss of wetlands and ecosystems, a loss that has gone on for more than a century with little or no abatement. Slowly, the flood protection system in New Orleans is being restored; even more slowly, attention is shifting to restoring the coastal deltaic system. But support for these two linked efforts, protection and restoration, remains weak: funding is lacking, and so is a federal system for prioritizing the allocation of funds for critical water resources infrastructure. The challenges facing the Gulf Coast reflect a national inability to come to grips with neglected infrastructure, both natural and built, and to recognize that both kinds provide security to coastal communities. It will not be possible to protect and restore coastal Louisiana without significant changes in the way federal and state governments deal with these issues.

According to the American Society of Civil Engineers (ASCE), in its frequent report cards on the status of the nation’s infrastructure, the United States is not maintaining and upgrading its infrastructure and is especially neglecting its natural and built water resources infrastructure. The ASCE indicates that the cost of all needed infrastructure work in the United States exceeds $1.5 trillion. Funding for water and wastewater treatment facilities is falling behind at a rate of more than $20 billion each year. Funding for flood-risk management, navigation, hydropower, and ecosystem restoration (wetland and aquatic), not including the short-term levee repair efforts in New Orleans, also continues to decline. With so many clear and pressing needs, it is vital that the United States devise more rational approaches to the funding and prioritization of infrastructure projects, including critical water resource projects such as those in coastal Louisiana.

The 2005 disaster in New Orleans awakened the nation to the serious vulnerabilities in flood protection that exist across the country and to the fact that the nation lacks a realistic assessment of the infrastructure, both built and natural, it takes to reduce these vulnerabilities. The failures of levees and other infrastructure that have occurred since Katrina, including those that occurred during the Midwest floods of 2008, have more clearly defined this issue as national in scope. At the same time, the need for national priorities in ecosystem restoration has lacked attention. The loss of coastal wetlands along the Gulf had been well known for decades, and environmental groups had been campaigning for action to restore this deltaic coast. Resources were going to projects in other parts of the country such as the $7.8 billion federal initiative to restore the Florida Everglades and the joint federal/state efforts to reduce pollution in the Chesapeake Bay. Other regions also deserve attention. The need for ecosystem restoration has been recognized in the Missouri River, the upper Mississippi River, the California Bay Delta, the Great Lakes, and numerous smaller areas across the country. There is an urgent need to assess investments in natural and built environments to reduce vulnerabilities to increased flooding risks.

Coastal Louisiana sits at the end of a natural funnel that drains 41% of the coterminous United States and parts of two Canadian provinces. This watershed, the Mississippi River basin, delivers water to the Gulf of Mexico through the mouths of the Mississippi and Atchafalaya Rivers. Covering more than 11,400 square miles, this coastal area was built during the past 6,000 years by a series of deltaic lobes created as the Mississippi River switched east and west between Lafayette and Slidell, forming an extensive system of distributaries and diverse wetland landscapes as freshwater and silt mixed with the coastal processes of the Gulf of Mexico. Periodic river flooding through breaches (crevasses) in the natural levee ridges along the numerous distributaries, across the deltaic landscape out to the barrier islands, limited saltwater intrusion and added sediments to coastal basins. These river and coastal processes built and sustained an extensive wetland ecosystem, the eighth largest delta in the world. In addition to providing nurseries for fish and other marine life and habitat along one of the largest bird migration routes in North America, these wetlands serve as green infrastructure, natural buffers that reduce flood risks to the vast energy production and port facilities of the Gulf area as well as to human settlements inland from the coast. Early settlers in New Orleans were more concerned about flooding from the Mississippi than about the threat of Gulf storms, which were buffered by the extensive coastal forests that stood between the city and the Gulf of Mexico.

Long before Katrina, coastal wetlands were disappearing because of considerable human influence on, and disruption of, the natural processes of a deltaic coast. Levees were built along the banks of the Mississippi to keep the river from overflowing into floodplains and coastal environments, thereby protecting lands that had been converted to agriculture, industry, and human settlement. The sediment that once breached natural levees and nourished the wetlands was instead channeled out into the Gulf of Mexico, in essence starving the delta and causing it to recede rather than grow. The effect of the levees was exacerbated by the construction of channels and pipeline corridors that crisscrossed the wetland landscape, providing access for extracting much-needed domestic oil and gas and creating reliable navigation channels connected to Mississippi River commerce. During the 1960s and 1970s, coastal land, mostly wetlands, disappeared at a rate of 39 square miles per year.

The potential conflict between human activities and the processes necessary for a sustainable deltaic coast was identified after the 1927 flood. But pressure for protection and economic development overrode calls for more prudent management of river resources that would integrate protection and restoration policies. By the mid-1980s, coastal scientists had drawn public attention to the loss of wetlands and the degradation of the Mississippi River delta. Very little was done to address the enormous problem because the environmental consequences were not deemed sufficient to justify the expense of restoration and mitigation. In 1992, the Mississippi River Commission, recognizing the problem of increased salinity that threatened deltaic habitats along the coast, opened a diversion structure through a Mississippi River levee at Caernarvon, south of New Orleans. This structure simulates a levee breach by allowing Mississippi River water to flow by gravity into the wetlands behind the levees during certain periods of the year (floodgates are opened when river levels are elevated). It was the first significant step in what may become a series of such structures south of New Orleans.

New Orleans and the surrounding region have been protected in various ways from potential Mississippi River floods since the city was settled in 1717. After the disastrous 1927 flood, the Army Corps of Engineers instituted a massive river levee-rebuilding program that was accompanied by floodways and channel modification. This river-protection system has performed as expected since that time.

Coastal protection was added to the Corps’ authority in 1965, when Hurricane Betsy flooded parts of New Orleans. Until the arrival of Katrina, federal and local efforts had focused on providing protection against a storm defined by the National Oceanic and Atmospheric Administration (NOAA) as the standard project hurricane. Shortly after construction began in earnest, NOAA increased the estimated size of the standard project hurricane. In contrast to the river-protection system, the coastal-protection system was funded through individual projects that came in dribs and drabs, limiting the ability of the Corps to change its design to accommodate the new, larger target hurricane. Instead, the Corps decided to first complete all the work at the original level of protection. But as individual construction projects took place, ever-present subsidence was diminishing the level of protection provided by the newly constructed levees. When Katrina hit, the degree of completion of the major components of the protection system varied from 65 to 98% of the original design standards, not taking into account the datum errors, subsidence, and sea-level rise that had occurred since the original design. The failure during Katrina of several components of the protection system, together with the massive size of the hurricane itself and the loss of coastal habitat, resulted in the loss of more than 1,400 lives, the devastation of major housing districts within the city, and other damage throughout the region.

Finding solutions

Postmortems on the impact of the hurricane flooding recognized the longstanding relationship between extensive coastal wetlands and community protection, resulting in a great deal of debate about whom or what was to blame for failing to implement integrated protection and restoration. Now, however, it is more important that we devote our attention to finding solutions that will leave this important region with reduced risks from hurricanes, a navigation system that will support the substantial foreign trade through the Port of New Orleans, support for the area as a viable energy producer for the nation, and a rich and vibrant coastal wetland ecosystem.

Although there are now cooperative efforts to deal with the problems of coastal Louisiana, the picture is far from rosy. Two parallel efforts, one led by the state of Louisiana and the other by the Corps, have been under way since Katrina to determine the appropriate combination of structural activity (levees, flood walls, gates, and so forth), non-structural features (for example, building codes and evacuation planning), and wetland restoration needed to protect urban areas and distributed assets across the coastal landscape. The state plan has been approved by the Louisiana legislature, but the Corps plan has yet to be completed and submitted to Congress. Both plans call for restoration of the wetlands through diversions of the Mississippi River, and both would rely on adaptive management of the process to address the substantial design uncertainties in such a large dynamic deltaic system. A coastal ecosystem restoration program, much like that for the Everglades, was authorized by Congress in the Water Resources Development Act of 2007. Only a few preliminary projects were authorized, however, and funding has not yet been provided. This authorization establishes a structure to oversee this work but does not identify methods to be used to determine priorities among the various components of the overall program, nor does it provide an effective means for competent project authorization and funding. The state has recently announced plans to spend nearly $1.2 billion over the next three years on protection and restoration projects that are consistent with the state master plan. Although this is an impressive investment, it is an order of magnitude less than even some of the conservative estimates of system-level project costs for both coastal ecosystem restoration and storm risk reduction.

The specter of climate change is adding to the water and coastal management challenges. Climate change will bring about changes in weather patterns and the potential for increased flooding, drought, and sea-level rise. Existing projects will have to be modified to accomplish the purposes for which they were originally designed, and additional attention will be required to deal with the already significant strain on recovering ecosystems. The vulnerability of coastal landscapes to projected environmental changes depends on the capacity of their ecosystems to adapt. The present rate of wetland loss in this region suggests that these adaptive mechanisms are insufficient to keep pace with current rates of sea-level rise and subsidence.

Those working on coastal Louisiana restoration and protection have attempted to deal with the program on a comprehensive (watershed) basis, recognizing that the problems of southern Louisiana are not solely those of that state. The sediment required to replenish the wetlands will come from lands scattered throughout the basin and will be affected by the activities in the basin states. Much of the original sediment load of the Mississippi is trapped behind major dams on the Missouri River system. A major dead zone (an area where marine life is stressed because of lack of oxygen) now exists in the Gulf of Mexico along Louisiana and parts of Texas as a result of excessive nutrients traveling down the Mississippi from the farmland of the Midwest. The flux of nitrate has increased threefold since the 1960s. Although sediments are critical to rebuilding the wetlands of the Mississippi River Delta, additional nutrients flowing through river diversion structures could potentially impair inland waters of the state. Two strategies have been suggested to limit potential water quality problems along coastal Louisiana. Upstream, the application of agricultural chemicals to Midwest farmland could be significantly reduced and wetland buffer strips restored at the edges of fields to cut nutrient loading in river waters. Downstream in the coastal delta, wetland restoration itself is considered another mechanism for reducing the nutrients reaching coastal waters. Both strategies face uncertainties about how much nutrient reduction the system can deliver and about the political will to implement them, so efforts to divert river water and sediment for wetland restoration may be constrained by the accompanying nutrients and their potential to worsen hypoxia.

Funding limitations

Even though the nation’s largest port and energy complex, a metropolitan area of nearly a million residents, and coastal wetlands of immense value are at risk, funds to support the restoration and protection of coastal Louisiana have been slow in coming. The Corps has been provided with about $8 billion to restore the levee system around New Orleans to the level of a 100-year flood. This level of protection is below that of a 400-year storm such as Katrina, but it will relieve New Orleans residents of the requirement to buy flood insurance against a potential hurricane. Congress has directed the Corps to study and report on the costs of providing New Orleans with protection against a category 5 hurricane. Early estimates indicate that the costs of such a project would exceed $10 billion. The cost of coastal restoration has been estimated at as much as $20 billion. Even in these days of mega-bailouts, those are big numbers.

The ability to move ahead with the protection and restoration of coastal Louisiana will require substantial funding. The Bush administration’s budgets have kept funding for the water sector flat except for periods when disasters required immediate attention. In constant-dollar terms, the funds available for these projects are going down each year. In the tight funding environment of recent years, budget decisions have been driven largely by the historical record of funding, not an evaluation of the nation’s risks and needs. The current fiscal crisis will only increase the pressure on the limited dollars that are available.

The largest source of funds for dealing with major water projects is found in the budget of the Corps. But the restoration and protection of coastal Louisiana is but one of many flood and hurricane protection, navigation, ecosystem restoration, and other projects that demand Corps and related federal water dollars. Major flood problems in the central valley of California, the reconstruction of levees in the Midwest, and the repair and upgrade of other structures identified in recent levee system inspections provide competition for New Orleans and coastal Louisiana. The aggregate projected costs of restoration projects in the Everglades (now $10.9 billion), upper Mississippi, Chesapeake Bay, Great Lakes, and California Bay Delta exceed $50 billion. Costs for other programs, such as the Missouri River basin, remain to be calculated.

Unfortunately, priority setting is tied to a rudderless system for allocating federal funds and assessing national needs. It is difficult to justify a national priority when objectives at the national level are not clear. Developing a needs assessment is dependent on having national policies that appropriately define national goals for water use. Whom do we protect from flooding? What infrastructure is at risk? What losses and risks will have national consequences? What ecosystems need to be restored or are the most valuable to the economic, ecological, and social well-being of the nation? How important are ports to the economy of the country? Recent National Research Council studies of the Corps’ planning processes and projects have indicated that the Corps is faced with conflicting laws and regulations that make prioritization and description of needs difficult to achieve.

Within the federal government, requests for funds are initiated by the departments and are based on guidance from the Office of Management and Budget, which establishes prioritization criteria for items to be included in the president’s budget. But these priorities are only tangentially related to actual needs and are driven by economic cost/benefit criteria, not national needs. In making decisions on the budget, Congress, as was noted at a recent hearing on watershed planning, tends to deal with the authorizations and appropriations for specific projects with little consideration of the relationship of the projects to the greater needs of the nation or even the watershed in which the projects are to be built. With some exceptions, Congress supports projects on the basis of the political weight they carry.

Prioritizing funding on a watershed basis would not be new to the United States. In 1927, Congress directed the Corps to conduct studies of all U.S. river basins in order to plan for integrated development of the water resources of these basins. These “308 reports” (named for the section of the law that authorized the studies) became the basis for the development of the Tennessee Valley and Columbia River basins, among many others. In cases in which such basin/watershed planning has taken place in a collaborative manner, the results have been outstanding. The Delaware River Basin Commission brings together the states of New York, Pennsylvania, and New Jersey for cooperative management of that important river basin.

In recent years, members of the House and Senate have tried to establish a needs-based approach for allocating funds, but the efforts failed because too few members were interested in giving up the benefits of selecting projects on their political merit. During a 2007 debate on an amendment to a bill to create a bipartisan water resources commission to establish priorities for water project funding, Sen. John McCain (R-AZ) noted that “We can best ensure safety of our nation’s water resources system by establishing a process that helps us to dedicate funding to the most critical projects. The current system allows more of the same, where members demand projects that are in the members’ interests, but not always in the public’s.” The amendment went nowhere.

Looking for other approaches

Is there a substitute for federal money to support water resource projects? Because of the massive costs of major restoration efforts, doing without Congress doesn’t seem to be a reasonable approach. States are already participating in the funding of major projects. Louisiana has announced its intention to allocate substantial funding to coastal restoration and protection activities (more than $1 billion in the next three years). California recently passed a $5 billion bond issue to repair levees. With federal appropriations slow in coming, Florida has contributed more funding for restoring the Everglades and acquiring critical lands. But states are also in a funding squeeze and cannot provide all that is needed to support projects that are in the national interest.

Several alternative ways of financing infrastructure projects have been proposed and should be seriously considered. Former senator Warren Rudman and New York investment banker Felix Rohatyn have proposed the establishment of a National Investment Corporation (NIC) with the authority to issue bonds with maturities of up to 50 years to finance infrastructure projects. The bonds would be guaranteed by the federal government and, as long-lived instruments, would align the financing of infrastructure investments with the benefits they create. Bond repayment would allow the NIC to be self-financing. In a similar approach begun after Katrina, a working group commissioned by the Corps proposed the creation of a congressionally chartered coastal investment corporation to support needed development projects. In 2007, Louisiana established the Coastal Protection and Restoration Financing Corporation that “will be responsible for selling bonds based on the expected revenue from future oil and gas royalty payments” and that will allow funding of projects over the next 10 years “instead of having to wait until a steady revenue stream arrives from the federal government in 2017.” In the face of the current fiscal crisis and the need for a long-term approach, the NIC offers the most realistic way to create a sustainable funding stream.

Another challenge is coordinating federal funding and establishing regional priorities. In the past, the United States successfully established processes for developing priorities and funding to deal with water issues of national significance. In 1879, Congress established the Mississippi River Commission with the mission of providing a navigable Mississippi and reducing the ravages of frequent floods. After the 1927 flood, Congress passed the Flood Control Act of 1928, which created the comprehensive Mississippi River and Tributaries (MR&T) project. This permitted the commission to deal with the lower valley as a whole: one mission, one entity, working cooperatively with all interested parties to integrate the resources needed to meet the challenge. Although the operations and size of government have changed since 1879 and 1928, the need to deal with work in the lower Mississippi Valley in a comprehensive manner remains. The continuous funding of work on the lower Mississippi River for nearly 80 years and the comprehensiveness of the effort show the utility of developing a separate federal project, similar to the MR&T, for restoring and protecting coastal Louisiana.

Protection and restoration of coastal Louisiana should be a major priority for the United States. The nation cannot live without its water resources and deltaic coast. It cannot continue to watch coastal Louisiana disappear. Sooner or later, it will have to address the problem. The longer we wait, the more difficult the problem will become, and the more money the eventual solution will cost.

Recommended reading

J.W. Day Jr., D.F. Boesch, E.J. Clairain, G.P. Kemp, S.B. Laska, W.J. Mitsch, K. Orth, H. Mashriqui, D.R. Reed, L. Shabman, C.A. Simenstad, B.J. Streever, R.R. Twilley, C.C. Watson, J.T. Wells, and D.F. Whigham, “Restoration of the Mississippi Delta: Lessons from Hurricanes Katrina and Rita,” Science 315 (2007): 1679–1684.

Everett Ehrlich, Public Works, Public Wealth: New Directions for America’s Infrastructure (Washington, DC: Center for Strategic and International Studies, 2005).

Committee on Environment and Natural Resources, Integrated Assessment of Hypoxia in the Northern Gulf of Mexico (Washington, DC: National Science and Technology Council, 2000) (available at http://oceanservice.noaa.gov/products/pubs_hypox.html#Intro).

National Research Council, U.S. Army Corps of Engineers Water Resources Planning: A New Opportunity for Service (Washington, DC: National Academies Press, 2004).

National Research Council, Drawing Louisiana’s New Map: Addressing Land Loss in Coastal Louisiana (Washington, DC: National Academies Press, 2005).

National Research Council, Regional Cooperation for Water Quality Improvement in Southwestern Pennsylvania (Washington, DC: National Academies Press, 2005).

Felix G. Rohatyn and Warren Rudman, “It’s Time to Rebuild America. A Plan for Spending More—and Wisely—on Our Decaying Infrastructure,” Washington Post, December 13, 2005, p. A27.

Working Group for Post-Hurricane Planning for the Louisiana Coast 2006, A New Framework for Planning the Future of Coastal Louisiana after the Hurricanes of 2005 (Cambridge, MD: University of Maryland Center for Environmental Science, 2006).


Gerald E. Galloway is the Glenn L. Martin Professor of Engineering at the University of Maryland, a former chief of the U.S. Army Corps of Engineers, and a former member of the Mississippi River Commission. He was recently appointed to the Louisiana Governor’s Advisory Commission on Coastal Protection, Restoration and Conservation. Donald F. Boesch is professor of marine science and president of the University of Maryland Center for Environmental Science. He serves as chair of the Science Board for the Louisiana Coastal Area Ecosystem Restoration Program. Robert R. Twilley is professor of oceanography and coastal sciences, and associate vice chancellor of the Coastal Sustainability Agenda at Louisiana State University, Baton Rouge.

Forum – Winter 2009

Budget doubling defended

Richard Freeman and John Van Reenen (“Be Careful What You Wish For: A Cautionary Tale about Budget Doubling,” Issues, Fall 2008) provided a thought-provoking analysis of the budget doubling for the National Institutes of Health (NIH). They raised an important point that we must view future research funding increases in terms of their impact on increasing educational opportunities and financial support for young researchers. However, the NIH doubling was needed because chronic underinvestment in scientific R&D created a situation in which many of our federal scientific agencies were in need of significant short-term increases in funding. We must learn to avoid the complacency that led to the current funding deficiencies. Unfortunately, the flat funding of NIH since the doubling ended has effectively caused its budget to decline by 13% due to inflation, creating a whipsaw effect after a decade of growth. Research policy is long-term, and we must commit to a sustainable funding model for federal science agencies.

A dramatic increase in the NIH budget was necessary, but focusing on a single agency ignored the interconnectedness of the scientific endeavor. Much of the innovative instrumentation, methodology, and workforce needed for advancing biomedical research comes through programs funded by the National Science Foundation and other agencies. For instance, the principles underlying magnetic resonance imaging (MRI) were first discovered by physicists and further refined by physicists and chemists; MRI is now a fundamental imaging tool for medical care. From MRI to laser surgery, biomedical advances often rely on knowledge and tools generated by other scientific fields. The recognition of these interconnections drove the passage of the America COMPETES Act, a law that I was proud to support, which authorized balancing the national research portfolio by improving funding for physical sciences and engineering.

The current economic situation has constricted our financial resources. However, a sustained investment in science and our scientific workforce will contribute to our nation’s long-term economic growth and ensure a stronger economy in the future.

REPRESENTATIVE RUSH HOLT

Democrat of New Jersey

www.holt.house.gov


Innovating for innovation

In “Creating a National Innovation Foundation” (Issues, Fall 2008), Robert Atkinson and Howard Wial make a compelling case for public policies that address how research discoveries become innovations, creating economic activity, jobs, and new capabilities. This line of discussion is too often ignored when the case for public support for science is made. The transition from research to innovation to commercial success is far from automatic. That is why this process is called crossing a “valley of death.”

The United States has indeed been losing ground in the race for global leadership in high-tech innovation. The question is not whether we need a national innovation policy but how it should be constructed. The authors propose a new federal entity called the National Innovation Foundation (NIF) that would include the National Institute of Standards and Technology’s (NIST’s) Technology Innovation Program, the Department of Labor’s WIRED program, and perhaps the National Science Foundation’s (NSF’s) innovation programs. This is a substantial challenge for three reasons. First, history shows that remedying deficiencies in government capability and priority by creating yet another box in the government organization chart takes years, is fraught with failures, and always arouses strong opposition in Congress. Second, as the authors point out, conservatives in Congress are on record as opposing even modest forms of the NIF idea; witness their determination (successful in 2007) to zero out the budget of NIST’s Advanced Technology Program. Third, removing existing programs from their current homes and assembling them into a new agency is more often a way to kill their effectiveness than to enhance it; witness the problems of the Department of Homeland Security.

Even if the skepticism of conservatives about a government role in private markets were not a problem, the scope of issues any unified technology policy agency must encompass is too broad to be brought together in one place. For example, it must embrace not only technology issues but tax, trade, intellectual property, securities regulation, and antitrust policies as well.

How then might a much stronger and better coordinated federal focus on innovation be established in a new administration more convinced of the need for it?

Atkinson and Wial suggest several alternative forms for their NIF. It could be part of the Department of Commerce, a government-related nonprofit organization, an independent agency (such as NSF), or an arm of the Office of the President. Because NIF would be an operating agency, it cannot be in the Executive Office of the President.

I would suggest a more modest approach, built on a greatly strengthened Technology Administration in the Department of Commerce. (It, too, was recently abolished by the Bush administration.) If it were restored (no new legislation is needed) and a secretary of commerce qualified to lead the restoration of an innovation-intensive economy were appointed, NIST could remain a core and important capability for the NIF function. Both the technical and economic dimensions of a NIF are well within the existing authority of Commerce. The Office of Science and Technology Policy should be responsible for helping the president to integrate all the key functions of an innovation policy, including the major technology departments and agencies. Thus, although this arrangement would not be as glamorous as a new agency, it could be put in place quickly, with only marginal need for amendments to current legislation.

LEWIS M. BRANSCOMB

Adjunct Professor

School of International Relations and Pacific Studies

University of California, San Diego


Robert Atkinson and Howard Wial are worried about the current and future state of U.S. innovation. They point to the gradual decline in America’s standing in everything from R&D funding to the publishing of scientific papers. People in the innovation community in the United States, from the research universities to the companies bringing products to market, share their concern. So do I.

Atkinson and Wial do more than worry. They propose a timely set of policies and a new National Innovation Foundation (NIF). They start by praising and urging full funding for the America COMPETES Act but also call for a more expansive focus on the entire innovation system, from R&D to the introduction of new commercial products, processes, and services.

Their NIF would be designed to take policy several steps farther by bringing coherence to separate innovation-related federal programs, providing support for state-based and regional initiatives, and strengthening efforts to diffuse as well as develop ideas. Atkinson and Wial urge specific funding to promote collaboration among firms, research institutions, and universities.

They provide enough detail to give the reader a good sense of how the NIF could function, but remain agnostic about where it might best fit in a new administration.

Their proposal opens several doors for action:

Fully fund the America COMPETES Act. Congress should fully fund the America COMPETES Act when it turns again to consideration of the fiscal year 2009 budget.

Think systems. The country needs to think about the innovation system as a whole and develop an innovation strategy to build on it. I particularly liked their call for an annual Innovation Report similar to the annual Economic Report of the President. In that same spirit, I would call on the president to make an Annual State of American Innovation address and require a quadrennial articulation (just as the military does) of the nation’s innovation strategy.

Increase support for current innovation programs. We should broaden and increase funding for the National Institute of Standards and Technology’s Manufacturing Extension Partnership, the Technology Innovation Program, and similar innovation-related programs. Whether or not the new administration establishes a new institution, Congress should establish programs to support state and regional innovation initiatives as well as collaborative ventures.

Start institutional change. Make one of the associate directors in the Office of Science and Technology Policy responsible for innovation policy. Restore and update the Office of Technology Assessment in Congress with a specific mandate to consider the innovation system.

Looking ahead to 2009, as we respond to the financial crisis and expected recession, we need to think about the impact of new policies on our innovation system—the long-term driver of higher wages, the foundation for economic strength, and a key element in national security. Too often, innovation, and the national system that supports it, is not even an afterthought, let alone a forethought.

KENT HUGHES

Woodrow Wilson Center

Washington, DC

Kent Hughes is the author of Building the Next American Century: The Past and Future of American Economic Competitiveness (Wilson Center Press, 2005).


Better environmental treaties

Lawrence Susskind has identified some key problems with the very structure of environmental treaty formulation (“Strengthening the Global Environmental Treaty System,” Issues, Fall 2008). Some of the remedies he proposes are, however, already being put into practice, with mixed results. For example, one of the solutions presented is the involvement of civil society groups as part of the treaty-making process. This is already happening with many environmental agreements, because civil society groups play an essential role at most Conferences of the Parties where treaty implementation is worked out.

Secretariats of environmental agreements such as the Ramsar Convention on Wetland Protection are housed at the International Union for the Conservation of Nature, which boasts over 700 national nongovernmental organizations as its members. Hence, even if voting rights remain with nation-states, civil society groups have considerable influence through such organizational channels. What often happens is that many of these civil society groups are co-opted by the protracted treaty process as well and are thus not as effective as one may expect them to be.

There are also two seemingly contradictory trends in the politics of international treaties. On the one hand, nationalism is gaining strength along linguistic and religio-cultural divides, as exemplified by the emergence of new states within the past few years such as East Timor and Kosovo. On the other hand, the legitimacy of national jurisdiction is gently being eroded by institutions such as the World Trade Organization and the International Criminal Court.

In this regard, Susskind’s critique is most valid regarding the asymmetry of action caused by powerful recalcitrant states and the delinking of environmental issues from security imperatives. Within the United Nations (UN) system, the only institution with a clear mandate for international regulatory action is the Security Council. However, seldom are environmental issues brought to its attention as a cause for intervention. The inertia within the UN system to reform the structure of the Security Council filters down to all levels of international treaty-making.

The economic power of certain nation-states such as India and Brazil is beginning to provide an antidote to the hegemony of the old guard in the Security Council, as exemplified by the recent failure of the Doha round of trade negotiations. Yet environmental negotiations are still largely decoupled from these more powerful international negotiation forums and are thus not affected by this new locus of influence.

As Susskind notes, the role of science in international treaties can often be diluted by the need to have global representation, as exemplified by the Intergovernmental Panel on Climate Change. However, such pluralism is essential despite its drawback: purely meritocratic research output is diminished in order to gain acceptance across all member states.

Some efforts to reconcile these contradictory trends in environmental policymaking are beginning to emerge, and in them Susskind’s concerns may already have been adequately addressed. The European Union’s environmental laws exemplify a process by which national sovereignty can be recognized at a fundamental level while acknowledging ecological salience across states with large economic inequalities. Ultimately, if we are to have an efficacious environmental treaty system, a similar approach with clear targets and penalties for noncompliance will be needed to ensure that policy responses can keep up with ecological impact.

SALEEM H. ALI

Associate Professor of Environmental Policy and Planning

Rubenstein School of Environment and Natural Resources

University of Vermont

Burlington, Vermont


International environmental law has been greatly expanded during the past 40 years. Although some success can be noted with respect to, for example, phasing out ozone-depleting substances, many environmental problems remain unabated. Lawrence Susskind correctly notes that the current system “is not working very well.” Based on his assessment of the system’s weaknesses, Susskind offers several practical suggestions for improving the effectiveness of international environmental governance.

Susskind’s suggestions focus on specific ways in which the environmental treaty-making system can be improved without requiring major changes to basic structures of international law and cooperation. Some may criticize this approach as being too modest given the severity of the environmental challenges we face, but it has the advantage of being more realistic in the short to medium term than any call for fundamentally altering the roles and responsibilities of international organizations and states in international lawmaking and implementation of environmental treaties.

Of Susskind’s many constructive proposals, a few stand out as being both important and relatively achievable. These include setting more explicit targets and timetables for mitigation, establishing more comprehensive and authoritative mechanisms for monitoring and enforcement, and developing new structures for formulating scientific advice. None of these issues are unproblematic—if they were easy, they would already have been addressed—but discussions around several of them are advancing under multiple environmental treaties (albeit painstakingly slowly).

At the heart of many difficult discussions lies the fact that states remain reluctant to surrender sovereignty and decisionmaking rights under environmental treaties. This draws attention to the importance of norms and principles guiding collective behavior. Susskind touches on this in his discussion about the United States’ rejection of the principle of common but differentiated responsibilities intended to aid industrialized and developing countries to move forward on specific issues while recognizing that there are fundamental differences between them in terms of their ability to lead and act.

Political science and negotiations analysis tell us that a shared understanding of the cause, scope, and severity of a problem is critical for successful communal problem-solving involving the redistribution of costs and benefits. It is unlikely that global environmental governance will be significantly improved until there is a much greater acceptance among leading industrialized and developing countries about the character and drivers of environmental problems and shared norms and principles for how they are best addressed (including the generation of funds for mitigation and adaptation).

In other words, many of the practical suggestions for improving global governance put forward by Susskind should be debated and pursued across issue areas, because they would help us address specific environmental problems more effectively. At the same time, the magnitude of collective change ultimately needed to tackle the deepening environmental crisis is unlikely to come about without more widespread global acceptance of common norms and principles guiding political and economic action and policymaking.

HENRIK SELIN

Assistant Professor

Department of International Relations

Boston University

Boston, Massachusetts


Managing military reform

In “Restructuring the Military” (Issues, Fall 2008), Lawrence J. Korb and Max A. Bergmann call the Pentagon “the world’s largest bureaucracy,” implying that it can be managed much like other very large organizations. They then go on to discuss policies that they believe should be put in place, skirting the question of how such policies would be received in the many semiautonomous centers of power within the Department of Defense (DOD), which in reality is more a loose confederation of tribes than a bureaucracy. “Bureaucracy,” after all, signifies hierarchy. Well-defined hierarchies do exist within the DOD, but they are found within the four services and the civilian employees who answer ultimately to the Secretary of Defense. Otherwise, lines of authority are ambiguous and contested, more so than in any other part of our famously fragmented government. To a considerable extent, policy in the DOD is what happens, not what is supposed to happen.

Each of the services has its own vision of warfighting. Before World War II, this made little difference. Since then it has, and the stark contrast between chains of command within the services and the tangled arrangements for coordination among them affect almost everything the DOD attempts. Civilians find it hard simply to discern, much less unravel, conflicts within and among the services from which decisions and priorities emerge concerning, for example, acquisition (R&D and procurement), and if civilian decisionmakers cannot understand what is going on, except grossly, they cannot exert much influence over outcomes.

Korb and Bergmann laud the 1986 Goldwater-Nichols reforms for enhancing “coordination” and “cohesion” and call for extension of this “model” to “the broader bureaucracy that oversees the nation’s warfighting, diplomatic, and aid agencies.” That seems wishful thinking. Indeed, many of the examples they adduce suggest that Goldwater-Nichols changed relatively little. As I argue in Trillions for Military Technology: How the Pentagon Innovates and Why It Costs So Much, many of the difficulties so evident in acquisition can be traced as far back as World War I. Why, after all, is it that “soldiers in an Army vehicle have been unable to communicate with Marines in a vehicle just yards away” when such problems have existed literally since the invention of radio?

As president, Dwight Eisenhower, uniquely able to see inside the Pentagon, exerted personal oversight over many aspects of military policy. Other presidents have had to rely on their defense secretaries. A few of those, notably Robert McNamara, tried actively to manage the Pentagon. Unfortunately, the organizational learning that began under McNamara was tarred by his part in the Vietnam debacle, and too many of his successors, such as Caspar Weinberger, were content to be figureheads. As a result, and notwithstanding Goldwater-Nichols, the services have nearly as much autonomy today as in the past.

The reasons should not be misunderstood. They stem from professionalism. The aversion of military leaders to civilian “interference” is little different from the aversion of physicians to being told how to practice medicine. The difference is that as consumers we can always switch physicians, whereas military force is not, and let us hope never will be, a commodity that can be purchased in the global marketplace.

It may be that Korb and Bergmann hope that calling for the armed forces to cooperate more effectively with other parts of the government will, if enough money is forthcoming, lead to real change. Any review of the relationship between the DOD and the State Department and Atomic Energy Commission after World War II would indicate how forlorn such a hope must be. Reform must start inside the service hierarchies. That is a precondition for the sorts of steps Korb and Bergmann recommend.

JOHN ALIC

Avon, North Carolina


Not just for kids

Brian Bosworth’s assessment in “The Crisis in Adult Education” (Issues, Summer 2008) is right on target. At a time when our country faces unparalleled economic uncertainty and unprecedented competition, the United States must dramatically increase the number of individuals with postsecondary credentials and college-level skills if we are to maintain our economy’s vitality. We agree with Bosworth that strong state and federal policies can help millions of adult learners reap the benefits of innovative approaches to improving postsecondary learning and adult skill development.

For 25 years, Jobs for the Future and its partners have been at the forefront of innovations in education for low-income and low-skill Americans. We work side by side with practitioners and policymakers in 159 communities in 36 states in programs that provide evidence for the importance of Bosworth’s proposals and models for their application on the ground. In Oregon, for example, Portland Community College offers a wide range of services specifically designed to meet the needs of academically unprepared adult students. As a participant in the 18-state Breaking Through initiative, a collaboration of Jobs for the Future and the National Council for Workforce Education, the college is redesigning developmental education programs to serve as a bridge between adult basic education and credit-bearing courses. Adult students who very likely struggled in high school or didn’t receive a diploma at all are provided with mentors, tutors, and other supports that help them navigate the complex and often intimidating college environment to shore up their academic achievement. The result: More students are staying in school and working toward the credentials they need to succeed.

Portland Community College is aided in these efforts through Oregon state policy, which encourages enrollment in both adult basic education and developmental education programs as paths to better jobs. One way state policy does this is by providing a match of 80% of the roughly $5.6 million the federal government has invested in Oregon’s adult basic education program. Oregon also reimburses colleges at the same per-student rate for adult basic education, developmental education, and credit-level students. This uniform rate raises the academic standing of adult basic education and indicates that it is just as important as other programs.

In Maryland, Community College of Baltimore County is giving occupational training to frontline entry-level workers in health care—not in classrooms but right at the hospitals where they work. Workers do not have to commute, and they pay little or nothing for their training and college credit. The program is part of Jobs to Careers, a $15.8 million national initiative of the Robert Wood Johnson Foundation, in collaboration with The Hitachi Foundation and the U.S. Department of Labor. A national initiative with 17 sites, Jobs to Careers uses a “work-based learning” model. It embeds learning into workers’ day-to-day tasks, learning that is developed and taught by both employers and the educational institution. This way, employees can move up career ladders, employers benefit from higher retention rates, patients receive better care, and the college has a new way to deliver its services and strengthen its local economy. Bringing college to the work site is not only a groundbreaking strategy; it’s a common-sense solution to the skills gap affecting local economies across the country. Jobs to Careers is matching the jobs that need to be done with the individuals who need them most.

Innovative and cost-effective investments in a skilled workforce are key to keeping high-paying jobs in America. These human capital investments, particularly for low-skill and low-income youth and adults, address two pressing national challenges: greater equity and stronger economic performance. Our thanks to Bosworth for putting a spotlight on these issues.

MARLENE B. SELTZER

President and Chief Executive Officer

Jobs for the Future

Boston, Massachusetts

www.jff.org


Science and foreign policy

Gerald Hane has a valuable piece in the Fall 2008 Issues (“Science, Technology and Global Reengagement”) arguing that the new administration must recognize the critical role of science and technology (S&T) in the conduct of the nation’s foreign policy and that that role must be reflected in the structure of the White House and State Department. It is not a new argument but one that has bedeviled many administrations and many Secretaries of State, with results that have varied but almost always have fallen short of what is needed. Today, it is of even greater importance as the consequences of inadequate response become steadily more damaging in light of the rapid upgrading of scientific and technological competence throughout the world and the emergence of global-scale S&T–rich issues as major elements of foreign affairs. The threat to America’s competitive economic position as well as its national security is real and growing.

Hane asserts the critical importance of the new president’s having in his immediate entourage a science adviser able to participate in formulating policy to deal with the flood of these issues. He calls for an upgrading of the Office of Science and Technology Policy and for the director of that office to also be a deputy assistant to the president for science, technology, and global affairs. Whatever the title, he is quite right that it is not enough only to be at the table; the science adviser must have the clout and the personal drive to frame the discussion and influence decisions. In this setting, the power that comes from proximity to the president is essential in order to be able to cut through often contentious agency debates and congressional opposition.

But Hane’s arguments, absolutely sound, arrive in the new president’s inbox along with those of many others, all clamoring for immediate attention to their needs. In this maelstrom, it is all too possible that science will be seen as just another self-pleading interest.

What the result will be depends on whether the new president believes in and understands the need for this kind of close scientific advice, asserts the leadership required to create the necessary White House climate, and is prepared to include a senior science adviser in White House policy deliberations across a wide swath of subjects. During his campaign, President-elect Barack Obama expressed the intention to deal urgently with science-rich international issues such as climate change and has signaled that he will appoint an individual to oversee technology implementation across government agencies. Early in his campaign, he formed a science advisory committee, led by a successful science administrator and Nobel laureate, Harold Varmus, which apparently was in communication almost daily with the campaign’s policy leaders. Thus, there is reason to hope that Obama appreciates why many of the policies he is most concerned about will require scientists of quality to be centrally involved in the many policy choices he will have to make.

EUGENE B. SKOLNIKOFF

Professor of Political Science Emeritus

Massachusetts Institute of Technology

Cambridge, Massachusetts


Gerald Hane has rung a bell with his article on the need for the United States to pay more attention to international S&T. It reminded me of a phrase we coined in the 1970s when U.S. diplomats were leveraging scientific cooperation to improve relations with China and the Soviet Union: Science and technology is the new international currency. As national commitments to research and innovation strengthen in all parts of the world, this mantra has become ever more relevant.

Hane knows from his years of tending the international portfolio at the Office of Science and Technology Policy that his ambitious vision for an enhanced role for science and technology can be realized only with the president’s direct involvement. Thus, he recommends that the president establish the position of deputy assistant to the president for science, technology, and global affairs and make his science advisor a member of the National Security Council and the National Economic Council.

The president could implement these suggestions immediately. Doing so would be consistent with the new administration’s expressed desire to pay more attention to the soft power dimensions of U.S. foreign policy, and that is where science cooperation can play an especially fruitful role—if there is adequate funding.

Earlier attempts to find funding for cooperative international science projects fell short. In the Carter administration a proposal to establish an Institute for Scientific and Technological Cooperation (ISTC) for this purpose made it through three of the four mandatory wickets in Congress before it failed for lack of appropriations in the Senate. That or a similar federal approach could be tried again.

One thorny question is where to house the effort. The ISTC was intended to be a semi-autonomous body inside the U.S. Agency for International Development. Another option would be the State Department, but Congress is not readily inclined to support science projects through State, even though precedents exist in the Cooperative Threat Reduction (CTR) programs (funded as a nonproliferation measure) and the Support for Eastern European Democracy Act to assist with economic recovery after the breakup of the Soviet Union.

The activity could also be managed through an existing private organization or a nongovernmental organization created specifically for this purpose. For example, the Civilian Research and Development Foundation was created for similar work as part of the CTR initiative and could be expanded to fulfill this larger role. The American Association for the Advancement of Science, which has just created a Center for Science Diplomacy, also has the prestige and the commitment to take on this responsibility.

Other than the government, the only sources of funding would be private foundations. However, such funding would likely be limited in scope and duration. Federal support is clearly the preferred route, though it could be complemented by private sources. One could even conceive of something modeled on the Small Business Innovation Research program, whereby the technical agencies would be encouraged or required to spend a certain percent of their total scientific funding on projects with an international dimension.

To doubters this may sound like more international charity, but the reality is that as scientific capability and research excellence continue to develop abroad, the United States can reap great scientific and political benefits from these relationships. The potential for such a double return on these investments in cooperation is very large. It is a concept whose time has surely come and that deserves a serious attempt to make it work.

NORMAN NEUREITER

Director

Center for Science, Technology, and Security Policy

American Association for the Advancement of Science

Washington, DC


Gerald Hane has done a superb job of laying out the steps involved in strengthening the role of science and engineering in the international arena. His concerns are not new and have been documented over many decades. Eugene B. Skolnikoff of the Massachusetts Institute of Technology comes to mind as one of the more thoughtful and eloquent students of interactions of science and technology (S&T) with international affairs. Emilio Q. Daddario, chair of the House Science subcommittee in the 1960s and later the first director of the Office of Technology Assessment, championed closer interaction of S&T with foreign policy in Congress.

Science and engineering interact with foreign policy in two very distinct ways. The first (and easy one) relates to policies that bring international partners together to do science or to provide a policy framework for international cooperation in research. This is perhaps best seen in successful “big science” projects where remarkable international partnerships have been established in diverse fields such as high-energy physics and global change. Permanent or semipermanent international cooperation institutions have been established, some governmental and some nongovernmental.

The second is more difficult and complex: the role of science and engineering in the development and implementation of foreign policy. Forward thinking is difficult, especially in a government bureaucracy. But there have been thoughtful efforts to reorganize the U.S. government (especially the Department of State) to better inject S&T into the foreign policy process. The most successful of these (at least institutionally) probably was P.L. 95-426, the fiscal year 1979 authorization for the State Department. With the strong backing of Congress, it included new high-level positions, reporting requirements, and mandatory agency cooperation. It appeared to be the do-all and end-all for science and diplomacy. The only problem was that under both parties it was mostly ignored by the federal agencies and the White House. Even today the federal government lacks an agency with funding and a personnel system that will support a world-class analytic capability in science and foreign policy.

The current (fall 2008) global financial crisis underscores the crucial role science plays in international relations. The complex financial instruments at the root of the problem, fundamentally the creation of scientists and mathematicians, were not understood by the financial community. One unintended consequence is that the U.S. financial system has become more like the Chinese system, reversing a multidecade flow in the opposite direction.

J. THOMAS RATCHFORD

Distinguished Visiting Professor

George Mason University School of Law

Director, Science and Trade Policy Program

George Mason University

Fairfax, Virginia


Gerald Hane argues thoughtfully for greater U.S. leadership in making international science collaboration a foreign policy priority. Hane exhorts the next administration to act quickly and decisively. He calls for the creation of a Global Priorities S&T (Science and Technology) Fund “to support grants to encourage international S&T activities that support U.S. foreign policy priorities.” In these days of global financial turmoil, rising U.S. deficits, an array of competing demands for taxpayer dollars, and an already significant U.S. investment in R&D, is such a fund really critical? The answer is unequivocally yes.

S&T solutions are needed to address many of today’s global challenges—in energy, food security, public health, and environmental protection. The United States cannot tackle these challenges alone. Today’s most vexing problems are global in nature and require global expertise and experience to solve. Many nations, such as Saudi Arabia, China, the United Kingdom, India, and Australia, are investing in science infrastructure and are partnering globally to advance their own competitiveness and national security interests. To remain competitive, the United States must demonstrate leadership in engaging the world’s best scientists and engineers to find common solutions through collaborative research activities. This is good for U.S. science because it gives our scientists and engineers access to unique facilities and research sites and exposes them to new approaches. It is economically sound because it leverages U.S. resources and provides a means to benchmark U.S. capabilities. As a diplomatic tool, we know that scientists and engineers can work together in ways that transcend cultural and political differences. International collaboration helps to build relationships of trust and establish pathways of communication and collaboration even when formal government connections are strained. In short, S&T must be a central component of U.S. foreign policy.

Hane is correct to note that making progress in these areas requires new policy approaches and resources that spur government agencies and non-governmental organizations to take action. During congressional testimony this past summer, the U.S. Civilian Research & Development Foundation (CRDF), a nongovernmental organization created by Congress that supports international science collaboration with more than 30 countries, called on the U.S. government to launch a strategic global initiative to catalyze and amplify S&T cooperation for the benefit of the United States and its partners around the world. The Global Science Fund would be a public/private partnership, with the U.S. government taking the lead in challenging private donors and other governments to match the U.S. contribution. This new initiative would provide funding for grants and other activities that would engage scientists internationally to address energy alternatives, food security, vanishing ecosystems, or other global challenges. It would seek to reach young scientists and support a robust R&D infrastructure while building mutually beneficial economic partnerships.

The new U.S. administration will face many challenges. Advancing U.S. economic, security, and diplomatic interests by drawing on one of America’s greatest assets—its scientists and engineers—must be one of them.

CATHLEEN A. CAMPBELL

President and Chief Executive Officer

U.S. Civilian Research & Development Foundation

Arlington, Virginia


Biotech regulation

In “Third-Generation Biotechnology: A First Look” (Issues, Fall 2008), Mark Sagoff raises a number of useful and interesting points regarding ethical, legal, and social concerns about third-generation biotechnology. In my view, however, some of his criticisms of regulatory agencies, particularly the U.S. Department of Agriculture (USDA), are somewhat overstated. It is certainly arguable that we lack a truly coherent regulatory framework for genetically engineered organisms in the United States, with responsibility for regulation and registration divided somewhat arbitrarily among different agencies (the Environmental Protection Agency, the Food and Drug Administration, and the USDA). Even within the latter agency, the Biotechnology Regulatory Service that Sagoff references is part of APHIS, a regulatory branch that is distinct from the more research-focused (and research-funding) Agricultural Research Service. But rather than “systematically” avoiding sponsorship of inquiry into the ethical, legal, and social implications of biotechnology, I would suggest that the USDA and the other agencies have struggled gamely with a fairly minuscule amount of funding that must be apportioned among extensive regulatory and research needs. It is hardly surprising that the USDA’s priority for limited funds has been for ecologically based research rather than social or ethical inquiries, given the agency’s mission and expertise.

The comparison with the National Institutes of Health’s efforts to foster public acceptance of the Human Genome Project is valid, but should be tempered with an understanding of the disparity in funding for these programs. I would also suggest that obtaining public acceptance of the need to unravel the human genome, with its very demonstrable medical implications, is probably easier than fostering an informed democratic debate about the ecology of genetically engineered microorganisms in the environment. Sagoff provides several excellent examples illustrating one reason why this is so: The scientific research community has the ability to engineer and release some truly scary recombinant organisms into the environment, such as entomopathogenic fungi expressing scorpion toxin or animal viruses engineered for immunosuppressive capabilities.

Sagoff is correct that the old “process/product” distinction, which has been plaguing regulators for more than two decades, remains a conundrum. The question of how much we should focus on the process of genetic engineering versus the resulting products has been a recurrent theme in both regulatory and academic discussions about the potential release of genetically modified organisms. To a large extent, the dichotomy is a misleading one. The process of genetic engineering inherently produces non-native variants of organisms, and I would contend that there is merit in evaluating the environmental introduction of these organisms with scrutiny similar to that used for the introduction of wild-type non-native microbes. Although a particular recombinant “product” may be well characterized in terms of its genotype as well as a variety of phenotypical attributes, the new ecological niche (also a “product”) that it will fill is always a matter for prediction. Enhanced scrutiny of new introductions based on process (that is, based on their recombinant nature) is admittedly a blunt tool, analogous in some ways to the passenger profiling that might take place at an airline check-in counter. But we don’t afford microbes any equal protection rights, and we might just manage to ward off a few bad actors.

GUY KNUDSEN

Professor of Microbial Ecology and Plant Pathology

Soil and Land Resources Division

University of Idaho

Moscow, Idaho


Research on patents

I support the policy direction proposed by Robert Hunt and Brian Kahin in “Reexamining the Patent System” (Issues, Fall 2008), but does their analysis go far enough? Arguably, innovation is as important to the long-run health of the economy as are interest rates. To set interest rates, our country has the autonomous institution of the Federal Reserve, which excels at gathering and analyzing data in support of its financial decisions. An institution with a similar data-driven orientation for the patent system only seems logical.

The kind of data that Hunt and Kahin talk about gathering would, as they propose, help us evaluate the performance of the patent system in different technologies and industries. However, I think such data are important for another reason (perhaps Hunt and Kahin had this in mind): They can guide the refinement of patent institutions. Indeed, some of the most successful applications of economic analysis to policymaking, such as the programs for tradable pollution permits, began with extensive data analysis, but then applied this analysis to improving the structure and effectiveness of these programs. Similarly, extensive patent data and economic analysis can help improve the functioning of the Patent and Trademark Office (PTO) and of the courts by providing crucial feedback. How well do PTO programs to improve patent quality work? What fee structures can improve patent quality, reduce litigation, and also reduce the huge PTO backlog? Do certain court decisions increase the uncertainty of the patent grant, as some have charged, or not?

These questions can be answered and the answers can be used to improve patent performance. The patent system is, after all, an unusual beast; it is a set of legal institutions charged with carrying out an economic policy. But until now, the tools of economic analysis and economic policymaking have been missing.

JAMES BESSEN

Lecturer

Boston University School of Law

Boston, Massachusetts


Yes to an RPS

“A National Renewable Portfolio Standard? Not Practical” (Issues, Fall 2008), by Jay Apt, Lester B. Lave, and Sompop Pattanariyankool, correctly asserts that the United States needs a comprehensive strategy to address climate change and that energy efficiency is a critical component. But the rest of the article, which maintains that a national renewable portfolio standard (RPS) is impractical, is off the mark.

First, numerous studies contradict the authors’ claim that a national RPS would be too expensive for ratepayers. More than 20 comprehensive economic analyses completed during the past decade found that a strong national standard is achievable and affordable. For example, a 2007 Union of Concerned Scientists (UCS) study, using the Energy Information Administration’s (EIA’s) national energy modeling system, found that establishing a 15% national RPS by 2020 would lower electricity and natural gas bills in all 50 states by reducing demand for fossil fuels and increasing competition. Cumulative national savings would reach $28 billion to $32 billion by 2030. An EIA study arrived at similar conclusions despite its more pessimistic assumptions about renewable technologies. That study projected that a 25% RPS by 2025 would slightly lower natural gas bills, more than offsetting slightly higher (0.4%) electricity bills, saving consumers $2 billion cumulatively through 2030.

Second, the authors incorrectly allege that a national RPS would undermine U.S. electricity system reliability by increasing reliance on wind and solar power. EIA and UCS analyses project that base load technologies, such as biomass, geothermal, landfill gas, and incremental hydroelectric plants, would generate 33 to 66% of the renewable electricity under a national standard. Regional electricity systems could easily integrate the remaining power produced by wind and solar at a very modest cost and without storage. Studies by U.S. and European utilities have found that wind penetrations of as much as 25% would add no more than $5 per megawatt-hour in grid integration costs, or less than 10%, to the wholesale cost of wind.

Third, the need for new transmission lines and upgrades to deliver power to urban areas is not unique to renewable energy. Additional capacity would be necessary for many proposed coal and nuclear plants, which are often sited at considerable distances from load centers. A 2007 analysis by Black & Veatch, a leading power plant engineering firm, found that 142 new coal unit proposals at 116 plants were located on average 109 miles from the nearest large U.S. city, with some located 400 to 500 miles away.

For these reasons and others, we disagree with the conclusion that a national RPS would be ineffective in reducing global warming emissions and meeting other national goals. In fact, EIA’s study showed that a 25% national RPS could reduce global warming emissions from coal and natural gas plants by 20% below business as usual by 2025. Because scientists have called on the United States to reduce global warming emissions by at least 80% below current levels by 2050, we need to dramatically increase both efficiency and renewable energy use. Therefore, efficiency measures and RPSs are key complements to federal cap-and-trade legislation.

STEVE CLEMMER

Research Director, Clean Energy Program

Union of Concerned Scientists

Cambridge, Massachusetts


Practical Pieces of the Energy Puzzle: Getting More Miles per Gallon

The answer may require looking beyond CAFE standards and implementing other consumer-oriented policy options to wean drivers away from past habits.

In December 2007, concerns over energy security and human-induced climate change prompted Congress to increase Corporate Average Fuel Economy (CAFE) standards for the first time in 20 years. The new standards aim to reduce petroleum consumption and greenhouse gas (GHG) emissions in the United States by regulating the fuel economy of new cars and light trucks, including pickups, SUVs, and minivans. The standards will require these vehicles to achieve a combined average of 35 miles per gallon (mpg) by 2020, up 40% from the current new-vehicle average of 25 mpg.

Since Congress acted, the nation has witnessed a dramatic rise in the prices of petroleum and gasoline, which reached record levels during the summer of 2008, increasing pressure on policymakers to reduce transportation’s dependence on petroleum. Prices have since fallen markedly with the arrival of an economic crisis. But few observers expect prices to stay low when the economy recovers, and many see a future of steadily rising prices, driven by global economic expansion. Thus, reducing the nation’s thirst for gasoline remains an important goal. And although striving to meet the CAFE standards will be an important part of the mix, other policy initiatives will be necessary to make timely progress.

Although the nation’s collective gas-pump shock has lessened, the lessons from recent experiences are telling. In June 2008, the average price of crude oil doubled from a year earlier, and gasoline prices rose by one third. High fuel costs sharpened the public’s awareness of fuel use in light-duty vehicles, causing them to seek alternatives to gas-guzzling private vehicles. Sales of light trucks during the first half of 2008 were down by 18% relative to the previous year, and total light-duty vehicle sales dropped by 10%. The total distance traveled by motor vehicles fell by 2.1% in the first quarter of 2008 relative to the same period in 2007. At the same time, ridership on public transportation systems showed rapid growth in the first quarter of 2008, with light-rail ridership increasing by 7 to 16% over 2007 in Minneapolis-St. Paul, Miami, and Denver.

Increasing the federal fuel tax over a number of years would encourage consumers to adopt vehicles that get more miles to the gallon.

These shifts marked major departures from the trends of the past two decades, when fuel prices were low and relatively stable. During this period, fuel economy standards remained unchanged for cars and largely constant for light trucks. Proponents of more demanding CAFE requirements argue that the standards stagnated, allowing automakers to direct efficiency improvements toward offsetting increases in vehicle size, power, and performance rather than improving fuel economy. On the other hand, critics of CAFE standards contend that mandated fuel economy requirements impose costs disproportionately across manufacturers with no guarantee that consumers will be willing to pay for increased fuel economy over the longer term.

Now that renewed CAFE standards have been enacted and more stringent targets may be on the way, the discourse over CAFE must shift to the critical issues: the changes that will be necessary to achieve the mandated improvements in fuel economy, the costs of these changes relative to their benefits in fuel savings and reductions in GHG emissions, and the other policy options that can help achieve ambitious fuel economy targets.

We have assessed the magnitude and cost of vehicle design and sales-mix changes required to double the fuel economy of new vehicles by 2035—a longer-term target similar in stringency to the new CAFE legislation. Both targets require the fuel economy of new vehicles to increase at a compounded rate of about 3% per year. We argue that the necessary shifts in vehicle technology and market response will need a concerted policy effort to alter the current trends of increasing vehicle size, weight, and performance. In addition to tougher CAFE standards, coordinated policy measures that stimulate consumer demand for fuel economy will likely be needed to pull energy-efficient technologies toward reducing the fuel consumption of vehicles. This coordinated policy approach can ease the burden on domestic auto manufacturers and improve the effectiveness of regulations designed to increase the fuel economy of cars and light trucks in the United States.
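
As a rough check on that rate (a sketch of the arithmetic only, not taken from the authors; the 24- to 27-year horizon to 2035 is an assumption), the required compounded annual growth rate follows directly from the doubling target:

# Compounded annual growth rate needed to double new-vehicle fuel economy by 2035.
# Assumption (not stated in the article): roughly 24 to 27 model years between
# the baseline fleet and 2035.
for years in (24, 27):
    rate = 2 ** (1 / years) - 1
    print(f"Doubling over {years} years requires about {rate:.1%} per year")
# Prints roughly 2.9% and 2.6%, consistent with the "about 3% per year" figure.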

Although the term fuel economy (the number of miles traveled per gallon of fuel consumed) is widely used in the United States, it is the rate of fuel consumption (the number of gallons of fuel consumed per mile traveled) that is more useful in evaluating fuel use and GHG emissions. For example, consider improving the fuel economy of a large, gas-guzzling SUV from 10 to 15 mpg; this reduces the SUV’s fuel consumption from one gallon per 10 miles to two-thirds of a gallon per 10 miles, which saves a third of a gallon of gasoline every 10 miles. If, however, a decent gas-sipping small car that gets 30 mpg is replaced with a hybrid that achieves an impressive 45 mpg—the same proportional improvement in fuel economy as the SUV—this corresponds to a fuel savings of only about one-tenth of a gallon every 10 miles. Both improvements are important and worthwhile, but because of the inverse relationship between these two terms, a given increase in fuel economy does not translate into a fixed proportional decrease in fuel consumption. So even as most people probably will continue to talk about fuel economy, it is important to keep the distinction between fuel economy and fuel consumption in mind.
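The arithmetic behind this distinction is easy to verify. The short Python sketch below simply redoes the two examples above, converting fuel economy into fuel consumption; its only inputs are the mpg figures already cited.

```python
# Fuel economy (mpg) vs. fuel consumption (gallons per mile):
# the same proportional mpg gain saves very different amounts of fuel.

def gallons_per_10_miles(mpg):
    """Fuel consumed over a 10-mile trip for a vehicle with the given fuel economy."""
    return 10.0 / mpg

# SUV improved from 10 to 15 mpg (a 50% gain in fuel economy)
suv_saving = gallons_per_10_miles(10) - gallons_per_10_miles(15)

# 30-mpg small car replaced by a 45-mpg hybrid (also a 50% gain)
car_saving = gallons_per_10_miles(30) - gallons_per_10_miles(45)

print(f"SUV saves {suv_saving:.2f} gallons every 10 miles")   # about 0.33
print(f"Car saves {car_saving:.2f} gallons every 10 miles")   # about 0.11
```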

Leverage points

There are three primary ways in which vehicle fuel economy may be improved: ensuring that the efficiency gains from vehicle technology improvements are directed toward increasing fuel economy, rather than continuing the historical trend of emphasizing larger, heavier, and more powerful vehicles; increasing the market share of alternative powertrains that are more efficient than conventional gasoline engines; and reducing the weight and size of vehicles.

Efficiency. Even though sales-weighted average fuel economy has not improved since the mid-1980s, the efficiency (a measure of the energy output per unit of energy input) of automobiles has consistently increased, at the rate of about 1 to 2% per year. This trend of steadily increasing efficiency in conventional vehicles is expected to continue during the next few decades. Lightweight materials and new technologies such as gasoline direct injection, variable valve lift and timing, and cylinder deactivation are making inroads into today’s vehicles and individually achieve efficiency improvements of 3 to 10%. Between now and 2035, gasoline vehicles can realize a 35% efficiency gain through expected technology improvements and moderate reductions in weight.

Unfortunately, efficiency gains over the past 20 years have gone toward offsetting increases in vehicle weight and power rather than toward improving fuel economy. Compared with 1987, the average new vehicle today is 90% more powerful, 33% heavier, and 25% faster. With the help of lightweight materials and efficiency improvements, all of these gains in size and performance have come at a penalty of only 5% in fuel economy. Had performance and weight instead remained at 1987 levels, however, the fuel economy of new 2007 light-duty vehicles could have been more than 20% higher.

TABLE 1 Illustrative strategies that double the fuel economy of new vehicles in 2035

The first three strategies each maximize two of the three options and set the remaining option to the level necessary to double new-vehicle fuel economy. The market shares of alternative powertrains are fixed at a ratio of 5 to 5 to 7 for turbocharged gasoline, diesel, and hybrid gasoline vehicles, respectively. The fourth strategy puts heavy emphasis on hybrid powertrains, which improve vehicle performance slightly and reduce the level of weight reduction required.

Column key: for each strategy, the first figure is the percentage of efficiency gains from expected technology improvements directed to improving fuel economy (with the resulting average car 0-100 km/hr acceleration time in parentheses); the second is the percentage reduction in vehicle weight from current weight by 2035 (with the resulting average car curb weight); the final figures are new-vehicle market shares by powertrain, listed as conventional gasoline / turbocharged gasoline / diesel / hybrid gasoline.

Current fleet in 2006: n/a (9.5 sec); n/a (1,620 kg); 95% / 1% / 2% / 2%

1. Maximize conventional vehicle improvements and weight reduction: 100% (9.4 sec); 35% (1,050 kg); 66% / 10% / 10% / 14%

2. Maximize conventional vehicle improvements and alternative powertrains: 96% (9.2 sec); 19% (1,320 kg); 15% / 25% / 25% / 35%

3. Maximize alternative powertrains and weight reduction: 61% (7.6 sec); 35% (1,060 kg); 15% / 25% / 25% / 35%

4. Emphasize aggressive hybrid penetration: 75% (8.1 sec); 20% (1,300 kg); 15% / 15% / 15% / 55%

TABLE 2 Retail price increase of conventional vehicle technology improvements and alternative powertrains in 2035

Each entry lists the technology option, its description and assumptions, and the retail price increase (in 2007 U.S. dollars) for cars and for light trucks.

Future gasoline vehicle: includes expected engine and transmission improvements, a 20% reduction in vehicle weight, a more streamlined body, and reduced tire rolling friction. Cars: $2,000; light trucks: $2,400.

Additional price increase from shifting to alternative powertrains:

Future turbocharged gasoline vehicle: includes a turbocharged gasoline engine. Cars: $700; light trucks: $800.

Future diesel vehicle: includes a high-speed, turbocharged diesel engine compliant with future emissions standards. Cars: $1,700; light trucks: $2,100.

Future hybrid gasoline vehicle: includes an electric motor, battery, and control system that supplements a downsized gasoline engine. Cars: $2,500; light trucks: $3,200.

Powertrains. In addition to steady improvements in conventional vehicle technology, alternative technologies such as turbocharged gasoline and diesel engines and gasoline hybrid-electric systems could realize a 10 to 45% reduction in fuel consumption relative to gasoline vehicles by 2035. These are proven alternatives that are already present in the light-duty vehicle fleet and do not require significant changes in the nation’s fueling infrastructure. Turbocharged gasoline and diesel-powered vehicles are already popular in Europe, and several vehicle manufacturers have plans to introduce them in a wide range of vehicle classes in the U.S. market. More than 1 million hybrid electric vehicles such as the Toyota Prius and Ford Escape have been sold cumulatively in the United States during the past 10 years.

The role that alternative powertrains can play in improving fuel economy, however, depends on how successfully they can capture a sizeable share of new vehicle sales. Currently, diesel and hybrid powertrains account for approximately 5% of the U.S. market. In the past, new powertrain and other vehicle technologies have, at best, sustained average market share growth rates of around 10% per year, suggesting that aggressive penetration into the market might see alternative powertrains account for some 85% of all new vehicle sales by 2035.

Size and weight. Reducing a vehicle’s weight reduces the overall energy required to move it, thus enabling the downsizing of the powertrain and other components. These changes provide fuel efficiency gains that can be directed toward improving fuel economy. Reductions in vehicle weight can be achieved by a combination of substituting lightweight materials, such as aluminum, high-strength steel, or plastics and polymer composites, for iron and steel; redesigning and downsizing the powertrain and other components; and shifting sales away from the largest, heaviest vehicles to smaller, lighter models.

With aggressive use of aluminum, high-strength steel, and some plastics and polymer composites, a 20% reduction in vehicle weight is possible through material substitution and associated component downsizing by 2035. Additional redesign and component downsizing could account for another 10% reduction in vehicle weight. Further, reducing the size of the heaviest vehicles could achieve an additional 10% reduction in average vehicle weight. For instance, downsizing from a large SUV, such as a Ford Expedition, to a mid-sized SUV, such as a Ford Explorer, cuts weight by 15%. Combining these reductions multiplicatively indicates that a 35% reduction in the average weight of new vehicles is possible by 2035.
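A quick calculation shows how these three steps combine multiplicatively rather than additively; the sketch below uses only the percentages cited above.

```python
# Material substitution (~20%), redesign and component downsizing (~10%),
# and shifting sales away from the heaviest vehicles (~10%) combine
# multiplicatively, because each step acts on an already-lightened vehicle.
reductions = [0.20, 0.10, 0.10]

remaining = 1.0
for r in reductions:
    remaining *= 1.0 - r

print(f"Combined weight reduction: {1.0 - remaining:.1%}")  # roughly 35%
```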

Increasing costs

Combining these three options to double the fuel economy of new vehicles in 2035 reveals a series of trade-offs among attributes of the light-duty vehicle fleet (see Table 1). No one or two options can reach the target on their own; doubling fuel economy in 2035 requires a major contribution from all of the available options, regardless of the strategy employed. The options with the greatest leverage are directing efficiency improvements toward reducing fuel consumption and reducing vehicle weight. These changes can affect all new vehicles entering the fleet, yet during the past two decades these powerful levers for increasing fuel economy have been applied in the opposite direction.

Implementing these improvements will increase the cost of manufacturing vehicles (see Table 2). By 2035, new engine and transmission technologies, a 20% reduction in weight, body streamlining, and reductions in the rolling friction of tires could increase the cost to manufacture a car by $1,400 and a light truck by $1,600 (in current dollars relative to the same vehicles today). These figures do not include the costs of distributing vehicles to retailers or the profit margins of manufacturers and auto dealers. Adding roughly 40% to these costs gives a reasonable estimate of the expected retail price increase, although the price arrived at in a competitive auto market would be subject to various pricing strategies that may raise or lower the final price tag. With a strong emphasis on reducing fuel consumption over the next 25 years, the average price of a conventional gasoline vehicle could increase by around 10% relative to today’s mid-sized sedan, such as the Toyota Camry, or light truck, such as the Ford F-150.
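As a rough check, applying the 40% allowance to the manufacturing cost increases just cited approximately reproduces the retail figures in Table 2; the small gaps presumably reflect rounding in the underlying estimates.

```python
# Manufacturing-cost increase plus a ~40% allowance for distribution and
# manufacturer/dealer margins approximates the retail price increase.
MARKUP = 0.40

for vehicle, extra_mfg_cost in [("car", 1400), ("light truck", 1600)]:
    retail_increase = extra_mfg_cost * (1 + MARKUP)
    print(f"{vehicle}: +${extra_mfg_cost:,} to manufacture "
          f"-> roughly +${retail_increase:,.0f} at retail")
# car: ~$1,960 (Table 2 lists $2,000); light truck: ~$2,240 (Table 2 lists $2,400)
```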

Shifting from a conventional gasoline engine to an alternative powertrain would further increase the cost of manufacturing a vehicle. In 2035, the retail price of a vehicle could increase by $700 to $800 for a turbocharged gasoline engine and by $1,700 to $2,100 for a diesel engine. Future hybrid-electric powertrains could increase the manufacturing cost of a conventional gasoline vehicle by $2,500 to $3,200 in 2035. These costs correspond to a retail price increase of 5 to 15% above the price of today’s gasoline vehicle. Achieving a 35% reduction in vehicle weight by 2035 would add roughly $2 to the cost of manufacturing a vehicle for every kilogram of weight removed. This would increase the retail price of a conventional gasoline vehicle in 2035 by roughly 10% compared to today.

Not accounting for fuel savings, the total extra manufacturing cost to double fuel economy in the average vehicle by 2035 would be between $55 billion and $65 billion in constant 2007 dollars in the 2035 model year alone, or an additional 15% to 20% of the estimated baseline manufacturing cost in 2035 if fuel economy were to remain unchanged from today. Over 15 years of vehicle operation, this corresponds to a cost of $65 to $75 to reduce one ton of greenhouse gas emissions.

For the average consumer, this translates into a retail price increase of $3,400 for a car with doubled fuel economy in 2035, and an increase of $4,000 for a light truck. If the fuel savings provided by doubling fuel economy are taken into account, the undiscounted payback period (that is, the length of time required for the extra cost to pay for itself) is rroughly five years for both cars and light trucks at the Energy Information Administration’s long-term gasoline price forecast of $2.50 per gallon. At $4.50 per gallon—a price that didn’t seem out of the question in mid-2008—the undiscounted pay back period shortens to only three years.
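The payback arithmetic can be reproduced approximately, although doing so requires two illustrative assumptions that are not stated in the text: annual driving of about 15,000 miles and a baseline fuel economy of roughly 28 mpg that doubles to 56 mpg.

```python
# Undiscounted payback period for the extra retail price of a car with
# doubled fuel economy. The mileage and baseline mpg are illustrative
# assumptions, not figures from the article.
ANNUAL_MILES = 15_000
BASE_MPG, IMPROVED_MPG = 28, 56
EXTRA_RETAIL_PRICE = 3_400   # car with doubled fuel economy in 2035

def payback_years(price_per_gallon):
    gallons_saved = ANNUAL_MILES / BASE_MPG - ANNUAL_MILES / IMPROVED_MPG
    annual_savings = gallons_saved * price_per_gallon
    return EXTRA_RETAIL_PRICE / annual_savings

print(f"At $2.50/gal: {payback_years(2.50):.1f} years")  # about 5 years
print(f"At $4.50/gal: {payback_years(4.50):.1f} years")  # about 3 years
```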

Engaging the policy gear

Although it is technically possible to double the fuel economy of new vehicles by 2035, major changes would be required from the status quo. Tough trade-offs will need to be made among improvements in vehicle performance, cost, and fuel economy. Although CAFE is a powerful policy tool, it is also a blunt instrument for grappling with the magnitude and cost of these required changes for two reasons: It has to overcome the market forces of the past two decades that have shown a strong preference for larger, heavier, and more powerful vehicles; and in attempting to reverse this trend, CAFE places the burden of improving fuel economy solely on the auto industry.

As buyers have grown accustomed to current levels of vehicle size and performance, domestic manufacturers have profited from providing such vehicles. In contrast, increasing CAFE standards may require abrupt changes in vehicle attributes from automakers whose ability to comply is constrained by the high cost of rapid changes in technology. More consistent signals that buyers are willing to pay for improved fuel economy would justify the investments needed for compliance.

Such signals can be provided by policy measures that influence consumer behavior and purchase decisions. First, providing financial incentives for vehicles based on their fuel economy would strengthen the market forces pulling efficiency improvements toward improving fuel economy. Second, raising the cost of driving with a predictable long-term price signal would further reduce the popularity of gas-guzzlers, encouraging the adoption of fuel-sipping vehicles over time. These complementary measures can sharpen the bluntness of CAFE by providing clear incentives to consumers that directly influence market demand for fuel economy.

Feebates are one such reinforcing policy that would reward buyers for choosing improved fuel economy when they purchase a new vehicle. Under a feebate system, cars or trucks that achieve better than average fuel economy would receive a rebate against their retail price. Cars or trucks that achieve worse than average fuel economy would pay an extra fee. Effectively, sales of gas-guzzling vehicles subsidize the purchases of models with high fuel economy.

Feebates have several advantages. They can be designed in a revenue-neutral manner so that the amount paid in rebates is equal to the revenue collected from fines. They do not discriminate between vehicles that employ different technologies but focus on improving fuel economy in a technology-neutral manner. And they provide a consistent price incentive that encourages manufacturers to adopt technologies in ways that improve vehicle fuel economy. A drawback is that feebates require administrative oversight in defining how the fees and rebates will be calculated and in setting an increasingly stringent schedule in order to balance revenue against disbursements.

Feebates have been tried in France and Canada. France’s scheme is aimed at achieving the European Commission’s objective of reducing new vehicle carbon dioxide emissions. Canada introduced a national feebate system in the spring of 2007, but the government has since decided to phase out the system in 2009 because of complaints about how the fees and rebates were structured and a lack of consultation with industry.

Measures that influence the cost of driving are another reinforcing lever for improving fuel economy. As petroleum prices rise, which many observers expect over the longer term, politicians and consumers alike typically show increased interest in improving fuel economy. In a similar way, increasing the federal fuel tax over a number of years would encourage consumers to adopt vehicles that get more miles to the gallon, even if fuel prices themselves do not go back up dramatically.

Historical data indicate that over the short term, the immediate response to high gasoline prices is small. If higher prices are sustained for several years, however, the reduction in demand for gasoline is estimated by econometric studies to be four to seven times larger as consumers retire existing vehicles and replace them with newer fuel-sipping models. Although the actual response to changes in price is uncertain, recent studies suggest that a 10% increase in gasoline prices would reduce consumption by 2 to 4% over 10 to 15 years. This consumer-driven reduction would be achieved almost entirely through the purchase of vehicles with improved fuel economy.

Although higher fuel taxes would stimulate demand for fuel economy over the long term, substantial increases have proven politically infeasible to date. Gasoline taxes affect all consumers, and some observers argue that higher taxes will hit people with low incomes the hardest. Fuel tax increases are also met with cynicism because they generate significant revenue for the government. Any policy proposal advocating an increase in federal or state fuel taxes must clearly outline how the revenues generated will be used to benefit consumers or be rebated to them.

One compelling rationale for substantial increases in fuel taxes is the need for greater investment in the nation’s surface transportation infrastructure. In January 2008, the National Surface Transportation Policy and Revenue Study Commission, a blue ribbon panel that examined the future needs of national surface transportation, supported as much as a 40 cent increase in the federal fuel tax over five years. In justifying the increase, the commission noted that the Highway Account of the Highway Trust Fund will have a negative balance of $4 billion to $5 billion by the end of the 2009 fiscal year and is in desperate need of the revenue that would be generated from increased taxes on transportation fuel.

Alternatively, various revenue-neutral arrangements have been proposed that would see the funds collected from tax increases returned to consumers in the form of income or payroll tax rebates. A “pay at the pump” system would offer a separate revenue-neutral approach. This system would roll registration, licensing, and insurance charges into the price of gasoline paid at the pump. Annual or semiannual costs of vehicle ownership would become a variable cost per gallon of fuel consumed, encouraging the purchase of vehicles with higher fuel economy without requiring the average driver to pay more. California is considering similar “pay as you drive” legislation that would allow insurers to offer premiums based on the actual annual mileage driven by an individual. A study by the Brookings Institution found that this measure could result in an 8% reduction in light-duty vehicle travel and $10 billion to $20 billion in benefits, primarily among low-income drivers.

Boosting miles per gallon

To see the possible benefits of such policy actions, it is useful to consider the combined effect of two of these policies alongside the mandated 35 mpg CAFE target by 2020. The two policies are a feebate system that provides a $1,000 incentive against the retail price of a vehicle for every one-hundredth of a gallon shaved off the amount of fuel consumed per mile (ranging roughly from a maximum rebate of $1,200 to a maximum fee of $3,000 per vehicle), and an annual 10-cent-per-gallon increase in federal fuel taxes, sustained for 5 to 10 years.
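To make the feebate schedule concrete, the sketch below applies the $1,000-per-hundredth-of-a-gallon-per-mile rate to a few illustrative vehicles. The pivot point (the fuel consumption level at which neither a fee nor a rebate applies) is an assumption here, set near 25 mpg; with that choice, the end points come out roughly in line with the maximum rebate and fee quoted above.

```python
# Feebate sketch: $1,000 for every 0.01 gallon per mile that a vehicle's fuel
# consumption falls below an assumed pivot; vehicles above the pivot pay a fee.
# The 25-mpg pivot and the sample vehicles are illustrative assumptions.
RATE = 1_000 / 0.01          # dollars per (gallon per mile) of difference
PIVOT = 1 / 25               # assumed pivot fuel consumption, in gallons per mile

def feebate(mpg):
    """Positive values are rebates, negative values are fees."""
    return (PIVOT - 1 / mpg) * RATE

for mpg in (14, 20, 25, 30, 35):
    amount = feebate(mpg)
    kind = "rebate" if amount >= 0 else "fee"
    print(f"{mpg} mpg: ${abs(amount):,.0f} {kind}")
# A 14-mpg vehicle pays a fee near $3,100; a 35-mpg vehicle earns a rebate
# near $1,100, roughly bracketing the maximum fee and rebate quoted above.
```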

Based on our cost assessment, the feebate measure would be strong enough to neutralize the retail price increase of conventional gasoline engine enhancements that improve fuel economy, as well as most of the increased price of a more fuel-efficient turbocharged gasoline engine. It would offset roughly half of the price increase of a diesel engine and more than a third of the price increase of a hybrid-electric powertrain. By effectively subsidizing manufacturers to adopt technologies in ways that improve fuel economy, such feebates would ease the internal pricing strategies of automakers while sending consumers a clear price signal at the time of vehicle purchase.

The second measure, increased fuel taxes, would send a continuous signal to consumers each time they fill up at the pump. Under our suggested policy package, the federal government would increase its fuel tax by roughly 10 cents a gallon annually over five or more years. This would provide a moderate but consistent signal to consumers over the longer term. Such a policy alone could stimulate a 4 to 8% reduction in annual gasoline consumption over 10 to 15 years, given recent estimates of the sensitivity of gasoline demand to changes in price. Alongside CAFE, sustained fuel tax increases could align the public’s demand for more miles per gallon with fuel economy regulations that the public might not otherwise support.
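The 4 to 8% estimate follows from combining the tax schedule with the long-run price response cited earlier (a 2 to 4% drop in gasoline use for each 10% price increase), taking the $2.50-per-gallon forecast used above as the baseline pump price; the sketch below spells out that arithmetic.

```python
# Long-run effect of the tax schedule, using the price response cited earlier:
# a 2 to 4% drop in gasoline use for each sustained 10% price increase.
BASELINE_PRICE = 2.50            # EIA long-term forecast quoted in the text
TAX_INCREASE = 0.10 * 5          # 10 cents per gallon per year for five years

price_rise = TAX_INCREASE / BASELINE_PRICE                 # a 20% increase
for response in (0.02, 0.04):                              # per 10% price rise
    cut = price_rise / 0.10 * response
    print(f"Estimated long-run reduction in gasoline use: {cut:.0%}")
# prints roughly 4% and 8%
```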

The combined effect of these two policies is consistent and reinforcing: Consumers respond to feebates and fuel prices in a way that aligns their desire for fuel economy with the requirements placed on manufacturers. These demand-side measures would encourage consumers to choose vehicles that consume fewer gallons per mile, an approach that harnesses market forces to pull efficiency gains in vehicles toward improved fuel economy alongside the regulatory push provided by CAFE. A sustained demand for better fuel economy from consumers would also assuage automakers’ fears that they will be stuck with CAFE’s price tag.

Just as there is no silver bullet among the various technology options now available or just over the horizon, the controversy over CAFE that has persisted for two decades suggests that no single policy strategy is likely to satisfy the necessary political and economic constraints while sustaining long-term reductions in petroleum consumption and GHG emissions. Broadening the policy debate to include measures such as feebates and fuel taxes that stimulate consumer demand for fuel economy through price signals will enhance the prospects of achieving CAFE’s goal of 35 mpg by 2020 and further targets beyond. A coordinated set of fiscal and regulatory measures offers a promising way to align the interests of government, consumers, and industry. Achieving Congress’s aggressive target will not be easy, but overcoming these barriers is essential if the nation is to deliver on the worthy goal of reducing fuel use and greenhouse gas emissions from cars and light trucks.

Recommended reading

J. Bordoff and P. J. Noel, “The Impact of Pay As You Drive Auto Insurance in California,” The Hamilton Project, The Brookings Institution, 2008, http://www.brookings.edu/papers/2008/07_payd_california_bordoffnoel.aspx.

Congressional Budget Office, “Effects of Gasoline Prices on Driving Behavior and Vehicle Markets,” Congress of the United States, January 2008.

U.S. Environmental Protection Agency, “Light-Duty Automotive Technology and Fuel Economy Trends: 1975 through 2007,” Office of Transportation and Air Quality, 2007, http://www.epa.gov/otaq/fetrends.htm.

G. E. Metcalf, “A Green Employment Tax Swap: Using a Carbon Tax to Finance Payroll Tax Relief,” The Brookings Institution and World Resources Institute Policy Brief, June 2007, http://pdf.wri.org/Brookings-WRI_GreenTaxSwap.pdf.

U.S. Government Accountability Office, “Reforming Fuel Economy Standards Could Help Reduce Oil Consumption by Cars and Light Trucks, and Other Options Could Complement These Standards,” GAO-07-921, 2007, http://www.gao.gov/new.items/d07921.pdf.

K. A. Small and K. Van Dender, “If Cars Were More Efficient, Would We Use Less Fuel?,” Access, no. 31, University of California Transportation Center, 2007.

L. Cheah, C. Evans, A. Bandivadekar, and J. Heywood, “Factor of Two: Halving the Fuel Consumption of New U.S. Automobiles by 2035,” Laboratory for Energy and the Environment report, 2007, http://web.mit.edu/sloan-auto-lab/research/beforeh2/files/cheah_factorTwo.pdf.

National Surface Transportation Policy and Revenue Study Commission, “Transportation for Tomorrow: Report of the National Surface Transportation Policy and Revenue Study Commission,” 2008, http://www.transportationfortomorrow.org/final_report/.


Christopher Evans is a recent master’s graduate, Lynette Cheah is a Ph.D. student, Anup Bandivadekar is a recent Ph.D. graduate, and John Heywood is Sun Jae Professor of Mechanical Engineering at the Massachusetts Institute of Technology.

Growth Without Planetary Blight

The Bridge at the Edge of the World by James Gustave Speth. New Haven, CT: Yale University Press, 2008, 295 pp.

In November 2008, Al Gore authored an op-ed essay in the New York Times titled “The Climate for Change,” in which he offered a five-part plan for how we can save the economy and the environment at the same time. For half a century, political economists and environmentalists of all stripes have been seeking a world order in which economics is in balance with ecology. With the seemingly inexorable degradation of global ecosystems, the recent near collapse of national economies, and no appreciable abatement in greenhouse gases, the problems Gore addressed have reached a precarious state.

Many sectarian solutions have been proposed to resolve the deadlock between economy and ecology. Free marketers contend that it is a matter of getting the prices right and internalizing externalities; we are all free riders who are not paying for the despoliation of nature caused by our pollution. Regulation proponents argue that we need better laws and stronger enforcement. Neo-anarchists and bioregionalists believe that we cannot achieve sustainable societies until we plan our communities according to proper scale and principles of self-sufficiency. Eco-radicals maintain that population and economic growth must be limited by the ecological constraints of the planet and that the protection of ecosystems that sustain our lives must receive the highest priority. Egalitarian social ecologists remind us that the responsibility for protecting the biosphere falls heavily on the nations that have contributed the most to the planet’s environmental threats and on the economies that have already benefited the most from their exploitation of natural resources.

In The Bridge at the Edge of the World, James Gustave (Gus) Speth has given us a fresh look at the question of reconciling economy with ecology. Speth is not shy about probing deep into the structural conditions that lie at the heart of the problem. His contribution is unique because Speth has for years worked as an environmental reformer who has taken for granted that good laws, sound stewardship, honest environmental accounting, and strong federal leadership will make a difference. Currently the dean at the Yale School of Forestry and Environmental Studies, Speth was cofounder of the Natural Resources Defense Council, chairman of the U.S. Council on Environmental Quality, founder and president of the World Resources Institute, and administrator of the UN Development Program.

When Speth addressed the “root causes” of environmental problems in his earlier book, Red Sky at Morning, the focus was on honest prices, ecological technologies, sustainable consumption, and environmental education. He hasn’t abandoned these ideas, but he has reached the conclusion that they do not go deep enough into the problem.

In seeking to understand the structural conditions associated with industrial societies’ uncontrollable appetite for natural resources and unsustainable growth, Speth has dared to raise the “C” word, long viewed as the province of radical economists, Marxist sociologists, and communitarian utopians. For Speth, capitalism is both the source of our success as a post-industrial economy and the obstacle to realizing environmental sustainability.

The book is divided into three parts. Part 1 addresses the global environmental threat and its economic drivers, including the growth imperative. Part 2 provides an analysis of failed efforts to remedy the problem within the framework of neoclassical economics. Part 3 explores the opportunities for transforming the current system of market capitalism into a “post-growth” economy.

Speth’s analysis is not caught up in a choice among historical “isms”; rather, he says, “I myself have no interest in socialism or economic planning or other paradigms of the past.” What he does propose is a reoriented market system that embodies the values of a post-growth society, because “the planet cannot sustain capitalism as we know it.” His book explores the values and constraints that a transformed political economy must embody.

Every market system functions at two levels. First, there is the legal, ethical, and regulatory matrix on which all business and human activities are expected to operate. These include the system of government incentives, research investments, taxes, and environmental laws. Second, there are the free market functions (business transactions, consumer choices) that are overlaid on the matrix. To use a computer metaphor, the political economy has an operating system (the underlying matrix) and a system of programs (markets) whose deep structure conforms to the operating system. In Speth’s view, the underlying matrix on which the market system operates needs reconfiguration if the economy and ecology are to become harmonized. In his words, “Today’s dominant world view is simply too biased toward anthropocentrism, materialism, egocentrism, contempocentrism, reductionism, rationalism, and nationalism to sustain the changes needed.”

More than a quarter century ago, the environmental sociologist Allan Schnaiberg introduced the concept of the “treadmill of production.” According to Schnaiberg, industrial capitalism is driven by higher rates of material throughput, which eventually creates so much waste (additions) and extracts so much of the earth’s natural resources (withdrawals) that it overwhelms the biosphere. Both Schnaiberg and Speth reach the same conclusion. There is only so much that can be done to slow down the biophysical throughput by recycling, green consumerism, and green technologies.

Growth without blight

The resolution between the capitalism of unfettered growth and a planet with limited natural resources and assimilative capacity can be found in the composition of the gross domestic product. And Speth zeroes in on this economic construct. He doesn’t argue that we have to eliminate growth; rather, we have to change its character. We need a new evolved form of capitalism that creates incentives for non-material growth, an economy that reverses the tendency to produce too many useless, redundant, and ecologically damaging consumer goods that effectively turn too many people into polluters.

If we were to draw two columns, the first listing what our current economy produces in abundance and a second indicating the scarcities we face, we might have a clue to what post-market capitalism would look like. Observes Speth, “Basically, the economic system does not work when it comes to protecting environmental resources, and the political system does not work when it comes to protecting the economic system.” His optimism is also evident. “As it has in the past, capitalism will evolve, and it may evolve into a new species altogether.” Speth reaches his conclusions judiciously, after navigating through a thorny intellectual landscape that includes all the major inspirational voices, the environmental sages of our age. His careful examination of a variety of solutions reveals how each is either insufficient or leads to a dead end.

Returning to his analysis in his 1980 book, The Environment: From Surplus to Scarcity, Schnaiberg concludes, “If the treadmill is to be slowed and reversed, the central social agency that will have to bring this about is the state, acting to rechannel production surplus in non-treadmill directions. But the state can only do so when there is both a sufficient crisis of faith in the treadmill, and sufficient political support for production apart from the treadmill.” Speth reaches a similar conclusion. In his formulation, “The transformation of contemporary capitalism requires far reaching and effective government action. How else can the market be made to work for the environment rather than against it? How else can corporate behavior be altered or programs built that meet real human and social needs?”

When the collapse of major financial institutions and banks occurred in the fall of 2008, the American people were advised by their president to consume in order to save the economy from cascading into free fall. But there are many ways for people and their government to consume that do not involve speeding up material throughput. We can consume education, provide support for nonprofit organizations that make our neighborhoods and regions better places to live, consume services for the elderly and for our own self-awareness through the arts, and invest in the research and infrastructure that enhance the quality of our lives. Speth applauds growth scenarios that lead to improving “non-material dimensions of fulfillment.” He also shows us that beyond a certain level of material wealth, increasing material consumption does not correlate with human well-being.

This is a book of hope and inspiration. It tells us that we are not locked by default into a particular form of market capitalism that is in its deepest structure unfit for a sustainable world. There are signs that our society is already pregnant with change. What Speth has done, like a good Zen master, is to open our minds to the possibilities of aspiring to human self-realization, societal transformation, and a livable planet without setting limits on economic growth.


Danger: Bell curve ahead

Real Education: Four Simple Truths for Bringing America’s Schools Back to Reality by Charles Murray. New York: Crown Forum, 2008.

Michael J. Feuer

When I was in my junior high-school play, one of the parents in the audience was overheard saying that there were only two things wrong with our performance: The curtain went up, and the seats faced the stage. Similarly, there are only two things wrong with Charles Murray’s latest book: The logic is flawed, and the evidence is thin. Were it not for his claim that his earlier work (Losing Ground, 1984) changed the way the nation thought about welfare, there would be little reason to dignify the current polemic with a review in a magazine of the National Academy of Sciences. But on the off chance that Murray’s ideas might influence the way the nation thinks about education, it is worth a response even at the risk of affording him undeserved attention.

Here is his basic argument, presented in the form of “four simple truths” and an equally simplistic proposal: Ability varies, half the children are below average, too many people go to college, our future depends on how we educate the academically gifted, and privatization will fix the schools. Space constraints prevent me from undoing the errors of omission and commission in each of these claims, so I’ll concentrate on one or two and ask readers to extrapolate from there.

A good place to start is with ability, a complex concept that Murray chooses to simplify by focusing on IQ, which for him captures most of what matters to academic achievement and, for that matter, success in life. IQ is certainly a component of academic ability and a predictor of future performance; on those facts the science is well established. But Murray seems unable or unwilling to acknowledge the preponderance of evidence showing that IQ is only one measure of ability, that it covers only a small subset of what we now understand intelligence to be, and that it is neither the sole nor the most important correlate of adult success. The observation (simple truth no. 2) that it varies in the population is utterly banal, but Murray unabashedly uses it as a building block for his core argument: Let’s stop wasting our time with children at the low end of the ability continuum, concentrate our resources on those whose IQ scores suggest they can handle rigorous intellectual material, encourage the remaining 80 to 90% to become electricians and plumbers, and stop clogging our colleges and universities with people who don’t have (and will almost certainly never develop) what it takes to benefit from a liberal education.

Murray correctly anticipates that this radical proposal might invite criticism, so he launches an early preemptive strike: “As soon as I move beyond that simplest two-word expression, ability varies, controversy begins.” Well, not quite. Everyone knows that ability varies, and thanks to Garrison Keillor (and introductory statistics courses), almost everyone knows that half the children have to be below average. The controversy begins when Murray moves from that truism to this mischievous accusation: “Educators who proceed on the assumption that they can find some ability in which every child is above average [sic] are kidding themselves.” This statement warrants some unpacking.

First, where is the evidence that this is what educators assume? When teachers work with children to improve their reading and mathematical skills, that doesn’t signify an attempt to make every child “above average,” any more than when physicians strive to improve their patients’ health they are motivated by a naïve desire to make them all “above average” (whatever that might mean). Murray ridicules a completely defensible goal—improving the academic skills of children even at the lower end of the ability distribution—by intentionally confounding improvement with the end of variability. If we accept Murray’s strangely nihilistic logic that raising the average reading or math performance of low-achieving students is futile because some of them will still be below average, then yes, we should stop wasting our time and money. But the premise is flatly wrong—the goal of education is not to undo the basic laws of statistics and make every student above average—and therefore so is the conclusion.

Second, the smug allegation that educators are “kidding themselves” suggests that ability (as measured by IQ) is mainly a fixed trait, that kids either have it or they don’t, and that we know how to measure it accurately enough to decide ex ante which kids are worth investing in. Here too Murray is on thin ice. Although he concedes that “environment plays a major role in the way that all of the abilities develop, [and] genes are not even close to being everything,” he seems either unaware of or unimpressed by a substantial and growing body of research on the plasticity of brain structure and cognitive functioning over the lifespan. Eight years ago, in its landmark report From Neurons to Neighborhoods, the National Research Council established that “gene-environment interactions of the earliest years set an important initial course for all of the adaptive variations that follow, [but] this early trajectory is by no means chiseled in stone” (italics added). With the advent of magnetic resonance imaging technologies and advanced computational methods, cognitive neuroscience now affords greater appreciation of the interactions between nature and nurture and of the ways in which exposure to education, training, and other stimuli can be associated with changes in the parts of the brain responsible for various cognitive and behavioral tasks.

And even if all that mattered was IQ (a claim now discredited by the scientific community), research evidence should give Murray something to be more hopeful about. As James Flynn has reported, based on his extensive analyses of IQ test data in 20 nations, “there is not a single exception to the finding of massive IQ gains over time.” Clearly something must be contributing to this trend, and though we don’t have enough evidence to support specific causal claims, rejecting the possibility that education makes a difference is a bit premature. As Flynn notes, “every one of the 20 nations evidencing IQ gains shows larger numbers of people spending longer periods of their life being schooled and examined on academic subject matter.” Flynn is a careful scientist and doesn’t allow that finding to obscure counterfactual evidence suggesting that some educational reforms might actually impede IQ gains. Nonetheless, he cautions against the overly deterministic view: “The fact that education cannot explain IQ gains as an international phenomenon does not, of course, disqualify it as a dominant cause at a certain place and time.” This nuance is glaringly absent in Murray’s simplified model of the brain, mind, and cognition.

What about Murray’s willingness to rely on IQ tests to figure out which kids have, for lack of a better metaphor, “the right stuff”? Here Murray challenges the overwhelming consensus in the measurement community concerning the limited validity and reliability of conventional intelligence measures. The bottom line in a vast and easily accessible literature is that almost no one in the testing profession, regardless of political predisposition, is as convinced as Murray of the utility of IQ scores for the kind of lockstep sorting and selection that he envisions.

Moving beyond IQ, is there evidence that investments in human capital can yield significant and sustainable gains in other valued outcomes? Murray sees a glass much less than half full, and based on his cursory summary of evaluations of programs such as Head Start, he sinks to yet another dismal bottom line: “Maybe we can move children from far below average intellectually to somewhat less below average. Nobody claims that any project anywhere has proved anything more than that.” Really? Apparently Murray does not know about, or chooses not to cite, the work by James Heckman and others, which supports a more hopeful conclusion. In a recent interview, Heckman (a Nobel laureate in economics) noted that “the [Perry Preschool] program had substantial effects on the earnings, employment, [involvement in] crime, and other social aspects of participants, compared to non-participants. But what we also find is that the main mechanism through which the program operates is non-cognitive skills” (italics added). This is an important point, as it argues for a broader definition of ability than what is encompassed by IQ, and it emphasizes again the potential value of school-based programs in the development of a wide range of skills that correlate with academic achievement and longer-term success. Heckman is hardly an “educational romantic” (Murray’s label for people who disagree with him about the futility of education) and cautions that “an under-funded, low-quality early childhood program can actually cause harm. But a high-quality program can do a great deal of good—especially one that is trying to cultivate the full person, trying to develop in every aspect the structure of cognition and non-cognitive skills.”

Evidence from other programs, such as Success for All, New York’s District 2, and the famous Tennessee class-size reduction experiment, along with data from the National Assessment of Educational Progress that indicate upward trends (especially in mathematics), clearly undermine Murray’s bleak forecast. One wonders what motivates someone supposedly trained as a social scientist to so willfully ignore large quantities of evidence and to declare categorically that the dream of uplifting children from impoverished intellectual and economic environments is just a lot of romantic nonsense. I leave that question to psychologists better equipped to address it. Meanwhile, I expect that Murray’s book will spur debate and cause people to focus on real research, for which I suppose we should be grateful. It’s the minimum we should demand for enduring Murray’s mean-spirited rhetoric and faulty science.


Michael J. Feuer is executive director of the National Research Council’s Division of Behavioral and Social Sciences and Education.


The High Road for U.S. Manufacturing

Manufacturing employment could be stabilized with more widespread use of advanced production methods. Government policy can play a key role.

The United States has been losing manufacturing jobs at a stunning rate: 16% of these jobs disappeared in just the three years between 2000 and 2003, and they have declined by almost 4% more since 2003. In all, the nation has lost 4 million manufacturing jobs in just over eight years. This is some of the best-paying work in the country: The average manufacturing worker earns a weekly wage of $725, about 20% higher than the national average. Although manufacturing still pays more than average, wages have fallen relative to the rest of the economy, especially for workers without college degrees. Manufacturing also employs significant numbers of white-collar workers: One in five manufacturing employees is an engineer or manager.

Continued hemorrhaging is not inevitable. The United States could build a high-productivity, high-wage manufacturing sector that also contributes to meeting national goals such as combating climate change and rebuilding sagging infrastructure. The country can do this by adopting a “high-road” production system that harnesses everyone’s knowledge—from production workers to top executives—to produce high-quality innovative products.

Promoting high-road strategies will strengthen manufacturing and the U.S. economy as a whole. Through coordination with highly skilled workers and suppliers, firms achieve high rates of innovation, high quality, and fast responses to unexpected situations. The resulting high productivity allows firms to pay fair wages to workers and fair prices to suppliers while still making fair profits.

How can this be done? Start with more investment in education, training, and R&D. But education alone will not allow firms to overcome the market failures that block the adoption of efficient high-road practices. Nor will it reinvigorate income growth, which at the median has risen only 0.5% annually since 1973, even for college-educated men. Similarly, increased R&D spending by itself won’t get innovative products to market.

More is needed. Competing with low-wage nations is not as daunting as one might think. Research by the Michigan Manufacturing Technology Center suggests that most manufacturers have costs within 20% of their Chinese competitors. Reducing costs by this magnitude is well within the range achievable by high-road programs, and a key institution that can help bridge this gap is already in operation. The federal Manufacturing Extension Partnership (MEP) program teaches companies to develop new products, find new markets, and operate more efficiently—and it pays for itself in increased tax revenue from the firms it helps. This program will not save all the manufacturing at risk, but it will increase the viability of much of it, while increasing the productivity and wages of those who perform this important work.

The low-wage fallacy

Two main forces have caused U.S. manufacturing employment to fall: the growth of productivity during a period of stagnant demand and the offshoring of work to other nations, especially China. Economists differ as to the relative contribution of the two forces, but as Nobel Laureate Paul Krugman argued in the Brookings Papers on Economic Activity, there is growing consensus that both are important.

Two groups of policy analysts argue that nothing should be done about the stunning fall in manufacturing employment, but for opposite reasons. One group, exemplified by a 2007 study by Daniel Ikenson of the Cato Institute, argues that the employment decline is a sign of soaring productivity, and that manufacturing is actually “thriving.” Another view, exemplified by New York Times columnist Thomas Friedman, says it is simply impossible to compete with countries whose wages are so much lower than ours. It is inevitable, he argues, that manufacturing will go the way of agriculture, employing a tiny fraction of the workforce.

Neither of these views is correct. Although U.S. manufacturing is not thriving, with appropriate policies it could be. First, there are problems with the Cato study’s statistical analysis. Second, a significant number of firms are holding their own, and more could do so with appropriate policies.

The Cato study says that U.S. manufacturing output reached an all-time high in 2006, but it fails to subtract the value of imported inputs. When one looks at manufacturing value added, even Cato’s data show that output has fallen since 2000. And even these data, drawn from U.S. government sources, paint far too rosy a picture, because U.S. statistical agencies do not track what happens to goods outside U.S. borders. The result of this limitation (and of some complex statistical interactions) is that official statistics could be substantially overestimating growth in manufacturing output.

U.S. firms can and do compete with China and other low-wage countries, in part because direct labor costs are only 5 to 15% of total costs in most manufacturing. Many U.S. firms have costs not so different from those of Chinese firms. Therefore, it is not naïve to think that manufacturing can and should play an important role in the U.S. economy during the next several decades.

A 2006 study by the Performance Benchmarking Service (PBS) suggests that most small U.S. manufacturers are competitive with Chinese firms or could become so. Similarly, a 2004 McKinsey study found that in many segments of the automotive parts industry, the “China price” is only 20 to 30% lower than the U.S. price for a similar component. Note that neither this study nor the PBS study takes into account most of the hidden costs discussed below. Thus, low-wage countries are not necessarily low-cost countries. U.S. companies can continue to pay higher wages for direct labor and offset the added cost with greater capabilities—capabilities that lead to outcomes such as higher productivity, fewer quality problems, and fewer logistical problems.
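A simple cost breakdown illustrates why low wages alone do not decide the outcome. The figures in the sketch below are assumptions chosen for illustration: direct labor at 10% of total cost (the middle of the 5 to 15% range), offshore labor 70% cheaper, and a 20% adder for hidden offshoring costs in the spirit of those discussed below.

```python
# Illustrative assumptions (not from the article): direct labor is 10% of total
# cost, offshore labor costs 70% less, and hidden offshoring costs add 20%.
US_COST = 100.0
LABOR_SHARE = 0.10
OFFSHORE_LABOR_DISCOUNT = 0.70
HIDDEN_COST_ADDER = 0.20

labor_saving = US_COST * LABOR_SHARE * OFFSHORE_LABOR_DISCOUNT     # 7% of cost
offshore_cost = (US_COST - labor_saving) * (1 + HIDDEN_COST_ADDER)

print(f"Direct-labor saving from offshoring: {labor_saving:.0f}% of U.S. cost")
print(f"Offshore cost once hidden costs are counted: {offshore_cost:.0f} "
      f"(vs. {US_COST:.0f} for U.S. production)")
```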

Unfortunately, firms are handicapped in deciding where they should locate production because they often do not take into account the hidden costs of offshoring. A number of studies have found that most firms, even large multinationals, use standard accounting spreadsheets to make sourcing decisions. These techniques focus on accounting for direct labor costs, even though these are a small percentage of total cost, and ignore many other important costs.

Consider some of the hidden costs of having suppliers far away. First, top management is distracted. Setting up a supply chain in China and learning to communicate with suppliers requires many long trips and much time, time that could have been spent on introducing new products or processes at home. Second, there is increased risk from a long supply chain, especially with just-in-time inventory policies. Third, there are increased coordination and “handoff costs” between U.S. and foreign operations. More difficult communication among product design, engineering, and production hinders serendipitous discovery of new products and processes. Quality problems may be harder to solve because of geographic and cultural distance. Time to market may increase.

These costs can be substantial: One study by Fanuc, a robotics manufacturer, found that such hidden costs added 24% to the estimated costs of offshoring. The challenges of dealing with a far-flung supply base make it difficult for firms to innovate in ways that require linked design and production processes. For example, one Ohio firm had based its competitive advantage on its ability to quickly add features to its products (cup holders in riding mowers, to take a nonautomotive example). But when it shifted sourcing to China, these last-minute changes wreaked havoc with suppliers, and the firm was forced to freeze its designs much earlier in the product development process.

Why would firms systematically ignore these costs? One reason is to convince outside investors that the company is serious about reducing costs by taking actions that are publicly observable, such as shutting factories in the United States and moving to countries with demonstrably lower wages. However, as the U.S.-China price differential shrinks because of exchange rate revaluations, higher Chinese wages, and increased transportation costs, firms (such as Caterpillar) are turning more to suppliers closer to home.

Many U.S. firms can close the remaining cost gap with low-wage competitors. Some firms are already doing so, and there is evidence that a few widely applicable and teachable policies account for much of their success.

For example, in the metal stamping industry, a firm at the 90th percentile has a value added per worker of $125,000—a large enough pie to pay workers well, invest in modern equipment and training, and earn a fair profit. In contrast, the median firm has a value-added per worker of about $74,000 per year. This is barely enough to pay the typical compensation for a worker in this industry (about $40,000) and still have money left for equipment and profit. This differential in performance is typical: The PBS consistently finds that the top 10% of firms have one and a half times the productivity of the median firms, even within narrowly defined industries. Moreover, the same practices (designing new products, having low defect rates, and limiting employee turnover) explain much of the differential in productivity across a variety of industries.
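The stakes of that productivity gap are easy to lay out with the figures just cited; the sketch below shows how much is left per worker for equipment, training, and profit after typical compensation is paid.

```python
# Value added per worker, minus typical compensation, using the metal
# stamping industry figures cited above.
COMPENSATION = 40_000

for firm, value_added in [("90th-percentile firm", 125_000), ("median firm", 74_000)]:
    leftover = value_added - COMPENSATION
    print(f"{firm}: ${leftover:,} per worker for equipment, training, and profit")
```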

Building high-productivity firms

U.S. firms cannot compete by imitating China and cutting wages and benefits. Instead, they should build on their strengths by drawing on the knowledge and skills of all workers. Many of this country’s most productive firms have prospered by adopting a high-road production recipe in which firms, their employees, and their suppliers work together to generate high productivity. Successful adoption of these policies requires that everyone in the value chain be willing and able to share knowledge. Involving workers and suppliers and using information technology (IT) are key ways of doing this.

Workers, particularly low-level workers, have much to contribute because they are close to the process: They interact with a machine all day, or they observe directly what frustrates consumers. For example, a study of steel-finishing lines by Casey Ichniowski, Kathryn Shaw, and Giovanna Prennushi found that lines with high-road practices had 6.7% more uptime (generating $2 million annually in net profits for a small plant) than did lines without them. The increase in uptime is due to communication and knowledge overlap. In a firm that does not use high-road practices, all communication may go through one person. In contrast, in high-road facilities, such as the one run by members of the United Steelworkers at Mittal Steel in Cleveland, workers solve problems more quickly because they communicate with each other directly in a structured way.

Involving suppliers is also important. Take, for example, the small supplier to Honda that had problems with some plastic parts. On an irregular basis, parts would emerge from the molding machines with white spots along the edge or with molds not completely filled in. These problems, which had long plagued the company, were not solved until Honda organized problem-solving groups that pooled the diverse capacities and experiences of people in the supplier’s plant. They quickly solved the problem: Molding machine operators noticed condensation dripping into the resin container from an exhaust fan in the ceiling, quality control technicians then saw that the condensation was creating cold particles in the resin, and skilled tradespeople designed a solution.

The continuing use of IT will be critical in improving manufacturing practice, but it will not necessarily boost productivity unless it is accompanied by a decentralization of production, a key element of high-road production. For example, a study by Ann Bartel, Casey Ichniowski, and Kathryn Shaw of valve producers found that more-efficient firms adopted advanced IT-enhanced equipment while also changing their product strategy (to produce more customized valves), their operations strategy (using their new IT capability to reduce setup times, run times, and inspection times), and human resource policies (employing workers with more problem-solving skills and using more teamwork). The success of the changes in one area depended on success in other areas. For example, customizing products would not have been profitable without the reduced time required to change over to making a new product, a reduction made possible both by the improved information from the IT and the improved use of the information by the empowered workers. Conversely, the investments in IT and training were less likely to pay off in firms that did not adopt the more complex product.

A key reason why the high road’s linked information flow is so powerful is that real production rarely takes place exactly according to plan. A manufacturing worker may be stereotyped as someone who pushes the same button every 20 seconds, day after day, year after year, but even in mature industries, this situation rarely occurs. For example, temperatures change, sending machines out of adjustment; customers change their orders; a supplier delivers defective parts; a new product is introduced. All of these contingencies mean that the perfect separation of brain work and hand work envisioned by efficiency guru Frederick Taylor does not occur.

In mass production, managers have often tried to minimize these contingencies as well as worker discretion to deal with them. In contrast, the Toyota production system, which accepts that the very local information that workers have is crucial to running and improving the process, sets up methods for the sustained and organized exploration of that information. Although these methods require substantial overlap of knowledge and expertise that may seem redundant, they produce substantial benefits.

For example, at Denso, a Japanese-owned supplier in Battle Creek, Michigan, someone approved a suggestion that a supplier be able to deliver parts in standard-size boxes, thus reducing packaging costs. Although these boxes were only two inches deeper than the previous boxes, the difference created a significant problem. Denso’s practice (following the just-in-time philosophy) was that a worker would deliver the boxes from the delivery truck directly to a rack above the line. The worker who assembled these parts had to reach up and over and down into the box an extra two inches 2,000 times per shift, which proved quite painful. The situation was corrected quickly, because of an overlap of knowledge. Denso had a policy that managers worked on the line once per quarter, and the purchasing manager had done that job in the past. Thus, the worker knew whom to contact about the problem (since she had worked next to him for a day), and the purchasing manager understood immediately why the extra two inches was a problem. He directed the supplier to go back to the previous containers. In a world of perfect information, Denso’s rotation policy would be a waste of managerial talent; but in a world in which much knowledge is tacit and things change quickly, the knowledge overlap allowed quick problem identification and resolution.

This high-road model of production provides an alternative to the current winner-take-all model, with corporate executive “stars” at the top supported by workers considered to be disposable at the bottom. In this view, there are no jobs that are inherently low-skill or dead-end.

Diffusing high-road practices

The practices discussed above are not new. In response to the Japanese competitive onslaught in the 1980s and 1990s, some U.S. manufacturers began to use them. But they have not been as widely adopted as they could be.

Markets alone fail to provide the proper incentives for firms to adopt high-road policies for two main reasons. First, the high road works only if a company adopts several practices at the same time. It must improve communication skills at all levels, create mechanisms for communicating new ideas across a supply chain’s levels and functions, and provide incentives to use them. Merely getting the prices right (adding taxes or subsidies to correct for market failures) is not sufficient to build these capabilities. Instead, it makes sense to provide technical assistance services to firms directly.

Second, many of the benefits of the high-road strategy accrue to workers, suppliers, and communities in the form of higher wages and more stable employment. Profit-maximizing firms do not take these benefits into account when deciding, for example, how much to invest in training. Many firms will provide less than the socially optimal amount of general training because they fear trained employees will be hired away by other firms.

For these reasons, there is a theoretical case that government services could outperform competitive markets in promoting high-road production. There is also practical evidence that this potential has in many cases been realized.

MEP has had significant success in helping manufacturers overcome many of these problems. Established in 1989 as part of the National Institute of Standards and Technology, the program was loosely modeled on the agricultural extension program. There are manufacturing extension centers in every state, providing technical and business assistance to small and medium-sized manufacturers. The centers help plants adopt advanced manufacturing technologies and quality-control programs, as well as develop new products. For example, the Wisconsin Manufacturing Extension Program has provided classes and consultants to help firms dramatically reduce their lead times (the time from order to delivery). A study by Joshua Whitford and Jonathan Zeitlin found that participants have cut their lead times by 50% and their inventory by 70%, improving their profit margins substantially while also improving performance for their customers.

Several in-depth studies have found that MEP pays for itself in increased tax revenue generated by the firms it serves. However, MEP remains a tiny program; its budget for fiscal year 2008 was only $90 million, less than $7 per manufacturing worker. This low level of funding makes it difficult for MEP to subsidize its services enough to capture their true social benefit. Currently, only marketing and facility costs are subsidized; this works out, very approximately, to a 33% rate of subsidy for first-time clients. Firms pay market rates for services actually delivered, meaning that they often buy services piecemeal when they have some extra cash. An increased rate of subsidy would allow the MEP to reach out with an integrated program to small firms that lack the capability to plan a coherent change effort. Such a program would enable MEP to teach skills such as brainstorming and problem-solving to a wider audience.

A market for private consultancy services to teach lean production has developed, but a 2004 study by Janet Kiehl and myself found that these consultants do not obviate the need for MEP. First, consultants tend to focus on areas that provide a quick cash return (such as one-time inventory reductions) rather than longer-term capability development (whose payoff would be harder for consultants to capture). Second, consultants are in practice complements to MEP centers, not substitutes, because the centers expand the market for the outside provision of expertise by providing evaluations of firms, exposing firms to new ideas, and providing referrals to vetted consultants.

MEP could be even more effective if its scope were expanded so that it could link together the disparate skills that firms must learn to master high-road production. Some of these programs are already under way, but only on a pilot basis. Below are some key priorities:

Organize training by value chain in addition to focusing on individual firms. In the Wisconsin example above, the training was developed and candidate firms identified in conjunction with six large customer firms, including John Deere and Harley Davidson. This supply chain modernization consortium trains supplier firms in general (rather than firm-specific) competencies and promotes mutual learning by harmonizing supplier certification and encouraging cross-supplier communication. This framework meets diverse supplier needs through multiple institutional supports. For example, having customers agree on training priorities and encouraging suppliers to apply what they learn in class helps suppliers retain a focus on long-term improvement rather than short-term firefighting.

Include training on manufacturing services, because a key part of what high-productivity manufacturing firms offer is not just production itself but also preproduction work (learning what customers want and designing the products) and postproduction work (delivering goods just in time and handling warranty issues efficiently). These additional activities are often more tied to the location of consumers, who (at least for now) are usually in the United States. These activities also benefit from close linkages within and between plants. For example, skilled production workers and tradespeople can ramp up the production of high-quality products more quickly, produce more variety on the same lines, reduce lead times for customized products, reduce defects, and so forth.

Develop new products and find new markets. This is an especially important type of manufacturing service. These skills help high-road firms avoid competing with low-wage commodity producers. They also enable firms to make use of the additional capacity freed up by “lean” initiatives. MAGNET (the MEP center in Northern Ohio) has had significant success in this area. It employs a staff of 15 (plus four subcontractors) that can take a small company through all steps of the product development process. The MAGNET staff draws on ideas from several industries and technologies to help develop a diverse array of products, such as a light fixture that can be easily removed from the ceiling to enable bulb-changing without a ladder and a HUMVEE engine that can be replaced in one hour, rather than the previous standard of two days.

Other possibilities include creating a national standard for evaluating the total cost of acquisition for components and teaching firms how to use energy more efficiently.

Creating discussion forums

High-road production techniques have been codified and shown to work. But this process of codification takes a long time. How will the next generation of programs be developed? In addition, the exact ingredients of the high-road recipe vary by industry and over time. Thus, it is useful to have forums for discussion so that industry participants can make coordinated investments, both subsidized and on their own. The forums could elicit the detailed information necessary to design good policies, thus avoiding government failure. However, organizing the forums is subject to market failures, because the benefits of coordinated investment are diffuse and thus hard for a profit-making entity to capture.

Federal and state governments could establish competitive grant programs in which industries compete for funding to establish such forums. Also, MEP should encourage cities and regions to apply to create such forums. A large literature, including case studies and statistical work, has found that firms concentrated in the same geographical area (including customers, suppliers, rivals, and even firms in unrelated industries) are more productive. The advantages of geographical proximity include the ability to pool trained workers and the ease of sharing new ideas. These advantages can be magnified if institutions are created that organize these exchanges, facilitating the communication and development of trust.

Several prototypes of these discussion forums already exist in a number of stages of the value chain, including innovation (Sematech), upstream supply [the Program for Automotive Renaissance in Tooling (PART) in Michigan], component supply (Accelerate in Wisconsin), and integrated skills training (the Wisconsin Regional Training Partnership and the Manufacturing Skills Standards Council).

PART includes communities, large automakers, first-tier suppliers, and small tool and die shops. Its membership reflects how much of manufacturing is organized today, with large firms outsourcing work to smaller suppliers, who remain geographically concentrated. The program, funded by the Mott Foundation, coordinates joint research among members and provides benchmarking and leadership development for small firms. It helps organize “coalitions” of small tooling firms that do joint marketing and develop standardized processes. The state of Michigan offers significant tax breaks for firms located in a Tooling Recovery Zone.

A bill to encourage the formation of discussion forums was introduced by U.S. Senators Sherrod Brown (D-OH) and Olympia Snowe (R-ME) in the summer of 2008. Called the Strengthening Employment Clusters to Organize Regional Success (SECTORS) Act, the legislation would provide grants of up to $2.5 million each for “partnerships that lead to collaborative planning, resource alignment, and training efforts across multiple firms” within an industry cluster.

Expanding MEP and creating discussion forums would cost about $300 million. I have calculated that if just half the firms increase their productivity by 20% as a result (the low estimate from Ronald Jarmin’s study of MEP’s effectiveness) and can therefore compete with China, the United States would save 50,000 jobs at a cost of only $6,000 per job, a cost that would be offset by increased tax revenue. This $300 million is a tiny amount of money. State and local governments currently spend $20 to $30 billion on tax abatements to lure firms to their jurisdictions. That spending generally does not improve productivity. Moreover, it is much cheaper to act now to preserve the manufacturing capacity we have than to try to reconstruct it once it is gone.
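The cost-per-job figure is straightforward division; the back-of-the-envelope Python sketch below checks the arithmetic using only the numbers cited above and is not part of the original analysis.

```python
# Back-of-the-envelope check of the cost-per-job figure cited above.

program_cost = 300e6  # expanded MEP plus discussion forums, in dollars
jobs_saved = 50_000   # jobs preserved under the low-end productivity estimate

print(f"Cost per job saved: ${program_cost / jobs_saved:,.0f}")  # about $6,000

# For scale, the $20 billion to $30 billion that state and local governments
# spend on tax abatements is 67 to 100 times the size of this program.
abatements_low, abatements_high = 20e9, 30e9
print(f"Tax abatements are {abatements_low / program_cost:.0f} to "
      f"{abatements_high / program_cost:.0f} times the proposed spending.")
```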

This $300 million expenditure can also be compared with that for agricultural extension: $430 million in 2006 for an industry that employs 1.9% of the workforce and produces 0.7% of gross domestic product (GDP). In contrast, manufacturing is 10% of the workforce and 14% of GDP.

Paving the high road

A number of observers have noted the fragility of high-road production in the United States. Cooperation, especially between labor and management, may flourish for a while but then collapse, or cooperation may be limited because management wants to keep its options open regarding the future of the facility. Low-road options (either in the United States or in low-wage nations overseas) remain attractive to firms, even if they impose costs on society. After a few failures, unions often become reluctant to trust again. Similar problems plague customer/supplier relations.

Therefore, we must look at broader economic policies that affect the stability of the high road in manufacturing and in other sectors. These policies can be divided into those that “pave the high road” (reduce costs for firms that choose this path) and those that “block the low road” (increase costs for firms that choose the low road, thus reducing their ability to undercut more socially responsible competitors).

Some examples of policies that pave the high road are universal health care, increased funding of innovation, and investments in training. Policies that block the low road include trade-agreement protections for workers and the environment and strengthened safety regulations for workplaces and consumer products. Implementing these policies would require large investments but would benefit the entire economy, not just manufacturing.

Coordinated public effort to develop productive capabilities in the United States is an effective way of confronting the twin problems of shrinking manufacturing and stagnant income for most U.S. workers. With the right policies, the United States can have a revitalized manufacturing sector that brings with it good jobs, rapid innovation, and the capacity to pursue national goals.

Rather than abandon manufacturing, the nation can transform it into an example for the rest of the economy. The rationale for high-road policies is applicable to most industries in the United States. The policies outlined here could ensure that all parts of the economy remain strong and that all Americans participate in a productive way and reap the rewards of their efforts.

Recommended reading

AFL-CIO, Manufacturing Matters to the U.S. (Washington, DC: Working for America Institute, AFL-CIO, 2007).

Susan Helper and Janet Kiehl, “Developing Supplier Capabilities: Market and Non-Market Approaches,” Industry & Innovation 11, no. 1–2 (2004): 89–107.

Susan Helper, Renewing U.S. Manufacturing: Promoting a High-Road Strategy (Washington, DC: Economic Policy Institute, 2008) (http://www.sharedprosperity.org/bp212/bp212.pdf).

Casey Ichniowski and Kathryn Shaw, “Beyond Incentive Pay: Insiders’ Estimates of the Value of Complementary Human Resource Management Practices,” Journal of Economic Perspectives 17, no. 1 (2003): 155–180.

Ronald S. Jarmin, “Evaluating the Impact of Manufacturing Extension on Productivity Growth,” Journal of Policy Analysis and Management 18, no. 1 (1999): 99–119.

Daniel Luria, Matt Vidal, and Howard Wial with Joel Rogers, “Full-Utilization Learning Lean” in Component Manufacturing: A New Industrial Model for Mature Regions, & Labor’s Stake in Its Success (Sloan Industry Studies Working Paper WP-2006-3, 2006) (http://www.cows.org/pdf/rp-amp_wai_final.pdf).

The Manufacturing Institute, the National Association of Manufacturers, and Deloitte Consulting LLP, 2005 Skills Gap Report – A Survey of the American Manufacturing Workforce (November 2005).

John Paul MacDuffie and Susan Helper, Collaboration in Supply Chains With and Without Trust (New York: Oxford University Press, 2005).

Rajan Suri, “Manufacturers Can Compete vs. Low-Wage Countries,” The Business Journal, February 13, 2004.

Josh Whitford and Jonathan Zeitlin, “Governing Decentralized Production: Institutions, Public Policy, and the Prospects for Inter-firm Collaboration in U.S. Manufacturing,” Industry & Innovation 11, no. 1 (2004): 11–14.

James Womack, Daniel Jones, and Daniel Roos, The Machine That Changed the World: The Story of Lean Production (New York: Harper Perennial, 1991).


Susan Helper is the AT&T Professor of Economics at the Weatherhead School of Management, Case Western Reserve University.

From the Hill – Winter 2009

Research funding flat in 2009 as budget stalls

Fiscal year (FY) 2009 began on October 1 with final budget decisions for most federal agencies postponed until at least January 2009. To keep the government operating, lawmakers combined three final appropriations bills into a continuing resolution (CR) that extends funding for all programs in the remaining unsigned 2009 appropriations bills at 2008 funding levels through March 6. President Bush signed the measure into law on September 30.

The CR contains final FY 2009 appropriations for the Departments of Defense (DOD), Homeland Security (DHS), and Veterans Affairs (VA); all three will receive substantial increases in their R&D portfolios. Other federal agencies covered by the remaining appropriations bills will be operating temporarily at or below 2008 funding levels for several months. The CR excludes from its FY 2008 base most supplemental appropriations. Thus, agencies that received additional funds in the mid-year supplemental funding bill, including the National Institutes of Health, the National Aeronautics and Space Administration (NASA), the National Science Foundation (NSF), and the Department of Energy’s (DOE’s) Office of Science, will see a decrease under the CR. The CR, however, does allow the Food and Drug Administration to count the $150 million FY 2008 supplemental it received as part of its base.

The CR provides $2.5 billion for the Pell Grant program, which gives aid to college students, and $5.1 billion for low-income heating assistance. A $25 billion loan program for the auto industry is also part of the CR, as is $22.9 billion in disaster relief funding.

Overall, the federal government enters FY 2009 with an R&D portfolio of $147.3 billion, an increase of $2.9 billion or 2%, due entirely to an increase for DOD’s R&D, which will rise by $3 billion or 3.6% to $86.1 billion in 2009. The flat-funding formula of the CR results in a $61.2 billion total for non-defense R&D at the start of FY 2009, a cut of 0.1% as compared to 2008.

Excluding development funds, the federal investment in basic and applied research could decline for the fifth year in a row in 2009, after adjusting for inflation, if the CR’s funding levels hold for the entire year.

The flat funding levels of the CR put requested increases for the three agencies in the Bush administration’s American Competitiveness Initiative on hold. Although congressional appropriators had endorsed and even added to large requested increases for NSF, DOE’s Office of Science, and the Department of Commerce’s National Institute of Standards and Technology laboratories in early versions of the 2009 appropriations bills, the next Congress may have to start over again. In the meantime, the three key physical sciences agencies begin FY 2009 with funding levels at or slightly below those of 2008.

NASA funding boost authorized

Emphasizing the important role that a balanced and adequately funded science program at NASA plays in the nation’s innovation agenda, Congress in October approved a bill with broad bipartisan support that could significantly increase NASA’s funding. The National Aeronautics and Space Administration Authorization Act of 2008 authorizes $20.2 billion for FY 2009, far more than the $17.1 billion appropriated to the agency in the FY 2009 continuing resolution.

The bill authorizes an 11% increase above the president’s request in scientific research and strengthens NASA’s earth science, space science, and aeronautics programs. It contains provisions on scientific integrity, expressing “the sense of Congress that NASA should not dilute, distort, suppress, or impede scientific research or the dissemination thereof.” It also includes a plan for the continuation of the Landsat remote-sensing satellite program and reauthorizes the Glory mission to examine the effects of aerosols and solar energy on Earth’s climate.

Congressional Action on R&D in the FY 2009 Budget as of September 30, 2008 (budget authority in millions of dollars)

Columns: agency or program; FY 2008 estimate; FY 2009 request; FY 2009 action by Congress; change from request (amount, percent); change from FY 2008 (amount, percent).

Defense (military) * 79,347 81,067 82,379 1,311 1.6% 3,032 3.8%
(“S&T” 6.1,6.2,6.3 + Medical) * 13,456 11,669 14,338 2,669 22.9% 882 6.6%
(All Other DOD R&D) * 65,891 69,398 68,040 -1,358 -2.0% 2,149 3.3%
National Aeronautics & Space Admin. 12,251 12,780 12,188 -592 -4.6% -63 -0.5%
Energy 9,724 10,519 9,661 -858 -8.2% -63 -0.6%
(Office of Science) 3,637 4,314 3,574 -740 -17.1% -63 -1.7%
(Energy R&D) 2,369 2,380 2,369 -11 -0.5% 0 0.0%
(Atomic Energy Defense R&D) 3,718 3,825 3,718 -107 -2.8% 0 0.0%
Health and Human Services 29,966 29,973 29,816 -157 -0.5% -150 -0.5%
(National Institutes of Health) 28,826 28,666 28,676 10 0.0% -150 -0.5%
(All Other HHS R&D) 1,140 1,307 1,140 -167 -12.8% 0 0.0%
National Science Foundation 4,501 5,175 4,479 -696 -13.5% -23 -0.5%
Agriculture 2,359 1,955 2,412 457 23.4% 53 2.2%
Homeland Security * 992 1,033 1,085 52 5.0% 93 9.4%
Interior 676 618 676 59 9.5% 0 0.0%
(U.S. Geological Survey) 586 546 586 41 7.5% 0 0.0%
Transportation 820 902 820 -81 -9.0% 0 0.0%
Environmental Protection Agency 548 541 548 7 1.3% 0 0.0%
Commerce 1,138 1,152 1,138 -14 -1.2% 0 0.0%
(NOAA) 581 576 581 5 0.9% 0 0.0%
(NIST) 521 546 521 -25 -4.5% 0 0.0%
Education 321 324 321 -3 -0.9% 0 0.0%
Agency for Int’l Development 223 223 223 0 0.0% 0 0.0%
Department of Veterans Affairs * 891 884 952 68 7.7% 61 6.8%
Nuclear Regulatory Commission 71 77 71 -6 -7.8% 0 0.0%
Smithsonian 203 222 203 -19 -8.6% 0 0.0%
All Other 322 299 322 23 7.7% 0 0.0%
TOTAL R&D * 144,354 147,743 147,295 -449 -0.3% 2,941 2.0%
Defense R&D * 83,065 84,892 86,097 1,204 1.4% 3,032 3.6%
Nondefense R&D * 61,288 62,851 61,198 -1,653 -2.6% -91 -0.1%
Basic Research * 28,846 29,656 28,952 -704 -2.4% 106 0.4%
Applied Research * 29,218 27,626 29,281 1,655 6.0% 63 0.2%
Total Research * 58,064 57,282 58,233 951 1.7% 169 0.3%
Development * 81,814 85,745 84,605 -1,140 -1.3% 2,791 3.4%
R&D Facilities and Capital Equipment * 4,476 4,716 4,457 -260 -5.5% -19 -0.4%

AAAS estimates of R&D in FY 2009 appropriations bills. Includes conduct of R&D and R&D facilities. All figures are rounded to the nearest million. Changes calculated from unrounded figures. FY 2008 figures have been adjusted to reflect supplementals enacted in Public Law 110-252 and contained in the FY 2009 CR. These figures have been revised since the publication of AAAS Report XXXIII: R&D FY 2009.

The bill calls for continuing NASA’s approach toward completing the International Space Station and making the transition from the Space Shuttle to the new Constellation launch system. The legislation authorizes the agency to fly two additional Shuttle missions to service the space station and a third flight to launch a DOE experiment to study charged particles in cosmic rays.

The Senate version of the bill added language that directs NASA to suspend until April 30, 2009, any activities that could preclude operation of the Space Shuttle after 2010, in order to give the incoming administration an opportunity to evaluate the shuttle’s planned retirement and to reassess and redirect the agency.

Climate change proposals multiply

As the 110th Congress wrapped up, legislators were already looking ahead to the next session, releasing drafts of climate change proposals they hope to advance. The measures reflect growing interest in Congress in addressing the broad spectrum of concerns about climate change legislation so that a successful compromise can be reached.

On October 7, House Committee on Energy and Commerce Chair John Dingell (D-MI) and Subcommittee on Energy and Air Quality Chair Rick Boucher (D-VA) released a proposal for a cap-and-trade system to control U.S. greenhouse gas emissions. The bill would cap emissions at 80% below 2005 levels by 2050, which is more aggressive than the bill proposed by Sens. Joe Lieberman (I-CT) and John Warner (R-VA) that garnered a great deal of attention earlier in 2008.

Many of the bill’s provisions are similar to those of the Lieberman-Warner bill in terms of the mechanisms by which emissions would be controlled. This includes the creation of a market-based system of emissions permits that can be traded from one firm to another in order to remain within a cap set by the government. The cap would decline each year until reaching its ultimate reduction goal in 2050. The bill would give control of the carbon-permit allocation process to the Environmental Protection Agency (EPA), although the bill does not settle on a means of determining permit price. Instead, it offers four possible scenarios, ranging from initially offering the permits for free to limit burdens on covered firms to a proposal to use the allowance values entirely as rebates for consumers. Regardless of the option chosen, the proposal would invest in energy efficiency and clean energy technology, return value from permits back to low-income consumers, and auction all permits after 2026.

The bill would permit the purchasing of EPA-approved domestic and international carbon offset credits, although firms would be limited to offsetting only 5% of their emissions in the first five years, eventually increasing to 35% in 2024. The Lieberman-Warner bill would have permitted 15% of emissions to be offset with domestic carbon offset credits and up to 5% with international credits, but not until after 2012.

In addition to the Dingell-Boucher bill, an outline of principles for climate change legislation, based on adhering to greenhouse gas reductions that will limit global temperature rise to two degrees Celsius, was sent to House Speaker Nancy Pelosi (D-CA) by a group of 152 representatives, led by Reps. Henry Waxman (D-CA), Jay Inslee (D-WA), and Edward Markey (D-MA). These recommendations include a cap on carbon emissions of 80% below 1990 levels by 2050, more rapid policy responses to climate science, better international cooperation, investment in clean energy technology, and economic measures to protect consumers and domestic industries.

Markey, the chairman of the Select Committee on Energy Independence and Global Warming, released his own climate change bill earlier in 2008. In an October 7 press release in which he welcomed the Dingell-Boucher bill, he said, “The draft legislation lays out a range of options for structuring a cap-and-trade system that are likely to trigger a vigorous and healthy debate about how best to reduce global warming pollution.”

In the Senate, a group of 16 Democrats known as the “Gang of 16” is attempting to craft a new bill that will address concerns that arose during the debate on the Lieberman-Warner bill, namely the distribution process for emissions allowances and the desire to use low-carbon energy development to stimulate job creation.

The proposal would focus on critical areas of concern, namely examining carbon offsets, containing the costs of abatement, and protecting consumers. Provisions addressing these issues include creating incentives for farmers to produce marketable offset credits, investing in low-carbon energy technology such as clean coal, providing flexibility for businesses if new technology is not available or is too expensive, and providing energy assistance to low-income families in order to offset any rise in energy costs that results from the legislation.

Big boost in energy R&D funding supported

Although most of the energy policy debate in Congress during the fall of 2008 centered on expanding offshore drilling and renewable tax incentives, both of which were approved in the waning days of the 110th Congress, legislators also examined the role of energy R&D in advancing the nation’s energy independence.

At a September 10 hearing of the House Select Committee on Energy Independence and Global Warming, witnesses testified about the importance of having a broad portfolio of energy R&D programs to meet the challenges of energy security, climate change, and U.S. competitiveness. Committee Chairman Ed Markey (D-MA) noted that during the past 25 years, energy R&D has fallen from 10% of total R&D spending to only 2%, an amount Rep. Jay Inslee (D-WA) called “pathetic.” The witnesses all agreed on the need for additional funds for energy R&D, with estimates ranging from 3 to 10 times as much as current funding.

Susan Hockfield, president of the Massachusetts Institute of Technology, testified about the importance of energy investments in motivating students. “The students’ interest is absolutely deafening,” she said, “and one of my fears is that if we don’t fund the kind of research that will fuel innovation, these very brilliant students will see that a bright future actually lies elsewhere.”

University of Michigan Vice President of Research Stephen Forrest discussed the willingness of the university community to join industry and government to discover solutions to address energy security. He called on Congress to fully fund the Advanced Research Projects Agency–Energy, a program targeting high-risk energy research authorized in the America COMPETES Act. Daniel Kammen of the University of California at Berkeley explained the role of federal R&D in sparking private-sector investment, stating that government funding is necessary to “prime the pump” before industry will increase R&D.

In addition to hearing testimony, legislators have received input from a variety of sources. The Council on Competitiveness released a “100 Day Action Plan” for the next administration. It calls for increased R&D investment and the creation of a $200 billion National Clean Energy Bank. More than 70 universities and scientific societies under the umbrella of the Energy Science Coalition released a petition to presidential candidates highlighting the importance of basic energy research in addressing energy issues. At a press conference hosted by the Science Coalition and the Task Force on the Future of American Innovation, leaders from universities, industry, and national labs described the role energy R&D can play in achieving U.S. energy independence.

During the presidential campaign, president-elect Barack Obama released his New Energy for America plan that calls for a $150 billion federal investment during the next 10 years in clean energy research, development, and deployment. The plan includes basic research to develop alternative fuels and chemicals, new vehicle technology, and next-generation nuclear facilities.

Despite calls from both parties and chambers of Congress, increased levels of funding for energy research are not included in the continuing resolution approved in October. However, Congress did address several energy issues in provisions in the Emergency Economic Stabilization Act of 2008. It includes extensions of the investment tax credit for solar energy and production tax credits for wind, solar, biomass, and hydropower, and expands the residential energy-efficient property credit. It also includes tax credits for oil shale, tar sands, and coal-to-liquid fuels, areas that may advance energy security and economic competitiveness but are at odds with addressing climate change, illustrating some of the difficulties in meeting these intertwined challenges.


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science (www.aaas.org/spp) in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

Practical Pieces of the Energy Puzzle: Low Carbon Fuel Standards

The most direct and effective policy for transitioning to low-carbon alternative transportation fuels is to spur innovation with a comprehensive performance standard for upstream fuel producers.

When it comes to energy security and climate change concerns, transportation is the principal culprit. It consumes half the oil used in the world and accounts for almost one-fourth of all greenhouse gas (GHG) emissions. In the United States, it plays an even larger role, consuming two-thirds of the oil and causing about one-third of the GHG emissions. Vehicles, planes, and ships remain almost entirely dependent on petroleum. Efforts to replace petroleum—usually for energy security reasons but also to reduce local air pollution—have recurred through history, with little success.

The United States and the world have caromed from one alternative to another, some gaining more attention than others, but each one faltering. These included methanol, compressed and liquefied natural gas, battery electric vehicles, coal liquids, and hydrogen. In the United States, the fuel du jour four years ago was hydrogen; two years ago it was corn ethanol; now it is electricity for use in plug-in hybrid electric vehicles. Worldwide, the only non-petroleum fuels that have gained significant market share are sugar ethanol in Brazil and corn ethanol in the United States. With the exception of sugar ethanol in Brazil, petroleum’s dominance has never been seriously threatened anywhere since taking root nearly a century ago.

The fuel du jour phenomenon has much to do with oil market failures, overblown promises, the power of incumbents, and the short attention spans of government, the mass media, and the public. Alternatives emerge when oil prices are high but wither when prices fall. They emerge when public attention is focused on the environmental shortcomings of petroleum fuels but dissipate when oil and auto companies marshal their considerable resources to improve their environmental performance. When President George H. W. Bush advocated methanol fuel in 1989 as a way of reducing vehicular pollution, oil companies responded with cleaner-burning reformulated gasoline and then with cleaner diesel fuel. And when state air regulators in California and federal officials in Washington adopted aggressive emission standards for gasoline and diesel engines, vehicle manufacturers diverted resources to improve engine combustion and emission-control technologies.

The fuel du jour phenomenon also has much to do with the ad hoc approach of governments to petroleum substitution. The federal government provided loan and purchase guarantees for coal and oil shale “synfuels” in the early 1980s when oil prices were high, passed a law in 1988 offering fuel-economy credits for flexible-fuel cars, launched the Advanced Battery Consortium and the Partnership for a New Generation of Vehicles in the early 1990s to accelerate development of advanced vehicles, promoted hydrogen cars in the early years of this decade, provided tens of billions of dollars in federal and state subsidies for corn ethanol, and now is providing incentives for plug-in hybrids.

State governments also pursued a variety of options, including California’s purchases of methanol cars in the 1980s and imposition of a zero-emission vehicle requirement in 1990. These many alternative-fuel initiatives failed to move the country away from petroleum-based transportation. The explanation has much to do with government prescribing specific solutions and not anticipating shifts in fuel markets. More durable policies are needed that do not depend on government picking winners. The needed policies should be performance-based, stimulate innovation, and reduce consumer and industry risk and uncertainty. A more coherent and effective approach is needed to orchestrate the transition away from oil.

Policy strategy

The path to reducing oil dependence and decarbonizing transportation involves three related initiatives: improving vehicle efficiency, reducing vehicle use, and decarbonizing fuels. Here we focus on decarbonizing fuels, which has the additional benefit of reducing oil use.

To succeed, any policy approach must adhere to three principles: It must inspire industry to pursue innovation aggressively; it must be flexible and performance-based so that industry, not government, picks the winners; and it should take into account all GHG emissions associated with the production, distribution, and use of the fuel, from the source to the vehicle.

We believe that the low carbon fuel standard (LCFS) approach that is being implemented in California provides a model for a national policy that can have a significant near-term effect on carbon emissions and petroleum use. The LCFS is a performance standard that is based on the total amount of carbon emitted per unit of fuel energy. Critically, the standard includes all the carbon emitted in the production, transportation, and use of the fuel. Although upstream emissions account for only about 20% of total GHG emissions from petroleum, they represent almost the total lifecycle emissions for fuels such as biofuels, electricity, and hydrogen. Upstream emissions from extraction, production, and refining also comprise a large percentage of total emissions for the very heavy oils and tar sands that oil companies are using to supplement dwindling sources of conventional crude oil. The LCFS is the first major public initiative to codify lifecycle concepts into law, an innovation that must increasingly be part of emission-reduction policies if we are to control the total carbon concentration in the atmosphere.

To simplify implementation, the LCFS focuses as far upstream as possible, on the relatively small number of oil refiners and importers. Each company is assigned a maximum level of GHG emissions per unit of fuel energy it produces. The level declines each year to put the country on a path to reducing total emissions. To maximize flexibility and innovation, the LCFS allows for the trading of emission credits among fuel suppliers. Oil refiners could, for instance, sell biofuels or buy credits from biofuel producers, or they could buy credits from an electric utility that sells power to electric vehicles. Those companies that are most innovative and best able to produce low-cost, low-carbon alternative fuels would thrive. The result is that overall emissions are lowered at the lowest cost for everyone.
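To make the mechanics concrete, here is a minimal Python sketch of the credit arithmetic this design implies. The carbon-intensity values, energy volumes, and the 86 gCO2e/MJ standard are illustrative assumptions, not figures from the California rule.

```python
# Minimal sketch of the LCFS credit arithmetic described above.
# Carbon intensities (gCO2e per MJ) and sales volumes are illustrative
# placeholders, not values from any regulation.

STANDARD_CI = 86.0  # hypothetical annual standard, gCO2e/MJ

# Each supplier reports the fuels it sold: (carbon intensity, energy in MJ).
suppliers = {
    "refiner_A": [(95.0, 8.0e9),   # gasoline from conventional crude
                  (40.0, 1.0e9)],  # blended cellulosic ethanol
    "utility_B": [(30.0, 0.5e9)],  # electricity sold to plug-in vehicles
}

def net_credits(fuel_mix, standard=STANDARD_CI):
    """Credits (tonnes CO2e) earned or owed relative to the standard.

    A fuel below the standard earns credits; a fuel above it creates a
    deficit. Dividing by 1e6 converts grams to metric tonnes.
    """
    return sum((standard - ci) * mj for ci, mj in fuel_mix) / 1e6

for name, mix in suppliers.items():
    print(f"{name}: {net_credits(mix):,.0f} tonnes CO2e of credits")

# A supplier with a deficit must buy credits from one with a surplus,
# which is how the trading provision keeps overall costs down.
```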

A clear advantage of this approach is that it does not have to be revised every time a new alternative appears. Any cost-effective energy source that moves vehicles with lower GHG emissions can benefit from the LCFS. The combination of regulatory and market mechanisms makes the LCFS more politically acceptable and more durable than a strictly regulatory approach.

The California Air Resources Board adopted the LCFS in concept in June 2007 and began a rulemaking process, with the final rule scheduled for adoption in March 2009 and implementation in January 2010. California’s LCFS proposal calls for at least a 10% reduction in emissions per unit of energy by 2020.

The European Union has in parallel unveiled a proposal similar to the LCFS in California, and the Canadian provinces of British Columbia and Ontario as well as several states in the Northeast are considering similar approaches. The proposed 2007 Lieberman-Warner Climate Security Act (S. 2191) included an LCFS program.

Why not a renewable fuel standard?

To appreciate the wisdom of the LCFS approach, compare it to the alternatives. Congress adopted a renewable fuels standard (RFS) in 2005 and strengthened it in December 2007 as part of the Energy Independence and Security Act (EISA). It requires that 36 billion gallons of biofuels be sold annually by 2022, of which 21 billion gallons must be “advanced” biofuels and the other 15 billion gallons can be corn ethanol. The advanced biofuels are required to achieve at least a 50% reduction from baseline lifecycle GHG emissions, with a subcategory required to meet a 60% reduction target. These reduction targets are based on lifecycle emissions, including emissions from indirect land use. Although the RFS is a step in the right direction, its volumetric mandate has three shortcomings. First, it targets only biofuels and not other alternatives. Second, setting targets of 50% and 60% GHG reductions is an admirable but clumsy approach; it forces biofuels into a small number of fixed categories and thereby stifles innovation. Third, it exempts existing and planned corn ethanol production plants from the GHG requirements, essentially endorsing a massive expansion of corn ethanol. This rapid expansion of corn ethanol not only stresses food markets and requires massive amounts of water, but also pulls large quantities of land into corn production. The ultimate effect of increasing corn ethanol production will be the diversion of prairie lands, pastures, rainforests, and other lands into intensive agricultural production, likely resulting in higher overall GHG emissions than from an equivalent amount of gasoline and diesel fuels.

Other strategies that have won attention are a carbon tax and a cap and trade program. Economists argue that carbon taxes would be the more economically efficient way to introduce low-carbon alternative fuels. Former Federal Reserve chairman Alan Greenspan, car companies, and economists on the left and the right all have supported carbon and fuel taxes as the principal cure for both oil insecurity and climate change. But carbon taxes have shortcomings. Not only do they attract political opposition and public ire, they are of limited effectiveness. Taxing energy sources according to how much carbon dioxide (CO2) they emit certainly sounds sensible and straightforward, but this strategy is not effective in all situations. A carbon tax could work well with electricity generation because electricity suppliers can choose among a wide variety of commercially available low-carbon energy sources, such as nuclear power, wind energy, natural gas, or even coal with carbon capture and sequestration. A tax of as little as $25 per ton of CO2 would increase the retail price of electricity made from coal by 17%, which would be enough to motivate electricity producers to seek lower-carbon alternatives. The result would be innovation, change, and decarbonization. Carbon taxes promise to be effective in transforming the electricity industry.

But transportation is a different story. Producers and consumers would barely respond to even a $50-a-ton tax, which is well above what U.S. politicians have been considering. Oil producers wouldn’t respond because they have become almost completely dependent on petroleum to supply transportation fuels and can’t easily or quickly find or develop low-carbon alternatives. Equally important, a transition away from oil depends on automakers and drivers also changing their behavior. A carbon tax of $50 per ton would raise the price of gasoline by only about 45 cents a gallon. This wouldn’t induce drivers to switch to low-carbon alternative fuels. In fact, it would barely reduce their consumption, especially when price swings of more than this amount have become a routine occurrence.
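The 45-cent figure follows directly from the carbon content of gasoline. Below is a back-of-the-envelope check in Python, which assumes roughly 8.9 kilograms of CO2 released per gallon of gasoline burned, a commonly cited combustion factor.

```python
# Back-of-the-envelope check of the carbon-tax arithmetic above.
# Assumes roughly 8.9 kg of CO2 per gallon of gasoline burned.

CO2_PER_GALLON_KG = 8.9
KG_PER_TON = 1000.0  # metric ton

def gas_price_increase(tax_per_ton):
    """Added cost per gallon, in dollars, for a given CO2 tax ($/ton)."""
    return tax_per_ton * CO2_PER_GALLON_KG / KG_PER_TON

for tax in (25, 50):
    print(f"${tax}/ton CO2 -> about ${gas_price_increase(tax):.2f} per gallon")
# $50/ton works out to roughly $0.45 per gallon, the figure cited above.
```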

Carbon cap and trade programs suffer the same shortcomings as carbon taxes. This policy, as usually conceived, involves placing a cap on the CO2 emissions of large industrial sources and granting or selling emission allowances to individual companies for use in meeting their capped requirements. Emission allowances, once awarded, can be bought and sold. In the transportation sector, the cap would be placed on oil refineries and would require them to reduce CO2 emissions associated with the fuels. The refineries would be able to trade credits among themselves and with others. As the cap is tightened over time, pressure would build to improve the efficiency of refineries and introduce low-carbon fuels. Refiners are likely to increase the prices of gasoline and diesel fuel to subsidize low-carbon fuels, creating a market signal for consumers to drive less and for the auto companies to offer more energy-efficient vehicles. But unless the cap was very stringent, this signal would be relatively weak for the transportation sector.

Economists might characterize the LCFS approach as second best because it is not as efficient as a carbon tax or a cap and trade program. But given the huge barriers to alternative fuels and the limited impact of increased taxes and prices on transportation fuel demand, the LCFS is the most practical way to begin the transition to alternative fuels. Some day, when advanced biofuels and electric and hydrogen vehicles are commercially viable options, cap and trade and carbon taxes will become effective policies for the transport sector. But until then, more direct forcing mechanisms, such as an LCFS for refiners, are needed to stimulate innovation and overcome the many barriers to change.

The LCFS cannot stand alone, however. It must be coupled with other policies, including efficiency and GHG emission standards for new cars, infrastructure to support alternative fuel penetration, and incentives to reduce driving and promote transportation alternatives. That is California’s approach, and it would also be an effective national policy in the United States and elsewhere.

Designing an LCFS

In the California case, the proposed 10% reduction in lifecycle GHG emissions by 2020 is imposed on all transport fuel providers, including refiners, blenders, producers, and importers. Aviation and certain maritime fuels are excluded, either because California does not have authority over them or because including them presents logistical challenges.

There are several ways that regulated parties can comply with the LCFS. In the California model, three compliance strategies are available. First, refiners can blend low-GHG fuels such as biofuels made from cellulose or wastes into gasoline and diesel. Second, refiners can buy low-GHG fuels such as natural gas, biofuels, electricity, and hydrogen. Third, they can buy credits from other refiners or use banked credits from previous years. In the EU’s design, producers may also gain credit by improving energy efficiency at oil refineries or by reducing upstream CO2 emissions from petroleum and natural gas production, for instance by eliminating flaring.

The LCFS is simple in concept, but implementation involves many details. The LCFS requires a system to record and verify the GHG emissions for each step of fuel production and distribution. California is using a “default and opt-in” approach, borrowed from a voluntary system developed in the United Kingdom, whereby fuels are assigned a conservative default value. In other words, the regulations estimate the carbon emissions associated with each fuel. The fuel producer can accept that estimate or provide evidence that its production system results in significantly lower emissions. This places the burden of measuring and certifying GHG emissions on the oil distributors, biofuel producers, and electricity generators.
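As a rough illustration of how a default-and-opt-in lookup might be organized, consider the Python sketch below. The fuel pathways, producer names, and intensity values are hypothetical placeholders, not numbers from the California regulation.

```python
# Minimal sketch of the "default and opt-in" approach described above.
# Default carbon intensities (gCO2e/MJ) are illustrative placeholders.

DEFAULT_CI = {
    "gasoline_conventional": 96.0,
    "corn_ethanol": 90.0,
    "cellulosic_ethanol": 50.0,
}

# Producers who document a cleaner pathway opt in with a lower, verified
# value; everyone else is assigned the conservative default.
opt_in_ci = {
    ("corn_ethanol", "producer_X"): 72.0,  # e.g., a biomass-powered plant
}

def carbon_intensity(fuel, producer=None):
    """Return the verified opt-in value if one exists, else the default."""
    return opt_in_ci.get((fuel, producer), DEFAULT_CI[fuel])

print(carbon_intensity("corn_ethanol"))                # default: 90.0
print(carbon_intensity("corn_ethanol", "producer_X"))  # opt-in: 72.0
```

The conservative default shifts the burden of proof onto producers, exactly as the prose above describes: a producer who wants credit for a cleaner pathway must measure and certify it.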

A major challenge for the LCFS is avoidance of “shuffling” or “leakage.” Companies will seek the easiest way of responding to the new LCFS requirements. That might involve shuffling production and sales in ways that meet the requirements of the LCFS but do not actually result in any net change. For instance, a producer of low-GHG cellulosic biofuels in Iowa could divert its fuel to California markets and send its high-carbon corn ethanol elsewhere. The same could happen with gasoline made from tar sands and conventional oil. Environmental regulators will need to account for this shuffling in their rule making. This problem is mitigated and eventually disappears as more states and nations adopt the same regulatory standards and requirements.

Perhaps the most controversial and challenging issue is indirect land-use changes. When biofuel production increases, land is diverted from agriculture to energy production. The displaced agricultural production is replaced elsewhere, bringing new land into intensive agricultural production. By definition, this newly farmed land was previously used for less-intensive purposes. It might have been pasture, wetlands, or perhaps even rainforest. Because these lands sequester a vast amount of carbon in the form of underground and aboveground roots and vegetation—effectively storing more than twice the carbon contained in the entire atmosphere—any change in land use can have a large effect on carbon releases.

If biofuel production does not result in land-use changes—for instance when fuel is made from crop and forestry residues—then the indirect land-use effects are small or even zero. But if rainforests are destroyed or vegetation burned, then the carbon releases are huge. In the more extreme cases, these land-use shifts can result in each new gallon of biofuel releasing several times as much carbon as the petroleum diesel fuel it is replacing. In the case of corn ethanol, preliminary analyses suggest that ramping up to meet federal RFS targets will add about 40% more GHG emissions per unit of energy. Cellulosic fuels would have a much smaller effect, and waste biomass, such as crop and forestry residues and urban waste, would have no effect.

The problem is that scientific studies have not yet adequately quantified the indirect land-use effect. One could ignore the carbon and other GHG releases associated with land diversion in calculating lifecycle GHG emissions, but doing so imputes a value of zero to this effect. That is clearly wrong and inappropriate. The prudent approach for regulators is to use the available science to assign an initial conservative value and then provide a mechanism to update these assigned values as the science improves. Meanwhile, companies are advised to focus on biofuels with low GHG emissions and minimal indirect land-use effects, fuels created from wastes and residues or from degraded land, or biofuels produced from algae and renewable hydrocarbons. These feedstock materials and lands, not intensively farmed food crops, should be the heart of a future biofuels industry.

A broader concern is the environmental and social sustainability of biofuels. Many biofuel programs, such as those in the Netherlands, the UK, and Germany, have adopted or are adopting sustainability standards for biofuels. These sustainability standards typically address issues of biodiversity, soil, air, and water quality, as well as the social and economic conditions of local communities and workers. They require reporting and documentation but lack real enforcement teeth. And none address effects on land and food prices and the market-mediated diversion of land to less sustainable uses. The effectiveness of these standards remains uncertain. New and better approaches are needed.

Those more concerned with energy security than with climate change might be skeptical of the LCFS. They might fear that the LCFS disadvantages high-carbon alternatives such as tar sands and coal liquids. That concern is valid, but disadvantaging does not mean banning. Tar sands and coal liquids could still be introduced on a large scale under an LCFS. That would require producers of high-carbon alternatives to be more energy efficient and to reduce the carbon emissions associated with production and refining. They could do so by using low-carbon energy sources for process energy and by capturing and sequestering carbon emissions. They could also opt for ways of converting tar sands and coal resources into fuels that facilitate carbon capture and sequestration. For instance, gasifying coal to produce hydrogen allows for the capture of almost all the carbon, because none remains in the fuel itself. In this way, coal could become an essentially zero-carbon option.

In a larger sense, the LCFS encourages energy producers to focus on efficiency and methods for reducing carbon. It stimulates innovation in ways that are in the public interest. Even with an LCFS policy in place, a region or nation might still produce significant quantities of fossil alternatives but those fuels would be lower carbon than otherwise, and they would be balanced by increasing quantities of other non-fossil fuels.

Going global

The principle of performance-based standards lends itself to adoption of a national or even international LCFS. The California program is being designed to be compatible with a broader program. Indeed, it will be much more effective if the United States and other countries also adopt it. Although some countries have already adopted volumetric biofuel requirements, these could be readily converted into an LCFS. It would require converting the volumetric requirements into GHG requirements. In the United States that would not be difficult because GHG requirements are already imposed on each category of required biofuels. For the EU programs, efforts are under way to complement their biofuel directive with an LCFS-like fuel-quality directive that would require a 10% reduction in GHG intensity by 2020 for transport fuels.

An important innovation of the California LCFS is its embrace of all transportation fuels. The U.S. and European RFS programs include only biofuels, including biogas. Although it is desirable to cast the net as wide as possible, there is no reason why all states and nations must target the same fuels. Indeed, the northeastern U.S. states are exploring the inclusion of heating oil in their LCFS.

Broader-based LCFS programs are attractive for three reasons. First, a broader program would make it easier to include fuels used in international transport modes, especially fuels used in jets and ships. Second, a broader LCFS would facilitate standardization of measurement protocols. At present, California is working with fuel-exporting nations to develop common methods for specifying the GHG emissions of fuels produced in those countries. The fuels of most relevance at this time are ethanol and biodiesel from Brazil, but tar sands from Canada will also be of interest. Third, the broader the pool, the greater the options available to regulated entities, and more choice means lower overall cost, because there will be a greater chance of finding low-cost options to meet the targets.

The ad hoc policy approach to alternative fuels has largely failed. A more durable and comprehensive approach is needed that encourages innovation and lets industry pick winners. The LCFS does that. It provides a single GHG performance standard for all transport-fuel providers, and it uses credit trading to ensure that the transition is accomplished in an economically efficient manner.

Although one might prefer more theoretically elegant policies such as carbon taxes and cap and trade, those instruments are not likely to be effective in the foreseeable future with transport fuels. They would not be sufficient to induce large investments in electric vehicles, plug-in hybrids, hydrogen fuel cell vehicles, and advanced biofuels.

The LCFS is amenable to some variation across states and nations, but standardization of the measurement protocol is necessary for the LCFS performance standard to be implemented and enforced fairly and reliably. The LCFS not only encourages investments in low-carbon fuels but also accommodates high-carbon fossil fuels, with strong incentives to produce them more energy efficiently and with low-carbon energy inputs. The enormous threat of global climate change demands a policy response that encompasses all viable options.

Recommended reading

California Air Resources Board, Low Carbon Fuel Standard Program, 2008. http://www.arb.ca.gov/fuels/lcfs/lcfs.htm

Alexander E. Farrell and Daniel Sperling, A Low-Carbon Fuel Standard for California, Part 1: Technical Analysis. Institute of Transportation Studies, University of California, Davis, Research Report UCD-ITS-RR-07-07, 2007.

Alexander E. Farrell and Daniel Sperling, A Low-Carbon Fuel Standard for California, Part 2: Policy Analysis. Institute of Transportation Studies, University of California, Davis, Research Report UCD-ITS-RR-07-08, 2007.

Timothy Searchinger, Ralph Heimlich, R. A. Houghton, Fengxia Dong, Amani Elobeid, Jacinto Fabiosa, Simla Tokgoz, Dermot Hayes, and Tun-Hsiang Yu, “Use of U.S. Croplands for Biofuels Increases Greenhouse Gases Through Emissions from Land Use Change,” Science 319, no. 5867 (2008): 1238–1240.


Daniel Sperling, a professor of civil engineering and environmental science and policy and founding director of the Institute of Transportation Studies (ITS) at the University of California, Davis, is co-author of Two Billion Cars: Driving Toward Sustainability (Oxford University Press, 2009). Sonia Yeh is a research engineer at ITS.

Science on the Campaign Trail

In November 2007, a group of six citizens decided to do something to elevate science and technology in the national dialogue. They created Science Debate 2008, an initiative calling for a presidential debate on science policy. They put up a Web site, and began encouraging friends and colleagues to sign a petition calling for the debate. Within weeks 38,000 scientists, engineers, and other concerned citizens had signed on. The American Association for the Advancement of Science (AAAS), the National Academies, and the Council on Competitiveness (CoC) joined as cosponsors, although Science Debate 2008 remained independent, financed by individual contributions and volunteer labor. Within months it grew to represent virtually all of U.S. science, including almost every major science organization, the presidents of over 100 universities, prominent corporate leaders, Nobel laureates, and members of Congress. All told, the signatory organizations represented over 125 million Americans, making it arguably the largest political initiative in the history of U.S. science.

The need could not have been clearer. Science and technology dominate every aspect of our lives and thus heavily influence all of our policy considerations. Yet although nearly every major challenge facing the nation revolves around science policy, and at a time when the United States is falling behind in several key measures, the candidates and the news media virtually ignored these issues.

Others noted this problem as well. The League of Conservation Voters analyzed the questions asked of the then-candidates for president by five top prime-time journalists—CNN’s Wolf Blitzer, ABC’s George Stephanopoulos, NBC’s Tim Russert, Fox News’ Chris Wallace, and CBS’s Bob Schieffer—who among them had conducted 171 interviews with the candidates by January 25, 2008. Of the 2,975 questions they asked, only six mentioned the words “climate change” or “global warming,” even though climate is arguably the largest policy challenge facing the nation. To put that in perspective, three questions mentioned UFOs.

Armed with their list of supporters, the Science Debate team pitched the story to hundreds of news outlets around the country. The blogosphere buzzed over the initiative, and ScienceDebate2008.com eventually rose into the top one-quarter of 1% of the most visited Web sites worldwide. By any measure, coming off the Bush administration’s fractured relationship with U.S. science, the tremendous number of prominent individuals publicly calling for a presidential science debate was news, at least to some news outlets. But while “netroots” coverage exploded and the foreign press picked up the story, not a single U.S. political news page and very few political blogs covered it. The idea of a science debate was being effectively shut out of the discussion by the mainstream press. The question was why.

The team investigated and identified a problem in U.S. news that goes beyond the fact that many news outlets are cutting their science sections. Even in outlets that still have one, editors generally do not assign political reporters to cover science stories, and science reporters don’t have access to the political pages. The business and economics beat and the religion and ethics beat have long since crossed this barrier onto the political page. But the science and technology beat remains ghettoized. Today, in an era when many of the biggest policy stories revolve around science, the U.S. press seems to be largely indifferent to science policy.

This situation tends to have an echo-chamber effect on candidates. Science Debate organizers secured broadcast partners in PBS’s NOW and NOVA and a venue at Philadelphia’s Franklin Institute. But the candidates responded that it wouldn’t work for their schedules. Tellingly, it did work for Barack Obama and Hillary Clinton to attend a “Compassion Forum” at Harrisburg’s Messiah College just days before the cancelled science debate, where, ironically, they answered questions about science. John McCain ignored both events.

Probing further, the Science Debate team learned that science was seen as a niche topic by the campaigns, and a presidential debate dedicated to science policy issues such as climate change, innovation, research, health care, energy, ocean health, stem cells, and the like was viewed as requiring extensive preparation and posing high risk for a limited return.

The tide turns

Science Debate 2008 wanted to test this assumption, so it partnered with Research!America and hired Harris to conduct a national poll. The results were astounding: Fully 85% of U.S. adults said the presidential candidates should participate in a debate to discuss key policy problems facing the United States, such as health care, climate change, and energy, and how science can help tackle them. There was virtually no difference across party lines. Contrary to the candidates’ assumptions, science is of broad concern to the public.

Next, Science Debate worked to reassure the campaigns that it was not out to sandbag one or another candidate by showing the candidates the questions in advance. The team culled the roughly 3,400 questions that had been submitted by supporters online into general categories and, bringing in the AAAS, the Academies, CoC, Scientists and Engineers for America, and several other organizations, developed “the 14 top science questions facing America.”

Armed with the results of the national polling, the continuing stream of prominent new supporters, and the 14 questions, the Science Debate team went back to the two remaining candidates and asked them to answer the questions in writing and to attend a televised forum.

Although the candidates still refused to debate, instead attending yet another faith forum at Saddleback Church in California, Science Debate 2008 was able to obtain written answers from both candidates. The Obama campaign tapped the expertise of his impressive campaign science advisory team to help him answer. The McCain campaign relied on its brilliant and multitasking senior domestic policy advisor, the economist and former Congressional Budget Office director Douglas Holtz-Eakin.

Once the answers were in hand, the Science Debate initiative was finally “news” from a political editor’s perspective. It was providing the candidates’ positions in their own words on a wide variety of substantive issues, and suddenly the floodgates opened. In the final month of the campaigns, reporters were looking for ways to differentiate the candidates, and political reporters started taking apart the nuances in the answers’ rhetoric. Obama, for example, expressly talked about a variety of international approaches to addressing climate change, and reporters noted that McCain remained silent on international issues and steered far away from the Kyoto Protocol.

The responses highlighted other, broader differences between the candidates. Senator Obama stressed his plans to double the federal agency research budgets, whereas Senator McCain stressed further corporate deregulation and tax credits to stimulate more corporate R&D, coupled with big money prizes to reward targeted breakthroughs. This philosophical difference carried through in answers on energy policy, education, innovation, and other areas. Senator Obama’s team further refined his answers into his official science policy platform. Senator McCain’s answer to the stem cell question came briefly into play in the race when his running mate, Governor Sarah Palin, contradicted it in an interview with James Dobson and was subsequently described as “going rogue.” In another answer and followup interview, Senator McCain claimed to have been responsible for the development of wi-fi and Blackberry-like devices, which caused a minor tempest. Senator Obama made news when 61 Nobel laureates, led by Obama science advisory team leader Harold Varmus, signed a letter in support of his campaign, and the answers of both candidates to the questions of Science Debate 2008 served as the basis for a letter signed by 178 organizations urging the winner to appoint a science advisor by January 20 and elevate the post to cabinet level.

References to the candidates’ science policy views eventually appeared in almost every major U.S. paper and in a wide variety of periodical and broadcast outlets across the country and around the world. All told, Science Debate 2008 generated over 800 million media impressions and was credited with elevating the level of discourse. No matter which candidate one supported, this level of discussion is healthy, some might even say critical, for a 21st-century United States.

Looking forward, much work remains to be done to repair America’s fractured relationship with science, and the Science Debate initiative and others like it should continue. Scientists must participate in the national dialogue, which requires a plurality of voices to be successful. President-elect Obama has laid out an ambitious science policy focused on some of the greatest challenges facing the nation, but harsh economic times and continued ideological opposition to science may make implementing that policy difficult. To succeed, the president will need the support of Congress, and members of Congress, in turn, the support of their constituents. In such an environment, the public’s understanding and appreciation of science policy will be important to the nation’s success, and the involvement of scientists will be critical in that process.


Shawn Lawrence Otto is a cofounder and chief executive officer of Science Debate 2008. Sheril Kirshenbaum is a cofounder of Science Debate 2008 and a marine biologist at Duke University.

Overcoming Stone Age Logic

Through a remarkable manipulation of limited knowledge, brute force, and an overwhelming arrogance, humans have shaped a world that in all likelihood cannot sustain the standard of living and quality of life we have come to take for granted. Our approach to energy, to look at only one sector, epitomizes our limitations. We remain fixated on short-term goals and a simplistic model governed by what I call “Stone Age logic”: We continue to dig deep holes in the ground, extract dark substances that are the remains of prehistoric plants and animals, and deliver this treasure to primitive machines for combustion to maintain the energy system on which we base our entire civilization. We invest immense scientific and technological effort to find it more efficiently, burn it more cleanly, and bury it somewhere we will never have to see it again within a time horizon that might concern us. Find it, burn it, bury it. Our dependency on fossil fuels would be worthy of cavemen.

Fortunately, we seem to be slowly moving out of the final decades of the Stone Age, and discussions about whether our planet will be able to continue to sustain human societies at our present scale are no longer limited to environmentalists and apocalyptic religious groups. Prominent corporate, government, academic, and environmental leaders gathered during September 2008 in Washington to consider some of the most serious challenges facing humanity in a summit convened by Arizona State University. Among the host of concerned leaders were Minnesota governor Tim Pawlenty; Ford Motor Company executive chairman Bill Ford Jr.; Wal-Mart chairman Rob Walton; John Hofmeister, former president of Shell Oil and now president of Citizens for Affordable Energy; Massachusetts congressman Edward Markey, chair of the U.S. House Select Committee on Energy Independence and Global Warming; Michigan congressman Fred Upton, a member of the House Energy and Commerce Committee; and Frances Beinecke, president of the Natural Resources Defense Council.

Although there was broad agreement at the summit that Washington has abandoned its traditional environmental leadership role, leaving us reliant on a patchwork quilt of local or regional-scale solutions from cities and states, there was nevertheless a recognition that informed and carefully considered federal efforts will be essential if we are to meet our societal needs within the limits of our environment. However well-intentioned the motivation for immediate action may be, I would argue that without some grounding of public policy in the discourse of sustainability, we are likely to dig ourselves deeper into the holes we have already dug.

Sometimes mistakenly equated with an exclusive focus on the environment, the term “sustainability” tends to be used so casually that we risk diluting its power as a concept. Its implications are far broader than the environment, embracing economic development, health care, urbanization, energy, materials, agriculture, business practices, social services, and government. Sustainable development, for example, means balancing wealth generation with continuously enhanced environmental quality and social well-being. Sustainability is a concept of a complexity, richness, and significance comparable to other guiding principles of modern societies, such as human rights, justice, liberty, and equality. Yet, as is obvious from our failure to embrace the concept in our national deliberations, sustainability is clearly not yet a core value in our society or any other.

Although the general public and especially our younger generations have begun to think in terms of sustainability, the task remains to improve our capacity to implement advances in knowledge through sound policy decisions. We have yet to coordinate transnational responses commensurate with the scale of looming problems such as global terrorism, climate change, or possible ecosystem disruption. Our approach to the maddening complexity of the challenges that confront us must be transformative rather than incremental and will demand major investment from concerned stakeholders. Progress toward sustainability will require the reconceptualization and reorganization of our ossified knowledge enterprises. Our universities remain disproportionately focused on perpetuating disciplinary boundaries and developing increasingly specialized new knowledge at the expense of collaborative endeavors targeting real-world problems. If we in the academic sector hope to spearhead the effort, we will need to drive innovation at the same time as we forge much closer ties to the private sector and government alike.

The summit in Washington is heartening evidence that such collaboration is possible. The involvement of corporate visionaries such as Bill Ford and Rob Walton as well as government leaders from both sides of the aisle represents an expanded franchise not only of individuals but of institutional capabilities for response. But more flexibility, resilience, and responsiveness will be required of all institutions and organizations. Society will never be able to control the large-scale consequences of its actions, but the realization of the imperative for sustainability positions us at a critical juncture in our evolutionary history. Progress will occur when new advances in our understanding converge with our evolving social, cultural, economic, and historical circumstances and practices to allow us to glimpse and pursue new opportunities. To realize the potential of this moment will require both a focused collective commitment and the realization that sustainability, like democracy, is not a problem to be solved but rather a challenge that requires constant vigilance.


Michael M. Crow is president of Arizona State University, where he also serves as professor of public affairs and Foundation Leadership Chair. He is chair of the American College and University Presidents Climate Commitment.

Practical Pieces of the Energy Puzzle: Reduce Greenhouse Gases Profitably

A regulatory system that rewards energy companies for innovations that boost efficiency can appeal to environmentalists and industry alike.

After the Senate’s failed effort to pass the Lieberman-Warner climate change bill, Congress could conclude that reducing greenhouse pollution is a political impossibility—the costs too high, the benefits too uncertain, the opposition too entrenched. But that would ignore a convenient truth: Technology already exists to slash carbon emissions and energy costs simultaneously. With a little political imagination Congress could move beyond Lieberman-Warner and develop an energy plan that satisfies both pro-business and pro-environment advocates.

The Lieberman-Warner bill would create a cap-and-trade system to govern carbon emissions from power plants and major industrial facilities. What the bill does well is to limit greenhouse gas emissions to 19% below the 2005 level by the year 2020. The bill further demands a 71% reduction by 2050. Some argue that these goals should be stricter or looser, but the legislation does at least set clear targets and timetables.

Then the legislation becomes needlessly complicated. In order to provide “transition assistance” (or what might be described as bribes for the unwilling), the bill offers massive subsidies to utilities, petrochemical refiners, natural gas distributors, carbon dioxide (CO2) sequesterers, state governments updating their building codes, and even Forest Service firefighters wanting to prepare for climate changes that spark more blazes. The gifts may garner political support from key constituencies, but they induce little clean energy generation. The same criticism can be levied against a carbon tax, even one that returns some of the receipts to taxpayers and spends the rest researching emissions-mitigating technologies.

Nearly 70% of U.S. greenhouse gas emissions come from generating electricity and heat, whereas only 19% come from automobiles. Electricity generation is a particular problem because only one-third of the energy in the fuel used to produce electricity is converted to useful electric power. Enhancing the efficiency of electric generation is essential in the battle against global warming, and making use of the wasted thermal energy produced in power plants is the key to improving efficiency. The technology to capture and use that excess thermal energy already exists. The nation needs a policy that encourages every electricity generator and every industrial user of thermal energy to follow this approach. An elegant market-oriented approach that avoids the quicksand of government picking technology winners would be a system founded on output-based allocations of carbon emissions.

First, each producer of electricity and thermal energy would obtain initial allowances equal to the previous year’s national average CO2 emissions per delivered megawatt-hour of electricity and per Btu of thermal energy.

Second, every plant that generates heat and/or power would be required to obtain total allowances equal to its CO2 emissions. As with the trading system in the Lieberman-Warner bill, high-carbon facilities would need to purchase extra allowances from clean plants at market prices.

Third, these allowances would be cut every year to ensure total emission reductions. Under this output allocation system, companies using clean energy such as wind turbines or industrial waste-energy–recovery plants can sell their pollution allowances, thus improving their economic position. Combined heat and power units, by earning allowances for both electric and thermal output, would have spare allowances to sell, increasing their financial attractiveness. Improving efficiency at any energy plant would lower emissions (and fuel costs) without lowering output, thereby saving allowance purchases or creating allowances to sell. In contrast, a dirty power plant that did not increase its efficiency would have to buy allowances.

Output-based allocations create carrots and sticks: additional income for low-carbon facilities that sell allowances, and additional costs for high-carbon facilities that must purchase allowances. Lieberman-Warner or a carbon tax, in contrast, imposes a cost on polluters but provides no direct incentive for the use of clean energy sources or to companies like mine that boost energy output and efficiency by merging electric and thermal energy production.

Establishing such a system is relatively simple. Measurement and verification for electric and thermal output and CO2 are easy, since all plants have fuel bills and electric meters, and thermal output can be calculated. Continuous emission meters, moreover, are now affordable and proven. Regulators simply need to require energy plants to submit annual audited records, along with allowances covering actual emissions of each pollutant.

How it works

Each electric producer would receive initial allowances of 0.62 metric ton of CO2 emissions per delivered megawatt-hour of electricity, which is the 2007 average. Each thermal energy producer would obtain initial allowances of 0.44 metric ton of CO2 emissions per delivered megawatt-hour of thermal energy, roughly the 2007 average.

At the end of each year, a plant’s owner must turn in allowances for each pollutant equal to its actual emissions. Consider CO2. Every producer of thermal energy and/or electricity would keep track of all fossil fuel burned in the prior year and calculate the total CO2 released. Each plant also would record the megawatt-hours of electricity produced, subtracting the amount lost in transmission (line losses), and record each unit of useful thermal energy produced and delivered. The plant would automatically earn the scheduled allowance of CO2 per megawatt-hour and per unit of thermal energy, but it must turn in allowances for every ton of CO2 actually emitted in the prior year.

The allowance credits would be fully tradable and interchangeable between heat and power. Note that efficiency improvements reduce the burning of fossil fuel and thus reduce carbon emissions, but they do not decrease the plant’s output, and thus would not decrease total output allowances. Any production of heat or power without burning additional fossil fuel would earn an emission credit but produce no added emissions, which enables the producer to sell the allowance and improve the profitability of cleaner energy.
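To make the bookkeeping concrete, the short sketch below applies the settlement rule just described. It is a minimal illustration, not part of the proposal: the allowance rates are the 2007 averages cited above, while the $20-per-ton price, the example plant, and the function name are assumptions.

```python
# Minimal sketch of the year-end allowance settlement described above.
# Allowance rates are the 2007 averages cited in the text; the $20-per-ton
# allowance price and the example plant are illustrative assumptions.

ELEC_RATE = 0.62    # t CO2 allowance earned per delivered MWh of electricity
THERM_RATE = 0.44   # t CO2 allowance earned per delivered MWh of useful heat

def net_allowances(elec_mwh, therm_mwh, co2_emitted_tons):
    """Allowances earned for useful output minus allowances owed for actual
    emissions; a positive balance can be sold, a negative one must be bought."""
    return elec_mwh * ELEC_RATE + therm_mwh * THERM_RATE - co2_emitted_tons

# A waste-energy recovery plant delivering 1 MWh without burning added fossil
# fuel earns a 0.62-ton surplus, worth $12.40 at $20 per ton.
surplus = net_allowances(elec_mwh=1.0, therm_mwh=0.0, co2_emitted_tons=0.0)
print(f"Surplus: {surplus:.2f} t CO2, worth ${surplus * 20:.2f} per MWh")
```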

Heat and power producers, of course, have many options. By increasing efficiency, a company can reduce CO2 emissions, save fuel, reduce purchases of allowances, or add revenue from sold allowances. By installing a combined heat and power unit sized to the facility’s thermal load, a company would earn additional allowances, providing revenues above the value of the saved fuel.

Consider a typical carbon black plant that produces the raw material for tires and inks. It currently burns off its tail gas, producing no useful energy service. If the owner built a waste energy recycling plant to convert the flare gas into electricity, it would earn 0.62 ton of CO2 allowance for every delivered megawatt-hour. A typical carbon black plant could produce about 160,000 megawatt-hours per year of clean energy, earning roughly 99,000 tons of allowances. At a value of $20 per ton of CO2, the plant would earn about $2 million per year from the output allowance system.

Now consider the options for a coal-fired electric-only generator that emits 1.15 tons of CO2 per delivered megawatt-hour. It receives only 0.62 tons of CO2 allowance and must purchase an additional 0.53 tons, costing $10.60 per delivered megawatt-hour (with $20-per-ton CO2). To reduce carbon emissions and save money, it could invest in devices to improve the plant’s efficiency and lower the amount of coal burned per megawatt-hour. Second, it could entice a thermal-using factory or commercial building to locate near the power plant and sell some of its presently wasted thermal energy, earning revenue from that sale and added CO2 allowances for the useful thermal energy. Third, it could invest in a wind farm or other renewable energy production facility and earn CO2 credits. Fourth, it could pay for an energy recycling plant to earn added allowances. Fifth, it could purchase allowances. Or, sixth, it could consider operating the plant for intermediate instead of base load. All of these options reduce total U.S. CO2 emissions.
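The same arithmetic shows what the allowance bill looks like for this coal plant and what an efficiency gain is worth. In the sketch below, the 1.05-ton improved emission rate is an assumed figure for illustration; the other numbers come from the text.

```python
# Allowance purchases for the coal-fired generator described above, before
# and after an illustrative efficiency improvement. The 1.05 t/MWh figure
# is an assumption; the other numbers come from the text.
ALLOWANCE, PRICE = 0.62, 20     # t CO2 granted per delivered MWh; $ per ton
for emissions_per_mwh in (1.15, 1.05):
    cost = (emissions_per_mwh - ALLOWANCE) * PRICE
    print(f"{emissions_per_mwh:.2f} t/MWh -> ${cost:.2f} per delivered MWh")
# 1.15 t/MWh costs $10.60 per MWh, as in the text; 1.05 t/MWh costs $8.60.
```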

Rather than collect and distribute trillions of dollars, Congress would have only two key tasks: to set fair rules for calculating useful output and to establish the decline rate for the allowances per unit of useful output. Current scientific thinking suggests that we must reduce total carbon emissions by 70% or more over the next 50 years. If initial output allowances are set equal to average outputs in 2006 for each megawatt-hour of electricity and useful thermal energy, allowances would need to decline by 2.38% per year for the next 50 years in order to reach the 70% reduction. If there were no increase in the amount of useful energy consumed for the next 50 years, this reduction would cause CO2 emissions to drop to 30% of 2006 emissions. Of course, if the nation’s total energy use increases, allowances would have to decline more rapidly to reach the 2050 goal.
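The decline rate cited above is easy to verify: cutting the per-unit allowance by 2.38% a year for 50 years leaves roughly 30% of the baseline, which is the 70% reduction target. A one-line check:

```python
# Compound-decline check of the allowance schedule described above.
remaining = (1 - 0.0238) ** 50
print(f"Allowance remaining after 50 years: {remaining:.1%}")   # about 30.0%
```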

Advantages

Output-based allowances are simple, keep government from picking technology (which is always a bad bet), allow maximum flexibility for the market to lower fossil-fuel use, and encourage profitable greenhouse gas reduction. As with cap-and-trade systems, output-based allowances can be ratcheted down to ensure greenhouse gas reductions. Consider the faults of other approaches.

A carbon tax requires legislators to determine the precise price per ton of CO2 emissions that would cause the desired reduction of fossil-fuel consumption. Congress then must decide how to spend the collected money, creating an atmosphere ripe for mischief.

A cap-and-trade system that allocates initial allowances to existing emitters, as was done with sulfur emissions in 1990, rewards pollution rather than clean energy. A new combined heat and power facility, although emitting half as much CO2 per megawatt-hour as do older plants, would receive no baseline allowances, be required to purchase carbon allowances for all CO2 emissions, and then would compete with an old plant that was granted sufficient allowances to cover all emissions. Such an allocation approach is favored by owners of existing plants, for obvious reasons, but it retards efficiency.

A system of allowances per unit of input fuel, such as the Clean Air Act’s approach toward criteria pollutant emissions, pays no attention to energy productivity and gives no credit for energy efficiency. In contrast, an output-based allowance system rewards every approach that emits less CO2 per megawatt-hour, regardless of technology, fuel, location, or age of plant. Thus, the output allowance approach will produce the lowest-possible-cost CO2 reductions.

An output allowance system is quintessentially American, solidly based on market forces and rewarding power entrepreneurs for “doing the right thing.” It leverages the U.S. innovative and creative spirit by encouraging all actions that lower greenhouse gas emissions per unit of useful output and penalizing above-average pollution per unit of output. The Lieberman-Warner approach, in contrast, has government picking winners and distributing up to $5.6 trillion to a hodgepodge of political interests.

The output-allowance system, moreover, sends powerful signals to every producer of heat as well as every producer of power. The total money paid for allowances exactly matches the total money received from the sale of allowances, so the average consumer pays no added cost for electricity. The impact on individual consumers will vary, however, and will be higher for those with few current alternatives to dirty fossil-fuel plants. The market decides the clearing price of the allowances, and every producer, regardless of technology, fuel, age of plant, or location, receives the same price signals.

Output-based allocations could also improve several provisions of the Clean Air Act, which has achieved impressive results but has blocked investments in energy productivity. The current approach, crafted in 1970 when global warming was not yet a concern, gives existing energy plants the right to continue dirty operations but forces new facilities to achieve significantly lower emissions. By forcing any plant that undergoes significant upgrading to become subject to stricter emission standards, the law’s New Source Review has effectively blocked investments to increase efficiency.

A transition away from a carbon-intensive economy will doubtless hurt some businesses, particularly big polluters. But others will prosper. Rather than having environmentalists focusing on the moral need to reduce pollution and industrialists responding that change will hurt the economy, a better way to structure the climate change debate is to ask how the nation can profitably reduce greenhouse gas emissions. On this point, environmentalists and industrialists should be able to find common ground. Output-based allocations, by unleashing market forces and sending clear signals, can muster such a political agreement as well as stimulate an investment boom in increased energy productivity.


Richard Munson is senior vice president of Recycled Energy Development (www.recycled-energy.com) and author of From Edison to Enron: The Business of Power and What It Means for the Future of Electricity (Praeger, 2005).

Practical Pieces of the Energy Puzzle: Energy Security for American Families

Helping moderate-income households invest in energy-efficient cars, appliances, and home retrofits would benefit financially struggling families as well as the U.S. economy.

In July 2008, Americans were paying $4.11 per gallon of gasoline—nearly three times the price six years earlier, according to the Energy Information Administration (EIA). Most people have felt the pinch of higher energy prices, but those hurt the most have been moderate-income families who struggle to buy gas for their cars and to heat and cool their homes. Although prices have now fallen because of the current global economic problems, the longer-term trend of rising energy prices will continue. Many of the 70 million U.S. households making less than $60,000 a year will find it increasingly difficult to cope.

The United States needs a strategy to help moderate-income households adjust to the new reality of higher energy prices. A proposal I call the Energy Security for American Families (ESAF) initiative would give moderate-income households the power to control their long-term energy costs, largely by improving household energy efficiency. Specifically, the initiative would offer a combination of vouchers, low-interest loans, and market-based incentives to help families invest in energy-efficient cars, homes, and commutes. These investments would allow workers to save money, year after year, gaining economic security.

Energy costs are a drain on the economy, leading to increases in prices and unemployment, and most of the money spent on oil leaves the U.S. economy. Channeling money toward investments in energy efficiency will not only help families cut costs but also create jobs and reduce energy demand, pollution, and greenhouse gases. The ESAF initiative represents a long-term investment in the health and resilience of the U.S. economy.

After enjoying more than two decades of relatively cheap energy, U.S. consumers have struggled to pay monthly gasoline bills that rose (in constant dollars) from $21 billion in July 2003 to $50 billion in July 2008, according to the Oil Price Information Service. Increased energy prices have hurt the economy as a whole, squeezing the credit and housing markets, depressing auto sales, and raising unemployment. They have had a negative multiplier effect on the economy, increasing inflationary pressures and shifting spending, so that money once spent on consumer goods is now going to pay mostly non-U.S.-based oil producers. Growing global demand for energy, coupled with a cramped supply infrastructure, means that volatile energy prices are here to stay, and they require a thoughtful policy response.

Hit hardest by high energy prices are U.S. households making less than $60,000 a year. These people spend a higher percentage of their income on energy than do wealthier Americans, and a lack of capital limits their ability to reduce the amount of energy they consume. Transportation, including vehicle costs, eats up about a fifth of the typical U.S. household budget, but for households making $20,000 to $50,000 a year, the total cost of transportation may top 30%, according to a 2006 study of 28 metropolitan areas by the Center for Neighborhood Technology. Part of the reason for this disparity is that many moderate-income people find cheaper housing in exurban areas far from their workplaces.

The rapid increase in gasoline prices hit this group disproportionately hard. In 2006, households making $15,000 to $40,000 a year spent 9% of their income on gasoline (more than double the national average of 4%); by the summer of 2008, they were spending between 10% and 14% of their income on gasoline alone, according to the Bureau of Labor Statistics’ Consumer Expenditure Survey and price figures from the EIA. For rural families, who drive nearly 10,000 miles a year more than urban households do, the cost is even higher.

On the home front, low- and moderate-income families are again at an energy disadvantage. Poor insulation and old equipment cause lower-income families to spend more per square foot to heat their homes than middle-income families, according to the EIA. In the 7% of U.S. homes heated with oil, moderate-income families can be at a disadvantage when purchasing fuel. Higher-income families are often able to hedge their spending on heating oil by locking in prices in advance, whereas struggling families are at the mercy of the market, buying when their tanks are empty.

Higher fuel prices are reducing the standard of living for these U.S. families. A 2008 survey by the National Energy Assistance Directors Association (NEADA) found that 70% of low- and moderate-income families said that energy prices had caused them to change their food-buying habits, and another 30% said that they had cut back on medicine. Utilities have recently become more aggressive in collecting unpaid bills. Between May 2007 and May 2008, an unprecedented 8% of U.S. households had their utilities cut off for nonpayment, according to NEADA.

Increasingly, moderate-income families find themselves in a Catch-22: Despite being squeezed by high energy costs, they are unable to reduce the amount of energy they use. Often living paycheck to paycheck, they lack the capital to invest in a more efficient vehicle or furnace, or in a home closer to their work, even when they know it would ease their monthly budget.

The credit crisis has added to their troubles by further limiting their ability to borrow money. For example, the sub-prime auto lending market is now experiencing its highest default rates in 19 years, and lenders are cutting back on auto loans. People who can’t qualify for loans are increasingly resorting to buying cars at “buy here, pay here” lots, where some 10,000 dealers nationwide charge interest of 25% or more and may impose additional finance charges that push the total much higher.

Another component of the crisis is the change in the pricing of fuel-efficient vehicles. In the past, used economy cars were relatively inexpensive. But the increasing size of U.S. vehicles during the past decade, combined with high gasoline prices, has changed the used car market. Used fuel-efficient cars are now relatively expensive compared to gas guzzlers, which may be the most affordable cars for lower-income buyers. The National Automobile Dealers Association estimates that every $1 increase in the price of gas deflates the resale value of large pickups by $2,200 and increases the resale value of smaller cars by $980. This cruel trick of the market means that lower-income families, unable to spend much on their vehicles, may be forced to spend even more of their income on gasoline. Although economically rational decisions regarding the purchase of an automobile, commute length, and home energy efficiency may be options for those in higher-income brackets, moderate-income households do not have the same range of choices or access to capital.

For most, going without a car is not an option. Nine out of ten U.S. workers have cars, but for low-wage workers, access to a car can determine their economic fate. Owning a reliable vehicle has been shown to matter more than earning a GED in helping high school dropouts get and keep a job, and on average, those with cars made $1,100 more per month than those without, according to a 2003 study by Kerry Sullivan for the National Center for the Study of Adult Learning and Literacy.

The problem with conventional fixes

The energy crisis facing moderate-income families has three components: These families are more dependent on energy than are wealthier families; increased energy costs eat up a higher percentage of their income; and high energy costs threaten their economic stability and standard of living. Market forces have exacerbated the first two problems, neither of which the government has addressed. The government has attempted to address the third problem through direct or indirect emergency energy payments, but existing government programs are stretched beyond their capacity to deliver emergency funding.

The Low Income Home Energy Assistance Program is the main federal program providing emergency funds for heating for low-income families. It has been funded at $5.1 billion for FY 2009, but this will not be enough to meet all requests. Around the country, needs have risen dramatically with rising energy prices and the economic downturn. In Nevada, for example, applications for assistance were up 79% in 2007.

Proposed solutions to alleviate the pain of high energy costs have fallen short. Republicans have suggested gas tax holidays, whereas Democrats have favored $1,000 subsidy checks. Neither addresses the underlying problems facing the families disproportionately affected by volatile energy prices. Gas tax holidays encourage more gasoline use and have been shown to create larger profits for gasoline marketers and minimal price reductions for buyers. Stimulus checks temporarily ease family finances, but they don’t help families change their consumption or spending habits. Early studies of how families spent the $600 tax rebate in 2008 reveal that they spent more than half on gasoline, food, and paying down credit card debt. These short-term measures also strike many voters as gimmicky, election-year ploys. At the very least, they are effectively government overrides of market forces that may actually delay the kind of investment and behavioral changes necessary to cope with higher energy costs in the long run.

The only way to overcome the unique energy disadvantages moderate-income families face is to help them invest in energy-efficient cars, appliances, and home retrofits. Reducing energy consumption pays for itself in energy savings and by making homes more comfortable. For about $2,800, the Department of Energy’s (DOE’s) Weatherization Assistance Program seals air leaks, adds insulation, and tunes and repairs heating and cooling equipment to reduce household heating energy consumption by an average of 23%, for a savings of $413 in heating and cooling costs the first year. Despite the program’s success, it has been poorly funded during its 30-year lifetime, reaching about 5 million homes out of the 35 million the DOE estimates are eligible. Of these, DOE estimates that 15 million homes owned by low-income families would benefit from this retrofit.

Investing in more efficient appliances offers further savings. For example, a programmable thermostat, which costs about $150, has a payback period of less than a year. A $900 flame-retention burner saves about $300 per year. Replacing a pre-1980 refrigerator with an Energy Star model can save more than $200 in electricity annually. Nationwide, pilot energy-efficiency programs have decades of experience, reducing energy bills by more than 20%. In California and New York, efficiency programs save families an average of $1,000 and $600 a year, respectively. These savings act like a stimulus program, year in and year out.
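As a rough sanity check on these figures, the simple-payback arithmetic is shown below. The costs and savings are the ones cited above (the weatherization numbers come from the preceding paragraph); the calculation ignores financing costs and future energy-price changes.

```python
# Simple-payback arithmetic for the efficiency measures cited above.
# Costs and annual savings are the figures given in the text; the
# calculation ignores financing costs and future energy-price changes.

def simple_payback_years(cost, annual_savings):
    return cost / annual_savings

print(f"Weatherization ($2,800, saves $413/yr): "
      f"{simple_payback_years(2800, 413):.1f} years")          # ~6.8 years
print(f"Flame-retention burner ($900, saves $300/yr): "
      f"{simple_payback_years(900, 300):.1f} years")           # 3.0 years
```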

Whereas energy spending is a drain on the economy, yielding fewer jobs than other types of spending, every dollar spent on energy efficiency returns two dollars in benefits to the state, according to the California Public Utilities Commission. According to estimates by the DOE’s weatherization program, every dollar invested produces $2.72 in savings and benefits, and every $1 million invested creates 52 direct jobs and 23 indirect jobs. Residents also see other, less-tangible returns, including cleaner air and less demand on the power grid, leading to fewer brownouts.

Helping working families reduce their dependence on fossil fuels is a good investment strategy for the United States. Moreover, moderate-income families appear to be willing to adopt energy-efficient and energy-saving habits: They take public transit at two to four times the rate of more affluent families. They also report closing off parts of their homes and keeping their living spaces either hotter or cooler than they feel is safe, according to a survey by NEADA. Thus, targeting this group of households for energy-efficiency investment may yield large financial and social dividends, as well as immediate and significant reductions in energy use and carbon dioxide emissions.

Policy specifics

The centerpiece of the ESAF initiative is a federal government-guaranteed loan program that would enable qualified lenders to make low-interest loans to moderate-income families for the purchase of energy-efficient autos, appliances, and home renovations. In addition, a system of vouchers and state-based incentives would be used to influence purchasing decisions. To create flexible transportation options beyond private cars, the initiative would reward those who don’t drive their cars to work with a yearly voucher and would provide seed money to the public and private sectors to develop alternative transit programs.

The target of these programs should be families or multiple-person households earning $60,000 per year or less. However, in the interests of geographic fairness, because the costs of living are higher in some parts of the country than others, the cutoff point could be raised to $75,000. Vouchers and state-based “nudges” could be tailored to reach certain income levels such as households of two or more people earning less than $60,000.

Automobile vouchers and loans. Private cars and trucks consume 18% of the energy used in the United States and the majority of the petroleum that is burned. The average fuel economy for new cars and trucks is now just 20 miles per gallon (mpg). The fastest and easiest way to reduce the consumption of petroleum right now is to remove the vehicles with the worst gas mileage from the road and replace them with more efficient cars. Toward that end, the ESAF initiative would offer a $1,000 voucher, low-interest auto loans, and state-run “clunker credit” programs to help families buy a car that achieves 30 mpg or more. This sort of government investment in private cars is far from unprecedented. The $3,150 tax credits offered to buyers of Toyota Prius hybrids were essentially rewards to well-off buyers, many with incomes of $100,000 or above.

The cornerstone of the proposed auto program is very low-interest loans, backed by a government guarantee but provided by private lenders, for cars that achieve 30 mpg or more. Loans would be similar to the Small Business Administration’s 7(a) loans, with the federal government offering a guarantee on most of the value of the loan, thus reducing the risk to authorized lenders. Funds could be directed to favored lenders, such as credit unions, which have a track record of making auto loans to moderate-income car buyers. At least 8 million low- and moderate-income households already receive auto loans yearly, according to an Aspen Institute study, but most pay far higher interest rates. The standard auto loan rate is about 6%, but the sub-prime rate that is often the only option for less-affluent borrowers is usually above 17%. Because loans in the program would be guaranteed by the federal government, interest rates could be as low as 2% APR for a loan of up to $15,000.
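The difference between the guaranteed 2% rate and a typical sub-prime rate is easy to illustrate. The sketch below assumes a five-year term on the maximum $15,000 loan, a term the proposal does not specify.

```python
# Monthly payments on a $15,000 auto loan at the 2% guaranteed rate versus
# a typical 17% sub-prime rate. The five-year term is an assumption.

def monthly_payment(principal, apr, years):
    r = apr / 12                       # monthly interest rate
    n = years * 12                     # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

for apr in (0.02, 0.17):
    pay = monthly_payment(15_000, apr, 5)
    interest = pay * 60 - 15_000
    print(f"{apr:.0%} APR: ${pay:,.0f} per month, ${interest:,.0f} in interest")
# Roughly $263/month and $775 in interest at 2%, versus about $373/month
# and $7,400 in interest at 17%.
```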

Qualifying for a loan would be easy. Buyers could apply for the loans online and receive a notice of financing from a local bank or credit union as well as a list of cars eligible for purchase or trusted dealers in their area. Some of the country’s 8,500 credit unions already offer similar services that could be expanded. The loans would include clear rules to discourage predatory lending or sales. For example, used cars would not be financed at more than Blue Book value. The easy availability of low-cost capital may in itself discourage some predatory lending.

The ESAF initiative would also offer money to states to administer clunker-credit programs. Many states, including Texas and California, already operate such programs, which pay owners of old cars to turn them in to salvage yards, where they are dismantled. Texas has successfully offered payments of up to $3,500 per car as part of a pollution-abatement program, and similar programs are in place in Virginia, Colorado, Delaware, and Illinois. Combined with the low-interest loan program, a clunker-credit program would be an effective way of removing less-efficient and dirtier cars from the road and leading buyers to make a leap up in fuel efficiency. The advantage of state implementation is that the states would be able to adjust to local market conditions and be creative in finding the best mix of carrots and sticks.

A new car can dramatically improve the finances and lives of working families. The Bonnie CLAC (Car Loans and Counseling) auto loan program in New Hampshire has helped a thousand drivers obtain lower-interest loans for new cars, reducing their auto payments and maintenance costs. For some households, the savings in fuel have been enormous: One couple made a daily 130-mile commute in a 1998 Ford Explorer with a fuel efficiency of 10 mpg. The higher-mileage Honda Civic they bought to replace it reduced their monthly spending on gasoline from $800 to $200.

Relatively small shifts in market behavior could have a profound effect on U.S. energy consumption. For example, the scrap rate for light trucks, sport utility vehicles, and vans is now around 5% a year. Bumping that to 8% (roughly 8 million vehicles a year) and encouraging 75% of those households to buy a 30-mpg vehicle would reduce U.S. gasoline consumption by 3.33 billion gallons a year. On a macro level, the U.S. economy would avoid spending $10 billion on fuel (at a gasoline price of $3 a gallon). The program would also assure automakers that there would be long-term demand for fuel-efficient vehicles, creating a market incentive for them to build more vehicles with higher fuel economy than current standards require.
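A back-of-the-envelope version of that estimate lands in the same range. The retired vehicles' fuel economy (15 mpg) and annual mileage (15,000 miles) in the sketch below are assumed values, not figures from the proposal.

```python
# Rough reconstruction of the savings estimate above. The 15-mpg fuel
# economy of the retired vehicles and the 15,000 annual miles are
# assumptions; the other figures come from the text.
vehicles = 0.75 * 8_000_000            # 75% of ~8 million vehicles scrapped per year
miles, old_mpg, new_mpg = 15_000, 15, 30
gallons_saved = vehicles * (miles / old_mpg - miles / new_mpg)
print(f"{gallons_saved / 1e9:.1f} billion gallons a year, "
      f"${gallons_saved * 3 / 1e9:.0f} billion at $3 per gallon")
# ~3.0 billion gallons, in line with the 3.33 billion cited above.
```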

Home efficiency vouchers and loans. U.S. homes consume 21% of the energy used in the United States. The average household spends nearly $2,000 a year on energy and produces twice the greenhouse gases of an average car. Modest investments in energy efficiency could reduce home energy bills, and emissions, by a fifth.

Toward that end, the ESAF initiative would offer a $1,000 voucher to eligible households to spend on immediate weatherization or appliance upgrades; underwrite a home equity loan program offering low-cost loans for energy-efficiency renovations and efficient appliances; and support a state-run incentive program to encourage cooperation between utilities and homeowners.

The voucher could be issued in the form of an electronic debit card that could be used to buy energy-saving supplies, appliances, and weatherization retrofits that have been approved as cost effective by the EPA’s Energy Star program. Obviously, certain measures would need to be put in place to prevent fraud and waste, but ideally state regulators, utilities, contractors, and appliance dealers would offer packages combining energy audits, approved appliances, and cost-effective retrofits.

The ESAF initiative would require Fannie Mae or a comparable institution to provide low-interest home equity loans and mortgages for energy-efficient home improvements. In the 1990s, Fannie Mae had an effective energy-efficiency mortgage program that proved that investing in efficiency improved families’ ability to pay back their loans by lowering their bills. This time around, Fannie Mae should renew that program and make it accessible to all moderate-income borrowers. If Congress approves a mortgage rescue plan to help with the current financial crisis, energy-efficiency investments should be included in renegotiated mortgages and new ones as well. Like the auto loan program, the home efficiency loan would be backed by a government guarantee. In addition, owners of rental properties could be offered loans to upgrade the efficiency of their properties. This could be a requirement for Section 8 housing, which receives government subsidies.

As with the auto program, applying for a loan should be easy and fast. Families could initiate the process by applying online and having their request routed to nearby banks or credit unions to follow up. Once given a loan, families could purchase Energy Star appliances from approved dealers or contract with a bonded contractor to do construction work on their homes.

Utility companies are in an ideal position to help homeowners perform energy audits and make decisions about efficiency purchases. Utilities have data on all homes in the area they serve, knowledge of energy-demand patterns, and in some states already collaborate with households to reduce energy use. When proper incentives are in place, utilities profit by helping to reduce energy demand because they can avoid investing in power plants and transmission lines. The ESAF initiative would require state regulators to create incentives and rules to encourage utilities to help reduce energy demand. Ideally, utilities would establish partnerships with ratepayers, helping them figure out how to reduce energy demand by 20% and rewarding households that met the reduction targets by lowering their rates.

Innovative transit. Three-quarters of Americans commute to work alone in their cars, 5% take public transit, and 15% commute by car pool, in van pools, by bicycle, by telecommuting, and on foot. If just 3 million more Americans left their cars in the garage, the nation would have net savings of at least a billion gallons of gasoline a year. And the $3 billion those drivers would have spent on gasoline would be directed toward more productive spending. The United States needs to develop alternatives to private cars and mass transit for commuters.
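Those savings figures are consistent with each commuter burning roughly 340 gallons of gasoline a year on the commute, which is the assumption in the quick check below.

```python
# Rough check of the commuter-savings claim above, assuming each commuter
# burns about 340 gallons of gasoline a year getting to and from work.
commuters = 3_000_000
gallons_each = 340                     # assumed annual commute fuel use
gallons = commuters * gallons_each     # ~1.0 billion gallons
print(f"{gallons / 1e9:.1f} billion gallons, "
      f"${gallons * 3 / 1e9:.1f} billion at $3 per gallon")
```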

Toward that end, all workers who don’t drive themselves to work would be given a $750 tax rebate every year to offset their transit costs. Drivers are already offered tax breaks of nearly $1,000 a year to offset the cost of parking, but by leaving their cars at home, non-car commuters do society several favors. They reduce road congestion and therefore commute times for everyone else, reduce pollution and greenhouse gas emissions, and reduce petroleum demand, which may make gasoline cheaper for other drivers. A recent study of the San Francisco Bay Area’s 9,000 “casual carpoolers,” who share rides during the morning commute, found that they directly and indirectly save 900,000 gallons of gasoline a year.

The transit subsidy, which could be delivered to recipients’ bank accounts as a tax refund or as a debit card, would reward non-drivers for making a decision that benefits everyone. It would replace the $1,380 tax break the federal government already offers on employer reimbursements for carpooling, a benefit rarely claimed because the rules and paperwork requirements are so cumbersome. Fraudulent claims could be discouraged by requiring written assurances from employers or other proof that applicants commuted by transit.

To encourage new ways of traveling to work, startup funds would be provided to local governments, businesses, and nonprofits to help them design innovative, self-supporting, “mini-transit” programs. Such flexible transit programs might include neighborhood car sharing, casual carpool programs, employer-based carpool programs, van pools, and jitneys. Ride sharing can be made easy, convenient, and safe through the use of mobile phones, GPS devices, and transportation affinity networks—a Facebook for carpoolers. It is even possible to pay drivers by using cell phones to transfer funds, as the program goloco.org already does with its 10,000 members. With nurturing, these programs could fill in the considerable gaps in the mass transit system.

Some city buses, which sometimes travel with only a few passengers, may actually use 25% more energy per passenger mile than a private car, according to Oak Ridge National Laboratory. A van pool, in which seats are much more likely to be filled, removes between 6 and 13 cars from the road, according to the EPA. Large employers of moderate-income workers, such as Wal-Mart, could work with other employers and local governments to create van pools to carry their workers to and from work, eliminating the need for employee parking spaces and easing scheduling problems caused by workers with transportation problems. Cities would benefit from reduced congestion, more readily accessible jobs, and less pollution. Workers would benefit because they would not need to shoulder the cost of owning a car and might be able to count on more regular working hours. Many commuters who use van pools say that they make their day less stressful. The market for these services could be significant. A 2003 study in the Puget Sound area of Washington State found that the existing fleet of 1,200 vanpools, which accounted for 1.4% of the commuter trip market, could be expanded up to six-fold with more aggressive incentives.

Making efficiency pay for itself

The ESAF initiative could assist most moderate-income households if it were funded at $45 billion a year for three years. The bulk of the funding would go toward transit tax rebates and vouchers for autos and home efficiency improvements; the low-interest loan program would cost far less. The initiative would provide 20 million transit riders with $750 rebates and offer vouchers for 6 million autos and 10 million home-efficiency projects. Over three years, the program could reach close to 70 million households by helping to upgrade 30 million homes, purchase 18 million cars, and subsidize 20 million commuters a year. The cost of these vouchers would be $31 billion per year.
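A minimal sketch of how those annual figures fit together, assuming the $31-billion voucher total includes the transit rebates (the article does not spell out the split between rebates, auto vouchers, and home vouchers):

```python
# Back-of-envelope reading of the ESAF annual cost figures; the split between
# auto and home vouchers is an assumption, since only totals are stated.
transit_riders = 20_000_000
transit_rebate = 750                      # dollars per rider per year
auto_vouchers = 6_000_000
home_vouchers = 10_000_000
stated_voucher_total = 31_000_000_000     # dollars per year, as stated in the text

transit_cost = transit_riders * transit_rebate              # $15 billion
remainder = stated_voucher_total - transit_cost             # ~$16 billion
avg_voucher = remainder / (auto_vouchers + home_vouchers)   # ~$1,000 per voucher,
                                                            # if rebates are included
print(f"transit rebates: ${transit_cost / 1e9:.0f}B, "
      f"auto/home vouchers: ${remainder / 1e9:.0f}B, "
      f"average voucher: ${avg_voucher:,.0f}")
```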

Channeling money toward investments in energy efficiency will not only help families cut costs but also create jobs while reducing energy demand, pollution, and greenhouse gases.

The auto purchases and home-energy retrofits would be made possible by a $300-billion loan-guarantee program, which would cost approximately $3 billion over five years. Another $9 billion a year would be distributed to states to create incentives and flexible transit.

A typical household taking advantage of both the auto and home efficiency programs could reap annual savings of $1,235 on gasoline and nearly $400 on home energy costs. Some would save much more, either in energy costs or on auto or home financing. Members of a household commuting to work by carpool or vanpool would save at least 187 gallons of gasoline a year and receive a $750 voucher in addition.

At the end of three years, the auto program would have reduced U.S. gasoline consumption by 6.35 billion gallons, or 4.5% of total consumption. Likewise, by the third year of the home-efficiency program, 30 million homes would be saving more than $12 billion in energy costs.
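Those two claims are internally consistent, as a quick check shows; the implied national gasoline total and the per-home savings figure are inferred from the numbers above rather than stated directly.

```python
# Consistency check of the third-year savings figures cited above.
gasoline_saved = 6.35e9          # gallons per year saved by the auto program
share_of_total = 0.045           # stated as 4.5% of total consumption
implied_total = gasoline_saved / share_of_total   # ~141 billion gallons per year

homes = 30_000_000
savings_per_home = 400           # dollars, from the "nearly $400" figure above
home_savings = homes * savings_per_home           # $12 billion per year

print(f"implied U.S. gasoline consumption: {implied_total / 1e9:.0f} billion gallons")
print(f"home-efficiency savings: ${home_savings / 1e9:.0f} billion")
```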

The ESAF initiative could be funded as part of a federal stimulus program aimed at the auto and construction industries, through a carbon tax or auction, or by a windfall profits tax on energy companies. Although the public often opposes taxes on gasoline, ESAF could also be funded by a modest tax on imported oil. The United States imports 13.6 million barrels of oil a day, and Americans might be induced to tax those imports as part of a package to send a message to foreign oil producers. A tax of $6 a barrel would yield a fund of nearly $30 billion the first year, at a cost to drivers of just 9 cents a gallon. During the course of a year, the average U.S. family would pay less than $100 toward the tax, an amount that could be entirely offset by a decline in gasoline prices.
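A sketch of the import-tax arithmetic, under the assumptions that a barrel holds 42 gallons, that total U.S. oil use is roughly 20.7 million barrels a day, and that a typical family buys about 1,000 gallons of gasoline a year; none of those supporting figures appears in the article.

```python
# Rough check of the $6-per-barrel import tax arithmetic.
imports_per_day = 13.6e6       # barrels of imported oil per day (stated)
tax_per_barrel = 6.0           # dollars per barrel (stated)
total_oil_use = 20.7e6         # assumed total U.S. oil consumption, barrels/day
gallons_per_barrel = 42
family_gasoline = 1_000        # assumed gallons of gasoline per family per year

annual_revenue = imports_per_day * 365 * tax_per_barrel       # ~$29.8 billion
# If the tax is passed through and spread across all petroleum products:
per_gallon = tax_per_barrel / gallons_per_barrel * (imports_per_day / total_oil_use)
family_cost = per_gallon * family_gasoline                    # a bit under $100/year

print(f"revenue: ${annual_revenue / 1e9:.1f}B, "
      f"pass-through: {per_gallon * 100:.0f} cents/gallon, "
      f"family cost: ${family_cost:.0f}/year")
```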

The primary purpose of the tax would be to provide a stable source of funds for energy-efficiency investments, but it would have several other important effects as well. First, it would signal to the oil market and oil producers that the United States intends to overcome domestic political inertia and begin aggressively decreasing oil demand. An initiative of this scale would also send a signal to other oil-consuming countries that the United States no longer intends to support cheap-by-any-means-necessary gasoline and is moving toward containing demand through market measures. The tax would also provide an opportunity to educate the public about the loan programs and other ways to reduce gasoline consumption. Driving habits and auto maintenance influence vehicle fuel efficiency by as much as 15%. Printing notices of the tax and tips for reducing fuel consumption on gas receipts has the potential to significantly increase driver awareness and reduce demand, as previous government education projects have reduced demand for tobacco, alcohol, and even water during droughts.

The ESAF initiative represents a long-term investment in the well-being of U.S. families as the nation heads into an era of real uncertainty about energy security and climate change. By shifting spending from energy bills to investment, the initiative will stimulate the economy and encourage businesses that provide smart energy solutions. At relatively low cost, it will not only reduce household bills but also yield big dividends in reduced greenhouse gas emissions. This is particularly important because a number of studies indicate that carbon cap-and-trade schemes will disproportionately burden lower-income households, rural households, and those living in coal-dependent states in the South and Midwest. The advantage of this initiative is that it addresses this burden directly by enabling moderate-income families to take control of their finances and emissions. What’s more, the net effect to society could be large: If 60 million families take advantage of the program to lower their energy consumption by just 10%, the total reduction of 132 million tons of carbon dioxide would be the equivalent of the emissions of Oregon, South Dakota, Vermont, Maine, Idaho, Delaware, and Washington, D.C., combined. Empowering moderate-income households to be active agents in ensuring the nation’s energy security will strengthen the overall economy and assure a greener, more prosperous future.
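The 132-million-ton figure is consistent with typical household emissions, as a quick check shows; the per-household emissions value below is an illustrative assumption, not a number from the article.

```python
# Rough check of the carbon-reduction claim above.
families = 60_000_000
reduction = 0.10              # 10% cut in household energy use
household_co2 = 22.0          # assumed tons of CO2 per household per year (illustrative)

total_cut = families * reduction * household_co2   # 132 million tons of CO2
print(f"{total_cut / 1e6:.0f} million tons of CO2")
```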


Lisa Margonelli is a California-based Irvine Fellow with the New America Foundation and the author of Oil on the Brain: Petroleum’s Long Strange Trip to Your Tank (Broadway Books, 2008).

Practical Pieces of the Energy Puzzle: A Full-Court Press for Renewable Energy

Transformation of the energy system will require steady and generous government support across technological, economic, and social domains.

Any effort to move the United States away from its current fossil-fuel energy system will require the promotion of renewable energy. Of course, renewable energy alone will not solve all problems of climate change, energy security, and local pollution; policies must also stress greater energy efficiency, adaptation to existing and future changes in climate, and possibly other options. But greatly increased reliance on renewable energy will certainly be part of the mix. The nation’s vast resources of solar and wind imply that renewable energy could, over time, replace a large part of the fossil-fuel energy system. Policies that encourage and guide such changes need to treat energy as a technological system and to include a portfolio of measures that address all of the components of that system.

For more than 20 years, economists, historians, and sociologists have been analyzing technologies as systems. Although each discipline has its own emphasis, framework, and nomenclature, they all converge on a central insight: The materials, devices, and software that usually are thought of as “technology” are created by and function within a larger system with economic, political, and social components. Policies that seek to change technological systems need also to address these nontechnological components, and moving toward the extensive use of renewable energy would constitute a major system change.

The existing energy system includes economic institutions such as banks and capital markets that know how to evaluate an energy firm’s financial status and are knowledgeable about prices. Politically, the system requires such measures as technical standards for a range of items, such as voltage and octane, as well as regulatory rules and structures for environmental protection and worker health and safety. At the social level, the system needs people with diverse skills to operate it, as well as university departments to train these workers and associations to promote their professional growth. Also needed are institutions that can interact successfully with the many populations affected by energy developments.

Along every dimension, the size of the existing energy system almost defies imagination, creating what historian Thomas P. Hughes characterizes as the system’s momentum, the extent to which it resists change. Most obviously, the system moves and processes huge quantities of various fuels and in so doing generates trillions of dollars of revenues worldwide. The many institutions in the system have created well-established norms, rules, and practices, which also resist change. The individuals in the many professions that make the system work have not only their incomes but also their identities tied to the existing system, and system change would put both at risk. Changing this large, deeply entrenched system will take time, major shifts in incentives, and considerable political and business effort.

One could describe this system as emergent: Instead of being planned from the top down, it evolved out of the fragmented efforts of innovators, firms, governments, and nonprofit organizations responding to a complex set of technological, economic, political, and social challenges and incentives. But that process of emergence was anything but smooth or easy. For all of its benefits, it also entailed wrenching economic disruptions, rampant pollution, and sometimes violent labor relations. The nation can do better.

Public policies can influence and guide these changes but cannot determine them. The energy system spans and links together all sectors of society, of which government policy is only a part. The response of businesses, social groups, and even the culture to government policies will drive their effects, as will planned or unexpected technological developments. It is impossible to predict all of the effects of policies with any precision, so unplanned and probably unwelcome results are all but certain, even from the most carefully developed policies. All policies are born flawed. Therefore, governments need to design flexible policies and create institutions that can learn and change. Good policies are ones that get better over time, because no one gets it right the first time.

However, and in tension with the previous point, public policies that seek to change large systems must be long-term and consistent. Flexibility and learning do not mean lurching from one fad to the next. Whether government policy aims to create new technologies through funding for R&D, foster new cohorts of technical experts through education funding, or change the incentives that firms and consumers face through providing targeted financial incentives, it will need to push in the same general direction for decades. Such consistency has paid off in fields such as information technology and biotechnology. Science policy scholars also can point to the heavy costs of volatile funding, as research groups that take years to assemble will disband after one year of bad funding. No one possesses a simple formula for reconciling the need for flexibility and the need for consistency. However, studying policies that successfully do both can inform the creation of new policies and institutions.

Finally, policies that seek to change the energy system need to stay focused on policy goals beyond simple market efficiency. Not surprisingly, debates over energy often involve discussions of the prices of competing energy sources. However, the energy system entails many other important social consequences, such as environmental and social equity problems. Policy analysts Barry Bozeman and Daniel Sarewitz have proposed a framework called Public Values Mapping in an effort to articulate the nonmarket values that policies should seek. This is not to say that market efficiency is inherently a poor standard, but that market goals may not always align with other goals, and policymakers will need to negotiate those conflicts.

An integrated strategy

To address all of the parts of the energy system requires a four-part policy strategy: improving technology, improving markets, improving the workforce, and improving energy decisionmaking. Each part will entail many specific policies and programs. Many of these policies will come out of or be implemented by firms, trade and professional associations, or advocacy groups, but governments will be centrally involved in all of them.

Improving technology. The level of funding, public and private, for renewable energy R&D is abysmally low, when seen in the context of the size of the energy market. The nation cannot transform a $1 trillion industry with a $1 billion investment. To make matters worse, public and private energy R&D has been declining for decades around the world, including in the United States. Innovative industries spend upward of 10% of their revenues on R&D. Industries such as computers and pharmaceuticals also enjoy the benefits of large government R&D programs.

The volatility of federal spending on renewable energy R&D has also contributed to problems. Such volatile budgets damage any research program. When funding fluctuates, laboratories lose good research teams and find it hard to recruit the best researchers and graduate students. Moreover, it is possible for the government to spend lots of money on R&D without producing much social benefit. To succeed, R&D programs need to pay close attention to public/private linkages and to the public social values they promote.

Improving markets. Improving markets for renewable energy technologies means removing impediments to their diffusion, making them more cost-effective, and making economic institutions more sophisticated in dealing with them. A number of market conditions impede the development and diffusion of innovation, usually by increasing transaction costs or placing renewable energy at a financial disadvantage beyond the costs of the devices themselves. For example, a home with solar photovoltaic panels may generate more electricity during the day than the household needs. If so, does a utility have to buy back the excess power, and at what price? If every home or business that puts in solar panels has to individually negotiate those questions with the utility, that greatly increases the transaction costs of renewable energy. A variety of well-tested policies can overcome these and related impediments. These policies include “net metering” that provides home or business owners with retail credit for any excess power they provide to the grid, interconnection standards, building codes, technology standards, and installation certification.

Like all other new technologies, renewable energy technologies in their early stages of development can benefit from subsidies (which all other energy sources get anyway) or regulatory mandates. Thus, policies such as production tax credits, “feed-in tariffs” that obligate utilities to buy energy from any renewable energy–generating facilities at above-market prices, and renewable portfolio standards come into play. Many of these policies are problematic, but more than a decade of experience in individual U.S. states and in Europe is enabling analysts to sort out the merits of various policies. In some cases, the economically optimal policies may not be the most politically feasible.

In addition to providing subsidies, governments can use their power of procurement to simply buy renewable energy, creating a large revenue stream for the industry. Government procurement has been a huge driver for many high-tech industries, and it could be for renewable energy as well. The federal government is an immense energy consumer, perhaps the largest in the world. What it buys influences and even creates markets.

Recently, some high-profile venture capitalists have become involved in renewable energy projects. Can other economic institutions, such as banks and insurance companies, assess and finance renewable energy deployment? Part of improving markets means ensuring that, for example, mortgage lenders have the ability and incentive to properly evaluate the effect on operating costs of adding renewable energy to a home or business.

In addition, government policy will need to be deeply involved in developing the appropriate infrastructure, the most obvious part of which is the electrical grid. This is not to say that the government will in any simple sense pay for that infrastructure, but policies will influence who does and how its components are built. Public goods, from lighthouses to highways, have always posed these collective action problems, and public policies are involved in solving them.

Discussions of subsidies or other forms of government aid raise the issue of whether renewable energy can compete in markets, and to some extent that is a serious question, but not in any pure sense of unaided competition. First, government has been involved in energy markets for more than a century through procurements, subsidies, regulations, tax benefits, and other means. All forms of energy have enjoyed many tens of billions of dollars of government largess, so it makes no sense to say that renewable energy has to make it on its own. Second, every major technological revolution of the 20th century—and that is what changing the nation’s energy system will be—had government deeply involved. Energy will be no exception to that rule.

Improving the workforce. The development and deployment of renewable energy technologies will require an ever-growing and diverse workforce: wind-turbine installers, solar design engineers, systems analysts, Ph.D.-level researchers, and so on down a long list. Does the nation have the programs, in quantity and quality, to train, certify, and provide professional development for such a workforce? For scientists and engineers, the federal government has typically funded graduate and undergraduate education indirectly through research assistantships tied to government research grants. Will that policy work for renewable energy, and is the flow of funds large enough? This gets back to the point about volatile funding for R&D. That volatility hurts education as well as the research itself. To attract the best researchers and the best graduate students into this field, it needs relatively steady funding. The nation would neglect this part of the system at its peril. The current policy of volatility seems to be based on the Homer Simpson philosophy of education: “Our children are our future. Unless we stop them now.” Surely, our society can do better.

Improving energy decisionmaking. Improving purchasing decisions is usually thought of as a matter of consumer education. Instead, the focus should be on the many people in the economy whose decisions drive substantial energy use: vehicle fleet managers, architects, heating and cooling engineers, building and facility managers, and many others who purchase energy for institutions. Private groups are doing some of this education; one example is the U.S. Green Building Council’s Leadership in Energy and Environmental Design (LEED) certification program for environmental standards in buildings. Government policy could work with the many professional associations to help energy decisionmakers be up to date on the opportunities and problems they might encounter in adopting renewable energy. Some of this is already happening, but it needs to expand greatly in guiding energy-related decisions.

Each piece of this four-part strategy will have numerous policies within it, and developing those policies for the array of technologies involved in renewable energy will be a large and multidisciplinary task. Configuring government institutions to implement and, as need be, adapt those policies will generate another set of challenges. Some policies in this strategy will be controversial, and no doubt some people will argue that the government should stay out of the way entirely and let the process unfold as it will. After all, the energy system has changed before and will change again.

But the true bottom line is that it would be irresponsible for government to take such a hands-off position. The change in the energy system will be a huge and wrenching event, and government has an obligation to push the change in a socially desirable direction as well as to try to alleviate some of the negative fallout of the changes. In an important sense, government policy will be unavoidably involved in this change. Government regulations and mandates structure the world that businesses and social groups encounter, and therefore they play important roles in resolving the conflicts that such changes inevitably entail. Because government policy cannot help but be involved, it should push for a system that protects the public interest and reflects values that markets by themselves will neglect. The nation simply cannot afford the waste of resources and environmental damage of past system changes and should not tolerate the human costs.

Recommended reading

Barry Bozeman and Daniel Sarewitz, “Public Values and Public Failure in U.S. Science Policy,” Science and Public Policy 32, no. 2 (April 2005): 119–136.

Thomas P. Hughes, “Technological Momentum,” in Albert H. Teich, ed., Technology and the Future, 8th ed. (Boston/New York: Bedford/St. Martin’s Press): 26–35.

Richard Nelson, National Innovation Systems (New York: Oxford University Press, 1993).

Gregory F. Nemet and Daniel M. Kammen, “U.S. Energy Research and Development: Declining Investment, Increasing Need, and the Feasibility of Expansion,” Energy Policy 35 (2007): 746–755.


Frank N. Laird is an associate professor in the University of Denver’s Josef Korbel School of International Studies.

Climate Change: Think Globally, Assess Regionally, Act Locally

Climate change is here to stay. No matter how effectively governments and the private sector limit greenhouse gas emissions, average global temperatures will rise during the next several decades. Scientists know far less about how the effects of climate change will be manifested regionally. And this information is critical because each region will have to decide how to adapt to change.

The evidence that global warming is already here and that its effect varies by region is strikingly apparent at the poles. Average temperature in the Arctic increased at nearly twice the global rate during the past 100 years, summer sea-ice area has decreased by 7.4% since satellite observations began in 1978, and buildings and highways are threatened as the permafrost beneath them melts. The Greenland and Antarctic land ice sheets are changing rapidly, which is contributing to sea-level rise.

Change, somewhat less dramatic, is taking place across the globe. Mountain glaciers are retreating, glacial lakes are warming and growing, and spring runoff is occurring earlier. Spring events such as bird arrival and leaf unfolding are occurring earlier, and the summer growing season is lengthening. Plant and animal species are moving poleward or to higher elevations. Forests have increased in many areas but decreased in parts of North America and the Mediterranean basin. Oceanic primary production, the base of the marine food chain, has declined by about 6% in the past three decades, and the acidification of the oceans due to increased capture of carbon dioxide is making the fates of corals and other shelled creatures more precarious.

A sea change in public opinion is also in progress. People no longer focus exclusively on whether humans are responsible for climate change. The more pressing and practical question is how the world can adapt to the inevitable consequences of climate change and mitigate the most undesirable ones. The answers to those questions depend on where one is living.

Not only will climate change affect each community differently, but each community has a unique combination of environmental, economic, and social factors and its own ways of reaching decisions. Each community will have to decide how it can respond, so each needs information about how, when, and where climate change will affect the specific things it cares about. How will citizens know when they need to make decisions, or if they do?

Many of the responses to climate change will be local, and the variety of items that need attention is daunting. Infrastructure resilient to single stresses has been known to fail in a “perfect storm,” where vulnerability and multiple stresses combine. By analogy, localities are subject to social and environmental stresses that change simultaneously at different rates. These effects are often not simply additive; they can interact and reinforce one another in unexpected ways that can lead to potentially disastrous threshold responses or tipping points.

Not only do different multiple stresses interact differently in different places, but the ways in which people make decisions differ as well. Key decisions are made locally about land use, transportation, the built environment, fire management, water quality and availability, and pollution. For perfectly good reasons, local officials focus on the most concrete local trends and most visible social forces, and many of them perceive global warming as distant and relatively abstract.

All too often, local social, economic, political, legal, and cultural forces overshadow the warnings of the international scientific community. Besides, local officials understandably see global warming as an international issue that should be addressed by national and world leaders. And even if local leaders were motivated to act, the effects of climate change do not respect jurisdictional boundaries, so they would find it difficult to marshal the necessary information and expertise to craft and harmonize their responses.

For these and other reasons, decision processes become dangerously long and complex. But time has run out for ponderous decisionmaking when every generation will have to adapt to a different climate. The scientific community needs to help by providing local leaders with the specific regional climate information they need to motivate and inform coordinated action.

Challenge of regional assessment

Regional climate differs in complexity and character from global climate. The factors that combine to drive global climate may have a different balance regionally. Today’s global models clearly delineate differences between the responses of oceans and continents and of high-latitude and tropical zones to climate change. A true regional assessment, however, differs from a regionalized global assessment in its spatial specificity; topography and coastal proximity create local climatic and ecological zones that cannot be resolved by contemporary global models, yet must be evaluated to make a regional impact assessment meaningful. Increasing global models’ spatial resolution is helpful but not sufficient; new analytic tools are needed to provide useful regional climate forecasts. Scientists must develop truly regional climate impact models that will help local leaders see what the future holds and understand how actions they can take will make a difference in their region.

Understanding how climate changes at the regional level is only the beginning of the evaluation of the ensuing ecological, economic, and social impacts. The next question to be answered is how climate change affects key natural systems such as watersheds, ecosystems, and coastal zones. Assessing the effect on natural systems is the starting point for assessing impacts on regionally important socioeconomic sectors such as health, agriculture, and infrastructure. For example, agriculture, a managed ecosystem, is subject to multiple environmental stresses: human practices, changes in water availability and quality, and the lengthening of the growing season.

And these human activities then influence local climate. Deforestation, irrigation, and agriculture affect local moisture concentrations and rainfall. The burning of fossil fuels plays a particularly complex role, only one dimension of which is its contribution to overall global warming. Inefficient combustion in poor diesel engines, open cooking fires, and the burning of coal and biomass produce aerosols with organic soots, or “black carbon,” as well as atmospheric brown clouds.

It is vital that scientists understand the complex and varied effects that such pollutant products will have on regional and global climates. Atmospheric brown clouds intercept sunlight in the atmosphere by both absorbing and reflecting it, thus cooling the surface and heating the atmosphere. The reduction in solar radiation at the surface, called dimming, strengthens in the presence of atmospheric moisture because aerosols nucleate more cloud drops, which also reflect radiation back to space. Because dimming cools ocean surface temperatures as well as land, Asian pollution has contributed to the decrease of monsoon rainfall in India and the Sahel. In addition, aerosols are carried from their local sources across entire ocean basins in a few days, and thus they have a global effect; the cooling due to dimming may have counteracted as much as 50% of the surface temperature increase expected from greenhouse warming.

Another powerful reason to undertake regional climate assessments is the impact of climate change on water availability. Global climate models predict that even if the total fresh water circulating in the hydrological system remains the same or even increases, there will be a redistribution of rainfall and snow, with more precipitation at high and equatorial latitudes and drying at mid-latitudes. If only because of redistribution, the study of changing water availability must be regional.

Topography, coastal and mountain proximity, land cover, prevailing storm tracks, and other factors all make regional water climate distinctive. These issues are best addressed on a watershed-by-watershed basis. At mountain altitudes, black carbon heats the air and turns white snow gray, which absorbs more sunlight. These effects are contributing to the melting of the Himalayan snowpack and glaciers, and this melting is, in turn, affecting the river water supply of more than 2 billion people in Asia.

The regional impacts of a change in water availability will depend on factors such as the number and types of ecological provinces, the balance of irrigated and nonirrigated agriculture, the urban/rural population balance, the state of water distribution infrastructure, and regulatory policy.

The decisions to be made will be locally conditioned. How should managed irrigation systems adjust to changes in the timing and volume of spring runoff? Which farmers and crops will be affected? Should farmers change their crop mix? How and when should investments be made in water delivery capacity, agricultural biotechnology, or monitoring systems? Rigorous and detailed regional climate change impact assessments are necessary to answer these questions.

Leading the way

The state of California has been a national and global leader in modeling, assessing, and monitoring potential climate change effects at its regional scale. California, home to extensive scientific expertise and resources, began to study the issues 20 years ago and two years ago committed to biennial formal assessments to identify and quantify effects on its massive water-supply systems, agriculture, health, forestry, electricity demand, and many other aspects of life. The accompanying illustration details findings of the first California assessment, Our Changing Climate, published in 2006. The complete results were published in March 2008 in California at a Crossroads: Climate Science Informing Policy, a special supplement to the journal Climatic Change.

The prediction in Our Changing Climate that the snow cover in the northern Sierra Nevada will decline by 50 to 90% by mid-century is particularly compelling, because California’s Central Valley, the nation’s most productive agricultural region, derives most of its water from rivers with headwaters in these mountains. In addition, northern Sierra water is a major source for the 20 million people living in arid southern California.

Our Changing Climate motivated California’s leaders to enact a series of climate-related measures and to forge cooperative programs with neighboring states and even some countries. This example shows how powerful regional assessments can be, because Californians learned how they will be affected, and this, in turn, motivated political action. Until people can answer the question, “What does it mean for me?,” they are unlikely to develop their own strategies for adaptation.

Since 2000, a multiagency team has monitored a suite of factors, including pollution and management practices as well as climate change, that affect the Sacramento River Delta and its interaction with San Francisco Bay. The Bay-Delta system transports water southward to the Central Valley and southern California. The team’s report, The State of Bay-Delta Science 2008, challenges many conventional assumptions about integrated ecosystem management, argues that the desire to maintain a steady state is misplaced, and suggests that present practices should be replaced by adaptive management based on comprehensive monitoring.

California and other well-equipped regions should translate their knowledge and techniques to other parts of the world. California’s experience in modeling, monitoring, and assessment could be useful to others. And California can continue to blaze the trail by expanding its efforts. For example, extending California’s assessment to include aerosols and black carbon would enable a more rigorous comparison with similar issues in Asia, Africa, and elsewhere. Such efforts should begin by monitoring the effects of black carbon and other aerosols on California’s climate and snowpacks. It also will be important to simulate regional climate change with and without aerosols and to connect the simulations to a variety of already extant models that link climate projections to snow and watershed responses, and those responses to reservoir and water-supply outcomes.

Although California has for many years actively managed all but one of its major rivers, it is not yet making adequate use of recent scientific research to inform its management decisions. The state’s water policies and practices are based on the experience of the 20th century and need to be adapted to the changing water climate of the 21st century. This process is beginning. The observational and modeling infrastructure that supported Our Changing Climate has already been applied to adaptive management of the state’s water supply and is now being extended to cope with the challenges ahead. California has learned that the capacity to assess and the capacity to manage are intimately related.

Building a mosaic

Global climate models have met the highest standards of scientific rigor, but there is a new need to extend that effort to create a worldwide mosaic of regional impact assessments that link the global assessment process to local decisionmaking.

The effects of climate change will be felt most severely in the developing world. Although developing nations may not always have the capacity to assess regional climate change by themselves, they understand their social, economic, and political environment better than do outsiders. Thus, developing nations should take the lead by inviting developed-world scientists to collaborate with them in conducting regional assessments that can influence local actions.

The world needs a new international framework that encourages and coordinates participatory regional forecasts and links them to the global assessments. Such a framework for collaboration will not only help build assessment capacity in the nations and regions that need it, but will also generate the local knowledge that is a prerequisite for making the response to climate change genuinely global.

Of course, the globe cannot be subdivided neatly into nonoverlapping regions with sharp boundaries, nor will regions be able to restrict themselves to the same geographical area for the different kinds of sectoral assessments they need. Each physical, biological, and human system has a natural spatial configuration that must be respected. The focus therefore should be on developing a complex hierarchical network of loosely connected, self-assembled regional assessments rather than a unitary project.

In moving toward a suitable international framework for regional assessments, it will be useful to examine a number of questions. What lessons can be learned from the regional assessments done to date? How should global and regional assessments relate to one another? Should regional assessment panels be connected to the Intergovernmental Panel on Climate Change, and if so, how? What are good ways for the international community to incubate regionally led assessments? Are there best practices that promote interaction between scientists and decisionmakers? Do these differ regionally? What are good ways to encourage coordination among regional assessments? What standards should regional assessments adhere to? Who should define them? Who should certify compliance? How should assessment technologies be transferred? How should assessment results be disseminated and archived? How can assessments be designed so that assessment infrastructure can be used later in decision support?

Before formal framework discussions can take place, these and other issues will have to be debated in a variety of international and regional forums. These will certainly include the World Meteorological Organization, the Group on Earth Observations, and the United Nations Environment Programme. It is equally critical that discussions be organized in every region of the world and that partnerships among groups from industrialized and developing regions be struck.

It is clear, however, that the world must not wait for the creation of the perfect framework. It is by far preferable to learn by doing. Ideally, the framework will be an emergent property of a network of already active regional assessments that connect global assessments to local decisionmaking.

A good place to start is the critical issue of water. The effects of climate on water must be understood before turning to agriculture and ecosystems. The capacity to model and monitor exists, and it can be translated relatively easily. The path from assessment to decision support to adaptive management has been reasonably well charted. All parties now need to do all they can to launch assessments of the climate/water interface in every region of the world.

Forum – Fall 2008

The education people need

Brian Bosworth’s “The Crisis in Adult Education” (Issues, Summer 2008) could not be timelier for U.S. community colleges. As the nation’s current economic problems intensify, increasing numbers of adults are returning to community colleges to obtain education and training for living-wage jobs. They are often unprepared to perform college-level work, and these institutions are often unprepared to handle their concerns, in large part because of the issues Bosworth discusses. His suggestions for change are so logical that the reader is left with but one question: What is preventing change from happening?

Unfortunately, Bosworth does not take up this issue, but there are at least two major stumbling blocks. The first concerns the theory and practice of adult learning at the postsecondary level. Despite all the education theory and research conducted in the United States, there are precious few empirical examinations of successful ways in which adults learn. Indeed, most of the literature on remedial or developmental postsecondary education questions the effectiveness of current practice. There is some evidence to suggest that “contextual learning”—the embedding of basic adult-education skills in job training—works, but the jury is still out on whether the promising practices of a few boutique programs can be brought to the necessary scale.

The second obstacle concerns the separation of education and economic development. As long as postsecondary education continues to be viewed as an issue of access and finance for parents and their children, not as a strategy for economic development, most elected officials will continue to focus policy on traditional students. Rarely is educational policy seen as connected to economic growth and international competitiveness. Part of the reason for this has been the relative lack of corporate concern; the private sector has essentially been silent about the need for advanced educational opportunities for adults. Companies do their own training, or try to hire trained workers away from each other. This is a very inefficient, costly and, from a social viewpoint, ineffective strategy that does not produce the number of educated, highly skilled workers necessary for economic growth and prosperity. Some organizations, such as the Business Roundtable, are beginning to advance this type of strategy, but it still is in the very beginning stages.

Finally, most of Bosworth’s recommendations call for changes in federal policies. U.S. education, including that at the postsecondary level, is primarily a local and state responsibility. Community colleges derive their main sources of revenue from tuition, state funds, and local property assessments, and they are governed by local boards. Policies affecting adult education and its connection to workforce development and the cost of that education to the student are generally products of state and local policies. The connection between these policies and adult learners also needs to be examined.

Still, Bosworth’s suggested changes are important and should be discussed by all committed to furthering the post-secondary needs of working adults. In that regard, he has made a major contribution to the field.

JAMES JACOBS

President

Macomb Community College

Warren, Michigan


I am writing to elaborate on Brian Bosworth’s thoughtful essay. Once a global leader in educational attainment, the United States has taken a backseat to other industrialized countries, which have broadened their educational pipeline and now are producing more young adults with a college degree. National estimates indicate that the United States will need to produce approximately 16 million college degrees, above and beyond the current rate of degree production, to match those leading nations by 2025.

Unfortunately, our current system of adult education is ill-equipped to handle the millions of adults who need and must receive training in order to replace retiring baby boomers and allow us to meet this target. Each year this problem is compounded further by the influx of high-school dropouts (25% of each high-school class) as well as the large number of high-school students who do graduate but are unprepared for the rigors of college or the demands of work.

Significant progress can and must be made with adult students to address the educated workforce shortfall, while continuing work to improve the achievement and attainment of the traditional school-age population.

First and foremost, we must recognize that the current system of higher education, designed to serve the traditional, high-performing 18- or 19-year-old, simply does not work for the majority of our working adults. Our response has been to retrofit adult students into this model primarily through remedial instruction. Given that most adults attend part-time, this further delays and blurs the path to a college degree. It is not surprising that in a recent California study, fewer than one in six students completed a remedial class and a regular-credit class within one year. Regrettably, too few adults achieve success, and those who do persevere typically take 7 to 10 years to attain a degree.

We need bold new approaches designed specifically for adult students that provide a clear and direct path to the degree they seek. Bosworth notes the excellent completion rates at the University of Phoenix, which enrolls more than 50,000 students nationwide and strategically designs programs around the lifestyles of working adults. Indiana Wesleyan University has achieved similar success with campuses across Indiana serving the working adult population. We need more of these accelerated, convenient, technology-enhanced programs designed with the purpose of guaranteeing degree attainment without sacrificing quality.

Equally important is addressing the pipeline of young adults aged 18 to 24 that continues to increase the percentage of our population with low educational attainment. In its recently adopted strategic plan for higher education, Indiana has proposed the development of an accelerated program, wherein students earn the credits to complete an associate’s degree in 10 months. This program will be appealing to students who do not want to forgo earnings for multiple years, and will have positive results, including increased persistence and attainment. The primary goal is to reach students before “life gets in the way” of their educational pursuits.

Successfully educating an underskilled adult workforce is an enormous task, but promises significant returns. It will take bold and new strategies to meet the challenge.

STANLEY G. JONES

Commissioner

Indiana Commission for Higher Education

Indianapolis, Indiana


Peter Cappelli (“Schools of Dreams: More Education Is Not an Economic Elixir,” Issues, Summer 2008) introduces a well-reasoned perspective into the 25-year conversation that has driven education reform in this nation. Beginning with A Nation at Risk and most recently enshrined in the federal education law No Child Left Behind, we have increasingly assumed two “truths” about public education: (1) the nation’s schools are failing our children, and (2) without preparing all youth for college, we are dooming our economic future.

The first assumption is partly true because too many young people fail to complete high school and too many high-school graduates are poorly prepared for either college or the workplace. Narrowing the curriculum to more college-preparatory coursework and holding schools accountable may contribute to the dropout problem. Piling on more academics seems to have made little impact. National Assessment of Educational Progress (NAEP) reading scores for 17-year-olds have declined since 1984 despite a 45% increase in academic course-taking. NAEP science scores have declined substantially in the same period despite the doubling of science credits earned. NAEP math scores are relatively unchanged despite a doubling of math credits. These and other data argue for other ways of thinking about preparing tomorrow’s workforce.

Cappelli does an artful job of debunking the second assumption. This continued belief, in the face of abundant evidence to the contrary, is at the heart of school reform agendas, from the American Diploma Project to the U.S. Department of Education. The recent report of the Spellings Commission on the Future of Higher Education, for example, declared that “90% of the fastest-growing jobs in the new knowledge-driven economy will require some postsecondary education.” As Paul Barton at the Educational Testing Service notes, this false conclusion comes from a lack of understanding about basic data.

So how best to prepare our young people to succeed in the emerging labor market? The most obvious strategy is to focus on the technical and work-readiness skills employers need, especially those at the middle skill level where nearly half of all job growth is expected to occur, and to ensure access to those skills by today’s adolescents. This strategy requires that we expand, not reduce, high-school career-focused education and work-based learning, or Career and Technical Education (CTE). CTE has been shown to increase the likelihood that students will complete high school, increase the math skills of participants, and help young people focus on and complete postsecondary education and training. Yet current data from the Condition of Education (2007) show that the nation’s youth are taking substantially less CTE than in years past. An abundance of anecdotal evidence suggests that this is both a problem of access—fewer programs available in fewer schools—and opportunity—students have less time in the school day to access sustained occupational programming, as academic requirements continue to crowd out options for rigorous CTE.

High-quality CTE can improve the academic performance of America’s youth and the quality of America’s workforce, but only if robust programs are available to all young people who may benefit.

JAMES R. STONE III

Director

National Research Center for Career and Technical Education

University of Louisville

Louisville, Kentucky


Arguments about whether the labor market needs a better-educated work force are far too general. As Peter Cappelli shows, employers have serious needs, but they are for specific vocational and soft skills and in a narrow range of jobs. Cappelli correctly notes that vocational programs in community colleges may substitute for training and development previously provided by employers. Community colleges are, in fact, well-positioned to meet employers’ needs, given that they enroll nearly half of all undergraduates.

As we wrote about in Issues, Summer 2007, some two-year colleges do provide students with specific vocational and soft skills and link them to employers, although they do not do so generally or systematically. As Cappelli notes, instead of diffuse efforts at creating a “better-educated workforce,” policymakers should target their efforts at improving community colleges, focusing particularly on applied associates’ programs, soft skills, and problem-solving in practical contexts, and also on developing high-school career programs. Our college-for-all society and employers’ changing needs are transforming the meaning of a college education; our institutional organizations and policies need to respond.

JAMES E. ROSENBAUM

Professor of Sociology, Education, and Social Policy

Institute for Policy Research

JENNIFER STEPHAN

Graduate student

Northwestern University

Evanston, Illinois


Peter Cappelli provides a provocative analysis questioning the economic benefits of education. Yet the article focuses inordinately on finding connections between the academic pedigree of assembly-line workers and widget production. That analysis is too narrow, too shortsighted. The economic impact of universities extends far beyond creating employees custom-made to boost profits, tax revenue, or production on their first day at work. Universities should foster economic vitality, along with, for example, sustainable environmental health; positive individual well-being; and cultural, ethnic and racial understanding and appreciation. These, too, affect the economy. Perhaps no cliché is more apt: Education is indeed an investment in the future.

For example, additional education in nutrition, hygiene, and biohazards improves individual and public health, benefiting the individual’s workplace and the country’s economy. Let’s look at smoke. A 2005 study estimated that secondhand cigarette smoke drains $10 billion from the national economy every year through medical costs and lost wages. Meanwhile, decades of anti-tobacco education efforts have been linked to fewer teens smoking and more young adults quitting. Simply put: Smoking costs billions; education reduces smoking; and when it does, the economy breathes more easily.

Although many similar threads can be followed, harder to trace are the ways in which education prepares an individual to inspire, innovate, cooperate, create, or lead. Employers want someone “who already knows how to do the job,” often an impractical hope; thus they look for someone who knows how to learn the job, a trait ultimately more valuable, as jobs change rapidly. Quality education fosters the capacity to study, to analyze, to question, to research, to discover. In short, to learn—and to accept, individually, the responsibility for learning how to learn.

As the author acknowledges, employers also want workers with conscientiousness, motivation, and social skills. Except perhaps for good families, good churches, and possibly the armed forces, no institution matches the ability of good schools to foster these qualities.

Work-based learning is also critical to ensuring a labor force sufficient in both numbers and knowledge; thus the California State University has hosted forums bringing faculty from its 23 campuses together with employers from critical economic sectors, such as agriculture, biotechnology, and engineering.

As it renders economic benefits to individuals and industries, education also transforms communities and societies. When universities view their mission through a prism of access and success, diversity and academic excellence, they foster social and economic upward mobility. Raising educational levels in East Los Angeles and other areas of high poverty and unemployment undoubtedly improves the economy by helping to break generational cycles of poverty.

Finally, the article does not address the costs of not educating an individual. In July 2008, California education officials reported that one in four high-school students in the state (and one in three in Los Angeles) drops out of school before graduating. What is the economic toll on society when it loses so many potentially brilliant contributors, as early as middle school, because of inequalities in access to quality education? Whatever it is, it is a toll our society cannot afford, economically or morally.

JAMES M. ROSSER

President

California State University, Los Angeles

Los Angeles, California


Matthew Zeidenberg’s succinct analysis of the challenges facing two-year colleges is both accurate and sobering (“Community Colleges Under Stress,” Issues, Summer 2008). Several of these issues—financial stress, poor academic preparation, and unsatisfactory persistence and graduation rates—also are common to four-year colleges that enroll large numbers of students who are first in their family to attend college, are from economically depressed neighborhoods, or are members of historically underrepresented racial and ethnic groups. Everyone agrees that K-12 schools must do a better job of making certain that all students have the academic skills and competencies to succeed at the postsecondary level. At the same time, schools cannot do this alone. Family and community support are indispensable to raising a student’s educational aspirations, becoming college-prepared, and increasing educational attainment levels across the board. So to Zeidenberg’s recommendations I add two more.

First, students and families must have adequate information about going to college, including real costs and aid availability. Too many students, especially those from historically underserved backgrounds, lack accurate information about postsecondary options. They are confused about actual tuition costs and expectations for academic work. The Lumina Foundation for Education, the Ad Council, and the American Council on Education are collaborating on KnowHow2GO, a public-awareness program to encourage low-income students in grades 8 to 10 and their families to take the necessary steps toward college. Another effort is the nonprofit National College Access Network (NCAN), a federation of state and local efforts that provide counseling, advice, and financial assistance to students and families. Local initiatives, such as College Mentors for Kids! Inc., which brings together college and elementary-age students through their participation in campus and community activities, and Indiana’s Learn More Resource Center, are models for disseminating information about college.

Second, we must expand the scale and scope of demonstrably effective college-encouragement and transition programs. Particularly effective programs are the Parent Institute for Quality Education; the Puente Project; and GEAR UP, which provides information about financial aid, family support and counseling, and tutoring, among other things. Other promising encouragement initiatives include many of the TRIO programs funded under Title IV of the Higher Education Act, such as Upward Bound, Upward Bound Math/Science, Student Support Services, Talent Search, Educational Opportunity Center, and the McNair Program. For example, students in Upward Bound programs are four times more likely to earn an undergraduate degree than those not in the programs. Students in TRIO Support Services programs are more than twice as likely to remain in college as students from similar backgrounds who did not participate in the program.

Preparing up to four-fifths of an age cohort for college-level work is a daunting, unprecedented task. The trajectory for academic success starts long before students enter high school. As Iowa State University professor Laura Rendon sagely observed, many students start dropping out of college in the third grade. Essential to breaking this unacceptable cycle is gaining the trust and support of parents and communities and ensuring that every student knows what is required to become college-ready and how to obtain the necessary financial resources to pursue postsecondary education.

GEORGE KUH

Chancellor’s Professor

Director, Center for Postsecondary Research

Indiana University

Bloomington, Indiana


Prison policy reform

In “Fixing the Parole System” (Issues, Summer 2008), Mark A. R. Kleiman and Angela Hawken correctly note that incarceration has become an overused and hugely expansive state activity during the past generation. Controlling for changes in population, the imprisonment rate in the United States has expanded fourfold in 35 years. Their rather modest proposal is to substitute intensive supervision and non-incarcerative sanctions for a system of parole monitoring and reincarceration in California that combines high cost and marginal public safety benefits.

There are three aspects of their program that deserve support:

  1. The shift from legalistic to harm-reduction goals for parole;
  2. The substitution of non-incarcerative for incarcerative sanctions for parole failure; and
  3. The use of rigorous experimental designs to evaluate the program they advocate.

There is clear public benefit in systematically stepping away from a practice that is simultaneously punitive, expensive, and ineffective.

Almost all responsible students of California crime and punishment support deconstruction of the state’s parole revocation juggernaut. But the brief that Kleiman and Hawken file on behalf of this penal reform is disappointing in two respects. Problem one is the rhetorical tone of their article. The authors intimate that risk monitoring and non-prison sanctions can lower crime rates, which would be very good news but is also unnecessary to the success of the program they support. If non-imprisonment parole monitoring produces no increase in serious crime at its smaller correctional cost, that will vindicate the reform. Reformers shouldn’t have to promise to cure crime to unwind the punitive excesses of 2008. And the proponents of reform should not have to sound like they are running for sheriff to sell modest reforms!

My second problem with the case that is presented for community-based intensive supervision and non-prison sanctions is its modesty. The authors suggest a non-prison program only for those already released from prison. But why not create such programs at the front end of the prison system as well, where diversion from two- and three-year imprisonment terms might be even more cost-effective than a parole reform if non-incarcerative programs have roughly equivalent outcomes? Is reducing California’s prison expenses from $9 billion to $8 billion per year the best we can hope for?

FRANKLIN E. ZIMRING

William G. Simon Professor of Law

School of Law

University of California

Berkeley, California


Mark A. R. Kleiman and Angela Hawken are certainly correct in saying that the parole and probation systems are badly broken and overwhelmed. They are also correct in concluding that if parole and probation were more effective, crime would decline, lives would improve, and the states would save barrels of money that are now being wasted on failed policies and lives.

Citing the Hawaii experiments, Kleiman and Hawken would rely heavily on the behavior change benefits of certain, swift, and consistent punishment for violations that now go undetected or are inconsistently punished. They cite the research literature on the importance of behavior change that is reinforced by rewards for appropriate behaviors but suggest that political opposition may limit the opportunities on that side of the ledger. In my view, the role of positive reinforcement and incentives must be significantly expanded in post-release supervision to really affect long-term recidivism. Released convicts have enormous needs, including housing, medical care, job training, etc. A properly resourced parole or probation officer could reinforce and promote a lot of good behavior by getting the parolee/probationer what he really needs to succeed as well as holding him accountable for his slips.

The burden of post-release supervision is made even heavier by the flood of prisoners who arrive totally unprepared to resume civil life. Their addictions—the underlying cause of most incarcerations—and other physical and mental illnesses have not been treated; they have no job experience or training; and their overcrowded prisons have created social norms of racial gangs and violence. Many, if not most, prisoners emerge in worse shape and less able to function in civil society than when they entered. The treatment of many criminals is itself criminal. I really wonder if California will be less safe if a judge orders thousands of prisoners released before they get poisoned by the prison environment and experience. I am confident that competent post-release supervision and support will produce a better result than we get now by leaving people to rot in prison; and at significantly lower cost.

The current weakness of parole systems around the country is an ironic unintended consequence of long mandatory sentences without possibility of parole. In many states, politicians thought it would be fine to let the parole systems wither because people completing mandatory sentences wouldn’t be subject to parole. The result we now see compounds the stupidity of the long mandatory sentences themselves.

DAVID L. ROSENBLOOM

Boston University School of Public Health

Boston, Massachusetts


Thinking about energy

Senator Jeff Bingaman is right (“Strategies for Today’s Energy Challenge,” Issues, Summer 2008). The key to addressing climate change and future energy supplies is technology. We’ll need new energy technologies and new ways of using traditional energy technologies to build the energy and environmental future Americans want. Government will influence what that future looks like, but consumers and private companies will also play integral roles.

U.S. oil and natural gas companies strongly support new technologies. They have invested more in carbon-mitigation technologies than either the government or the rest of the private sector—about $42 billion from 2000 to 2006, or 45% of an estimated $94 billion spent by the nation as a whole. They are involved in every significant alternative energy technology, from biofuels to wind power to solar power to geothermal to advanced batteries. They created the technology to capture carbon dioxide emissions and store them underground.

As demand for alternative energy increases, oil and gas companies will be among the firms that meet that demand. However, they are also prepared to provide the oil and natural gas that Americans are likely to need for decades to come. Although our energy landscape will change, oil and natural gas will still provide substantial amounts of transportation fuels, energy for power generation, and petrochemicals, lubricants, and other products. As fuels, they’ll be cleaner and used more efficiently. We’re already seeing this in new formulations of gasoline and diesel fuel, in combined heat and power technology in our refineries, in advances in internal combustion engines, and in hybrid vehicles.

The future will be as much energy evolution as energy revolution. We’ll need all forms of energy—new, traditional, and reinvented—with each finding its place according to consumer needs and environmental requirements. In the end, providing the energy we need while also advancing our environmental goals will be a formidable balancing act. Government policies that can best help achieve these objectives will be those built on a shared vision; stakeholder collaboration among government, industry, and consumers; and a reliance on free markets.

RED CAVANEY

President and Chief Executive Officer

American Petroleum Institute

Washington, DC


In his article, Senator Jeff Bingaman says, “Our past technological choices are inadequate for our future. The solutions we need can only come from new technologies.”

Look around at what we are forgetting and puzzle over what Bingaman says. We developed shoes for solar-powered walking, but few walk. We developed safe nuclear power plants, then stopped building them. We developed glass that lets in light and sun centuries ago, yet our buildings need electric lights in the middle of the day. And more methods that avoid fossil fuels are being forgotten: the bicycle, the clothesline, passive heating and cooling, and solar water heaters.

What are the “concrete goals, road maps, timelines” he is after? “The time has come for government to act,” but he has no idea what to do. Like many, Bingaman is under a spell, off balance, blind to what is around him, and seeking unborn machines and larger bank accounts.

STEVE BAER

Post Office Box 422

Corrales, New Mexico


Senator Lamar Alexander’s “A New Manhattan Project” (Issues, Summer 2008) is inspiring in its scope and scale, and I commend him for his commitment and focus on the big picture vis-à-vis energy policy. Although I disagree with some of his comments on electricity generation, I write as a transportation expert who thinks that the puzzle is missing some pieces.

First, there must be a greater focus on the deployment of new technology. Three of the seven components of the plan—commercializing plug-in hybrids and making solar power and biofuel alternatives cost-competitive—rely only in part on technological breakthroughs. Equally important, if not more so, are smart deployment strategies. We must work with entrepreneurs to develop revolutionary business models that will rapidly transform our vehicle fleets.

One initiative that aims to spur such innovation is the Freedom Prize (www.freedomprize.org). I am excited to be an adviser to this new organization, which will distribute monetary prizes to cutting-edge transformational initiatives in industry, schools, government, the military, and communities. An example of a revolutionary model is Project Better Place, launched by Israeli entrepreneur Shai Agassi. I recently had the pleasure of hearing him talk firsthand about his big idea, which Thomas L. Friedman described in a recent column in the New York Times (July 27, 2008):

“Agassi’s plan, backed by Israel’s government, is to create a complete electric car ‘system’ that will work much like a mobile-phone service ‘system,’ only customers sign up for so many monthly miles, instead of minutes. Every subscriber will get a car, a battery and access to a national network of recharging outlets all across Israel—as well as garages that will swap your dead battery for a fresh one whenever needed.”

Time will tell if it will work, in Israel or elsewhere. Regardless, it is exactly the kind of thinking we need. Technological breakthroughs are necessary but insufficient; they must be complemented by expedited deployment strategies.

The truly indispensable complements to crash research programs and big carrots for innovation are technology-neutral performance standards and mandatory programs to limit global-warming pollution. Such policy was debated by the U.S. Senate this year: the Boxer-Lieberman-Warner Climate Security Act (CSA).

An analysis commissioned by the Natural Resources Defense Council shows that the CSA would have dramatically cut pollution while slashing oil imports by 6.4 million barrels a day in 2025 (down to 1986 levels). This is in part due to Senator Alexander’s success in adding a national low-carbon fuel standard to the bill, which would lower the carbon intensity of fuels, making alternatives such as plug-in hybrids and advanced biofuels more competitive. That’s the kind of policy that would move us forward, and fast.

In sum, building the bridge to a low-carbon secure future requires an array of carrot and stick programs that expedite technological development and deployment. I look forward to working with Senator Alexander to speed us into that better world.

DERON LOVAAS

Transportation Policy Director

Natural Resources Defense Council

Washington, DC


What is science policy?

Irwin Feller and Susan Cozzens, both nationally recognized science policy scholars, have hit the nail on the head with their appropriately scathing critique of U.S. science policy entitled “It’s About More Than Money” (Issues, Summer 2008). The only thing they didn’t do was drive the nail in far enough to seal the fate of this critical area of national policy that remains wholly unsophisticated, unchanging, and inadequate to the task of providing our nation with the tools we need to make best use of our national R&D investment.

Here it is 2008, when we have the ability to analyze and quantify even everyday things such as the impact of soft drink advertising during the Super Bowl, but we can’t yet develop a national science and technology logic that goes beyond “we need more money.” We live in an era when the production of science-based knowledge, at ever-increasing rates, is driving changes in economic competitiveness, culture, quality of life, foreign and military affairs, and sustainability on a global scale, and yet we have a science policy that is no more robust than the approach most families apply to their family budgets: We have so many dollars this year and we would like more next year. Feller and Cozzens attack the central sophomoric argument of U.S. science policy, which has its roots in the original designs of Vannevar Bush and his piece Science—The Endless Frontier, published in the wake of the total victory of the Allies and the unconditional surrender of their enemies in World War II. What they don’t address is why we have been unable to grow up from our simple approach of largely unguided national science planning and budgeting.

It was in fact the simplistic correlation between our very successful efforts to develop new weapons during the war and our ultimate total victory that led to the genesis of a very simplistic model for science policy. This model works something like this: Science is good; more money for science is good; if you fund it more, good things (like winning the war against two opponents at the same time) will happen. We never got past this level of logic. Simple logic always sticks around for a long time, in the same way that outmoded stereotypes do, such as the notion that science should simply be left to guide itself because it cannot be guided.

This logic is so simple and so beneficial to most of the stakeholders in the science policy realm that even the president’s science advisor hasn’t been able to make a change in the basic model after six years of effort. We fund our national science efforts on the premise that our success is measured in the investment itself and not its outcomes. This logic has actually kept us from building an outcomes-oriented national science policy, and as a result has put America’s well-being at risk.

When policy success is measured by budget inputs rather than by goal attainment, outcome achievement, or national performance, we literally have no idea what we are doing or why. Our present rhetoric is that we need to spend more on science and this will make America greater, or that we need more scientists or engineers to be stronger. Although these claims may be true, we don’t have empirical evidence of that, and more important, even if we did, we would need to be able to answer the question of whether our investments are helping us to reach the outcomes we most desire.

Most Americans seek a better life for their families; most want to have access to a safe, clean everything; and most want their children to have access to higher qualities of life. At the moment, we have very few tools in the science policy realm that could make any assessment of the relationship between science investments and these outcomes. This is very unfortunate and needs to be addressed.

Addressing it means that we must reject the notion that science policy is about money. It is about who and what we want to be and do. It is about attacking our most critical challenges and knowing where we are along the way. It is about having some dreams that we hope for and understanding that these investments are our means to achieve these dreams and holding people accountable for progress toward them.

It’s about a lot more than money, and Feller and Cozzens help us to see that.

MICHAEL M. CROW

President

Professor of Public Affairs and Foundation Leadership Chair

Arizona State University

Tempe, Arizona


Irwin Feller and Susan Cozzens note several important challenges for the new science of science policy. They point to “the perennial challenges that researchers and policymakers confront as they try to reduce the uncertainties and complexities surrounding processes of scientific discovery and technological innovations.” And they note the serious gaps that exist in the knowledge base on which new theories of science policy must be based. Most important, they assert that more effective science policy requires increased dialogue between the policy and research communities.

Although Feller and Cozzens note that one of the problems with current policy and research on policy is that it is too narrowly framed, they discuss policy research only in terms of evaluating science policy. What about the other side of the coin: research to improve science policy? As I’ve argued elsewhere, if one of the goals of our research is to improve science (and technology) policy, we must design our research with improved policy as an outcome. From a systems perspective, this research process would necessarily include key stakeholders such as policymakers in at least the design and communication phases, with feedback loops from such stakeholders to the research team. The identification of gaps in knowledge, possible consequences of success and failure of contemplated policies, and possible unintended consequences would all be part of a systems analysis framing policy-relevant research.

A systems analysis including policymakers clearly won’t solve the current dialogue gap between the policy and research communities. But we must begin a serious effort to work together for more effective policy-relevant research and policymaking. Not all researchers or policymakers would choose to be part of such an effort, but many from both groups, at the federal and state levels, have already demonstrated their interest through participation in such communication efforts, usually on specific topics.

On a more minor point, but perhaps typifying at least some of the examples used in the article, Feller and Cozzens point to one of the action outcomes identified in the National Academies report Rising Above the Gathering Storm: the call for recruiting 10,000 new science and mathematics teachers. They suggest that this call overlooks “the impressive data base of human resource surveys,” analyses of science and technology career patterns, and the government level responsible for education. In fact, this call was based on a rigorous state-level study of these factors in concrete cases such as Texas and California, and was extrapolated conservatively to states conducting similar studies at the time of the report. More states have subsequently taken up this call (Arizona, North Carolina, Arkansas, Iowa, and Indiana, among others). The National Academies hosted national symposia in 2007 and 2008 focused on the federal/state/local relationship essential to meeting the science, technology, engineering, and mathematics education challenge.

In their four-page article, Feller and Cozzens manage to draw in most of the recent reports and commentaries relating to Presidential Science Adviser John Marburger’s call for a new science of science policy. And their overall point seems exactly right: that “a much broader approach” than has currently been taken is needed. Although I have no doubt about their capacity to map an outstanding broader approach, this brief article didn’t take us there, or even point us there.

ANNE C. PETERSEN

Deputy Director, Center for Advanced Study in the Behavioral Sciences at Stanford

Professor of Psychology

Stanford University

Stanford, California


Investments in basic scientific research and technological development have had an enormous impact on innovation, economic growth, and social well-being. Yet science policy decisions at the federal and state levels of government are typically dominated by advocates of particular scientific fields or missions. Although some fields benefit from the availability of real-time data and computational models that allow for prospective analyses, science policy does not benefit from a similar set of tools and modeling capabilities. In addition, there is a vigorous debate as to whether analytically based science policy is possible, given the uncertainty of outcomes in the scientific discovery process.

Many see the glass as half empty (not half full) when they contemplate the “knowns” that make up the evidence-based platform for science and innovation policy. This area of research and practice is not new to academics or policymakers; there are decades-old questions that we still contemplate and problem sets that continue to be imperfectly answered by experts from varied disciplines and fields. In addition, the anxious call for or anticipation of better conceptualizations, models, tools, data sets, and metrics is not unique to the United States but is shared among countries at different levels of economic development. The marriage of ideas from an interdisciplinary and international community of practice is already emerging to advance this scientific basis of science policy. Diversity of thought and experience no doubt leads Irwin Feller and Susan Cozzens to encourage the cause while strongly cautioning about the process by which frontier methods are developed and utilized. An “increased dialogue between the policy and research communities”—and I would add here the business community—is paramount.

As the glass fills, therefore, the frequency and complexity of this dialogue will grow. For instance, the management of risks and expectations is common practice in business and increasingly common in designing potent science and innovation policy mechanisms. Opportunities exist, therefore, for breakthroughs in finance and economics with applications to funding portfolios of science. But it’s about more than the money. Understanding the multifunctional organism that facilitates creative invention and innovation requires the synthesis of network analysis, systems dynamics, and the social psychology of team networks. Add to that downstream linkages to outcomes data that can be harvested using modern scientometric or Web-scraping techniques. This complex research activity could add clarity to our understanding of the types of organizations that most effectively turn new ideas into commercial products.

Another question often overlooked in the literature is the management of short-term and long-term expectations. The portfolio approach to the science of science and innovation policy could yield a full spectrum of analytical tools that satisfy short-term requirements while accomplishing long-term goals. These are topics that are ripe for frontier research and yet still have practical applications in the policy arena.

Often the question is asked, what should government’s role be in science and innovation policy? Although there is much controversy about incentives that try to pick winners, returns from tax incentives, and regulatory reform, many would agree that facilitating information exchange could yield important positive social dividends. Already, public funding has been used to sponsor research on the science of science and innovation policy and workshops and forums where academics, policymakers, and representatives from the business community exchange ideas. Partnerships among these three stakeholders are expected to be productive, yet as with many scientific endeavors, time is an important variable.

KAYE HUSBANDS FEALING

Visiting Professor

Humphrey Institute of Public Affairs

University of Minnesota

Minneapolis, Minnesota


Science and democracy

In “Research Funding via Direct Democracy: Is It Good for Science?” (Issues, Summer 2008), Donna Gerardi Riordan provides a timely, cogent case study of the “be careful what you wish for” brand of risk-taking that comes with merging science funding with populist politics. The California Stem Cell Research and Cures Bond Act of 2004 (Proposition 71) is probably not, in toto, good for science. The ends (more funding) cannot justify the means (hype masquerading as hope). No good comes when science sacrifices honesty for expediency.

Beyond doubt, the language used to sell Proposition 71 promises more than science can hope to deliver. What is hard to understand is what made many of the parties involved say some of the things that were said. One can understand the anguish motivating those who have or whose loved ones have untreatable illnesses to bet on the promises of embryonic stem cell research, particularly when federal funds are limited. This new area of biology deserves to be explored, even if the ultimate aims of such research remain unproven and unpredictable at this time. Indeed, the United States has a rich history of private dollars, dispersed by individuals, charities, and voluntary health organizations, funding controversial and unpopular research that the federal government cannot or will not support. Economic development and higher-education infrastructure are traditional investments for state coffers. But Proposition 71 seems a horse of a different color. Riordan’s analysis of it rightly focuses our attention on an important question: Is it a good thing that a deliberate decision was made to circumvent the usual processes by which science gets funded and states decide investment priorities?

Those of us who care about letting the democratic process work should ask, is Proposition 71 good for public policy? Concocting a public referendum on a complicated issue fraught with scientific, ethical, legal, and social controversies should not be celebrated (nor misinterpreted) as giving people a voice. It is, rather, an example of the few pushing an agenda on the many, bypassing the representative legislative process. Such initiatives are not intended to stimulate debate. The intent, rather, is to shut down the healthy messiness of public debate. The legislative process can be inconvenient and inefficient, and it often requires compromise. Given the forced choice of Proposition 71, a majority of the citizens of California, believing money could accelerate the alchemic process whereby basic research yields medical treatments, voted to cure diabetes and defeat Alzheimer’s. They voted for fairness—they wanted life-saving cures derived from “stem cells” (arguably two words that without other modifiers have little biologic or therapeutic meaning) to be accessible and available to all, including the economically disadvantaged. The citizens of California were not asked, at least not in the flyers, billboards, and advertisements, to decide on investing $3 billion in a life-sciences economic stimulus package primarily benefiting University of California research universities and biotechnology companies. They might have been willing to fund such an investment. But they weren’t given the option. What serious problems would California citizens choose to solve in a decade with $3 billion to spend? We don’t know. The powerful few who knew what it was that they wanted didn’t stop to ask them.

SUSAN M. FITZPATRICK

Vice President

James S. McDonnell Foundation

St. Louis, Missouri


Donna Gerardi Riordan points out some of the rotten teeth in California’s $3 billion gift horse: funding for human embryonic stem cell and related research. California’s was the biggest and one of the first such state initiatives in the wake of the Dickey-Wicker federal appropriations ban and President Bush’s August 2001 Executive Order permitting but hemming in federal funding.

California’s referendum mechanism does indeed introduce some wrinkles into the process of funding and governing science. Riordan focuses on the consequences of insulating the program from conventional state legislative and executive processes. Insulating stem cell research from mainstream politics was understandable, however, because of a foreseeable political problem. The opposition was strongly motivated and managed to delay funding for several years through court battles despite the insulation. Fighting this out in the legislature would surely have been contentious, although perhaps eventually reaching more or less the same outcome (but only perhaps).

A previous California health research program, the Tobacco-Related Diseases Research Program (TRDRP), also built in insulation from legislative and gubernatorial politics. TRDRP was created by another referendum, Proposition 99, which increased cigarette taxes and dedicated some of the proceeds to research. The research program was clearly specified in the constitutional amendment but was nonetheless blocked at several turns by the governor and the speaker of the State Assembly, challenges resolved only by the California Supreme Court. TRDRP was immensely valuable to tobacco control research; for years it was the largest such program in the country, surpassing federal funding (sound familiar?). It laid a foundation for tobacco control research nationally and internationally. It mattered, and but for its built-in protections, it clearly would have been scuttled by conventional politics.

The common element of stem cell and tobacco control research is determined opposition, and so there is a plain political explanation for why the insulating provisions were built into the propositions. That does not take away from the consequences of following the referendum route that Riordan so aptly describes.

Attention may now turn to the serious coordination problem that follows from state research programs. How will these integrate with federal funding and with other states and other nations? This may well be tested in embryonic stem cell research if the federal brakes come off next spring, regardless of which party wins the presidency. Should Congress and the National Institutes of Health (NIH) race to match California, Massachusetts, Hong Kong, Israel, Korea, and other jurisdictions that have generously funded stem cell research? NIH merit review awards funds according to scientific opportunity and health need. The need for federal funding is arguably reduced in scientific areas where states and other countries have stepped in. Or is it? The NIH has no clear mechanism to take such funding into account. Pluralism is one of the virtues of U.S. science funding. But too much uncoordinated funding can leave some fields awash in money while others starve. With several independent state-based funding programs, California’s being the largest, the coordination problem will be unprecedented in scale and intensity.

ROBERT COOK-DEEGAN

Director, Center for Genome Ethics, Law, and Policy

Institute for Genome Sciences and Policy

Duke University

Durham, North Carolina

Editor’s Journal: Questions That Blur Political Party Lines

Presidential election season is not the best time to be a policy wonk. Seminars at Harvard’s Kennedy School or colloquia at the National Academy of Sciences can feel like exercises in the theater of the absurd when voters seem eager to turn the choice of a vice president into an episode of “American Idol.” As experts in universities, think tanks, and professional organizations debate the fine points of position papers and action plans, ever hopeful that this will be the election when substantive policy debates take center stage, campaign strategists work feverishly to craft the bumper sticker that will somehow capture the whimsy of voters on November 4.

The urge to simplify is not the inclination of only the disengaged, the uninformed, or the cynical. During this season, we all tend to become Manicheans, seeing the world in black and white. The nation’s problems somehow align themselves so that the solutions are either Democratic or Republican. Even those of us who should know better can convince ourselves that the election results will determine everything about the course of policy.

There is no doubt that the election results will dramatically influence some very real and meaningful decisions. Sadly, Supreme Court appointments have come to hinge more on political orientation than constitutional insight, and any question related to abortion has become highly polarized. On numerous other questions, party affiliation will play a role, but we should try to be realistic about how strong that role will or should be.

One reason that those of us in the science, technology, and health policy community might take a simple view of the presidential choice is that neither candidate pays much attention to the details of STH policy questions during the campaign. A large number of leading individuals and institutions (including Issues and the National Academies) have supported the efforts of Science Debate 2008 to sponsor a public discussion of these questions between the candidates. No debate will take place, and many jaded policy veterans have pointed out from the beginning that the candidates have nothing to gain from debating complex topics that do not engage the public. Science Debate 2008 has sent a list of 14 questions to each of the candidates and asked for written responses. Barack Obama has submitted his answers, and John McCain has promised to send his. Answers will be posted at www.sciencedebate2008.com.

This will undoubtedly be useful to voters who care about these topics, but what will not be addressed is the reality that many of the most pressing STH policy concerns do not lend themselves to predictable ideological solutions. The candidates will discuss health care finance plans, but they are unlikely to have detailed suggestions for how to manage health care delivery more efficiently, how to ensure that all physicians practice up-to-date evidence-based medicine, and how to shift the emphasis in health care from disease treatment to health promotion.

The articles in this issue offer numerous other examples. Republicans and Democrats alike have discovered that the cold war is over and that the nature of threats to the nation’s security is markedly different from those of the past. Both parties have announced when in power that they are restructuring the military, and both are ready to restructure again. Although the Clinton and Bush administrations did add new dimensions to the military, neither could claim to have truly succeeded at restructuring because neither managed to eliminate investments in outdated technologies and systems. Torpedoing unnecessary programs is essential to free up the resources necessary to create a restructured military. Once that is done, both parties will finally have to face the difficulty of creating the military of the future, beginning with an R&D program designed to serve new purposes.

Likewise, both parties trumpet the need to develop renewable energy technologies, but when the country tried to do this in the late 1970s, the results were far from transformative. No obvious formula exists for stimulating technological innovation, making markets efficient, and encouraging consumers to make decisions with a long-term perspective. No amount of government spending will be enough to transform the energy system and markets. More federal research, tax incentives, market signals, and regulatory reform will all be part of the mix, but concocting the best recipe will be a challenge.

The political parties have fought over whether the United States should join the Kyoto Protocol on climate change, the Law of the Sea Treaty, and other international environmental agreements. These debates would be more meaningful if we could be certain that the agreements actually advance their stated goals. The task for both parties is to find a way to establish an international system that gives these treaties some teeth. Otherwise, they are ripe for political gamesmanship, grandstanding, and cynicism. Indeed, in many areas of global affairs, science and technology are paid lip service but given no real influence at the highest levels of government. The insights and expertise that scientists and engineers contribute to policy advice are of little value if they are not effectively incorporated into policy formulation.

Most environmentalists have aligned themselves with the Democrats, who have demonstrated a greater willingness to regulate industrial pollution and to preserve natural resources. But advances in technology create new options to consider. Environmentalists have long advocated the use of integrated pest management (IPM) as an alternative to the excessive use of chemical pesticides in agriculture, and they have been wary of the risks that they perceive with the development of genetically engineered crops. But now researchers are finding that biotechnology can be a very effective tool in improving IPM. Choosing the best path forward will require reexamining past assumptions.

The Luddites have never gained a foothold in U.S. political culture, so all candidates happily declare themselves proponents of innovation. It’s less fattening than apple pie, and the economic rewards have been widely appreciated. The head start that the country gained by emerging relatively unscathed from World War II provided benefits for decades, and U.S. industry responded very effectively to the challenge from Japan in the 1990s. But the need for the United States to remain at the forefront of innovation is growing in importance as many other countries demonstrate their ability to produce high-technology goods at competitive prices and their desire to become leaders in technology innovation. We have learned that innovation is not simply a matter of new technology. It is a complex process that involves culture, finance, geography, regulation, and management. The diversity of innovation policies around the globe and the growing recognition that each country must find its own policy brew should be evidence enough that neither Republicans nor Democrats are likely to possess the holy grail.

Besides, it should be apparent by now that we are talking about politicians, not political theorists, and that rigorous adherence to philosophical principles is not a common practice within the Beltway. Republicans might sing the praises of market forces and rail against the dangers of industrial policy, but they cannot escape the reality that the market cannot be free of the influence of tax, trade, education, intellectual property, and numerous other policies. Democrats want to align themselves with the independence and creativity of scientific research, but they have yet to think through all the implications that will arise with the application of advances in genetics and biotechnology to the environment, human reproduction, equity, and health.

Indeed, the presidential candidates are no fools. They avoid detailed prescriptions for thorny STH policy problems because they know very well that when they actually have to confront these issues, they will not be able to implement a simple campaign pledge. So put the election in perspective, cast your vote, and be prepared for the much more engaging work of STH policymaking that will follow the election no matter who wins.

Predicting the future

Predicting the future of the global human community often seems to be a fool’s errand. The track record of futurism is notoriously deficient; mid-20th century prognostications of life in the early 21st century are now used mainly for generating laughter. The failure of prophecy has many roots. Forecasters often merely extrapolate existing trends, unreasonably assuming that the underlying conditions will remain stable. Wrenching discontinuities are often difficult even to imagine, yet history has been molded by their inevitable if unpredictable occurrences. Many futurists also allow their ideological commitments, if not their underlying personalities, to shape their conclusions. Thus pessimists and environmentalists commonly see doom around the corner, whereas technophiles and optimists often envisage a coming paradise. As years go by and neither circumstance comes to pass, the time of fulfillment is merely put off to another day.

Vaclav Smil is well aware of these and other problems that confront any would-be seer. As a result, he has written a different kind of consideration of the global future, one marked by careful analysis, cautious predictions, and a restrained tone. “In sum,” he tells us in the book’s preface, “do not expect any grand forecasts or prescriptions, any deliberate support for euphoric or catastrophic views of the future, any sermons or ideologically slanted arguments.” Smil readily acknowledges that we live in a world of inherent uncertainty in which even the near-term future cannot be accurately predicted. Yet he also contends that a number of risks and trends can be quantitatively assessed, giving us a sense of the relative likelihood of certain outcomes. Such a modest approach, limited in its purview to the next 50 years, is unlikely to generate public excitement or large book sales. It can, however, provide a useful corrective for the inflated claims of other futurists as well as generate constructive guidelines for risk minimization.

Few authors are as well qualified to write about the coming half-century as Smil, a Czech-born polymath who serves as Distinguished Professor at the University of Manitoba, Canada. Smil works in an impressive array of languages; reads voraciously; skillfully engages in economic, political, and ecological analysis; and is fully global in his concerns and interests. He initially gained attention as a Sinologist, his 1984 book The Bad Earth: Environmental Degradation in China easily counting as pathbreaking if not prescient. More recently, Smil has emerged as a leading expert on global issues, his topics ranging from energy production to food provision to biospheric evolution. In general, he aims for a broad but highly educated audience. Readers of Global Catastrophes and Trends should be prepared for a good dose of unadorned scientific terminology and quantitative reasoning as well as a qualified style of argumentation in which both sides of heated debates are given due hearings.

As his current title indicates, Smil divides his consideration of the future into two parts: the first examining the possibility of catastrophic events, the second turning to the playing-out of current trends. Smil initially focuses on potential catastrophes of global scale, whether human-induced or generated by nature. He concludes that the risks of “fatal discontinuities” emerging from fully natural events are real but small. Some possible hazards, such as those posed by volcanic mega-eruptions, must be accepted as unavoidable but highly unlikely. Others, such as the cataclysmic collision of an asteroid or comet with Earth, could potentially be addressed. Threatening objects, for example, might be nudged away from Earth-intersecting trajectories by docked rockets. Smil urges NASA to reorient its mission toward gaining such capabilities.

Overall, Smil is less concerned about possible physical calamities than he is about epidemic diseases. He downplays the significance of new pathogens, such as the Ebola virus, to focus on novel strains of influenza, concluding that the likelihood of a flu pandemic in the next 50 years approaches 100%. In a similar vein, he worries more about the possibility of a “megawar” than he does about terrorism, the risks of which he considers overstated and manageable.

Major trends

After having dealt with possible catastrophes, Smil outlines the trends that he thinks will be most influential over the next 50 years. He wisely begins with energy, contending that the world’s most momentous near-term change will be its “coming epochal energy transition.” Smil will disappoint both environmentalists and high-tech enthusiasts, however, with his argument that the movement away from fossil fuels will be protracted because of the continuing economic advantages of oil, coal, and natural gas and the inherent limitations of solar, wind, and other green energy sources. He debunks alarmist concerns about the imminent exhaustion of oil, excoriates all forms of biomass-based energy as environmentally destructive, and dismisses the quest for fusion power as quixotic. Smil is guardedly supportive of nuclear fission but rejects it as any kind of panacea. In the end he calls for governmental programs to increase energy efficiency and reduce overall use.

From energy, Smil abruptly turns to geopolitics and international economics. His main goal here is to assess which parts of the world are likely to occupy positions of leadership 50 years from now. He argues that Europe, Japan, and Russia will probably see their influence diminish, largely because of their imploding populations and the resulting stresses generated by mass aging. In the case of Europe, he is also alarmed by growing immigrant populations, mostly Muslim, that are not experiencing social integration. Smil is no more sanguine about the prospects of the demographically expanding Islamic world, which he sees as producing a dangerous surfeit of unemployed young men. He is also concerned about “Muslim countries’ modernization deficit,” warning us that for “sleepless nights, think of a future nuclear Sudan or Pakistan.”

Overall, Smil contends that the two countries that will matter the most are China and the United States. Based on deeply entrenched trends, he foresees the continuing rise of China as the world’s new workshop, coupled with the gradual decline of the spendthrift, deindustrializing United States. By the end of the period in question, he thinks that the Chinese economy will outrank all others in absolute terms. Such economic prowess will translate into geopolitical clout; as early as 2020, Smil argues, China could match the United States in defense spending. He insists, however, that such trends indicate a likely rather than a preordained future. As a result, he takes care to summarize the weak points of the Chinese system that could disrupt the country’s ascent. Similarly, near the end of the book, Smil reconsiders the future position of the United States, this time stressing its economic and political resilience.

The final substantive chapter in Global Catastrophes and Trends turns to the world’s environmental predicaments, especially those posed by climate change. Although Smil accepts the reality of global warming, he emphasizes the uncertainty intrinsic to all climate forecasts. Because of the complex and poorly understood feedback mechanisms involved, he concludes that “even our most complex models are only elaborate speculations.” And although he does expect continued warming, he thinks that the overall effects will be manageable, with little damage done to crop production and a relatively small rise in sea level. Smil also cautions that excessive concern about climate distracts attention from other pressing environmental threats, including those generated by invasive species, water shortages, and the excessive use of nitrogen-based fertilizers. Basic biospheric integrity, he argues, ultimately underwrites all economic endeavors, yet is often taken for granted.

Global Catastrophes and Trends concludes by urging a calmly rational approach to crucial problems, avoiding extreme positions. Smil fears that society at large has embraced a kind of manic-depressive attitude in which “unrealistic optimism and vastly exaggerated expectations contrast with portrayals of irretrievable doom and indefensibly defeatist prospects.” He correspondingly calls for a strategy of prudent risk minimization that would emphasize “no-regrets options.” Reducing energy use and carbon-intensive commodities and services, developing new antiviral drugs, protecting biodiversity, and guarding against asteroid collisions are all, Smil argues, not only possible but economically feasible.

Prediction’s pitfalls

Smil’s moderate and rational approach to major issues of global significance has much to recommend it, as does his rejection of sensational predictions of impending collapse. He is also right to remind us that climate change is such an inordinately complex matter that we should avoid making unduly confident forecasts. But that said, global warming might prove far more damaging to both the economy and the biosphere than Smil expects, especially if the time horizon is extended beyond 50 years. On this issue, Smil seems to have adopted an attitude of optimism that many sober climatologists would find unwarranted.

Smil can also be faulted for occasionally ignoring his own warnings against extrapolating trends into the future. He thus captions a graph showing China’s economy surpassing that of the United States in 2040 with the bald assertion that China’s rapid growth will make it the world’s largest economy. Perhaps, but perhaps not, as Smil readily admits elsewhere. More problematic is Smil’s belief that some trends are so deeply embedded that they will prove highly resistant to change, leading to his assertion that low birthrates will essentially doom Europe, Russia, and Japan to relative decline. Yet in just the past two years, fertility rates in both France and Russia have significantly increased. It is not inconceivable that the birth dearth of the industrial world will eventually come to an end, as did the baby boom of the post–World War II era.

One of the more underappreciated forces of social change is that of conflict between the generations. Rising cohorts often differentiate themselves from those that came before, adopting new attitudes and embracing distinguishing behaviors. Such generational dynamics potentially pertain to a number of tendencies analyzed in Global Catastrophes and Trends. Many Muslim young people today react against their parents and grandparents by espousing a harsh form of Islam, but such an option will not necessarily be attractive to their own children 30 years from now. By the same token, can we be sure that the coming generation of Italian and Japanese youth will be as averse to reproducing themselves as were their parents’ peers? Perhaps they will respond to the impending crises of national diminution and aging by changing their behavior on this score. In any event, fertility rates will almost certainly continue to fluctuate, ensuring that any precise forecasts of future population levels will be wrong.

Another generator of unpredictability is the so-called unknown unknowns: future events or processes of potentially world-transforming magnitude that we cannot postulate or even imagine, given our current state of knowledge. Although Smil by no means denies the possibility of such occurrences, I think he gives them inadequate attention. As such, he could profitably engage with the work of Nassim Nicholas Taleb, perhaps the premier theorist of randomness and uncertainty. As Taleb shows in The Black Swan: The Impact of the Highly Improbable, the rare and unprecedented events that he calls “black swans” have repeatedly swept down to make hash out of many of the world’s most confident predictions.

But one could hardly expect Smil to deal with every form of unpredictability or with every author who has written on risk and uncertainty. The pertinent literature is vast, as is the subject matter itself. Smil has written a terse, focused work on the world’s main threats and trends, not an all-encompassing tome on futurology and its discontents. In doing so, he has digested and assessed a huge array of scientific studies, economic and political analyses, and general prognostications. For that he is to be commended, as he is for his dispassionate tone and rational mode of investigation. Readers interested in large-scale economic, political, environmental, and demographic tendencies will find Global Catastrophes and Trends a worthwhile book. Those suffering from sleepless nights as they fret about the world’s dire condition or enthuse about its coming techno-salvation, on the other hand, may find it an invaluable emollient.


Martin Lewis is a senior lecturer at Stanford University and the author of Green Delusions: An Environmentalist Critique of Radical Environmentalism.

Creating a National Innovation Foundation

The issue of economic growth is on the public agenda in this election year in a way that it has not been for at least 15 years. Policymakers have thus far been preoccupied with providing a short-term economic stimulus to counteract the economic downturn that has followed the collapse of the housing bubble. Yet the problem of how to restart and sustain robust growth goes well beyond short-term stimulus. The nation needs a firm foundation for long-term growth. But as of yet, there has been no serious public debate about how to create one. At best, there has been a rehash of 1990s debates about whether tax cuts or lower federal budget deficits are the better way to increase saving and (it is often assumed) stimulate growth.

A growing number of economists have come to see that innovation—not more saving—is the key to sustained long-term economic growth. Some economists have found that R&D accounts for nearly half of U.S. economic growth, and that R&D’s rate of return to the United States as a whole is as high as 30%. But R&D is not all there is to innovation. Properly conceived, innovation encompasses new products, new processes, and new ways of organizing production, along with the diffusion of new products, technologies, and organizational forms throughout the economy to firms and even entire industries that are not making effective use of leading technologies or organizational practices. Innovation is fundamentally about applying new ideas in organizations (businesses, nonprofits, and governments), not just about creating those ideas.

Innovation has returned to the federal policy agenda, most recently in the form of the America COMPETES Act signed into law in 2007. That law, unfortunately not yet fully funded, provides for much-needed increases in federal support for research and science and engineering education—key inputs into the process of innovation. But it does not go far enough. It does little to promote the demand for those inputs or to organize them in ways that lead to the commercial application of new ideas. More engineers and more R&D funding do not automatically create more innovation or, particularly, more innovation in the United States. In the mid-20th century, the nation could largely rely on leading firms to create research breakthroughs and turn them into new products, leaving the tasks of funding basic research and scientific and technological education to government. But it can no longer do so. Moreover, in the previous era, when the United States was the dominant technology-based economy, both old and new industries were domestic (for example, U.S. semiconductor firms replaced U.S. vacuum-tube firms). But a flat world means that more potential first movers will come from an increasingly large pool of technology-based economies, and that shifts in the locus of global competitive advantage across technology life cycles will occur with increasing frequency.

As a result, it is time for the federal government to make innovation a central component of its economic policy, not just a part of technology or education policy. To do so, it should create a National Innovation Foundation (NIF), which would be funded by the federal government and whose sole responsibility would be to promote innovation.

Growing innovation challenge

Since the end of World War II, the United States has been the world leader in innovation and high-value-added production. But now other nations are posing a growing challenge to the U.S. innovation economy, and increasingly, services as well as goods are subject to international competition. Because the United States cannot and should not try to maintain its standard of living by competing with poorer countries through low wages and lax regulations, it will have to compete in two other ways: by specializing in innovation-based goods and services that are less cost-sensitive, and by increasing productivity sufficiently to offset the lower wages paid in countries such as India and China. Both strategies rely on innovation: the first on product innovation and the second on process and organizational innovation. These same strategies are essential for maintaining the U.S. competitive position relative to other economically advanced countries.

However, there is disturbing evidence that the nation’s innovation lead is slipping. Companies are increasingly shifting R&D overseas. Between 1998 and 2003, investment in R&D by U.S. majority-owned affiliates increased twice as fast overseas as it did at home (52% versus 26%). In the past decade, the share of U.S. corporate R&D sites located in the United States declined from 59% to 52%, while the share located in China and India increased from 8% to 18%. The United States’ shares of worldwide total domestic R&D spending, new U.S. patents, scientific publications and researchers, and bachelor’s and new doctoral degrees in science and engineering all fell between the mid-1980s and the beginning of this century. The United States ranks only 14th among countries for which the National Science Foundation (NSF) tracks the number of science and engineering articles per million inhabitants. It ranks only seventh among countries in the Organization for Economic Co-operation and Development in the percentage of its gross domestic product (GDP) devoted to R&D, behind Sweden, Finland, Japan, South Korea, Switzerland, and Iceland, and barely ahead of Germany and Denmark.

Why has the United States’ innovation lead been slipping? One reason is that the process by which R&D is financed and performed has changed. During the first four decades after World War II, large firms played a leading role in funding and carrying out all stages of the R&D process. Companies such as AT&T and Xerox did a substantial amount of generic technology research, as well as applied R&D, in house. More recently, private funders of R&D have become more risk-averse and less willing to fund long-term projects. U.S. corporations, while investing more in R&D in this country overall, have shifted the mix of that spending toward development and away from more risky, longer-term basic and applied research. Similarly, venture capitalists, who have become a leading source of funding for cutting-edge science-based small firms, have shifted their funding away from startups and early-stage companies, which are riskier investments than later-stage companies, and have even begun to shift funding to other nations. In addition, as short-term competitive pressures make it difficult for even the largest firms to support basic research and even much applied research, firms are relying more on university-based research and industry/university collaborations. Yet the divergent needs of firms and universities can hinder the coordination of R&D between these two types of institutions.

Problems with the diffusion of innovation have also become more important. Outside of relatively new science-based industries such as information technology and biotechnology, many industries, including construction and health care, lag in adopting more productive technologies. Regardless of their industry, many small and medium-sized firms lag in adopting technologies that leading firms have used for decades. This is perhaps most visible in the manufacture of durable goods, where small and medium-sized suppliers have lagged behind their larger customers in adopting waste-reducing lean production techniques. Although smaller firms have long been late adopters of new technologies, the problem was less serious in an era when large firms manufactured most of their own components and designed products for their outside suppliers. With today’s more elaborate supply chains, technological lag by suppliers is a more serious problem for the U.S. economy.

Finally, geographic clustering—the tendency of firms in the same or related industries to locate near one another—enables firms to take advantage of common resources, such as a workforce trained in particular skills, technical institutes, and a common supplier base. Clustering also facilitates better labor-market matching and the sharing of knowledge, thereby promoting the creation and diffusion of innovation. It exists in such diverse industries and locations as information technology in Silicon Valley, autos in Detroit, and insurance in Hartford, Connecticut. Evidence suggests that geographic clustering may have become more important for productivity growth during the past three decades. Yet because the benefits of clustering spill over beyond the boundaries of the firm, market forces produce less geographic clustering than society needs, and firms have little incentive to collaborate to meet shared needs, such as worker training to support new technologies or ways of organizing work.

Federal innovation policy should respond directly to these innovation challenges. It should help fill the financing gaps in the private R&D process, particularly for higher-risk, longer-term, and more-generic research. It should spur collaboration between firms and research institutions such as universities, colleges, and national laboratories. It should help speed the diffusion of the most productive technologies and business practices by subsidizing the training of workers and managers in the use of those technologies and practices and by giving firms (especially small and medium-sized ones) the information and assistance they need to adopt them. There is also a growing need for government to encourage the development of industry clusters, as governments such as that of China have deliberately done as a way of reducing costs and improving productivity. The federal government should do all of these things in an integrated way, taking advantage of the complementarities among these activities to create a robust innovation policy that can make a real contribution to long-term economic growth.

Current federal policy does little to address the nation’s innovation challenges. Most fundamentally, the federal government does not have an innovation policy. It has a basic science policy (supporting basic scientific research and science and technology education). It has an intellectual property policy, carried out through the Patent and Trademark Office. It has agencies and programs that promote innovation in specific domains as a byproduct of agencies and missions that are directed at other goals (for example, national defense, small business assistance, and energy production). It even has a few small programs that are designed to promote various types of commercial innovation. But this activity does not add up to an innovation policy. Innovation-related programs are fragmented and diffuse, scattered throughout numerous cabinet departments, including Commerce, Labor, Energy, and Defense, and throughout a host of independent agencies, such as NSF and the Small Business Administration. There is no federal agency or organization that has the promotion of innovation as its sole mission. As a result, it is not surprising that innovation is rarely thought of as a component of national economic policy.

Existing federal innovation efforts are underfunded as compared with efforts in other economically advanced nations. In fiscal year 2006, the U.S. government spent at most a total of $2.7 billion, or 0.02% of GDP, on the principal programs and agencies that are most centrally concerned with commercial innovation. If the federal government were to invest the same share of GDP in these programs and agencies as many other nations do in comparable organizations, it would have to invest considerably more: $34 billion per year to match Finland, $9 billion to match Sweden, $5.4 billion to match Japan, and $3.6 billion to match South Korea. Some U.S. programs, particularly the Technology Innovation Program and its predecessor, the Advanced Technology Program, and the Manufacturing Extension Partnership Program, have had their budgets drastically reduced (from already low levels), largely because the current administration has tried to have them abolished.
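The relative scale of these comparisons can be made concrete with a back-of-the-envelope calculation derived entirely from the figures above; the implied GDP is inferred from the 0.02% figure rather than taken from official statistics, so the result is illustrative only:

\[
\text{implied U.S. GDP} \approx \frac{\$2.7\ \text{billion}}{0.0002} \approx \$13.5\ \text{trillion},
\qquad
\text{implied Finnish effort} \approx \frac{\$34\ \text{billion}}{\$13.5\ \text{trillion}} \approx 0.25\%\ \text{of GDP}.
\]

By the same arithmetic, the Swedish, Japanese, and South Korean benchmarks work out to roughly 0.07%, 0.04%, and 0.03% of GDP, respectively, against the 0.02% currently spent by the United States.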

The only federal support for technology diffusion comes through the Manufacturing Extension Partnership Program, an outstanding but underfunded program whose existence has been threatened during the current administration. But although federal support for technology diffusion in manufacturing is meager, in services it is almost nonexistent, as is federal support for innovation in services generally. Yet services, which include everything other than agriculture, mining, and manufacturing, account for four out of every five civilian jobs.

Likewise, there is little federal support for regional industry clusters. In fact, few federal innovation promotion programs engage in any way with state or regional efforts to spur innovation. Yet state governments and regional partnerships of businesses, educational institutions, and other actors involved in innovation have developed many effective efforts to promote innovation. These efforts are relatively small-scale, amounting to only $1.9 billion annually, and they are, understandably, undertaken with only the interests of states and regions, not the interests of the nation as a whole, in mind. Federal support, with appropriate federal incentives, could remedy both of these defects.

Charting a new federal approach

Establishing a NIF should lie at the heart of federal efforts. NIF would be a nimble, lean, and collaborative entity devoted to supporting firms and other organizations in their innovative activities. The goal of NIF would be straightforward: to help firms in the nonfarm U.S. economy become more innovative and competitive. It would achieve this goal by assisting firms with such activities as joint industry/university research partnerships, technology transfer from laboratories to businesses, technology-based entrepreneurship, industrial modernization through the adoption of best-practice technologies and business procedures, and incumbent worker training. By making innovation its mission, funding it adequately, and focusing on the full range of firms’ innovation needs, NIF would be a natural next step in advancing the innovation agenda that Congress put in place when it passed the America COMPETES Act.

Because flexibility should be one of NIF’s key characteristics, it would be counterproductive now to overspecify NIF’s operational details. NIF would determine how best to organize its activities; it would not be locked into a particular programmatic structure. Nonetheless, there are some core functions that NIF should undertake.

  • Catalyze industry/university research partnerships through national-sector research grants. To begin, NIF would offer competitive grants to national industry consortia to conduct research at universities—something the government does too little of now. These grants would enable federal R&D policy to break free of the dominant but unproductive debate over science and technology policy, which has tended to pit people who argue that the federal government should fund industry to conduct generic precompetitive R&D against those who maintain that money should be spent on curiosity-directed basic research at universities. This is a false dichotomy. There is no reason why some share of university basic research cannot be oriented toward problems and technical areas that are more likely to have economic or social payoffs for the nation. Science analyst Donald Stokes has described three kinds of research: purely basic research (work inspired by the quest for understanding, not by potential use); purely applied research (work motivated only by potential use); and strategic research (work inspired by both potential use and fundamental understanding). Moreover, there is widespread recognition in the research community that drawing a hard line between basic and applied research no longer makes sense. One way to improve the link between economic goals and scientific research is to encourage the formation of industry research alliances that fund collaborative research, often at universities.

Currently, the federal government supports a few sector-based research programs, but they are the exception rather than the rule. As a result, a key activity of NIF would be to fund sector-based research initiatives. NIF would offer competitive Industry Research Alliance Challenge Grants to match funding from consortia of businesses, businesses and universities, or businesses and national labs. These grants would resemble those made under the National Institute of Standards and Technology’s (NIST’s) Technology Innovation Program and NSF’s innovation programs (Partnerships for Innovation, Industry-University Cooperative Research Centers, and Engineering Research Centers). However, NIF grants would have an even greater focus on broad sectoral consortia and would allow large firms as well as small and mid-sized ones to participate. Moreover, like the NIST and NSF innovation programs, NIF’s work in this area would be industry-led, with industry coming to NIF with proposals.

To be eligible for NIF matching funding, firms would have to form an industry-led research consortium of at least five firms, agree to develop a mid-term (3- to 10-year) technology roadmap that charts out generic science and technology needs that the firms share, and provide at least a dollar-for-dollar match of federal funds.

This initiative would increase the share of federally funded university and laboratory research that is commercially relevant. In so doing, it would better adjust the balance between curiosity-directed research and research more directly related to societal needs.

NIF would also support a productivity enhancement research fund to support research into automation, technology-enabled remote service delivery, quality improvement, and other methods of improving productivity. Automation (robotics, machine vision, expert systems, voice recognition, and the like) is a key to boosting productivity in both manufacturing and services. Technology-enabled remote service delivery (for example, home health monitoring, remote diagnosis, and perhaps even remote surgery) has considerable potential to improve productivity in health care and other personal service industries. A key function of NIF would be to fund research at universities or joint business/university projects focused on increasing the efficiency of automated manufacturing or service processes. NIF would support early-stage research into processes with broad applications to a range of industries, not late-stage research focused on particular companies. It also would fund a service-sector science initiative to conduct research into productivity and innovation in the nearly 80% of the economy that is made up of service industries.

  • Expand regional innovation promotion through state-level grants to fund activities such as technology commercialization and entrepreneurial support. The design of a more robust federal innovation policy must consider, respect, and complement the plethora of energetic state and local initiatives now under way. Although the federal government has taken only very limited steps to promote innovation, state governments and state- and metropolitan-level organizations have done much more. They engage in a variety of technology-based economic development activities to help spur economic growth. They spur the development of cutting-edge science-based industries by boosting research funding. Moreover, they try to ensure that research is commercialized and that good jobs are created, both in cutting-edge science-based industries and in industries engaging in related diversification. States have established initiatives to help firms commercialize research into new business opportunities. They also promote upgrading and project-based innovation by helping existing firms become more competitive.

Although already impressive, these state and regional efforts could do even more, and their current activities could be made more effective. Because the benefits of innovation often cross state borders, take at least a few years to materialize, or both, state elected officials have less incentive to invest in technology-based economic development than in other activities, such as industrial recruitment, that yield immediate benefits within the state.

Moreover, any effective national innovation initiative will need to find a way to assist the tens of thousands of innovation-focused small and mid-sized firms, as well as larger firms that have specific regionally based innovation needs they cannot meet on their own. Unlike small nations, the United States is too big for the federal government to play an effective direct role in helping these firms. State and local governments and regional economic development organizations are best positioned to do this.

As a result, without assistance from the federal government, states will invest less in these kinds of activities than is in the national interest. NIF would compensate for this political failure by offering state Innovation-Based Economic Development (IBED) Partnership Grants to help states expand their innovation-promotion activities. The state IBED grants would replace part of the grantmaking that the NIST and NSF innovation programs currently perform but would operate exclusively through the states.

To be eligible for NIF funding, states would need to provide at least two dollars in actual funding for every NIF dollar they receive. Rotating panels of IBED experts would review proposals. NIF staff would also work in close partnership with states to help ensure that their efforts are effective and in the national as well as the state interest.

  • Encourage technology adoption by assisting small and mid-sized firms in implementing best-practice processes and organizational forms that they do not currently use. Although NIF’s national-sector grants and state IBED grants would largely support new-to-the-world, sometimes radical product and process innovation, its technology diffusion work would focus more on the diffusion of existing processes and organizational forms to firms (mostly small and mid-sized) that do not currently use them. This effort would incorporate and build on NIST’s Manufacturing Extension Partnership (MEP) program, the only federal program whose primary purpose is to promote technology diffusion among such firms. The NIF effort would follow the MEP model of a federal/state partnership. One or more technology diffusion centers would be located in each state. Like existing MEP centers, the centers could be operated by state or private organizations. States would submit proposals to NIF for the operation of these centers, and NIF would evaluate the centers periodically. Some specific changes to the current MEP program would enable NIF to serve as a more comprehensive and more effective promoter of technology diffusion for both manufacturing and service industries. NIF would expand the scope of the MEP beyond its current emphasis on applying waste-reducing, quality-improving lean production techniques to the direct production of manufactured goods. It would do so by helping improve productivity in some service activities where lean production could be applied.

In addition to supporting efforts that assist firms directly, NIF would analyze opportunities and challenges regarding technological, service-delivery, and organizational innovation in service industries such as health care, construction, residential real estate, financial services, and transportation. It also might recommend steps that federal and state governments could take to help spur innovation, including the digital transformation of entire sectors through the widespread use of information technology and e-business processes. Such steps might include revising procurement practices, modifying regulations, and helping spur standards development.

Emphasizing accountability

To guide its own work and provide firms and government agencies with the information they need to promote innovation, NIF would create methods of measuring innovative activity and carry out research on innovation. It would be the primary entity for conceptualizing how innovation should be measured and the primary advocate within the federal government for measuring innovation. It would help the major federal statistical agencies (the Census Bureau, Bureau of Economic Analysis, and Bureau of Labor Statistics) and NSF develop operational measures of innovation that can be included in new or existing economic data sources.

NIF would also work with other agencies to improve the measurement of productivity and innovation: better measures of output in the service sector; of total factor productivity (the most comprehensive measure of productivity, which accounts for capital, materials, energy, and purchased services, in addition to labor, as productive inputs); and better bottom-up estimates of gross product and productivity for counties and metropolitan areas.
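Total factor productivity is conventionally estimated as a residual: output growth less the cost-share-weighted growth of all measured inputs. A standard KLEMS-style growth-accounting identity illustrates the idea (the notation is illustrative, not a formula prescribed by the statistical agencies or by the NIF proposal):

\[
\Delta \ln A = \Delta \ln Y - s_K\,\Delta \ln K - s_L\,\Delta \ln L - s_E\,\Delta \ln E - s_M\,\Delta \ln M - s_S\,\Delta \ln S,
\]

where \(Y\) is output; \(K\), \(L\), \(E\), \(M\), and \(S\) are capital, labor, energy, materials, and purchased services; each \(s_i\) is that input’s share of total cost; and \(A\), the residual, is total factor productivity, the portion of output growth not explained by growth in measured inputs.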

In addition, NIF would be the federal government’s major advocate for innovation and innovation policy. As a key step, it would produce an annual Innovation Report, akin to the annual Economic Report of the President. More generally, NIF’s advocacy role in support of innovation would resemble the Small Business Administration’s role as a champion for small business. NIF would seek to provide input into other agencies’ decisions on programs that are likely to affect innovation. However, unlike the Small Business Administration, NIF would not have any authority to intervene in those decisions.

Compelling need, but obstacles

In the current fiscal climate, it will be difficult for the federal government to launch major new investment initiatives, especially because strong political forces on either side of the aisle oppose raising taxes or cutting other spending. Nevertheless, the compelling need to boost innovation and productivity merits a substantial investment in NIF. The federal government should fund it at an initial level of $1 billion per year, but approximately 40% of this funding would come from consolidating existing innovation programs and their budget authority into NIF. (Rolled up would be the NIST and NSF innovation programs, as well as the Department of Labor’s WIRED program. Federal expenditures on all of the programs that NIF would replace or incorporate total $344 million. In addition, the America COMPETES Act provides a total of about $88 million more in 2010 than in 2006 for the programs that will be folded in. Therefore, current and already-planned expenditures on the programs whose work would be included in NIF total $432 million.) After several years, NIF could easily be ramped up to a budget of $2 billion, a level that would make its budget approximately one-third the size of NSF’s. In addition, because of its strong leveraging requirements from the private sector and state governments, NIF would indirectly be responsible for ensuring that states and firms spent at least one dollar on innovation for every dollar that NIF spent.
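The “approximately 40%” figure follows directly from the numbers in the parenthetical above; made explicit, the arithmetic is:

\[
\$344\ \text{million} + \$88\ \text{million} = \$432\ \text{million},
\qquad
\frac{\$432\ \text{million}}{\$1\ \text{billion}} \approx 43\%,
\]

which the proposal rounds to roughly 40% of the initial $1-billion budget.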

NIF could be organized in several ways. It could be organized as part of the Department of Commerce, as a government-related nonprofit organization, as an independent federal agency, or as an arm of the Office of the President. But whatever way it is organized, it should remain relatively lean, employing a staff of approximately 250 individuals. It should recruit the best practitioners and researchers whose expertise overlaps in the areas of productivity, technology, business organization and strategy, regional economic development, and (to a lesser extent) trade. Like NSF, NIF would be set up to allow some staff members to be rotated into the agency for limited terms from outside of government and to allow some permanent NIF staff members to go on leave for limited terms to work for private employers.

Already there is legislation in the Senate to create an NIF-like organization. The National Innovation Act, introduced by Senators Hillary Clinton (D-NY) and Susan Collins (R-ME), would create a National Innovation Council, housed in the Office of the President and consolidating the government’s primary innovation programs.

Now more than ever, the U.S. standard of living depends on innovation. To be sure, companies are the engines of innovation, and the United States has an outstanding market environment to fuel those engines. Yet firms and markets do not operate in a vacuum. By themselves they do not produce the level of innovation and productivity that a perfectly functioning market would. Even indirect public support of innovation in the form of basic research funding, R&D tax credits, and a strong patenting system—important as it is—is not enough to remedy the market failures from which the nation’s innovation process suffers. At a time when the United States’ historic lead in innovation is shrinking, when more and more high-productivity industries are in play globally, and when other nations are using explicit public policies to foster innovation, the nation cannot afford to remain complacent. Relying solely on firms acting on their own will increasingly cause the United States to lose out in the global competition for high-value-added technology and knowledge-intensive production.

The proposed NIF would build on the few federal programs that already succeed in promoting innovation and borrow the best public policy ideas from other nations to spur innovation in the United States. It would do so through a combination of grants, technical assistance, information provision, and advocacy. It would address the major flaws that currently plague federal innovation policy and provide the United States with a state-of-the-art initiative for extending its increasingly critical innovation prowess.

Yet NIF would neither run a centrally directed industrial policy nor give out “corporate welfare.” Rather than taking the view that some industries are more important than others, NIF is based on the idea that innovation and productivity growth can happen in any industry and that the nation benefits regardless of the industry in which they occur. It would work cooperatively with individual firms, business and business/university consortia, and state governments to foster innovation that would benefit the nation but would not otherwise occur. In a world of growing geographic competition for innovative activities, these economic and political actors are already making choices among industries and technologies to serve their own interests. NIF would give them the resources they need to make those choices for the benefit of the nation as a whole.

Without the direct federal spur to innovation that NIF would offer, productivity growth will be slower. Wages will not rise as rapidly. U.S. companies will introduce fewer new products and services. Other nations have realized this and established highly effective national innovation-promotion agencies. It is time for the United States to do the same. By combining the nation’s world-class market environment with a world-class public policy environment, the United States can remain the world’s innovation leader in the 21st century.

Archives – Fall 2008

DENNIS ASHBAUGH, Marlyn, Mixed media on canvas, 74 × 80 inches, 2000.

Marlyn

Dennis Ashbaugh was among the first contemporary artists to incorporate genetic imagery into his work. His large-scale paintings based on autoradiographs fuse the traditions of abstract art with cutting-edge scientific imaging technology. He is interested in technology’s ability to translate a hidden reality into a visible pattern, to reveal the inner code beneath appearances. Ashbaugh is a Guggenheim Fellowship recipient. The Institute of Modern Art in Valencia, Spain, mounted a retrospective of his work in the fall of 2007, following an exhibit of his work at the National Academy of Sciences.