Forum – Spring 2012

More and better U.S. manufacturing

Stephen Ezell’s “Revitalizing U.S. Manufacturing” (Issues, Winter 2012) makes two vital points: The United States must not discount the importance of manufacturing to our economy, and we need to employ the types of policies being used in other industrialized nations to nurture small and medium-sized businesses in that sector. The widely held notion that U.S. manufacturing’s decline is due to cheap labor in Third World countries is flawed and too fatalistic. That is only part of the story. Indeed, we can’t win the “race to the bottom” to churn out the cheapest commodity. We need to look instead at what’s being done in countries such as Germany, Japan, Great Britain, and Canada to develop manufacturing and create jobs.

For one, we need to move up the value chain, with less manufacturing based on price-sensitive goods and more on profitable, specialized niche products. Our company, Marlin Steel, formerly produced baskets to supply bagel bakeries. Precision wasn’t so critical, but we were getting killed on price competition from China. We shifted into specialized wire and sheet metal products to serve industries such as aerospace, automotive, and pharmaceuticals. Since then, we’ve been much more resilient, profitable, and able to treat our employees better. It wasn’t easy—we had to reengineer our entire operation. Making a wire container that was plus or minus a bagel width was no longer good enough. Now our accuracy is measured in micrometers, aided by robots and lasers. About 60% of U.S. manufacturing is still low-tech. That’s too high.

Second, exports are critical to the future of U.S. manufacturing. About 11% of U.S. industrial output is exported. That compares to 28% in Canada (roughly our percentage at Marlin now) and 41% in Germany. I think the reason for the relatively low percentage of exports in the United States is simply that it has always been easier to sell your product to Ohio and Massachusetts than to Istanbul and China. Bridging different languages, customs, and currencies takes greater effort, but operating in the global economy is crucial to growing U.S. manufacturing. We can do this: President Obama set a goal two years ago of doubling exports by 2015. We’re on pace to meet that target, but it’s only a start.

Third, we need to focus more on education and training. The National Association of Manufacturers teamed up with ACT (formerly the American College Testing Program) to create a Manufacturing Skills Certification System. This certificate can help manufacturers identify potential job candidates who’ve demonstrated skills such as mastery of basic math and the ability to read a blueprint. We’ve had applicants with high-school diplomas who couldn’t perform fifth-grade math. About 30 states have adopted this certification system. Others need to.

We’re still the top manufacturer in the world by a factor of 2 over China. But, as Ezell notes, we need to bolster our commitment and shift direction to create more opportunity and jobs. Manufacturing remains essential to the prosperity and security of our nation.

DREW GREENBLATT

President

Marlin Steel

Baltimore, Maryland

[email protected]


Stephen Ezell makes a compelling case for federal initiatives to retool U.S. manufacturing, particularly those aimed at small and mid-sized enterprises. If federal programs are to have a meaningful effect, mechanisms must be in place to cost-effectively link the federal intent to the local manufacturing realities. The ultimate measure of success is the willingness of industry to co-invest with federal and state entities.

Manufacturing is a broad sector with technical, financial, and regional differences. The ability of any federal manufacturing initiative to deliver game-changing results will depend heavily on the delivery mechanism. Public-private partnerships run by enlightened manufacturing personnel and seeded with federal funding can be extremely effective provided that there are operative feedback mechanisms from the local level to the federal level. The local need must be able to rapidly influence the federal strategy, while the federal managers focus on program integrity, national strategy, and performance.

As policymakers compare the effectiveness of our value chains with those in countries with more centralized governments, we need to consider legislative changes that will improve the effectiveness of public-private partnerships. Whether in procurement law, intellectual property law, or aspects of antitrust legislation, the nation needs value chains that can compete with those employed by the companies manufacturing our iPhones (New York Times, “How the United States Lost Out on iPhone Work,” January 22, 2012). If the federal government is to play an effective role in revitalizing our manufacturing value chains, the role of public-private partnerships in developing those value chains should be redefined and expanded.

MARK RICE

President

Maritime Applied Physics Corporation

Baltimore, Maryland

[email protected]


Stephen Ezell’s article reveals that the decline of U.S. manufacturing is a core structural obstacle to ending the recession in the United States. I say structural because it is not a cyclical problem. He lays to rest the erroneous claim that our economy can prosper with a predominantly service-sector approach.

Ezell clearly shows how successful countries support and maintain manufacturing through institutional strategies that are absent here in the United States. However, I would add that successful trading and manufacturing countries also use strategic mercantilism and protectionism—up to and including state capitalism—as an essential component of their strategies.

The so-called Washington Consensus on trade holds that if we lower tariff barriers, others will follow and we will all be better off. As we removed conditions on foreign state-owned and private companies investing and selling in the United States, we expected to gain market opportunities elsewhere. In practice, this policy has resulted in U.S. unilateral disarmament (or, if you like, nonreciprocity), because other countries replaced tariffs with other, harder-to-address barriers. Tariffs, however, are about 10% of the issue, not 100%, as some nontrade specialists believe.

The point is that successful manufacturing and trading countries such as Japan, China, Brazil, South Korea, and Germany combine Ezell’s manufacturing strategies with a host of mercantilist tools. They protect their existing and cutting-edge industries while working hard to penetrate the largest, richest market in the world: the U.S. consumer market.

The tools are many, but here are a few. Currency manipulation was key to Japan’s and Korea’s 1970s and 1980s growth and to China’s more modern growth. Value-added taxes are a legitimate tax tool used by other countries, but act as a global 18% tariff on U.S. exports, which pay that tax at foreign borders. State-owned companies are becoming more, not less, prevalent in China. The state capitalism model is not responsive to the market and is massively subsidized with inputs of low-cost credit, energy, technology, and guaranteed domestic sales to drastically enhance export predation of the U.S. market.

From the 1800s (Alexander Hamilton) through World War II, which was America’s fastest growth period, the United States focused its domestic and trade programs strategically on building a broad, complementary host of massive industries. We have forgotten those lessons, but others have learned from us. The basic point is that trade policy is a key, not-to-be-ignored component of other countries’ manufacturing agenda. If we are to compete, we have to plan accordingly, without the comfort of ideological free-trade bromides.

MICHAEL STUMO

Chief Executive Officer

Coalition for a Prosperous America

www.prosperousamerica.org

www.tradereform.org

[email protected]


Not only does Stephen Ezell recognize the critical role that manufacturing plays in supporting a middle-class economy, but he asks the right question: Why is the United States alone among top industrialized nations in that it has no plan to strengthen its manufacturing base?

Ezell is correct that the United States finds itself at a serious crossroads. The National Science Board (NSB), the governing body of the National Science Foundation, recently issued an alarming report finding that the United States has lost 28% of its high-technology manufacturing jobs over the past decade and is losing its lead in science and technology in the global marketplace. The NSB says that one of the most dramatic signs of this trend is the loss of 687,000 high-technology manufacturing jobs since 2000.
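Taken together, those two figures imply a rough baseline (a back-of-the-envelope reading of the NSB numbers, not a figure taken from the report itself):

\[
\frac{687{,}000\ \text{jobs lost}}{0.28} \approx 2.45\ \text{million high-technology manufacturing jobs circa 2000},
\]

leaving on the order of 1.8 million such jobs a decade later.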

The United States urgently needs a cohesive national strategy to rebuild U.S. manufacturing and restore the nation’s innovative edge. Ezell rightly cites a recent charter adopted by the Information Technology and Innovation Foundation (ITIF). Drawing on input from a host of business and labor organizations, the ITIF distilled bipartisan agreement into a core set of policy actions that must be undertaken to renew U.S. manufacturing. The resulting Charter for Revitalizing American Manufacturing suggests some key steps, including the following: expanded worker skills and training; a renewed focus on R&D and innovation; a stronger trade policy to combat mercantilist dumping, subsidies, and currency manipulation; and an effective tax policy to support U.S. manufacturing.

Manufacturing’s critical role in our country’s economic future should be beyond dispute. But what’s not clear is whether our elected officials will move swiftly to implement a set of policies to revitalize U.S. manufacturing. Key steps such as those identified by the ITIF offer an important starting point.

SCOTT N. PAUL

Executive Director

Alliance for American Manufacturing

Washington, DC


Stephen Ezell, basing his article on the work he did at the Information Technology and Innovation Foundation with Robert Atkinson, is looking at a crucial issue: what needs to be done to ensure that the United States keeps a competitive edge in innovation-based manufacturing. As Ezell notes, from the national point of view, to paraphrase the work of John Zysman and Stephen Cohen, manufacturing matters. It matters, as Ezell explains, for five reasons: (1) its role in achieving balanced terms of trade; (2) it is a key supplier of a large number of above-average–paying jobs, specifically for mid- and low-skilled labor, the segment of our society that suffers most from the current financial debacle; (3) it is a principal source of the economy’s R&D and innovation activities; (4) the health of a nation’s manufacturing and service sectors is complementary; and, last but certainly not least, (5) manufacturing is essential to a country’s national security.

Indeed, a rarely spoken about but important facet of the aftermath of the financial crisis is that the wealthy economies that fared the best are those that tend to excel in advanced manufacturing, such as Germany, Denmark, and Finland. Furthermore, the bedrock of these countries’ success is their small and medium-sized enterprises (SMEs). It is therefore crucial that the United States think about ways to enhance, upgrade, and maintain the R&D intensity and competitiveness of our manufacturing SMEs. There are two key issues that make the case for supporting SMEs even more urgent than Ezell’s already urgent call to arms.

In the joint work that several of my colleagues and I are conducting with the Connect Innovation Institute of San Diego, we emphasize that one reason a healthy ecosystem of SMEs is more important than ever before is that the production of products, components, and services now occurs in discrete stages around the world. In the past, a manufacturing company such as Ford conducted almost every part of its production, from R&D to final assembly, including the manufacturing of many components, in house. Today, most of these activities are done in well-defined stages, by multiple companies, in many different locations. One of the least understood effects of this production fragmentation is that in order to excel, it is not enough for a single firm to be superior; it must be part of an ecosystem of superb collaborators, within which much of the innovation occurs, in particular the critical innovation that allows these products to be made, improved, and continuously differentiated. Hence, in order to excel as a country, we must develop and sustain, more than ever before, the ability of our manufacturing SMEs to innovate, grow, and prosper. It also means, however, that we should not think solely in terms of market failure but in terms of what my colleagues Josh Whitford and Andrew Schrank have termed network failure: systematic failures that prevent a whole network of companies from coming to market with superior products that retain U.S. competitiveness and terms of trade while increasing U.S. employment.

This system-wide failure of U.S. manufacturing led Ezell, and rightly so, to focus on the role of government, especially in giving detailed and superb suggestions for how to expand and upgrade the current Manufacturing Extension Partnership. Although this is crucial, it does not solve the other obstacle that manufacturing SMEs face: the scarcity of financial firms whose aim and specialty are to profitably invest in production innovation in the United States. Our system for financing innovation, with its overreliance on winner-take-all, high-stakes, short-term venture capital financing, is the world’s best at coming up with new products and technologies. However, it is one of the world’s worst at financing innovation in unglamorous areas that provide stable, solid growth over many years rather than instantaneous winning lottery tickets. Successful policy must address not only the government side but also the capital market side: We must have a financial system in which U.S. investors are able to profitably invest in production within U.S. borders.

DAN BREZNITZ

Associate Professor

College of Management

Sam Nunn School of International Affairs

Georgia Institute of Technology

Atlanta, Georgia

[email protected]


How to store spent nuclear fuel

In “Improving Spent-Fuel Storage at Nuclear Reactors” (Issues, Winter 2012), Robert Alvarez describes a lesson relearned at the Fukushima Daiichi plant in Japan. The reactors at Fukushima, and roughly one-third of our reactors, have spent-fuel pools located inside the same building that surrounds the containment and houses all the emergency pumps providing reactor core cooling and makeup.

This cohabitation allows reactor accidents to cascade into spent-fuel pool accidents and spent-fuel pool accidents in turn to trigger reactor accidents. For example, radiation levels inside this building during a reactor accident can prevent workers from entering to restore cooling of, or makeup to, the spent-fuel pool. Conversely, water escaping from a boiling spent-fuel pool can condense and drain down to the basements, disabling all the emergency pumps by submergence—if the elevated temperature and humidity have not already done so.

Alvarez does more than merely describe a safety problem. He defines its ready solution. Five years after discharge from reactor cores, spent fuel can and should be transferred to dry storage. The accelerated transfer will result in more spent fuel being in dry storage, which translates into an increased dry storage risk. But that risk increase is more than offset by the risk reduction achieved in the spent-fuel pool. As Alvarez states, the typical spent-fuel pool for this type of reactor contains 400 to 500 metric tons of irradiated fuel. A single dry cask contains only 15 to 20 metric tons. Thus, unless something causes many dry casks to nearly simultaneously fail, the radioactivity emitted from a spent-fuel pool accident is significantly greater than from a dry cask accident.
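A back-of-the-envelope ratio using only the inventory figures cited above (a rough sketch, not a precise accounting) makes the disparity concrete:

\[
\frac{\text{typical pool inventory}}{\text{single cask inventory}} \approx \frac{400\text{--}500\ \text{metric tons}}{15\text{--}20\ \text{metric tons}} \approx 20\ \text{to}\ 30,
\]

so a single densely packed pool holds the irradiated fuel of roughly 20 to 30 dry casks.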

The relatively higher hazard from irradiated fuel in spent-fuel pools as compared to dry casks has been known for many years. After our 9/11 tragedy, the Nuclear Regulatory Commission (NRC) issued a series of orders requiring plant owners to upgrade security measures. The first orders went out for greater protection of the reactors, followed by orders seeking protection of spent-fuel pools, and trailed months later by orders for better security of dry storage facilities. The NRC knows the relative hazards.

Just as the Fukushima Daiichi tragedy rediscovered this spent-fuel storage problem, it also revisited its solution. There were nearly 400 irradiated fuel assemblies in dry storage at Fukushima when it encountered the one-two punch from the earthquake and tsunami. The tsunami’s water partially flooded the dry storage facility, temporarily replacing the normal cooling by air convection with water cooling. When the floodwaters receded, the normal cooling process restarted automatically. There was no need for helicopters to drop water or truck-mounted water cannons to spray water to prevent damage to irradiated fuel in dry storage.

If irradiated fuel in a spent-fuel pool causes an accident or increases one’s severity, shame on us. We know the problem and its solution. We have no excuse for failing to implement the known fix.

DAVID LOCHBAUM

Director

Nuclear Safety Project

Post Office Box 15316

Chattanooga, Tennessee


Fuel-use reduction strategies

There are three basic ways to reduce fuel use in the transportation sector in the United States. The first is corporate average fuel economy (CAFE) standards, with which the U.S. government tells the automakers what cars to sell. The second is gas taxes, with which the U.S. government raises taxes on motorists. The third is what I would call the OPEC option, in which foreign governments raise fuel prices because they can, or because they need more money for domestic needs, or because they start running short of oil supply.

The fine article by Emil Frankel and Thomas Menzies (“Reducing Oil Use in Transportation,” Issues, Winter 2012) reinforces the obvious point that the U.S. public strongly prefers the first strategy to the second one: “Don’t tax me, don’t tax thee, tax the man behind the tree.” The authors also explicate more subtly that the CAFE strategy may not prove all it’s cracked up to be. But then, regulations seldom are. The automakers have made Swiss cheese out of the first two words in CAFE: “corporate” and “average.”

I share the authors’ weariness and skepticism about the stale debate over raising fuel taxes. It’s exactly what we should be doing for a host of reasons, not least being national security and our massive infrastructure and budget deficits. But the state of our politics today dooms any self-evident strategy to oblivion.

I do think the article falls short in not providing enough attention to the third option: OPEC. After all, petroleum is a finite natural resource, the Middle East is as politically unstable as it’s been in generations, China and India continue to drive worldwide demand for oil, and the list goes on. The question on our minds should be this: Are we ready for a sharp and sustained rise in fuel prices?

I would answer that question in the negative and suggest at least two policy prescriptions we ought to pursue in response. First, I think Frankel and Menzies are too quick to dismiss the importance of public transit as an alternative to auto travel, especially if gasoline costs $7 per gallon instead of $3.50. Second, before we give the gas tax a proper burial, let’s consider converting the existing excise tax to a sales tax on gasoline, initially on a revenue-neutral basis. If average gas prices rise over the long term, as they are expected to do, such a sales tax on gasoline could not only hold its own against inflation but raise substantial new sums for infrastructure or other public purposes.
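A rough illustration of that conversion, assuming for the sake of argument the 18.4-cent-per-gallon federal excise and the gasoline prices mentioned above:

\[
\text{revenue-neutral sales tax rate} \approx \frac{\$0.184/\text{gallon}}{\$3.50/\text{gallon}} \approx 5.3\%,
\]

and if gasoline later reached \$7 per gallon, the same 5.3% rate would yield roughly 37 cents per gallon, about double today’s per-gallon take, with no further legislative action.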

STEVE HEMINGER

Executive Director

Metropolitan Transportation Commission

Oakland, California

[email protected]


Emil Frankel and Thomas Menzies make a strong case as to why the United States should complement more rigorous fuel economy standards with an increase in fuel taxes, in order to reward people for driving less and ultimately reduce how much oil we use for transportation. They also acknowledge a major obstacle: the lack of political will. “Indeed,” they write, “it is difficult to envision a scenario in which policymakers could ever generate public support for higher fuel taxes without offering a compelling plan for use of the revenue.”

That specific plan is the critical missing piece. Americans are tired of being asked to give more of their hard-earned dollars for . . . what? With no clear vision for our nation’s transportation network and no performance metrics in place to measure return on investments, it’s no wonder taxpayers are leery of increasing the gas tax.

State and regional transportation decisionmakers are proving that if they articulate the goals, plans, and criteria for measuring return on investment, voters are willing to share the cost of maintaining and expanding roads and transit. Examples abound:

  • The Illinois Tollway Board approved a 35-cent toll hike late last year to fund a $12 billion, 15-year capital plan for infrastructure improvements throughout the Chicago region. Predictably, motorists weren’t jumping for joy, but the media reported quite a few drivers who said things like, “Since it means better roads, it will be a plus for me.”
  • Despite being faced, as many cities have been, with reduced state funding, Oklahoma City has been able to maintain and expand infrastructure using revenues from a temporary, 1-cent increase in the sales tax. How did city leaders build the political will to levy this tax? Leaders including Mayor Mick Cornett credit their success to explaining to voters what they planned to do with the funding, doing it (without incurring additional debt), and then ending the sales tax as promised. City voters are so bought into these new investments that they’ve agreed on multiple occasions to resurrect the 1-penny tax to fund new projects.
  • Minnesota converted nine miles of carpool lanes along I-394 into toll lanes, guaranteeing that drivers can travel at about 45 mph nearly 95% of the time. More than 60% of residents in the Twin Cities area support the program, and more than 90% of toll lane users report a very high level of satisfaction. Because of the success of the I-394 conversion, the federal government provided Minnesota with a $133 million grant to expand the program.

To fight gridlock and keep our cities and regions competitive, the United States needs a new approach to transportation planning and investment, one that maximizes the use of existing infrastructure, evaluates and captures the value of new investments, and taps creative financing tools. The public is ready to be leveled with and is willing to invest in solutions that deliver results. It’s time we provided a strong vision and plan they can get behind.

MARYSUE BARRETT

President

Metropolitan Planning Council

Chicago, Illinois

[email protected]


Emil Frankel and Thomas Menzies offer an incongruous jump from a realistic and well-reasoned analysis of the importance of oil to both U.S. transportation needs and historic U.S. economic growth to a completely contradictory and contraindicated conclusion that we must reduce the use of oil in transportation!

There can be little doubt that energy use, and particularly the use of oil for transportation, has been instrumental in achieving substantial economic growth in the United States and now also in the larger developing nations: China and India, together representing 37% of the world’s population, are recognizing that they can lift their people out of poverty by using more oil.

Cross-plotting gross domestic product (GDP) per capita of different nations versus either energy or oil use per capita yields a remarkably strong correlation. In the case of oil, the correlation becomes even more pronounced if one incorporates population density: The sparsely populated nations such as the United States, Canada, Australia, and Norway use disproportionately more oil per capita than the more densely populated European states or Japan for roughly the same GDP per capita. One might expect that result because more-densely populated regions need less energy for transportation. Similarly, plotting GDP growth rate versus oil consumption growth rate yields an even stronger correlation.

From a different perspective, if we look at GDP per unit of either energy or oil use, developed nations, including the United States, are creating more wealth per unit of energy consumption than the less developed ones. Or to put it another way, the less developed nations are more wasteful of energy, in terms of creating wealth, than are the developed nations. From an overall global economic perspective, should we not be encouraging the developed nations, and particularly the United States, to use more oil rather than less?

But too many people worldwide have accepted, without much independent thought or analysis, the view of the United Nations Intergovernmental Panel on Climate Change that carbon dioxide emissions from fossil fuel use will cause catastrophic global warming by the end of this century. The lesser developed nations, whose populations outnumber those of the developed ones by about 6 to 1, support this view, and particularly the notion that developed nations should reduce energy use and transfer wealth to the lesser developed nations, as “punishment” for alleged past contributions to global warming!

An important new and very perceptive book (The Great Stagnation, by Tyler Cowen) notes that a significant drop in the U.S. economic growth rate, as measured by median U.S. family income, occurred around 1973, with the rate dropping by roughly 75%, from 2.7% per year (1949 to 1973) to 0.6% per year (1973 to 2006).

Coincidentally, 1973 was the year the Yom Kippur War and the Arab Oil Embargo triggered a large oil price increase. Oil prices had actually decreased 1.5% per year in real terms from 1949 to 1970, but after a big jump (75% between 1970 and 1975), they have increased by 4.2% per year since 1975. This triggered a change in the U.S. rate of growth of oil consumption from 3.1% per year to a negative 0.4% per year, a total decrease of 3.5 percentage points, clearly contributing significantly to the “Great Stagnation”!

So some serious economic realism about U.S. oil consumption is desperately needed, but the Frankel and Menzies article fails to provide it, adhering instead to the latter-day shibboleth that the United States must reduce oil use.

ARLIE M. SKOV

[email protected]

The author was the 1991 president of the Society of Petroleum Engineers and a member of the U.S. National Petroleum Council from 1994 to 2000.


The home of the future

U.S. buildings are responsible for approximately 10% of global greenhouse gas emissions. Improving them presents one of the most direct and cost-effective opportunities to reduce those emissions. Our buildings in general are dreadful energy wasters, neither designed nor operated to minimize their draw on precious energy resources. “Blueprint for Advancing High-Performance Homes” (Issues, Winter 2012) by James H. Turner Jr. and Ellen Larson Vaughan makes utterly clear that it does not have to be this way. We know how to build homes and office buildings with net zero energy consumption, have done so in climates as inhospitable as Minneapolis, and have watched our European counterparts do it routinely. We just don’t do it or require it. Why not?

The authors do a good job of explaining the resources and practices that we need to employ, describing eight actions that would help tilt the market in favor of high-performance buildings. They do not explain, however, why we are so stubbornly wasteful when it comes to our buildings. They assert that “High performance will come when consumers feel they must have it and can afford it. Getting to that point requires streamlined new ways of doing business for the construction industry, for related real estate professionals, for those who service homes, and for residents.”

The reality is that the construction industry, hammered by the recession and housing collapse, is hypersensitive to costs; in meeting new standards, using better materials, or providing more worker training, it sees costs it fears it cannot recover from tight-fisted homebuyers. The major home-builder association therefore opposes energy-efficient building codes that would allow its customers to save the incremental cost of a building’s better performance perhaps 20 times over during its life. The association takes no responsibility for engendering enthusiasm among its customers for energy efficiency. Ironically, builders of high-performance homes have reportedly stayed busy through the worst housing market collapse in U.S. history, but there aren’t many of them, and they don’t build in volume for the average family.

The real estate industry has actively opposed measures that would allow buyers to accurately understand the energy usage of one existing home versus another, believing that older homes—the ones they principally sell—would inevitably be seen in a negative light. Perhaps we should blindfold prospective homebuyers, so we wouldn’t have to worry about appearance biasing their decisions either. The service industry offers a myriad of small companies, most without the financial clout to stand behind the promises motivating a major efficiency upgrade, in part because they cannot bet their business on consumers’ behavioral patterns that lead to much of the waste. And the owners and occupants of U.S. buildings are blissfully clueless in most cases about the options they have to save energy, and generally take their buildings’ energy performance as a given over which they have little control.

All eight of the steps the authors prescribe are good ideas, and a number are at some stage of implementation. But each of them individually, and probably all of them together, are unlikely to get us to the point where we truly recognize that “the consequences of doing nothing are so great that there is no choice but to try.” The key driver that would really make us pay attention to low-cost ways of achieving greater energy efficiency is higher energy costs. However, with the new abundance of natural gas, our principal home-heating fuel, that signal is currently pointing the wrong way. A meaningful national consensus on the calamities to come from climate change could also motivate policies to require efficient energy use even in the absence of a strong price signal. However, that consensus is not only absent but diminishing with partisan divides on the validity of climate science.

So the question remains: What can be done to convince our builders, realtors, service companies, and building owners and occupants that saving energy is a critical priority to them and to their nation and world? Unfortunately, knowing how to achieve that efficiency is the easy part.

JOHN W. JIMISON

Managing Director

Energy Future Coalition

Washington, DC

[email protected]


This is a timely and provocative article that addresses an issue critical to our nation’s long-term sustainability: high-performance homes. Although this may appear to be a back-burner issue for many, given today’s challenging economic climate, James H. Turner Jr. and Ellen Larson Vaughan make a compelling case that now—when “business as usual” has proven inadequate in so many market sectors—is precisely the time to envision and begin to create a preferred future.

Two aspects of the article stood out to me as particularly interesting and forward-looking. First is the recognition that high performance is an end state that can and should be considered over a longer time frame than has traditionally been the case. Historically, high performance has been considered a goal that needs to be achieved quickly and completely in order to have maximum impact and benefit. If you’re going to build a new, high-performance home, do as much as you can now, because you won’t get another chance that allows you this much freedom. All well and good, if you have the commitment and resources, but more often than not, people don’t. The result: medium to medium-high performance as the end state, with no built-in capacity to add performance features over time. Put simply, people settle for and are proud to have achieved something less than full high-performance, but that’s as far as they can go. They have closed down their ability to add future high-performance upgrades because they haven’t planned for them.

The authors acknowledge this dilemma by promoting what they refer to as “high-performance–ready” strategies that deliberately anticipate and plan for future upgrades as an explicit component of any new construction and/or renovation project. The idea is not new, but it has rarely been applied in new construction, let alone in the renovation/remodeling market. Yet it offers the potential for steady, incremental performance improvements over time, a process that allows upgrades to occur as and when homeowners have the resources to afford them and, importantly, facilitates the incorporation of new technologies as and when they become available. To their credit, the authors make this important concept a central part of their argument.

A second component of the article that seems particularly intriguing is the concept that the energy cost differential between high-performance homes and their more common, code-minimum counterparts can be considered “wasted funds,” not, as would be the case in more traditional analyses, as “savings potential.” By introducing this new nomenclature, the article helps move us conceptually away from a context focused on “saving” to one focused on “not wasting,” a potentially powerful new motivational construct for individual homeowners, given today’s economic climate, and for policymakers, given the huge scale of the potential “waste” to be avoided.

This is a compelling article that tackles a tough but highly promising problem: converting all the homes in the United States to high performance. The authors recognize that this will be a decades-long process, requiring “millions of small steps,” and that the time for the first step is now.

DEANE M. EVANS

Executive Director

Center for Building Knowledge

New Jersey Institute of Technology

Newark, New Jersey

[email protected]


James H. Turner Jr. and Ellen Larson Vaughan’s article picks up where many travelers in the past three decades have trod: identifying the disconnect between the many R&D efforts to improve the physical performance of our nation’s housing on one hand, and the often overwhelming institutional, behavioral, financial, and cultural realities that have muted a complete diffusion of these designs and technologies on the other.

In this particular effort, the authors hope to take us on this familiar journey, but in the new terrain of the post-recession housing environment. They identify key landmarks among diffusion barriers, such as financial products that neglect savings from energy and water reductions in the borrower’s capacity to pay, sporadic R&D funding, and fluctuating energy and material prices that perpetuate the status quo. Yet, in making this journey, there is little time to consider the view. The authors occasionally conflate residential with non-residential construction. They misread trails that might be good for new construction but are more difficult for the traveler focused on retrofits. They describe journeys in other nations without contextualizing those housing market landscapes. The occasional platitude (“High performance will come when consumers feel they must have it and can afford it,” for example), unfortunate causal statements (“It took the U.S. Green Building Council’s LEED rating system… to reengage the public”), and missing citations cause the traveler to stumble.

The authors motion us in the general direction of our journey’s destination through eight routes. Some of these routes have been well-traveled. In particular, the authors are on solid ground when surveying various public incentives for home improvements, but could have explored studies of the incentives’ effectiveness. The case for continuing public investments in focused R&D and coordinating national benchmarks for their effects is strong. A few additional maps provided by other scholars, practitioners, and advocates may prove useful in the authors’ pursuit.

Other routes the authors suggest are more treacherous. Life-cycle costing methods are advancing, but they are not necessarily applicable to consumer financial products. Advanced manufacturing techniques do not necessarily yield higher performance on key attributes such as energy use. Calls for building codes that “should cover home performance” overlook the fact that codes, by definition, can cover only design and construction—and only in new construction—not operations. Better knowledge of the topography of the housing industry and related research topics might help, including consumer behavior; lending and underwriting practices; the home building and remodeling industry’s composition and capacity; and the economic effects of public research funding.

A final group of routes may be on ground that is too unstable. Applying well-tested high-performance products and techniques in subsidized housing can benefit its occupants, but overlooks the cost burdens and may open the door to less-than-beneficial experimentation. Ultimately, too, no lessons about how these routes might be negotiated for the current economic and political times—or descriptions of how they differ from past voyages—are provided.

In our guides’ defense, any inaccurate deciphering of the landscape is not intentional, nor is it even avoidable in some cases. The sociology and economics of housing technology is still virgin territory. Taken as a whole, housing design and construction is a technological subject with significant social and cultural connections, both obvious and complex. The authors correctly try to point us down the path of exploring these connections in ways that benefit home occupants, owners, buyers, and builders. Regardless of how far the authors go in this article, the journey is always worth taking.

CARLOS MARTIN

Abt Associates Inc.

Bethesda, Maryland

[email protected]


Transportation: the California model

In “California’s Pioneering Transportation Strategy” (Issues, Winter 2012), two of California’s leading lights, Professor Daniel Sperling and California Air Resources Board Chair Mary Nichols, highlight many of the key reasons why it is not only possible but logical for California, which “only” emits about 2% of global greenhouse gas emissions, to set aggressive local goals to protect the climate.

First, climate protection is not a luxury but a necessity for any entity, from individual to municipality to nation to region, that looks at the long-term economic and social viability of its community. The impacts we are already facing, and the damages we risk through inaction, are simply too great. Second, as the authors note, “the 80% goal cannot be met without dramatic change in driver behavior and transportation technology.” California’s approach is to recognize and then work to implement two key guiding principles: (1) greenhouse gas emissions are waste that is currently largely unpriced, so clever sectoral strategies to reduce climate pollution can often produce economic and social savings and benefits that we should look to capture; and (2) energy use and greenhouse gas emissions are not confined to the transportation or electricity sectors, so an integrated set of policies across the entire economy both makes the most economic sense and builds a foundation for innovative, job-creating, waste-reducing industries that can become central to a new model of economic growth. In my laboratory at the University of California, Berkeley, we see this every day through the job-creation potential of energy efficiency and renewable energy (http://rael.berkeley.edu/greenjobs). We also observe that when consumers are armed with tools to cut waste, financial savings and carbon savings often go together in unexpected ways (http://coolclimate.berkeley.edu).

Transportation was for many years thought to be the tough nut to crack in this story, because “Americans like their cars big and powerful.” Several years ago, before the current renaissance of electric vehicles, Alan Alda and I hosted a track race at Infineon Raceway in northern California, where electric cars raced and won against a series of gasoline-powered sports cars. The electric roadster won that day because the race was a sprint, and electric motors outaccelerate gasoline engines. Now, thanks in part to California’s commitment to an integrated climate strategy, electric vehicles that could win both the sprint and the long haul are entering the market. What is needed is large-scale production to bring the cost of these vehicles down.

For innovators there is always a next challenge, and for California it is how to decarbonize the driving we must do and avoid the driving we do not need or want to do. Here, too, California’s integrated strategy is useful. In a new high-resolution model of the electricity system across not just California but the entire U.S., Canadian, and Mexican west, my laboratory is looking at the value not only of electric vehicles in meeting climate targets but also of the distributed network of batteries that these vehicles represent, as a resource to put power back into the grid when power is costly or emergency power is needed. Although this work is ongoing, it already provides a clear lesson about the need for, and value of, integrated planning.

DANIEL M. KAMMEN

Class of 1935 Distinguished Professor of Energy

University of California, Berkeley

Berkeley, California

[email protected]


Daniel Sperling and Mary Nichols outline California’s more than 50 years of innovative pollution-reduction measures for cars and trucks to protect public health. The state’s efforts are a model for other states to follow and a reason why Americans today breathe cleaner air.

California’s ability to create pollution-reduction standards without sacrificing mobility is a model for the nation. Recently adopted clean car standards, when fully phased in by 2025, will result in new cars that emit half the carbon pollution, use half the gasoline, and emit only a quarter of the smog-forming gases of today’s new cars.

The technologies developed by major U.S. automakers to halve carbon pollution will be available in all U.S.- and foreign-brand cars. This is because two national clean car agreements, brokered by the Obama administration, brought together California, the U.S. Environmental Protection Agency, the National Highway Traffic Safety Administration, and 13 automakers to coordinate a harmonized set of national standards to reduce carbon pollution and increase fuel economy.

One of the most important lessons from the California experience is that good climate policy is also good for the economy, consumers, and public health. Creating this win-win result was fundamental to gaining widespread support for efforts such as the Sustainable Communities and Climate Protection law (known as SB 375), which will reduce sprawl and the need to drive. In this case, cutting carbon pollution also means developing more-livable and -walkable neighborhoods and saving cities money on costly road infrastructure. When the business community, builders, chambers of commerce, local governments, and environmentalists jointly support climate policy, it creates a model for other local and state governments to adopt.

California has proven time and again that there isn’t one solution to fight climate change, but rather an arsenal of smart policies and innovative technologies. The main ingredients for success are political leadership and courage. In California, there is plenty of both, including enough to challenge the multitrillion-dollar oil industry to start phasing out investments in dirtier fuels, such as tar sands, while phasing in cleaner fuels under the state’s Low Carbon Fuel Standard. The state has recognized that energy and climate security will remain at risk if the oil industry continues spending less than half a penny on alternative fuels for every dollar invested in oil. That is a system designed to keep America addicted to polluting fossil fuels instead of clean energy of the future.

These are just a few of the many reasons why the Natural Resources Defense Council, with its 1.3 million members and online activists, advocates for climate policies in California, including sponsoring AB 1493, Senator Pavley’s Clean Car bill, and AB 32, California’s Global Warming Solutions Act. California’s pioneering efforts continue to serve as a model for others going forward, proving once again that thinking globally means acting locally.

FRANCES BEINECKE

President

Natural Resources Defense Council

New York, New York


Reducing nitrogen emissions

We were pleased to read “The Climate Benefits of Better Nitrogen and Phosphorus Management” by Alan R. Townsend, Peter M. Vitousek, and Benjamin Z. Houlton in your Winter 2012 issue. The authors address the difficult balance we must strike between minimizing the environmental effects of reactive nitrogen leakage and ensuring sustainable growth of our food supply, particularly in poorer countries. Greater efficiency of nitrogen use can provide additional benefits through climate change mitigation.

And yet, our recent work on the California Nitrogen Assessment (CNA) shows that both the challenges and the opportunities for improved nitrogen management depend crucially on context. As nitrogen science progresses, it must address the variability of ecosystems, practices, and media, and the array of stakeholders that encompass them.

The CNA, which I am coauthoring with other members of the University of California, Davis, faculty, is an ongoing project of the Agricultural Sustainability Institute at UC Davis, designed to comprehensively examine existing knowledge about nitrogen science, practice, and policy in California. The results will lay the groundwork for informed discussion and decisionmaking on nitrogen management, including policy options and field-level practices. Following the assessment model of the Intergovernmental Panel on Climate Change and the Millennium Ecosystem Assessment, the CNA began by engaging with more than 300 individuals from 50 organizations.

Engaging stakeholders throughout the process was critical to shape the CNA’s approach and ensure that its outputs are considered useful and legitimate. The CNA involves a rigorous scientific review, currently under way, as well as a subsequent stakeholder review process.

The CNA identifies key drivers of nitrogen use decisions, including global demand for California’s commodities and the prices of fuel and fertilizers, and examines how these drivers influence the statewide mass balance of nitrogen: how much enters the state through new sources and the multiple media through which these compounds enter the environment. We investigate nitrogen’s effects on environmental health and human well-being and examine technological and policy options to minimize nitrogen leakage while sustaining the vitality of agriculture.

We found that the most troublesome nitrogen leakage in California is not climate change forcing by nitrous oxide, but groundwater pollution by nitrates. Whereas about 2.5% of nitrogen inputs in California are emitted as nitrous oxide, nitrate leakage to groundwater accounts for roughly one-fourth of the state’s nitrogen flows, with more than 80% of leakages arising from cropping and livestock production. Unlike eutrophication, the effects of which are readily observed, groundwater contamination is a slow process unfolding over decades and largely hidden from view. Even if nitrate leakage were halted immediately, groundwater pollution would persist as nitrates move through our aquifers.

Although the essential role of nitrogen in food production is well known, the CNA shows that aspects of nitrogen’s negative effects are context-specific. Understanding and addressing climate change forcing, air pollution, contamination of ground- or surface water and coastal zones, crop production costs, and dairy manure management require accounting for distinct ecosystems, stakeholders, and their roles in the nitrogen cycle.

We agree with Townsend, Vitousek, and Houlton that change requires neither “phantom technologies” nor “massive social upheaval.” Technologies available now can lessen nitrogen’s negative effects. But to give that action momentum, whether in science, policy, or practice, the multimedia nature of nitrogen needs to be understood contextually and in collaboration with those most affected.

THOMAS P. TOMICH

W. K. Kellogg Endowed Chair in Sustainable Food Systems

Director, UC Davis Agricultural Sustainability Institute

Director, UC Sustainable Agriculture Research and Education Program

Professor of Community Development, Environmental Science, and Policy

University of California, Davis

[email protected]


New directions for climate talks

In “A Course Adjustment for Climate Talks” (Issues, Winter 2012), Ruth Greenspan Bell and Barry Blechman join other analysts in expressing frustration with the United Nations Framework Convention on Climate Change (UNFCCC) process and seek more effective negotiating methods. They draw lessons from the disarmament sphere, although its characteristics appear quite different from those of the climate change process: few countries around the table, evident mutual threats, and consequent scope for reciprocal concessions. Nevertheless, some ideas they advance fit well into the climate change context.

For example, separating issues for independent resolution is an approach that is established under the UNFCCC umbrella (for example, on deforestation emissions) and can be further encouraged (for example, in agriculture and other sectors). Moreover, interest in controlling short-lived climate-warming agents such as black carbon and methane can be pursued alongside a long-term focus on the transition to low–carbon dioxide (CO2) economies.

Several other improvements in the UNFCCC process, some of which are alluded to by the authors, can be contemplated, such as less peripatetic diplomacy, more resident technical expertise, space for conversation among the essential economic heavyweights about their mitigation commitments, and rules for majority voting.

But is it the process that should be the target of frustration? Or do the difficulties of achieving international agreement lie in the climate change phenomenon itself?

The core objective of the UNFCCC is to “prevent dangerous anthropogenic interference with the climate system.” But science cannot define what degree of climate change is dangerous. That is a societal judgment. Thus, in a fragmented global society, the danger of climate change is in the eyes of the beholder, subject to national assessments of vulnerability and economic interests that will vary according to coping capacity, geography, and the valuation of future generations.

This subjectivity tends to disconnect demand and supply in the negotiations: the demand of vulnerable countries for mitigation action and the supply of such action by countries with the capability to provide it. It is difficult to discern in this disconnected dynamic the scope for reciprocal quid pro quo concessions, such as those that occur in negotiations on disarmament or trade.

Can one conclude from these summary observations that (1) there is no inherent tendency toward convergence in negotiations on mitigating climate change; (2) the drive for effective, fair, and ambitious outcomes must therefore come from abstract factors: reason, ethics, responsibility, and political will, backed by a long-term profit motive; and (3) the key institutional need is for common accounting methods and for building the culture of accountability that is the foundation of trust?

Therein might be an interesting debate.

Beyond these questions is a more familiar stumbling block: the constitutional aversion of the United States to tying its hands in treaties and the role of the U.S. Senate in treaty approval. Perhaps the greatest setback in two decades of climate change negotiations was the inability of the Clinton administration to win approval from the Senate for the U.S.-engineered Kyoto Protocol. Currently, U.S. politics make it very difficult for the United States to engage in multilateral negotiations.

If the authors’ idea of playing up the security dimension of climate change and engaging national security officials in the negotiations is aimed at sensitizing the Senate to the urgent need for action, that is a fair point to make in the U.S. context. However, it should not be generalized indiscriminately beyond the Beltway.

Lead actors outside the United States include powerful ministries for planning (China), economics (Japan), external relations (Brazil), and even petroleum (Saudi Arabia). The positions of the European Union’s (EU’s) environmental standard-bearers are carefully filtered by their economic counterparts. Judged against these facts, the article’s closing shots at the “environmental ghetto” and “politically powerless environmental officials” are off target.

In the end, the fundamental institutional question to be addressed is whether or not a global forum is needed to impart coherence, legitimacy, and collective ambition to efforts to address a quintessentially global problem. The United States answered in the affirmative more than 20 years ago by agreeing to negotiate within the UN framework. Above all, what is needed to make the resulting process work is U.S. political leadership.

MICHAEL ZAMMIT CUTAJAR

Former Executive Secretary

UNFCCC

Former Ambassador on Climate Change, Malta

St. Julian’s, Malta

[email protected]


Ruth Greenspan Bell and Barry Blechman persuasively argue for a more focused approach to climate change negotiations, one that would draw on the lessons of arms control. Of course, some might argue that the parallel between arms control and climate change control is strained and artificial. The high level of threat and anxiety aroused by nuclear weapons was more conducive to concentrated effort and creativity. The threat posed by climate change is much more distant, and politicians tend to be motivated by short-term considerations such as reelection. But this contrast is overdrawn. The arms control negotiations were primarily aimed at controlling the arms race rather than reducing the risk of nuclear war. Moreover, some techniques that were used in arms control negotiations were drawn from social psychological research on conflict resolution, suggesting that these methods would have broader applicability than arms control.

Experience with arms control suggests the advisability of separating issue areas, the parties to the agreement, and the time frame. Bell and Blechman cite the Limited Test Ban Treaty (LTBT), in which success required distinguishing atmospheric tests, which could be reliably monitored, from underground tests, for which verification was questionable. A more limited agreement was more politically feasible. If underground testing had not been excluded, the treaty could not have been ratified by the U.S. Senate, which was concerned about Soviet cheating. With the LTBT, each side could test the other’s intentions at relatively low cost and risk. If the Soviets had violated the treaty, they would not have acquired much of an advantage before their cheating was discovered.

It may also be worthwhile to focus on those states most likely to reach agreement. Large multilateral negotiations increase the role of potential veto players. Neither France nor China favored the LTBT, but both eventually signed the 1996 Comprehensive Test Ban Treaty, largely because of pressure from world opinion. Agreement may also be built up over time, with less costly measures of compliance in the first phase, followed by more onerous obligations at a later date, to provide time for building trust.

Another valuable suggestion made by Blechman and Bell is to politicize the issue so that it can receive high-level political attention. The past history of arms control indicates that presidential involvement was essential to reach important agreements. President Kennedy believed that it was urgent to reach agreement on a test ban to serve as the basis for other agreements and prevent the proliferation of nuclear weapons. President Nixon saw the SALT I treaty as the centerpiece of his détente policy. Finally, President Reagan was motivated to reach agreement with Soviet leader Gorbachev on any measures that would reduce the number of nuclear weapons.

The disadvantage of deconstructing the issue is that smaller agreements may remove the urgency to reach a more comprehensive treaty. But because reaching agreement on climate change in a universal forum appears ever more elusive, this objection carries less weight.

DEBORAH WELCH LARSON

Professor of Political Science

University of California, Los Angeles

[email protected]


Ruth Greenspan Bell and Barry Blechman rightly note the concern that the UNFCCC process may not meet the challenge of closing the gap between proposed emission reductions and what is needed to keep the world below a 2°C temperature increase. Because of this concern, they outline how a multifaceted approach might be necessary to achieve progress. Without debating the merits of segmentation, it is important to note that in many ways the UNFCCC Durban Platform facilitates a varied approach. It does this by a series of decisions that can encourage a range of actors to move forward, while awaiting future legal agreement.

Some of these key decisions from Durban relate to the Conference of Parties (COP) agreements regarding a wide range of reporting responsibilities, continuing the Kyoto Protocol through a second commitment period, and perhaps most importantly, setting in place a series of incentives for early action. The Durban Platform decisions will further the degree to which the UNFCCC is a repository for crucial data on amounts and trends in greenhouse gases (GHGs), including reporting by sector. These key data will support future decisionmaking and citizen engagement. The new UNFCCC registry may also make it easier for clean-energy and climate-friendly projects to attain sources of finance, whether from the future Green Climate Fund or existing sources.

The decision to establish a second commitment period for the Kyoto Protocol garnered perhaps the most media attention because of the decisions of Japan, Canada, and Russia to stay out, thereby reducing the number of countries with pledged emission reductions. What received less attention was that Durban adopted decisions that will make the protocol’s clean-development mechanism function more smoothly and credibly than under the first commitment period, added its first new gas, and agreed that carbon capture and storage could be added as a new eligible technology. The parties recognized that there could be scenarios in which the protocol links with other mechanisms, a key ambition of those who sought to improve its relevance. A positive outcome from these decisions could someday be realized at a greater scale if the parties continue these modifications and make it more attractive for all to reengage.

Perhaps most importantly, Durban went beyond earlier COP decisions in the degree to which it recognized the role of markets, and that market-based approaches could occur at the individual- or joint-country level. The acceptance of markets and their connection to results-based financing should offer public and private actors some confidence that although a global regulatory framework is not yet in place, they may be able to consider mitigation activities today as a form of early action. The parties also negotiated and then decided to consider whether the critical agriculture and transport sectors could be included to enhance implementation under the convention. This willingness, evident here and in many other aspects of the Durban Platform, can encourage countries and private actors to move forward, with reason to believe that what they undertake may link later to a global system.

CHARLES E. DI LEVA

Chief Counsel

Climate Change, Sustainable Development and International Law

World Bank

Washington, DC

[email protected]


Many things are wrong with the UNFCCC process, and the interminable COP meetings appear to go nowhere. Ruth Greenspan Bell and Barry Blechman rightly call for changes in the structure and direction, perhaps even the goals, of the global climate negotiations. However, I believe they err in a number of ways.

The UNFCCC process may be many things, but futile it is not. The convention and COPs have been mainly responsible for highlighting the global character and imminent threat of climate change and also for bringing governments the world over to acknowledge the crisis and work toward its solution. For most governments, especially in developing countries, the UN system has a legitimacy and persuasiveness that other multilateral forums do not. Although it is true that powerful developed nations throw their weight around in UN forums, and that COP decisions too have been distorted as a result, the UN framework gives smaller, weaker nations a voice and weight they lack elsewhere, as was clearly seen in the decisive role of small-island states at Durban. Jettisoning the UN framework at this stage in pursuit of an ephemeral alternative would only turn the clock back two decades, bringing into question every painfully gained point of agreement.

It is not self-evident that the all-or-nothing approach of the Kyoto Protocol, or of any legally binding successor under the UNFCCC, is inimical to a solution and that less stringent and voluntary pathways stand a better chance. The Copenhagen Accord, formalized at Cancun, proposed a bottom-up, pledge-and-review framework and is seen as a more pragmatic approach. Agreement may have been easier, but is the solution any nearer? Estimates by several reputable organizations show that these pledges will result in a temperature rise of 3° or even 4°C, not the stated goal of 2°C maximum.

There are many arguments against the authors’ nuclear arms control analogy, but one stands out. In the Non-Proliferation Treaty (NPT), one could claim partial success if 80% of nuclear weapons were eliminated, but in climate change, getting to 4°C is no success at all, especially if populated islands are submerged or food production drops by 20% in some nations. And the devastation may be even worse. If legally binding promises are not adhered to by some nations, what compliance can be expected with voluntary pledges?

Maybe this is a gestalt problem. Things look very different from a U.S. vantage point. The United States has historically been more comfortable with multilateral agreements than with international treaties and has repeatedly asserted its own sovereignty. Does this mean that the world should not strive toward international agreements? The United States also prefers self-monitored rather than regulated systems, despite its success with a cap-and-trade program to curb sulfur dioxide emissions. In contrast, European nations are far more regulation-driven, and most developing countries are also happier with regulatory frameworks.

The arms control analogy holds at least one lesson, though. The NPT architecture, whatever its weaknesses, has been retained, not abandoned, and strengthened by supplementary multilateral and sectoral agreements. Why can’t the same approach work for the UNFCCC process?

D. RAGHUNANDAN

Delhi Science Forum

New Delhi, India

[email protected]

In considering how the global community should move forward in tackling climate change, it is clear that the status quo is not working. Emissions continue to climb, and pledges for reductions from countries are far from what is needed to stay below a 2°C temperature rise in comparison to preindustrial levels. It is time to step back, especially after the last major international climate meeting in Durban, to assess how we can change course to accelerate the pace and scale of change.

In that assessment, we need to think through what the actual underlying problems are that are slowing down progress, consider what each different forum can deliver, and then build a regime that leverages initiatives against each other most effectively. One of those forums is the UNFCCC, and although it is clear that it cannot do the whole job, it is also clear that it plays an important role. It is also important to note that although it is certainly possible to try to engage additional forums, some efforts are already underway. Whether it is the discussions in the Major Economies Forum or the G20, or bilateral initiatives, there are attempts to bring new voices into the climate debate and pull some functions and issues away from the UNFCCC. For example, the EU is now, against significant pressure, implementing an aviation approach regionally. We need to think more deeply about which issues should be tackled in which groups and how they can add up to a greater impact. We need new messengers and players that hold greater sway with their publics.

After the Durban outcome, the importance of having a voice for the most vulnerable countries, which is possible only in the UN system, became more evident. The new Durban Alliance, made up of the EU and the most vulnerable countries, brought a new dynamism and created pressure on all major economies to act, something that had been absent in previous years. Building on that alliance is critical to continue the push into the uncomfortable politics of climate change. That alliance need not be vocal only in the UNFCCC. Having a group of countries that represent the progressive and the vulnerable could be a motor for whichever forum one is operating in.

In fact, one cannot really speak of a change in course without talking about how to change the underlying politics of this issue. Although we must learn from other treaties, such as arms control agreements, we also have to recognize that until the power dynamics around fossil fuels and the interests that represent fossil fuels are changed, it will be very difficult to get anything done in any forum. Ignoring this factor risks pushing old debates into new forums without any results.

JENNIFER L. MORGAN

Director, Climate and Energy Program

World Resources Institute

Washington, DC

[email protected]


Ruth Greenspan Bell and Barry Blechman raise a number of points worthy of serious consideration regarding the structure and dynamics of the current UN-led climate negotiations. They point out how nations made progress in tackling the problem of nuclear nonproliferation by moving forward on smaller targets, such as a partial test ban, and first reaching agreement among the smaller subgroups of countries willing to lead. In his 2011 book Global Warming Gridlock, David Victor similarly pointed to the dynamic of the World Trade Organization (WTO) negotiations, in which countries agree to reduce their trade barriers and abide by a common set of rules in order to enjoy the benefits of WTO membership, as a potentially more productive model for moving international climate cooperation forward.

Countries have made progress in the 20 years since agreeing to the UNFCCC in 1992, but the pace has been slow and insufficient to meet the urgency of rising GHG emissions and our increasing scientific understanding. Seeking agreement among key countries makes sense given that the majority of GHG emissions arise from a handful of players: The top five emitting countries/entities (China, the United States, the EU, Russia, and India) account for 66% of global CO2 emissions, and the top 10 account for nearly 82%. The great majority of these emissions come from burning coal, oil, and natural gas for power, industry, and transportation, and from deforestation. Just two countries, Brazil and Indonesia, account for nearly 60% of CO2 emissions from deforestation.

But how exactly could the climate negotiations be broken down into more actionable pieces, and which countries could take the first key steps for moving each of these pieces forward? Compared to the negotiations for arms control or phasing out ozone-depleting chemicals, which are more discrete problems, climate change touches on the entire mode of production of economies, inasmuch as they depend on fossil fuels for growth. Countries will need to develop new and more efficient non-fossil-fuel-based technologies and growth models to address climate change. Fortunately, the 10 to 15 highest-emitting countries, both developed and developing, are also the ones most likely to play the largest role in developing, building, and distributing the needed technologies.

These countries could strengthen the use of smaller forums such as the G-20 or the Major Economies Forum on Energy and Climate to establish more concrete commitments than have been made in the UN climate negotiations and implement concrete plans to act on them. Rather than focusing solely on setting the numerical targets and timetables that have been the focus of the UN negotiations, these commitments should focus on very specific policies and actions, such as funding the Green Climate Fund, phasing out fossil fuel subsidies and inefficient lighting, implementing efficiency and renewable energy incentives and programs, reducing black carbon from diesel and cook stoves, and supporting clean-technology innovation. The input of the private sector (the primary source of most emissions) and nongovernmental organizations (which can help ensure the integrity of the process) is also critical. As Bell and Blechman suggest, the negotiations should not be limited to foreign affairs and environmental officials, but should include representation from countries’ energy, finance, and political ministries.

These agreements would inject a welcome boost of momentum into the UN-led negotiations by developing complementary pathways for speedier and more effective agreements among key countries to take action. The UN negotiation process itself could also be made more effective by establishing a tradition of requiring heads of state to participate in the meetings every five years (as they did in Copenhagen) and replacing the rule of decision by consensus with a more realistic rule that attaining 90 or 95% of country votes would be sufficient to pass a COP decision. Improving the dynamics of international climate cooperation could help us to more rapidly take the actions needed to mitigate the effects of climate change.

YANG FUQIANG

ALVIN LIN

Natural Resources Defense Council

Washington, DC

[email protected]


Rapidly approaching is the 20-year follow-up to the UNFCCC treaty that was signed at the Earth Summit in Rio de Janeiro, making this a propitious time to ask two core questions: How effective has the treaty been? What other options might be pursued for meeting the stabilization goal, set at the 15th COP in Copenhagen, of holding global warming to 2°C?

As virtually every knowledgeable observer knows, the effectiveness of the treaty, because of a lack of enforcement mechanisms as well as political and other structural factors, has been painfully disappointing. This reality magnifies the importance of the second question. What else can be done? The challenge embedded in the question is further magnified by the uncertainty about unknown tipping points in climate systems that could lead to an irrevocably changed global environment.

An important first step in addressing this question is provided by Ruth Greenspan Bell and Barry Blechman. They argue for the augmentation or perhaps even abandonment of the pervasive top-down, unanimous consensus framework of the UNFCCC in favor of other more narrowly focused paths. Although circumscribed in scope and impact, such other paths may be more immediately successful in not only abating GHGs but also in establishing proving grounds of cumulative knowledge useful in other domains and with a broader scope of applicability. To illustrate the effectiveness of taking on parts of a problem one step at a time, Bell and Blechman trace the incremental successes in controlling weapons of mass destruction, especially nuclear weapons. Today, the world has 9 nations with nuclear weapons, a number far below the 25 projected by nuclear experts before the Non-Proliferation Treaty (NPT) was signed—a significant reduction in risk.

How might the Bell-Blechman idea work within a more general conceptual framework? And how might the framework enhance chances for successful policies?

Nobel Laureate Elinor Ostrom addresses the first question in her background paper to the World Bank’s 2010 World Development Report (http://siteresources.worldbank.org/INTWDR2010/Resources/52876781255547194560/WDR2010_BGpaper_Ostrom.pdf). Taking the failure of top-down global approaches to climate policy as her launching pad, as do Bell and Blechman, Ostrom develops a case for a “polycentric approach” to climate solutions. This framework recognizes that climate drivers and actions as well as climate effects take place at diverse locations and scales, from the global to the local and in between. Governance takes place at all of these scales, too. Hence, it makes perfect sense to address climate issues at these multiple levels.

Another important feature of the polycentric framework is its scholarly foundation. It derives from the considerable amount of knowledge we have about collective action problems and the application of that knowledge to common pool resources such as the global atmosphere. That understanding derives not simply from armchair thinking but from a large body of empirical studies allowing us to distinguish between successful and unsuccessful governance. Hence, the framework is founded on a set of principles for guiding policymaking and for evaluating policy effectiveness.

Neither the Bell-Blechman nor the polycentric framework is the final word on new directions in climate governance, but they do lead us to refocus our efforts along more promising paths.

EUGENE A. ROSA

Boeing Distinguished Professor of Environmental Sociology

Edward R. Meyer Professor of Natural Resource and Environmental Policy

Washington State University

Pullman, Washington

[email protected]


The loss of momentum in climate change negotiations is undoubtedly worrying, and Ruth Greenspan Bell and Barry Blechman are right to urge the environmental community to look for models elsewhere. There is also much to be said for trying to engage diplomatic heavyweights as well as scientific specialists to get the debate moving, and for breaking up the big environmental issues into manageable proportions. A good rule of thumb on almost any topic is that the fewer parties involved and the sharper and more immediate the focus, the greater the chance of some agreement.

The authors are correct to note that the earliest and most idealistic schemes for complete disarmament were dashed by the Cold War and then an arms race, which meant that nuclear arsenals grew rapidly. Against this background, the move away from the more grandiose schemes for disarmament did not simply reflect a pragmatic determination to make progress with a number of small moves rather than one giant step, but an important conceptual change as well. The new field of arms control developed as a direct challenge to established notions of disarmament. The arms controllers pointed to the stability in East-West relations as a result of a shared fear of nuclear Armageddon. From about 1956, when this view became U.S. policy, an effort was made to clarify the meaning of living with nuclear weapons and to determine how best to stabilize the nuclear age rather than finding a means of escape. From this perspective, the number of weapons was unimportant compared to the risks of first strikes or accidental launches. Certainly, the 1962 Cuban missile crisis shocked Moscow and Washington into looking for ways to improve relations. The intense period of arms control that followed benefitted from the pathbreaking conceptual and policy work that had been undertaken during the previous decade. The aim was more mitigation than elimination.

Moreover, all of the negotiations were conducted within the terms of a hierarchical view of the international system. The deals between the superpowers had to be self-policing, although the more-multilateral agreements brought into being international organizations to police them. Although the Non-Proliferation Treaty has been significant in reducing the risk of nuclear war, it has also had the effect of reinforcing the political status quo and has always caused resentment among nonnuclear powers as an unequal treaty. The importance of a U.S.-Soviet understanding during the Cold War also meant that negotiations on matters of substance were regularly entangled with broader signaling about the views being taken in Washington and Moscow about the other’s overall conduct. Arms control negotiations tended to reinforce underlying trends, whether negative or positive, in superpower relations. When relations were tense, which was when agreements were most needed, they were much harder to obtain. None of this is to challenge the core thesis of Bell and Blechman, but it does warn that any agreements require a common conceptual basis and a favorable political context.

LAWRENCE FREEDMAN

Professor of War Studies and Vice-Principal, King’s College London

London, England


International relations scholars have long argued that advocates of strong and meaningful policies to mitigate GHG emissions and facilitate adaptation to the effects of climate change have placed too much faith in the Holy Grail of global cooperation, given the now well-known challenges to reaching and implementing global agreements. Meanwhile, proponents of climate change action and clean-energy transitions have long decried the failures of citizens and politicians to adopt a long-term perspective and act accordingly. It is well past time to take both of these problems seriously and change course in climate change policy advocacy.

It is all well and good to assert that people and states share an interest in avoiding catastrophic climate change. This sentiment is usually accompanied by a further claim that global cooperation is needed to achieve shared interests. In fact, individuals and states have many competing interests, and most of them are more immediate than averting or reducing climate change. Rather than complain about this situation, climate change policy advocates would do well to accept it and get to work. Put simply, if we cannot simultaneously address immediate challenges faced by citizens and states while reducing GHG emissions and enhancing adaptation capacities, we will continue to fail to induce the energy, social, economic, and political transformations necessary to address climate change.

By all means, let’s learn more lessons from the rich history and the accomplishments of arms control negotiations, as suggested by Ruth Greenspan Bell and Barry Blechman. Frankly, there is also much to learn from international trade negotiations and international cooperation to improve airport safety and deliver the mail. But we cannot confine our gaze to instances of successful international cooperation. Even if we start smaller and break up negotiations into more manageable bits, or sing the praises of global volunteerism, as many analysts are doing now, the changes needed are too many and the time available is too short. International cooperation is needed, but it is not enough, and it is probably not the most important venue for action. Instead, advocates of climate change action must work to meet people’s needs, wants, and concerns at every level of authority and community, from the proverbial kitchen table to the UN. We must take seriously the fact that addressing climate change requires action and change at every level of social organization. Taking multilevel climate change governance seriously means that environmental activists and scientific analysts must curb their inclinations to tell citizens and states what they should do and care about, and start asking them what they want and need. How can we solve problems now in a way that helps curb climate change later?

In virtually every wealthier country, public officials are grappling with how to pay for health care and education, how to create jobs and reduce deficits, and how to address worrying dependence on others for energy and other resources, even as food and energy prices remain higher and more volatile than they have been in decades. In fast-growing emerging economies, citizens and states also grapple with these issues. Generally, in the rapidly developing nations, energy efficiency is even lower than in wealthier states, and the massive human and economic costs of severe air pollution are rising exponentially. The problems in people’s everyday lives need solving now.

Let’s close with four brief and interconnected examples: carbon taxes, strict air pollution controls, energy efficiency, and networks to improve urban governance. Even relatively low carbon taxes could help fund schools, universities, and health care while reducing deficits and encouraging energy efficiency, carbon reductions, and renewable energy. Carbon and energy taxes and fees can be recycled into local and national economies with energy efficiency programs that put people to work, save money, and reduce pollution. Those worried about the global climate might get busy helping their friends and neighbors fund schools and improve energy efficiency in cities and provinces. Rather than complain that Chinese and Indian officials won’t agree to limit their emissions growth, those of us from wealthier countries should scale up our efforts to help them address horrible air pollution and make huge energy efficiency gains, things we have a lot more experience in actually doing, as compared with our rather mediocre record of setting and meeting GHG reduction goals. Finally, this is an urban century, and working together, across borders and within hundreds of professions, to improve urban governance and urban life is necessary for many reasons beyond climate change. Megacities may well have more to learn from each other than from their national governments.

Global conferences, summits, and agreements are needed, but people, states, local governments, and firms make and implement decisions with more authority than most international organizations. There are hundreds of ways to work with our neighbors, fellow citizens, and far-flung associates everywhere in our many personal and professional networks, to help address the problems in people’s lives right now. Let’s get busy doing that and curb emissions along the way.

STACY D. VANDEVEER

Associate Professor of Political Science

University of New Hampshire

Senior Fellow, Transatlantic Academy, German Marshall Fund of the United States

Washington, DC

[email protected]


Stabilizing climate change has long been the chief objective for the countries that come to the UNFCCC table to negotiate. It is clear that climate change–related events represent a vital threat to the survival of many. However, when countries speak climate, they think economy, and that is, in fact, the real driver behind the negotiations. This is good news because the climate negotiations can succeed only if they credibly reflect the interests of the parties.

After Durban, the UN climate change negotiation process deserves some credit, but we don’t know whether it will perform up to expectations. It hasn’t in the past, and it would therefore be wise to explore alternatives rather than place a great deal of trust in the UNFCCC. The goal of policymaking is not the preservation of structures but achieving the goals these structures were set up to reach.

Whatever the final agreement, it is vital to all negotiating parties that it does not hamper their economic development prospects. The players want to be sure that while assuming pledges, they will not take on more burden than others. In my view, the problems of the least-developed countries and small-island states should be treated separately and with due attention to make sure that these parties, whose survival is at stake, do not become hostages of the negotiation game between the big economic players.

The core question therefore seems to be how to achieve a balance between the interests of the major players, most notably the United States and other advanced economies, China with the other major emerging powers, and the EU. Ruth Greenspan Bell and Barry Blechman suggest that much inspiration to answer this question may be found in the nonproliferation negotiations.

Nonproliferation negotiations have to address the problem of how to maintain inequality in status—nuclear haves versus have-nots—without creating inequality in security. The nuclear powers can prevent others from going nuclear only if they can ensure that both groups enjoy the same level of security. This implies that all nuclear actors are responsible players and do not pursue purely egoistic interests at the expense of other countries. An informal guarantor of this system is the United States. Opposition to Iran going nuclear stems from fear that the country’s behavior would destroy the entire system. The awareness of the system’s volatility led to the initiative of nuclear zero—the idea of a world free of nuclear weapons.

A key element of any international negotiation is the question of trust. It played a significant role in the disarmament and arms control process in the Cold War period. The answer to the problem was confidence-building measures. Their credibility was based on transparency and verification mechanisms. Trust is also a core problem of the climate process.

How can trust be built in the climate negotiations? Measurement and verification mechanisms are expected to play an important part in global climate agreements. However, they can work only if all parties perceive the agreement as fair. The idealistic approach is, however, generally a handicap rather than an advantage. Fear of a climate catastrophe remains the basic moral motive of the climate process. If it succeeds, it will be a clear moral victory. The question of how to achieve this goal is a political, not a moral one. The language of moral rigor can easily be used to camouflage real interest. The language of interests is much more appropriate for finding a compromise.

A change in language may be helpful, but is far from being a grand design for successful negotiations. The climate process has lost the initial sense of urgency. The fear of a future catastrophe must compete against the fears of present economic and financial trouble. And the incentives of the low-carbon economy are not perceived as strong enough to cause countries to give up the advantages of the traditional economic model.

The nonproliferation process is no blueprint for the climate change negotiations. Its role is inspirational instead. There is another international process that offers interesting but mostly negative experiences: the Doha Round of trade negotiations. These lessons should be studied carefully. International trade is not less complex than climate protection. Both processes are closely interlinked. And the next conference of the UNFCCC will take place in Doha. If we draw conclusions from the Doha Round, we may help prevent failure at COP 18 in Doha.

JANUSZ REITER

President and Founder, Center for International Relations

Warsaw, Poland

The writer is a former ambassador of Poland to Germany and the United States.


Looking at the essence of global anthropogenic climate change, I am led to believe that, as individuals consider the issue, they will necessarily conclude that it is in their best interest to limit their emissions of GHGs and promote such limitation on the part of other individuals as well. If some individuals insist on not limiting their emissions, this would be sufficient cause for them to be sued for damages by those who do limit theirs, based on their objective responsibility. It is possible to establish a causal link between emissions over a given period of time and the fraction of climate change caused by them, and therefore the fraction of the losses sustained.

If what I stated above is correct, I can then conclude that the majority of individuals on the planet have not considered the issue of climate change sufficiently. Otherwise, we would already have a comprehensive implementation of the UNFCCC. It is reasonable to assume that this will happen over time, as the magnitude of climate change increases and knowledge about it spreads further.

The experience of the past 20 years with the UNFCCC indicates that this process will not occur simultaneously all over the world, because concern about climate change must compete for attention with other priorities of societies. It is in this sense that I agree with the authors, for the UNFCCC process with its requirement for consensus will tend to follow the pace of the laggard countries.

On the other hand, a global treaty with emission limits for all countries should continue to be the goal of all negotiations. Any partial agreement should be designed so that it can be incorporated without difficulty into the global treaty. Any partial agreement should also contain incentives for outsiders to join it and/or disincentives for outsiders to stay out of it. It could even contain penalties for those outside it, such as trade restrictions.

Some of the progress made from implementation of the UNFCCC thus far can be attributed to the use of other groupings of countries, rather than only those that are customary in the negotiations themselves. The results achieved in Copenhagen were preceded by agreement in the Major Economies Forum earlier in the year. The Berlin Mandate, which led to the Kyoto Protocol, was made possible by the establishment of a Green Group. Unfortunately, the group worked only during the first COP to the UNFCCC.

Another avenue that could be exploited further is the inclusion of GHG emissions control in the agenda and in the regulatory framework of regional economic and trade agreements. The success of the EU in dealing with the control of GHG emissions is an extreme example, of course. There are, however, many other regional agreements that could and should be used to deal with the issue, even if at a more superficial level.

LUIZ GYLVAN MEIRA FILHO

Visiting Researcher

Institute for Advanced Studies

University of São Paulo

São Paulo, Brazil

From the Hill – Spring 2012

President proposes slim increase in R&D funding for FY 2013

Federal R&D investment would rise to $142.2 billion under President Obama’s fiscal year (FY) 2013 budget request, according to an analysis by the American Association for the Advancement of Science. This would represent a $1.7 billion or 1.2% increase above FY 2012 estimated funding levels but is less than the expected rate of inflation. In constant dollars, the president’s budget would leave federal R&D expenditures approximately 10% below the peak achieved in FY 2010, when the American Recovery and Reinvestment Act boosted spending.

The overall increase would be driven largely by gains in nondefense R&D, with defense R&D declining by $1.5 billion or 1.9%. Basic and especially applied research would receive increases above FY 2012 levels, whereas development activities would be reduced by 1.7%. All three trends represent a continuation of changes begun last year. Funding for weapons development and research at the Department of Defense (DOD) would be reduced.

In the nondefense realm, R&D at several agencies that fared well in last year’s budget cycle, including the Department of Energy (DOE), the National Aeronautics and Space Administration (NASA), the National Institute of Standards and Technology (NIST), and the National Science Foundation (NSF), would again receive increases under the president’s budget. Conversely, National Institutes of Health (NIH) funding would fail to keep up with inflation for the second year in a row. The budget proposed that NIH adopt “new grant management policies” to increase the number of new research grants. The Department of Agriculture is subject to a mixed picture, with a flat overall R&D budget but increases in some key research-oriented offices and initiatives.

The administration still supports ultimately doubling the budgets of DOE’s Office of Science, NSF, and NIST, although the timing of this doubling is not clear. Congress also has shown strong support for these agencies, but appropriations in recent years have not achieved the sustained increases authorized by the COMPETES legislation.

The administration once again proposed that the R&D tax credit be expanded and made permanent. In addition, the administration continues to make manufacturing innovation a priority, proposing $2.2 billion for advanced manufacturing R&D.

The funding picture in Congress remains murky. Although many research agencies remain popular with appropriators, several specific components of the president’s budget have been criticized for being too generous or not generous enough. The constrained budget environment will force difficult funding choices, as it did for the FY 2012 budget, which was not finalized until late December. That budget decreased R&D spending by $1.8 billion or 1.3% below FY 2011 levels and $7.4 billion or 5% below the president’s request. The DOD budget saw the largest portion of these cuts, with R&D spending down $2.5 billion. These cuts were driven largely by reductions in development and support activities; basic and applied research increased 6.5%. Defense-related spending at DOE was up $322 million or 8.1% over FY 2011.

Nondefense spending held steady overall as compared to FY 2011 levels, although it was again far below the administration’s request. The total NIH research budget was essentially unchanged at $30.2 billion. Most individual research centers received a 0.5% increase, but there were some notable changes. Congress agreed to establish within NIH the National Center for Advancing Translational Sciences, an entity that will seek to reengineer the process by which new discoveries in fundamental science move from labs to clinics. Meanwhile, the Centers for Disease Control and Prevention and the Food and Drug Administration saw substantial cuts in R&D budgets of 11.2% and 28.5%, respectively.

TABLE 1
R&D in the FY 2013 budget by agency (budget authority in millions of dollars)

FY 2010 Actual   FY 2011 Actual   FY 2012 Estimate   FY 2013 Budget   FY 2012–2013 Change (Amount / Percent)
Total R&D
(Conduct of R&D and R&D Facilities)
Defense (military) 83,325 79,112 74,464 72,572 -1,892 -2.5%
   S&T (6.1-6.3 + medical) 14,749 12,751 13,530 12,534 -996 -7.4%
   All Other DOD 68,575 66,361 60,935 60,038 -897 -1.5%
Health and Human Services 31,758 31,186 31,153 31,400 247 0.8%
   National Institutes of Health 30,489 29,831 30,046 30,051 5 0.0%
   All Other HHS 1,269 1,355 1,107 1,349 242 21.9%
Energy 10,836 10,656 11,019 11,903 884 8.0%
   Atomic Energy Defense 3,854 4,081 4,281 4,691 410 9.6%
   Office of Science 4,528 4,461 4,463 4,568 105 2.4%
   Energy Programs 2,454 2,114 2,275 2,644 369 16.2%
NASA 9,262 9,099 9,399 9,602 203 2.2%
National Science Foundation 5,392 5,494 5,614 5,872 258 4.6%
Agriculture 2,611 2,135 2,331 2,297 -34 -1.5%
Commerce 1,344 1,275 1,258 2,573 1,315 104.5%
   NOAA 685 686 574 552 -22 -3.8%
   NIST 588 533 556 1,884 1,328 238.8%
Transportation 1,073 954 945 1,106 161 17.0%
Homeland Security 887 664 577 729 152 26.3%
Veterans Affairs 1,034 1,160 1,164 1,166 2 0.2%
Interior 776 757 796 863 66 8.3%
   US Geological Survey 646 640 675 727 51 7.6%
Environmental Protection Agency 597 584 568 580 12 2.1%
Education 353 362 392 398 6 1.5%
Smithsonian 213 259 243 243 0 0.0%
International assistance programs 121 121 121 121 0 0.0%
Patient-Centered Outcomes 10 40 120 312 192 160.0%
Justice 79 109 92 100 8 8.7%
Nuclear Regulatory Commission 81 99 83 91 8 9.6%
State 73 75 75 75 0 0.0%
Housing and Urban Development 100 79 57 98 41 71.9%
Social Security 49 42 8 48 40 500.0%
Tennessee Valley Authority 18 18 15 15 0 0.0%
Postal Service 12 14 14 14 0 0.0%
Corps of Engineers 11 11 11 11 0 0.0%
Labor 4 4 4 4 0 0.0%
Consumer Product Safety Commission 0 2 2 2 0 0.0%
Telecom Development 7 7 4 0 -4 -100.0%

Total R&D 150,025 144,318 140,530 142,194 1,664 1.2%
Defense R&D 87,179 83,193 78,745 77,263 -1,482 -1.9%
Nondefense R&D 62,846 61,125 61,785 64,931 3,147 5.1%

Source: OMB R&D data, agency budget justifications, and agency budget documents.

Note: The projected GDP inflation rate between FY 2012 and FY 2013 is 1.7 percent.

All figures are rounded to the nearest million. Changes calculated from unrounded figures.

In total dollars, the largest nondefense gains were for DOE’s Office of Science ($209 million or 4.9% above FY 2011) and Energy Programs ($198 million or 10.5% above FY 2011). However, both of these increases were far below the administration’s request. Although the Office of Energy Efficiency and Renewable Energy’s overall budget remained steady from FY 2011, the portion devoted to R&D increased by 36.7% or $283 million. Conversely, fossil energy R&D declined by $70 million or 15.4%. The Advanced Research Projects Agency–Energy was the subject of substantial debate, but ended up receiving a $92 million or 53.1% increase. The Department of the Interior and the Environmental Protection Agency (EPA) also saw significant relative gains in research funding, but the Department of Homeland Security saw a small reduction.

Hearings examine R&D activities at the EPA

In November, the Energy and Environment Subcommittee of the House Committee on Science, Space, and Technology held two hearings on the merit and quality of R&D activities at the EPA. A November 17 hearing began as a discussion of the EPA’s peer-review process but ultimately centered on the issue of hydraulic fracturing, or fracking, the controversial process in which water and chemicals are injected at high pressure into underground shale formations to extract natural gas.

In an opening statement, Chairman Andy Harris (R-MD) charged that the EPA acts based on political rather than scientific motivations. “The perception,” he said, “is that EPA has a penchant for pursuing outcome-based science in order to validate its regulatory agenda.” Rep. Paul Tonko (D-NY) disagreed sharply with this assessment. “Let me be clear,” he said, “there may be some legitimate concerns related to EPA’s research enterprise, but EPA is not the demonic agency that the Republican majority has made it out to be.”

Arthur Elkins Jr., the EPA’s inspector general, said that his office found the agency’s peer-review process to be satisfactory, but that some organizational problems remain. For example, the EPA does not collect sufficient data on its employees, hampering its ability to control for inefficiencies. “To their credit,” he concluded, “the Office of Research and Development has been receptive to many of our recommendations.” Paul Anastas, assistant administrator of the EPA’s Office of Research and Development, testified that the agency is implementing recommendations from the Government Accountability Office (GAO) and the EPA’s Office of the Inspector General by restructuring programs and strengthening ties among national, regional, and local offices. David Trimble, director of natural resources and environment at the GAO, expressed concern that the EPA’s 35 labs are too independent and as a result could be overspending on redundant projects.

Questioning by Harris and other Republican representatives centered on fracking. Harris said it was foolish for the EPA to pursue a study on the method’s safety because no documented cases of water contamination have been found. Anastas disagreed, saying, “You can’t find something if you don’t look, if you don’t ask the questions, if you don’t do the science.”

The November 30 hearing, which was billed as an opportunity to hear “perspectives on common sense reform,” featured scholars with essentially the same message. They said the EPA should not be allowed to pursue both research and regulatory activities because one will inevitably influence the other, resulting in politicized research and pseudoscientific policy. Susan Dudley, director of George Washington University’s Regulatory Studies Center and former Office of Management and Budget administrator under President George W. Bush, called for greater transparency in the EPA’s risk assessment process, particularly with regard to uncertainty. Gary Marchant, faculty director of the Center for Law, Science and Innovation at Arizona State University, recommended that the EPA be split into two separate agencies: one for research and one for regulation. Committee Chairman Harris agreed with the idea, citing the NIH as an example of a successful science-only agency.

Future of U.S. planetary science explored

The House Science, Space, and Technology Subcommittee on Space and Aeronautics held a November 15 hearing to discuss the future of U.S. planetary science, particularly Mars exploration. In opening remarks, Chairman Steven Palazzo (R-MS) and Ranking Member Donna Edwards (D-MD) raised questions about NASA missions and asked panelists to identify potential impediments to progress.

Witnesses discussed NASA’s implementation of recommendations from the National Academy of Sciences’ Planetary Science Decadal Survey, released in March 2011, which calls for a balanced mix of small, medium, and large missions.

Jim Green, director of NASA’s Planetary Science Division, said that the survey placed the highest priority on the planned 2018 Mars mission. But to complete the mission, NASA must reduce its cost to below $2.5 billion, which Green believes is achievable only if NASA has a partnership with the European Space Agency (ESA). In 2009, NASA and the ESA signed a letter of intent in support of the Mars mission. Green said that in order to maintain its global leadership in planetary science, the United States needs a flagship mission, such as the Mars Rovers, that can be implemented during this decade.

The second witness, Steve Squyres, chair of the Committee on the Planetary Science Decadal Survey, said that NASA has chosen to follow the survey’s recommendations closely in all areas except for flagship missions in which budget concerns jeopardize progress. He stressed that flagship missions are essential to planetary science and that the ability to carry out these missions is one of the greatest U.S. science and technology achievements. He reiterated that the joint NASA-ESA Mars mission is of the utmost priority. Squyres also highlighted the importance of international partnerships in general, because they can help reduce the risk involved and provide additional funding. Although the Obama administration has not canceled the Mars mission, it has not explicitly approved it either, witnesses said. Squyres said that although the international community is enthusiastic about partnerships with NASA, there is also frustration at its inability to commit.

Federal science and technology in brief

  • NSF has released its biennial Science & Engineering Indicators report, which tracks data trends in global and domestic R&D funding; science, technology, engineering, and mathematics (STEM) education; the science workforce; and public attitudes on science and technology. The report, using data through 2009, finds that long-term growth in domestic R&D investment, although slowing considerably, nevertheless outpaced broader economic growth. The United States also maintained a substantial lead in global research investments, although its share of global R&D declined from 38% in 1999 to 31% in 2009. This slippage is primarily because of substantial acceleration of research investments in Asia, especially in China. Thanks to explosive growth of 20% per year, China now ranks second in research investment, recently surpassing Japan. The report finds continuing public support for science and technology, but also notes some troubling trends for the U.S. high-tech sector in the global marketplace.
  • The final FY 2012 authorization bill for the DOD, which was signed into law on December 31, retained language to reauthorize the Small Business Innovation Research (SBIR) and Small Business Technology Transfer programs. The new law increases the current 2.5% set-aside, which is taken from the annual extramural R&D budgets of federal agencies with extramural R&D above $100 million to fund SBIR programs in all agencies, to 3.2% over six years.
  • On November 16, the EPA and the Department of Transportation announced a joint proposal to set stronger fuel economy and greenhouse gas (GHG) pollution standards for 2017–2025 model year cars and light trucks. The program would increase fuel efficiency requirements from the 35.5 miles per gallon (mpg) required by the Obama administration for 2012–2016 to 54.5 mpg. The net benefit to society is estimated at more than $420 billion, and the program would reduce GHG emissions by 50%.
  • The Food and Drug Administration (FDA) approved 35 new drugs in FY 2011, its second highest annual total in the past 10 years, according to an agency report released in November. The report was released at nearly the same time that the White House issued an Executive Order directing the FDA to take action to reduce prescription drug shortages through expedited review and improved reporting.

“From the Hill” is adapted from the newsletter Science and Technology in Congress, published by the Office of Government Relations of the American Association for the Advancement of Science (www.aaas.org) in Washington, DC.

The 80% Solution: Radical Carbon Emission Cuts for California

There is a lot of buzz about innovation being needed to radically reduce emissions of carbon dioxide (CO2) and other greenhouse gases while meeting energy needs. But what innovation is required? And are the gaps all technical? A study in California offers some insight. Although it does not provide all the answers, it may help to clarify the state of energy technology and identify areas where more R&D is needed.

In 2005, the governor of California issued an executive order requiring the state to reduce its CO2 emissions to 80% below the 1990 level by 2050. In response to this decision, the California Council on Science and Technology launched the California’s Energy Future project to explore whether the technology and resources were available, or likely to become available, to meet this goal.

As part of the project, we and our colleagues developed a simple model for identifying energy systems that would meet the state’s energy requirements for buildings, transportation, and industry in the future, while at the same time addressing the emission-reduction goal. Using this model, the team identified four steps that the state would need to take (a simplified sketch of this accounting appears after the list):

  1. Decrease the demand for electricity and fuel as much as possible through efficiency measures.
  2. Decrease the demand for distributed use of hydrocarbon fuels as much as possible by focusing on electrification of transportation (including light-duty vehicles, trains, buses, and some trucks), water and space heating in buildings, and industrial process heating.
  3. Produce electricity with very low emissions through a combination of nuclear power, fossil fuel generation with carbon capture and storage (CCS), and renewable sources; and provide load-balancing services without emissions as much as possible, using energy storage or smart-grid solutions.
  4. Use low-carbon–intensity biofuels to meet as much of the remaining hydrocarbon fuel demand (both liquid and gaseous) as possible.
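To make the accounting behind these four steps concrete, here is a minimal sketch in Python of the kind of demand-and-emissions bookkeeping such a model performs. It is illustrative only: the function name and every numerical value in the example call are hypothetical placeholders, not the study’s actual inputs or results.

    # Illustrative sketch of the four-step accounting; every value used in the
    # example call is a hypothetical placeholder, not a California's Energy
    # Future input or result.

    def emissions_2050(bau_fuel_ej, bau_elec_twh,
                       efficiency_cut, electrified_share, twh_per_ej,
                       grid_intensity_mt_per_twh,
                       biofuel_share, biofuel_intensity_cut,
                       fossil_intensity_mt_per_ej):
        """Return total energy-system emissions (Mt CO2) after the four steps."""
        # Step 1: efficiency reduces both fuel and electricity demand.
        fuel = bau_fuel_ej * (1 - efficiency_cut)
        elec = bau_elec_twh * (1 - efficiency_cut)

        # Step 2: electrification shifts part of the remaining fuel demand to the grid.
        shifted = fuel * electrified_share
        fuel -= shifted
        elec += shifted * twh_per_ej

        # Step 3: emissions from (mostly) decarbonized electricity generation.
        elec_emissions = elec * grid_intensity_mt_per_twh

        # Step 4: low-carbon biofuels displace part of the residual fuel demand.
        biofuel = fuel * biofuel_share
        fossil = fuel - biofuel
        fuel_emissions = (fossil * fossil_intensity_mt_per_ej
                          + biofuel * fossil_intensity_mt_per_ej * (1 - biofuel_intensity_cut))

        return elec_emissions + fuel_emissions

    # Example call with round, made-up numbers, purely to show the structure.
    print(emissions_2050(bau_fuel_ej=3.0, bau_elec_twh=500,
                         efficiency_cut=0.5, electrified_share=0.5, twh_per_ej=150,
                         grid_intensity_mt_per_twh=0.02,
                         biofuel_share=0.5, biofuel_intensity_cut=0.8,
                         fossil_intensity_mt_per_ej=70))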

The first step will have the effect of decreasing the demand for fuel and electricity relative to what is called the “business as usual” scenario (Figure 1). The second step will decrease the demand for fuel and increase the demand for electricity. In the third step, the state will have to double the amount of electricity generated to meet increased demand, while also decarbonizing the generation process. Even after the first three steps are taken, the state will still need to use fuel, because some uses, such as heavy-duty trucks, airplanes, and high-quality industrial heat, cannot be electrified. Low-carbon biofuels could help meet this demand.

In examining these four steps, the team ranked technologies in their order of availability. First, there are technologies that are currently available on the market. Second, there are technologies that have been demonstrated but are not currently for sale at scale. Third, there is a limited number of technologies that are still in development but may become available commercially by 2050. Analysts did not include technologies deemed to be only research concepts or technologies that, although available, were excessively expensive and likely to remain so. This bottom-up analysis provided a clear picture of what kind of innovation would be required to do the job.

FIGURE 1
Historical and business as usual emissions


Here is what we thought might be possible by 2050 (Figure 2):

FIGURE 2
GHG reductions with a single strategy


First, energy efficiency measures could cut the demand for energy roughly in half by 2050. In the built environment, existing buildings would either be demolished or retrofitted to much higher efficiency standards, and all new buildings would be built to those standards. The majority of the energy savings would come from demolishing old buildings and building new ones; the majority of the costs are likely to be incurred by retrofits. The innovation required to accomplish this has to do entirely with implementation: bringing down the costs, changing the building codes, educating the workforce, and paying for the changes. What needs to be done is clear, but the state does not have the institutional structures in place to do it.

In transportation, the automobile fleet, given historic turnover rates, would evolve to average over 70 miles per gallon as it becomes more efficient (and largely electrified). There is room for technical innovation, but the technologies needed are largely known. Innovation will be required to introduce them and expand their use.

The same is not true for efficiency measures in the industrial sector. Here, we found a large number of technologies, still in development and not yet demonstrated, that would provide significant energy savings in industrial processes. For example, integrated and predictive operations and sensors, advanced materials and processing, electrified process heating (for example, by using microwave or ultraviolet energy), and process intensification are all under development and would produce economic improvements. Thus, the necessary innovation is motivated and likely.

Second, all buildings could be heated with electricity, and many forms of transportation—light-duty vehicles (cars), short-range trucks, buses, and trains—could be electrified. The electrification of transportation and of space and water heating can be accomplished with technologies available today. There are policies and innovation to support the electrification of light-duty vehicles, but generally, other sectors that could be electrified are ignored. The policy and economic framework to do this will require innovation.

Third, electricity generation capacity could be doubled at the same time as it is decarbonized, using almost any combination of nuclear power, fossil fuel generation with CCS, and renewable energy sources. In this scenario, electricity generation would increase from the 270 terawatt hours (TWh) per year used statewide today, to a projected demand for about 510 TWh in 2050. Although we may decarbonize electricity generation capacity, the electricity system can still produce emissions. Supply and demand both fluctuate during the day, and some forms of renewable energy (wind and solar in particular) can experience long periods when they cannot produce electricity at all (intermittency). If peaking, ramping, and covering for intermittent renewable energy are accomplished with natural gas, this will produce emissions that must be eliminated.

The team determined that nuclear power has no technical obstacles. With a modest efficiency penalty, power plants can be air-cooled—that is, run without cooling water—or cooled with wastewater; there is a sufficient supply of nuclear fuel; nuclear waste can be safely stored; reactors can be sited safely; and new passive reactor concepts offer improved safety. However, the March 2011 nuclear accidents at the Fukushima Daiichi nuclear power facility in Japan, triggered by an earthquake and tsunami, have significantly affected public opinion and confidence that it is possible to manage nuclear power safely, and California law prohibits the building of new nuclear power plants until there is a licensed federal nuclear waste repository. Innovation in how nuclear power is managed and communicated will be required if public opinion is to favor this solution.

Technologists know how to build electricity generation plants that use natural gas or coal, as well as how to separate CO2 from the flue gas, and the oil and gas industries have a great deal of experience in putting CO2 underground (albeit for enhanced resource recovery, not permanent CO2 storage). Although these processes are currently available, the integration of power generation with CCS at scale is yet to be demonstrated. The energy required to drive the CCS process is currently very high, as much as 30% of generation. Innovation could reduce this, and engineers expect that it could be as low as 10% by 2050. If California chooses to use natural gas for electricity generation, there will be decades of storage available within the state in abandoned oil and gas reservoirs. These sites have been of economic interest and are well characterized and known historically to effectively trap hydrocarbons. The use of saline aquifers to store CO2 is also possible but will require more effort.
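As a rough illustration of what this energy penalty means, the gross generation needed to deliver a fixed amount of net electricity scales as net output divided by (1 - penalty). Treating the penalty as a fraction of gross output, and the 100 TWh figure, are assumptions of this sketch, not values from the study.

    # Rough illustration of how a CCS energy penalty inflates gross generation.
    # Treating the penalty as a fraction of gross output is an assumption of this sketch.
    def gross_generation_twh(net_twh, penalty):
        """Gross output needed when a fraction `penalty` of generation is
        consumed by the capture and compression process."""
        return net_twh / (1.0 - penalty)

    net = 100.0  # deliver 100 TWh of net electricity (arbitrary figure)
    for penalty in (0.30, 0.10):
        print(f"{penalty:.0%} penalty: ~{gross_generation_twh(net, penalty):.0f} TWh gross")
    # Roughly 143 TWh of gross generation at a 30% penalty versus about 111 TWh at 10%.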

California has a wealth of renewable resources, including hydropower, geothermal, wind, solar, and biomass, that are sufficient to provide all the capacity it needs. The technology is available now. Innovation could make it less expensive, but engineers already know how to build renewable energy–generation facilities. However, if much of this generation is intermittent wind and solar, significant innovation will be needed to integrate these resources in order to maintain reliability when the wind does not blow or the Sun does not shine.

All forms of electricity generation will require load balancing for meeting peak requirements; for providing rapid ramping up of power to meet sudden changes in demand; and, in the case of renewables, for covering periods of intermittent supply. The electricity sector knows how to accomplish load balancing with natural gas turbines, but these produce carbon emissions. In our analysis, we found that, in the case of a completely renewable-energy electricity portfolio, if all load balancing is accomplished with natural gas, the emissions from this source alone will nearly equal the allowed emissions for the entire energy system. The load balancing for electricity is much easier to achieve if part of the carbon-free electricity generation comes from base-load plants. We found little information to quantify how much a smart grid could contribute to solving the load-balancing problem without emissions. We see innovation required to implement smart-grid concepts, particularly in the business models and controls required to achieve these gains. We found that the technology to support “load following” through energy storage was insufficient and very expensive at scale. Importantly, the technology for providing large amounts of electricity to cover for intermittent renewable energy is currently lacking. Technical innovation is clearly required here.

Fourth, innovation in biofuels can be expected to lower their carbon footprint by about 80% as compared to fossil fuels by 2050. Likely feedstocks for biofuels would include all of the waste biomass from agriculture, forestry, and municipal waste, plus crops that could be grown on marginal land without irrigation or fertilizer. Other sources such as algae may contribute, but we deemed them extremely difficult to scale up.

Even though this would be of significant help, there is also a problem: The state is expected to be able to produce or import enough biofuels to meet only about half of its requirement for fuel. The remaining demand would still have to be met with fossil fuel, which would generate emissions that would total about twice the state target and represent the primary source of carbon emissions in 2050. Given this shortage, the state would need policy innovation to reserve biofuels primarily for uses that cannot be electrified, such as heavy-duty transport or load balancing that cannot be handled with energy-storage devices or smart-grid solutions. Technological innovation will be needed to lower the carbon footprint of biofuels and to supply biomass that does not compete with food supplies. Such innovations will help enormously, because every gallon of biofuel displaces a gallon of fossil fuel, and thus the effect of each gallon is leveraged significantly. However, even with imports of biofuel, the state is almost certainly not going to have enough biomass to meet its fuel demand, and there is a major technology innovation gap in solving the remaining fuel problem.
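
A back-of-the-envelope illustration of both the leverage and the gap (arithmetic based on the figures above, not a modeled result): if biofuels cover about half of liquid-fuel demand at roughly 20% of the fossil carbon intensity, fuel-sector emissions fall to about

$$0.5 \times 1.0 + 0.5 \times 0.2 = 0.6$$

of the all-fossil baseline. That is a 40% reduction from fuels alone, but the remaining 60%, most of it from the fossil half, is the residue that other measures would have to eliminate.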

As our bottom line, we determined that the four necessary steps identified, even taken collectively and aggressively, would not be sufficient to reach California’s stated goal (Figure 3). At best, taking all four steps has the potential to reduce emissions to about 60% below 1990 levels, leaving them at about twice the target rate.

FIGURE 3
Getting to 60% below 1990 level


Getting from 60% to 80% reduction

At a more detailed level, the emissions remaining in the energy system all arise from the continued use of fossil fuel in transportation and of natural gas to provide load balancing. Thus, the single largest technology gap in achieving radical emission cuts is the fuel problem.

There are a number of ideas for filling this gap. They are generally complex from an industrial perspective, require substantial infrastructure, and are likely to be expensive. They also will require technological, economic, industrial, and perhaps societal innovation. These ideas include, among others, solving the load-balancing problem without emissions, through storage technology or the smart grid; developing a biofuel with no net emissions; using biomass combined with CCS to make electricity and/or fuel to offset fossil emissions elsewhere; using hydrogen, produced from coal or methane with CCS, to replace fossil fuel use that electricity cannot displace; and developing the industrial process to use biomass, coal, and CCS to simultaneously make electricity and fuel with very low net CO2 emissions.

Although none of these innovations could solve the remaining fuel problem on its own, in combination they hold the potential of reducing the remaining emissions to the target of 80% reduction below the 1990 level, or perhaps even lower. In addition, societal innovations could reduce demand through behavior change. Although there is little solid data to support the potential of behavior change, there is considerable speculation that this could be a major component of a larger energy strategy. In the long term, research ideas including getting fuel from sunlight may solve this problem, but this is unlikely before 2050.

It is notable that many of the potential solutions for fuel-use reduction involve CCS. Even load balancing could involve CCS if technologists can find a way to make the process economical for gas turbines in load-following mode. It seems that even if the state does not choose to use CCS for electricity, it will remain an important technology for solving the fuel problem.

Based on our analysis, then, we concluded that the major technology gaps in making radical emissions reductions are in energy storage, smart-grid solutions, and low-carbon fuels that go beyond the simple conversion of biomass to biofuels. If current problems in these areas are solved, then the prospects for radical emission reductions are dramatically improved.

The radical reduction of emissions also will require substantial policy and institutional innovation. A short list of questions inspired by our analysis includes:

  • What policies could result in halving the energy required for the same services? How would these policies prevent rebound effects, in which lowering the energy cost of an activity stimulates an increase in its use? Are certain efficiency measures more effective in the long term than the short term? What are the comparative costs versus benefits for critical policy choices? What are the relative effects and costs of new procurements that are energy-efficient, as compared with those of retrofitting for energy efficiency?
  • What policies could help achieve electrification at the least expense? What alternative policy designs could achieve the required levels of electrification beyond light-duty vehicles (cars)?
  • What policies would eliminate emissions from the electricity sector and allow for doubling the capacity as well? What are good policies for controlling emissions that come from filling in the gaps created by intermittent power? What kind of policy designs would encourage investment in energy storage and result in eliminating emissions from this underappreciated sector?
  • Given that biomass supply is limited, what policies would ensure that supplies are used to greatest advantage and least harm and that every end use that can be electrified is? What policies would be appropriate to improve low-carbon, non–biomass-based fuel technology options?

In addition to addressing such technology-inspired questions, there is a need to gather more and better data on how best to change behaviors in ways that would help reduce the size of the energy problem and to better delineate what the potential for saving energy through behavior change actually is.

In summary, achieving radical emission cuts will take deployment and new technology. California needs to take the four key deployment steps identified, and the longer it waits to take them, the steeper the climb will be.

The state cannot achieve its goal with efficiency improvements alone, but attempting to reach it without efficiency gains would make the lift enormous and virtually unachievable. The state cannot do it with electrification alone, but without increased electrification, the demand for emission-free fuels cannot be met. The state needs to replace fossil fuels with nonemitting energy sources for the generation of electricity. The problem would be a lot easier to solve if the state were to develop additional base-load generating capacity that does not emit carbon; this would include geothermal or nuclear capacity or facilities that incorporate CCS. Also, sustainable sources of biofuels need to be developed, and these fuels should be used primarily for heavy-duty transport and airplanes. And as there almost certainly will not be enough biomass for all our fuel needs, major new innovations will be needed for decarbonizing the remaining fuel use.

Innovation will be the hallmark of a new energy system with radically reduced emissions. But waiting on innovation to solve the problem will make the target much harder to reach. California already knows much about what must be done and has many effective tools for the job. The challenge today is to both apply known technologies, aided by policies designed to foster their implementation, and at the same time continue the search for better technologies that can be phased in over time to better balance energy needs with minimal carbon emissions.

Learning from Fukushima

Disasters prompt us to seek lessons. After the tragic trifecta of earthquake, tsunami, and nuclear failure at the Fukushima Daiichi reactors in March 2011, many people have turned to Japan to understand what went wrong and how to prevent such an event from recurring. As we approach the first anniversary of the earthquake, there still seems to be little agreement on what these lessons should be. Some point to the relatively minor release of radioactive material and the outdated design of the reactors to argue that nuclear power is safe, whereas others take Fukushima as blatant evidence that nuclear power remains unsafe. Germany responded to Fukushima by accelerating its nuclear exit, but France reaffirmed its strong commitment to nuclear energy. Moreover, as with previous nuclear accidents at Three Mile Island and Chernobyl, lessons tend to fall within one of two categories: those that blame technology, such as the reactor design, and those that blame social factors, such as poorly conceived regulations or corporate greed.

We argue that it is impossible to separate the social and technical features in a complex operation such as Fukushima. Nuclear power is best understood as a thoroughly hybrid entity in which the social and technological cannot be separated from one another for analytical or policy purposes. Technologies such as reactors, risk models, and safety mechanisms are embedded in social values and practices; similarly, national identity, risk regulation, and corporate culture are materialized in the production and operation of nuclear power plants. Acknowledging these irreducible linkages, we critique several analyses that apply a strictly technical or social lens to Fukushima and illustrate how a sociotechnical approach results in a more realistic and useful understanding of events. This more complex account uncovers a different set of lessons and points to novel responses that include participatory technology policy, global nuclear governance, and a more reflexive approach to modeling.

It’s not just politics

One common refrain has been that Fukushima happened because politics interfered with technology, leading to political biases in risk assessments and safety reports as well as inconsistent regulation by industry-captured government authorities. According to this narrative, political influence should be prevented by shifting power to regulators and expert scientists, who are assumed to be objective and independent. Politics should be excluded.

In contrast to this storyline, we argue that the failure at Fukushima was not just that decisions were political, but that they were asymmetrically and incompletely political. A technological system should not be seen as political only when things go wrong. Rather, political values and interests are continually part of nuclear operation, including periods of normal management and robust technological performance. From a sociotechnical perspective, the challenge is not to exclude politics but to ensure that political agendas and channels are transparent, explicit, and open to debate.

As in all other nations, nuclear power in Japan has always been deeply political. In the aftermath of the nation’s disastrous defeat in World War II, several Japanese politicians were looking for ways to build the country’s legitimacy and demonstrate its technological capacity. For the later prime minister and nationalist Yasuhiro Nakasone and for media mogul Matsutarō Shōriki, who in 1956 became the first head of the Japan Atomic Energy Commission, nuclear power was a vehicle for national prestige and honor. To involve its citizens in its nuclear agenda, the Japanese government designated October 26 as a Day of Nuclear Power to commemorate the first successful operation of an experimental reactor at Tokai in 1963, exactly seven years after Japan joined the International Atomic Energy Agency (IAEA). The state offered incentives to communities willing to host nuclear power plants, undertook publicity campaigns to promote atomic energy, and mandated the inclusion of “safe nuclear power” visions in schoolbooks. Moreover, the government’s modernization efforts were enthusiastically supported by wartime industry conglomerates Mitsui, Mitsubishi, and Sumitomo, which continued to be cornerstones of national identity.

In short, nuclear power has never been a value-neutral technology, in Japan or elsewhere. The creation of the Japanese nuclear power industry was directly tied to the political goal of reestablishing Japan as a strong and powerful nation. However, nation-building around the atom represented only one of the possible political positions. Although the interests of the central state, modern science, and several large corporations were articulated clearly, democratic deliberation about the management and distribution of risks from this national project received much less attention than did narratives of modernization and growth. Japanese antinuclear activists such as the nuclear chemist Jinzaburō Takagi have noted that the dissenting views have been consistently discouraged by the state. For example, when the Japanese government responded to citizen protest in 1977 by allowing meetings on proposed nuclear power plants, the resulting hearings were highly scripted affairs where questions were screened in advance and speakers were cut off by the moderators. In the minds of state technocrats, being a good Japanese citizen meant supporting these state efforts, thus helping the nation rise up from shame and defeat to become an economic superpower. In this sense, the Fukushima meltdown was not an isolated and accidental case of a technology being corrupted by a few politically motivated individuals. Instead, the entire Japanese nuclear power industry was a product of a highly cultivated, decades-long political agenda of crafting a strong Japanese national identity.

The way forward, then, does not lead to less politics but to more political deliberation. Nuclear power for a developed nation in the 21st century cannot be sustained on the same moral and strategic grounds as in a post–World War II setting. Thus, developing alternative technology policy options, in Japan and elsewhere, means reassessing old narratives of national resurrection and economic growth at all costs that once served as the justification for nuclear power, as well as the inherited technocratic structures that came with it. Moreover, it means allowing publics to participate in building a new vision of technological progress that takes a wider range of values into account, facilitated by a more robust and open set of political mechanisms that allow dissent and local citizen autonomy as part of technological advancement and national self-imagination. Although state experts may have greater scientific knowledge than does the average citizen about how nuclear power plants should be designed and operated, this does not justify unquestioned authority over all decisions. In particular, it does not substitute for citizen input as to why a society should pursue a hazardous industry such as nuclear power, where and under which risk assumptions plants should be located, and what types of safety measures should be used. Consultative exercises, employing citizen juries and social media for example, as well as state responses to citizen protests, can lead to improved safety and greater attention to the equal allocation of costs and benefits. More open and comprehensive politics make for better technology.

It’s not just Japan

Another storyline explains Fukushima in terms of failures specific to Japan: Japanese geography is unlike any other; Japanese regulators were too close to company officials to notice flaws in risk assessment or mandate additional safety features in the design basis for reactors; Tokyo Electric Power Company (TEPCO) responded to the disaster in a slow and bumbling manner; the Japanese media were prevented from raising critical perspectives by the state and by a public culture of deference to authority. The responsibility for the disaster, in this view, lies within Japan’s borders, and the problem can be solved by improving the accountability of the Japanese state and nuclear industry.

We argue that both the origins and consequences of Fukushima are thoroughly transnational and should be addressed through an enhanced regime of global nuclear governance. The focus on Japanese exceptionalism masks the extent to which sociotechnical systems such as nuclear power are enmeshed in global networks of exchange and prestige.

Japan’s nuclear history began in global conflict. During World War II, Japan sought to develop a nuclear bomb, and it was the dropping of atomic weapons at Hiroshima and Nagasaki in 1945 that brought the realities of splitting the atom into public view. Eight years later, U.S. President Eisenhower delivered his “Atoms for Peace” speech at the United Nations, in which he offered U.S. assistance to other nations interested in developing nuclear power. Drawing on his U.S. political connections, Japan’s Harvard-trained prime minister–to–be Yasuhiro Nakasone gathered support for a nuclear technology budget in Japan’s House of Representatives three months later. Over the next several decades, a generation of Japanese scientists trained abroad at institutions such as the U.S. Argonne National Laboratory, and the nation’s first reactors were purchased from foreign companies such as GEC (United Kingdom), Westinghouse and GE (United States), and Areva (France). Nuclear fuel was obtained from uranium mines in Australia, Canada, and elsewhere. As Japan became the world’s third largest producer of nuclear energy, the electricity from these plants went into producing manufactured goods that were consumed around the world and drove Japan’s rise as an economic superpower. Japanese companies benefited greatly from technological spillovers from nuclear engineering.

Fukushima’s aftermath provides more evidence that Japan’s nuclear decisions reach beyond national borders. Radioactive materials were carried across the globe, being detected as far away as the United Kingdom. Sony, Toyota, and numerous other multinational Japanese companies fear that a nuclear stigma will attach to the “made in Japan” brand, adding to the record losses from the earthquake and tsunami, and in turn altering business expectations for foreign competitors and dependent manufacturers. Germany’s energy industry, faced with a new wave of antinuclear resentment, is threatening that the plan to close all reactors in Germany will result in bankruptcy and blackouts. This reveals how, tacitly or otherwise, the costs and benefits of nuclear power cannot be contained within a single country.

Fukushima’s global origins and repercussions suggest that current national models of nuclear governance are not adequate. In light of the extensive supranational linkages, it seems anachronistic at best that key regulatory aspects of power plant licensing and security and safety standard-setting should remain the exclusive preserve of national governments. The time has come to separate national economic and political interests in promoting nuclear power from the regulatory function, which concerns all nations. History suggests that significant changes in nuclear governance are both possible and desirable. For example, in 1974 the U.S. Atomic Energy Commission was split into the Nuclear Regulatory Commission and the Energy Research and Development Administration, with standard-setting delegated in part to the Environmental Protection Agency. This separation reflected the insight that the promotion and regulation of nuclear energy do not sit comfortably within the same body. Likewise, the Japanese government is now considering separating the Nuclear and Industrial Safety Agency from the Ministry of Economy, Trade and Industry, which promotes the use of nuclear energy.

In a similar vein, we recommend elevating the mandate of the IAEA to include a licensing function for nuclear power plants, thereby changing its status from an advisory body to that of an international institution with authority to make legally binding decisions. Such a move would go beyond other suggested expansions of the IAEA mission in light of Fukushima, such as the inclusion of response mechanisms to disasters. Under this new international regulatory regime, a global community would be involved in decisions about the siting of nuclear power plants, the proliferation of reactor technology, safety enforcement, and financial responsibility for events such as Fukushima. Nuclear disasters do not respect national borders, and governance models must therefore include a significant global dimension.

It’s not just the models

Inadequate risk assessment models have been identified as another main culprit in the Fukushima disaster. TEPCO’s models were criticized for having recommended the construction of a 5.7-meter retaining wall “based on tsunami folklore,” which was overwhelmed by a 14-meter tsunami wave crest. Moreover, TEPCO’s reactors suffered leakages from the magnitude 9.0 earthquake that would have been perilous even without the “unexpected” tsunami taking down the supposedly fail-safe emergency cooling systems. Had the models been correct and correctly followed, the story goes, the disaster might have been averted through the incorporation of more robust defenses into the original reactor design.

We argue that nuclear safety is not simply a matter of developing more powerful models and more accurate worst-case scenarios. It is equally about developing better strategies for societies to understand the role of modeling and deal with the inescapable limitations of all models. The modeling of complex sociotechnical systems itself incorporates sociotechnical assumptions that modelers and decisionmakers need to keep in mind.

Contrary to our common-sense understanding of modeling, models of sociotechnical systems frequently evade the empirical validation that is a touchstone for engineering quality. To test for a cumulative probability of a one in ten million chance of a nuclear failure each year would require building 1,000 reactors and operating them for 10,000 years, as noted by Alvin Weinberg in his famous 1972 article “Science and Trans-Science.” In actuality, the number of data points that are relevant to our models of systems safety for nuclear power is relatively small. The most prominent ones—Three Mile Island, Chernobyl, and Fukushima—arose from very different circumstances, invalidated different modeling and risk assessment assumptions, and resist being assimilated into a single data set. The events are too idiosyncratic to allow easy generalizations about where our models fail, though they seem to suggest that annual failure-risk estimates such as one in ten million or even one in ten thousand are a serious underestimation.
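
The arithmetic behind Weinberg’s observation is worth spelling out (a standard statistical rule of thumb, not drawn from his article): simply expecting to observe one failure at a claimed rate of $p$ per reactor-year requires on the order of $1/p$ reactor-years of operating experience, and bounding the rate below $p$ with roughly 95% confidence requires about $3/p$ failure-free reactor-years (the “rule of three”).

$$p = 10^{-7}\ \text{per reactor-year} \;\Rightarrow\; 1/p = 10^{7}\ \text{reactor-years} \approx 1{,}000\ \text{reactors} \times 10{,}000\ \text{years}.$$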

Despite tremendous progress in systems thinking, modeling, and computational power, we are not yet able to synthesize insights from domains as diverse as geophysics, nuclear engineering, hazard containment, atmospheric physics, and hydrology, let alone individual and social psychology, history, and politics, into a single conceptual framework. Nor is it realistic to expect that these challenges will ever be completely overcome. No improvement will ever exclude the possibility of unpredictable emergent behavior, which is a characteristic of complex systems. Fukushima underlines yet again that there are no good models without robust societal mechanisms to deal with their inescapable uncertainties, limitations, and failures.

Modeling complex sociotechnical processes thus remains in part a scientific exploration of the adequacy of different modeling approaches, not simply a straightforward application of consolidated routines. Modeling a nuclear power system is not like predicting the performance of a car engine; it is closer to creating a climate change model. In the absence of comprehensive empirical tests, modelers rely extensively on extrapolation and coherence assumptions. Those who create the models are often very aware of the uncertainties. However, political and long-term investment pressures frequently drive decisionmakers who rely on models to ignore residual uncertainties and intermodel incompatibilities, and to operate models as if they were highly reliable and consistent. In fact, the debate and contestation inherent in both the scientific and political process often come to a halt once a model and its output are accepted as adequate. Questions such as what should be incorporated in the model and what should not, what are realistic boundary conditions, who has the authority to interpret results, and what to do with incompatible interfaces and remaining blind spots do not receive continuous attention over the lifetime of the modeled system. Only when a disaster occurs do they resurface.

Acknowledging these limitations alters how we should design and operate high-impact sociotechnical systems such as nuclear reactors. We should devote more attention and resources to the likelihood that our models are imperfect, and hence to the design of robust mechanisms to respond to subsequent problems, even if our models suggest that the chance of an accident is low. Thinking about models as needing a penumbra of responses for model failure shifts the focus away from technological optimism toward practices of humility, recognizing that models provide useful, but incomplete, guidance. This implies, in particular, that modeling must be informed by an understanding of the sociocultural pathways by which the model will be applied and anticipate undesirable social outcomes. For example, we can prepare mechanisms to protect the poorest members of society, who are often the most vulnerable to accidents and disasters. At the same time, we should make the modeling assumptions and parameters that guide decisionmaking in nuclear power safety explicit, public, and open to debate, rather than black-boxing them inside inaccessible computer programs or secluded expert discussions. This can be achieved by, for example, using open processes in the planning stage of power plants that allow the public to comment on and influence the terms and consequences of models, or by including forms of international peer review for national modeling assumptions. When sensitive security information is at stake, less open mechanisms such as panel-based review could be employed. This would facilitate a self-critical process that includes healthy and robust discussion of models and is aimed at learning over time.

Lessons for sustainability

The inseparability of the social and the technical in modern technological projects prompts us to derive from the Fukushima disaster three lessons that cut against the grain of conventional thought. We conclude, first, that disasters are not caused by politics interfering with technology but are exacerbated by a spurious assumption that politics can be removed from technological design and the subsequent practices of technology policy. Room should be made for robust discussion of the politics of standard operation alongside the politics of breakdowns and disasters, including the questioning of outdated national narratives of justification. Second, the risks, benefits, and responsibilities of nuclear power have global as well as national ramifications. There is thus an urgent need for international regulatory oversight that separates nationalist goals for advanced technologies from the safe operation and maintenance of such technologies. Third, complex sociotechnical systems elude available methods of modeling and prediction, thus calling for greater humility in the reliance on models and more attention to the consequences of model failure.

Our three lessons have important implications for how to think about the sustainability of nuclear power. If we want to take Fukushima seriously as a data point—and its graveness makes it apparent that we should—then we should take it as an important case study for what it means to be sustainable in the sociotechnical environments of the 21st century. Our accent in this paper has been on the “socio” component of sociotechnical. Northern Japan is far less sustainable today than it was before the nation undertook the nuclear power project. This is a consequence of social choices that could have been made in other ways. Consequently, if nuclear power is to remain an important part of our planet’s energy mix for the 21st century, we emphasize that ensuring its sustainability requires a resolute and explicit understanding of the interconnections between society and technology. Only by taking such considerations into account can we derive lessons from the past that can generate in the future more robust democratic debates and more dependable technological systems.

Time for Another Giant Leap for Mankind

In May of 1961, President John F. Kennedy announced a bold priority for the United States. He memorably urged the nation to send a man to the Moon by 1970:

“No single space project in this period will be more impressive to mankind, or more important for the long-range exploration of space; and none will be so difficult or expensive to accomplish.”

Kennedy and his administration recognized that the United States risked losing the space race to the Soviets during the Cold War. With tremendous federal support from Congress, it took just eight years before Neil Armstrong left dusty footprints behind on the Moon.

He addressed other significant challenges as well, ones we don’t hear as much about. Just one month earlier, the president had pressed the country to take on another serious task:

“If we could ever competitively—at a cheap rate—get fresh water from salt water, that would be in the long-range interest of humanity and would really dwarf any other scientific accomplishment.”

Kennedy understood that improving desalination technologies would raise men and women from lives of poverty and improve human health globally. Unfortunately, that quote failed to make many history books.

Why should water be a great national priority? Even though the planet is covered in water, only 2.5% of it is fresh and two-thirds of that is frozen. This doesn’t leave much for the estimated 10.1 billion people on Earth by 2100. Water consumption continues to increase at a faster rate than population growth. Today over one billion people do not have access to clean drinking water, and the United Nations estimates that unsanitary water leads to more than 2 million deaths every year. Waterborne illnesses are associated with 80% of disease and mortality in developing nations, and sadly, the majority of the victims are children.
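
Put another way (a simple calculation from the figures above): if only 2.5% of the planet’s water is fresh and two-thirds of that is locked in ice, the unfrozen fresh water amounts to roughly $0.025 \times \tfrac{1}{3} \approx 0.8\%$ of all the water on Earth.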

Since Kennedy’s 1961 call to arms, neither desalination nor the larger issue of the nation’s water infrastructure has received much public attention or regular directed federal support. Water R&D has not been a consistent priority, and investment has endured erratic boom and bust cycles.

During the 1960s and 1970s, the U.S. government cumulatively spent over $1 billion (in nominal dollars) on desalination R&D alone. The Water Resources Research Act of 1964 led to the creation of the Office of Water Research and Technology in the Department of the Interior in 1974 to promote water resources management. It also helped to establish water research institutes at universities and colleges. Three years later, the Water Research and Conservation Act authorized $40 million for demonstration-scale desalination plants. The following year, the Water Research and Development Act extended funding through 1980. But the Office of Water Research and Technology did not last. Just eight years after it opened, the Reagan administration abolished it, distributing authority over water programs among various agencies.

Congressional appropriations continue to be provided annually to fund water infrastructure, but it’s been 16 years since authorizing legislation was enacted to set drinking water policy. For wastewater, it has been a quarter century. Complicating matters further, because water programs are spread across a host of agencies and departments, it’s extremely difficult to track government R&D spending on water.

However, with the help of the National Academy of Sciences and the American Association for the Advancement of Science, we have been able to assemble a first-cut estimate for water R&D over time [in fiscal year (FY) 2011 dollars]. Although the details might be a little fuzzy, the overall picture is clear: Water R&D has been woefully neglected.

How can the nation expect to meet the looming water challenges when spending has not even been reliably tracked for the past 50 years? If it’s true that we cannot improve what we do not measure, then the fact that water R&D hasn’t been carefully tracked is a sign that we’re not taking it seriously. It’s no wonder that water treatment technologies have evolved so slowly, that water infrastructure leaks so abundantly, and that water quality is at risk from a variety of societal activities and policy actions. Despite decades of building the greatest innovation and R&D system the world has ever seen, progress in water innovations seems halting and stunted, especially when compared with the advances that occurred in parallel for information technology, energy, health care, or just about any other sector critical to society.

Just imagine what we would have accomplished by now had we devoted the same attention to looking for water on Earth as we have to looking for water on the Moon.

Today, even the youngest Americans can quote Neil Armstrong. We celebrate the space program as one giant leap for mankind. Now it’s time to take a second great leap by doing something even greater for humanity: investing in water research.

U.S. federal water-related R&D by agency, 1964-2010 (FY 2011 dollars)

This graph tracks water-related R&D spending at the U.S. Geological Survey, the National Science Foundation, the Environmental Protection Agency, and the National Oceanic and Atmospheric Administration. It does not include the relatively minor investments of several other agencies, such as the National Aeronautics and Space Administration’s spending on remote water sensing. Complicating matters, several agencies we examined have restructured budget-tracking methods over the years, making it difficult to make accurate comparisons across decades.

Available data for water R&D by agency


U.S. federal nondefense R&D spending by topic (FY 2012 dollars)

Federal R&D investment is a useful proxy for measuring a nation’s priorities. As this graph illustrates, from 1964 to 1967, space exploration received the lion’s share of funding, garnering over 60% of the total nondefense federal R&D budget. The space race was a top national priority, and that investment not only put a man on the Moon but also stimulated progress in a number of technologies that are still producing economic benefits.

In the late 1970s and early 1980s, after two successive energy crises, energy was the top national priority. Energy received over 20% of the national nondefense R&D budget. Later in the 1980s, the United States invested in advanced weaponry (as part of an expansion of the defense R&D budget, not shown), followed by health care in the 1990s through early 2000s.

Throughout these decades, water was receiving so little attention that its spending was not tracked, making it difficult to provide firm estimates of total spending for many years. However, for those years during which spending was recorded, it is clear that the United States was paying far less attention to water than it was to defense, health care, energy, space exploration, or basic sciences.

Federal nondefense R&D, 1953–2013


Should the Science of Adolescent Brain Development Inform Public Policy?

The science of adolescent brain development is making its way into the national conversation. As an early researcher in the field, I regularly receive calls from journalists asking how the science of adolescent brain development should affect the way society treats teenagers. I have been asked whether this science justifies raising the driving age, outlawing the solitary confinement of incarcerated juveniles, excluding 18-year-olds from the military, or prohibiting 16-year-olds from serving as lifeguards on the Jersey Shore. Explicit reference to the neuroscience of adolescence is slowly creeping into legal and policy discussions as well as popular culture. The U.S. Supreme Court discussed adolescent brain science during oral arguments in Roper v. Simmons, which abolished the juvenile death penalty, and cited the field in its 2010 decision in Graham v. Florida, which prohibited sentencing juveniles convicted of crimes other than homicide to life without parole.

There is now incontrovertible evidence that adolescence is a period of significant changes in brain structure and function. Although most of this work has appeared just in the past 15 years, there is already strong consensus among developmental neuroscientists about the nature of this change. And the most important conclusion to emerge from recent research is that important changes in brain anatomy and activity take place far longer into development than had been previously thought. Reasonable people may disagree about what these findings may mean as society decides how to treat young people, but there is little room for disagreement about the fact that adolescence is a period of substantial brain maturation with respect to both structure and function.

Brain changes

Four specific structural changes in the brain during adolescence are noteworthy. First, there is a decrease in gray matter in prefrontal regions of the brain, reflective of synaptic pruning, the process through which unused connections between neurons are eliminated. The elimination of these unused synapses occurs mainly during pre-adolescence and early adolescence, the period during which major improvements in basic cognitive abilities and logical reasoning are seen, in part due to these very anatomical changes.

Second, important changes in activity involving the neurotransmitter dopamine occur during early adolescence, especially around puberty. There are substantial changes in the density and distribution of dopamine receptors in pathways that connect the limbic system, which is where emotions are processed and rewards and punishments experienced, and the prefrontal cortex, which is the brain’s chief executive officer. There is more dopaminergic activity in these pathways during the first part of adolescence than at any other time in development. Because dopamine plays a critical role in how humans experience pleasure, these changes have important implications for sensation-seeking.

Third, there is an increase in white matter in the prefrontal cortex during adolescence. This is largely the result of myelination, the process through which nerve fibers become sheathed in myelin, a white, fatty substance that improves the efficiency of brain circuits. Unlike the synaptic pruning of the prefrontal areas, which is mainly finished by mid-adolescence, myelination continues well into late adolescence and early adulthood. More efficient neural connections within the prefrontal cortex are important for higher-order cognitive functions—planning ahead, weighing risks and rewards, and making complicated decisions, among others—that are regulated by multiple prefrontal areas working in concert.

Fourth, there is an increase in the strength of connections between the prefrontal cortex and the limbic system. This anatomical change is especially important for emotion regulation, which is facilitated by increased connectivity between regions important in the processing of emotional information and those important in self-control. These connections permit different brain systems to communicate with each other more effectively, and these gains also are ongoing well into late adolescence. If you were to compare a young teenager’s brain with that of a young adult, you would see a much more extensive network of myelinated cables connecting brain regions.

Adolescence is not just a time of tremendous change in the brain’s structure. It is also a time of important changes in how the brain works, as revealed in studies using functional magnetic resonance imaging, or fMRI. What do these imaging studies reveal about the adolescent brain? First, over the course of adolescence and into early adulthood, there is a strengthening of activity in brain systems involving self-regulation. During tasks that require self-control, adults employ a wider network of brain regions than do adolescents, and this trait may make self-control easier, by distributing the work across multiple areas of the brain rather than overtaxing a smaller number of regions.

Second, there are important changes in the way the brain responds to rewards. When one examines a brain scan acquired during a task in which individuals who are about to play a game are shown rewarding stimuli, such as piles of coins or pictures of happy faces, it is usually the case that adolescents’ reward centers are activated more than occurs in children or adults. (Interestingly, these age differences are more consistently observed when individuals are anticipating rewards than when they are receiving them.) Heightened sensitivity to anticipated rewards motivates adolescents to engage in acts, even risky acts, when the potential for pleasure is high, such as with unprotected sex, fast driving, or experimentation with drugs. In our laboratory, Jason Chein and I have shown that this hypersensitivity to reward is particularly pronounced when adolescents are with their friends, and we think this helps explain why adolescent risk-taking so often occurs in groups.

A third change in brain function over the course of adolescence involves increases in the simultaneous involvement of multiple brain regions in response to arousing stimuli, such as pictures of angry or terrified faces. Before adulthood, there is less cross-talk between the brain systems that regulate rational decisionmaking and those that regulate emotional arousal. During adolescence, very strong feelings are less likely to be modulated by the involvement of brain regions involved in controlling impulses, planning ahead, and comparing the costs and benefits of alternative courses of action. This is one reason why susceptibility to peer pressure declines as adolescents grow into adulthood; as they mature, individuals become better able to put the brakes on an impulse that is aroused by their friends.

Importance of timing

These structural and functional changes do not all take place along one uniform timetable, and the differences in their timing raise two important points relevant to the use of neuroscience to guide public policy. First, there is no simple answer to the question of when an adolescent brain becomes an adult brain. Brain systems implicated in basic cognitive processes reach adult levels of maturity by mid-adolescence, whereas those that are active in self-regulation do not fully mature until late adolescence or even early adulthood. In other words, adolescents mature intellectually before they mature socially or emotionally, a fact that helps explain why teenagers who are so smart in some respects sometimes do surprisingly dumb things.

To the extent that society wishes to use developmental neuroscience to inform public policy decisions on where to draw age boundaries between adolescence and adulthood, it is therefore important to match the policy question with the right science. In his dissenting opinion in Roper, the juvenile death penalty case, Justice Antonin Scalia criticized the American Psychological Association, which submitted an amicus brief arguing that adolescents are not as mature as adults and therefore should not be eligible for the death penalty. As Scalia pointed out, the association had previously taken the stance that adolescents should be permitted to make decisions about abortion without involving their parents, because young people’s decision-making is just as competent as that of adults.

The association’s two positions may seem inconsistent at first glance, but it is entirely possible that an adolescent might be mature enough for some decisions but not others. After all, the circumstances under which individuals make medical decisions and commit crimes are very different and make different sorts of demands on individuals’ brains and abilities. State laws governing adolescent abortion require a waiting period before the procedure can be performed, as well as consultation with an adult—a parent, health care provider, or judge. These policies discourage impetuous and short-sighted acts and create circumstances under which adolescents’ decision-making has been shown to be just as competent as that demonstrated by adults. In contrast, violent crimes are usually committed by adolescents when they are emotionally aroused and with their friends—two conditions that increase the likelihood of impulsivity and sensation-seeking and that exacerbate adolescent immaturity. From a neuroscientific standpoint, it therefore makes perfect sense to have a lower age for autonomous medical decision-making than for eligibility for capital punishment, because certain brain systems mature earlier than others.

There is another kind of asynchrony in brain development during adolescence that is important for public policy. Middle adolescence is a period during which brain systems implicated in how a person responds to rewards are at their height of arousability but systems important for self-regulation are still immature. The different timetables followed by these different brain systems create a vulnerability to risky and reckless behavior that is greater in middle adolescence than before or after. It’s as if the brain’s accelerator is pressed to the floor before a good braking system is in place. Given this, it’s no surprise that the commission of crime peaks around age 17—as does first experimentation with alcohol and marijuana, automobile crashes, accidental drownings, and attempted suicide.

In sum, the consensus to emerge from recent research on the adolescent brain is that teenagers are not as mature in either brain structure or function as adults. This does not mean that adolescents’ brains are “defective,” just as no one would say that newborns’ muscular systems are defective because they are not capable of walking or their language systems are defective because they can’t yet carry on conversations. The fact that the adolescent brain is still developing, and in this regard is less mature than the adult brain, is normative, not pathological. Adolescence is a developmental stage, not a disease, mental illness, or defect. But it is a time when people are, on average, not as mature as they will be when they become adults.

I am frequently asked how to reconcile this view of adolescence with historical evidence that adolescents successfully performed adult roles in previous eras. This may be true, but all societies in recorded history have recognized a period of development between childhood and adulthood, and writers as far back as Aristotle have characterized adolescents as less able to control themselves and more prone to risk-taking than adults. As Shakespeare wrote in The Winter’s Tale: “I would there were no age between ten and three-and-twenty, or that youth would sleep out the rest; for there is nothing in the between but getting wenches with child, wronging the ancientry, stealing, fighting.” That was in 1623, without the benefit of brain scans.

Science in the policy arena

Although there is a good degree of consensus among neuroscientists about many of the ways in which brain structure and function change during adolescence, it is less clear just how informative this work is about adolescent behavior for public policy. Because all behavior must have neurobiological underpinnings, it is hardly revelatory to say that adolescents behave the way they do because of “something in their brain.” Moreover, society hardly needs neuroscience to tell it that, relative to adults, adolescents are more likely to engage in sensation seeking, less likely to control their impulses, or less likely to plan ahead. So how does neuroscience add to society’s understanding of adolescent behavior? What is the value, other than advances in basic neuroscience, of studies that provide neurobiological evidence that is consistent with what is already known about human behavior?

I’d like to consider five such possibilities, two that I think are valid, two that I think are mistaken, and one where my assessment is equivocal. Let me begin with two rationales that are widely believed but that are specious.

The first mistake is to interpret age differences in brain structure or function as conclusive evidence that the relevant behaviors must therefore be hard-wired. A correlation between brain development and behavioral development is just that: a correlation. It says nothing about the causes of the behavior or about the relative contributions of nature and nurture. In some cases, the behavior may indeed follow directly from biologically driven changes in brain structure or function. But in others, the reverse is true—that is, the observed brain change is the consequence of experience. Yes, adolescents may develop better impulse control as a result of changes within the prefrontal cortex, and it may be true that these anatomical changes are programmed to unfold along a predetermined timetable. But it is also plausible that the structural changes observed in the prefrontal cortex result from experiences that demand that adolescents exercise self-control, in much the same way that changes in muscle structure and function often follow from exercise.

A second mistake is assuming that the existence of a biological correlate of some behavior demonstrates that the behavior cannot be changed. It is surely the case that some of the changes in brain structure and function that take place during adolescence are relatively impervious to environmental influence. But it is also known that the brain is malleable, and there is a good deal of evidence that adolescence is, in fact, a period of especially heightened neuroplasticity. That’s one reason it is a period of such vulnerability to many forms of mental illness.

I suspect that the changes in reward sensitivity that I described earlier are largely determined by biology and, in particular, by puberty. I say this because the changes in reward seeking observed in young adolescents are also seen in other mammals when they go through puberty. This makes perfect sense from an evolutionary perspective, because adolescence is the period during which mammals become sexually active, a behavior that is motivated by the expectation of pleasure. An increase in reward sensitivity soon after puberty is added insurance that mammals will do what it takes to reproduce while they are at the peak of fertility, including engaging in a certain amount of risky behavior, such as leaving the nest or troop to venture out into the wild. In fact, the age at peak human fecundity (that is, the age at which an individual should begin having sex if he or she wants to have the most children possible) is about the same as the age at the peak of risk-taking—between 16 and 17 years of age.

Other brain changes that take place during adolescence are probably driven to a great extent by nurture and may therefore be modifiable by experience. There is growing evidence that the actual structure of prefrontal regions active in self-control can be influenced by training and practice. So in addition to assuming that biology causes behavior, and not the reverse, it is also mistaken to think that the biology of the brain can’t be changed.

How science can help

How, then, does neuroscience contribute to a better understanding of adolescent behavior? As I said, I think the neuroscience serves at least two important functions.

First, neuroscientific evidence can provide added support for behavioral evidence when the neuroscience and the behavioral science are conceptually and theoretically aligned. Notice that I used the word “support” here. Because scientific evidence of any sort is always more compelling when it is corroborated, when neuroscientific findings about adolescent brain development are consistent with findings from behavioral research, the neuroscience provides added confidence in the behavioral findings. But it is incorrect to privilege the neuroscientific evidence over the behavioral evidence, which is frequently done because the neuroscientific evidence is often assumed—incorrectly—by laypersons to be more reliable, precise, or valid. Many nonscientists are more persuaded by neuroscience than by behavioral science, because they often lack the training or expertise that would enable them to view the neuroscience through a critical lens. In science, familiarity breeds skepticism, and the lack of knowledge that most laypersons have about the workings of the brain, much less the nuances of neuroscientific methods, often leads them to be overly impressed by brain science and underwhelmed by behavioral research, even when the latter may be more relevant to policy decisions.

A second way in which neuroscience can be useful is that it may help generate new hypotheses about adolescent development that can then be tested in behavioral studies. This is especially important when behavioral methods are inherently unable to distinguish between alternative accounts of a phenomenon. Let me illustrate this point with an example from our ongoing research.

As I noted earlier, it has been hypothesized that heightened risk-taking in adolescence is the product of an easily aroused reward system and an immature self-regulatory system. The arousal of the reward system takes place early in adolescence and is closely tied to puberty, whereas the maturation of the self-regulatory system is independent of puberty and unfolds gradually, from preadolescence through young adulthood.

In our studies, we have shown that reward sensitivity, preference for immediate rewards, sensation-seeking, and a greater focus on the rewards of a risky choice all increase between pre-adolescence and mid-adolescence, peak between ages 15 and 17, and then decline. In contrast, controlling impulses, planning ahead, and resisting peer influence all increase gradually from pre-adolescence through late adolescence, and in some instances, into early adulthood.

Although one can show without the benefit of neuroscience that the inclination to take risks is generally higher in adolescence than before or after, having knowledge about the course of brain development provides insight into the underlying processes that might account for this pattern. We’ve shown in several experiments that adolescents take more risks when they are with their friends than when they are alone. But is this because the presence of peers interferes with self-control or because it affects the way in which adolescents experience the rewards of the risky decision? It isn’t possible to answer this question by asking teenagers why they take more risks when their friends are around; they admit that they do, but they say they don’t know why. But through neuroimaging, we discovered that the peer effect was specifically due to the impact that peers have on adolescents’ reward sensitivity. Why does this matter? Because if the chief reason that adolescents experiment with tobacco, alcohol, and other drugs is that they are at a point in life where everything rewarding feels especially so, trying to teach them to “Just Say No” is probably futile. I’ve argued elsewhere that raising the price of cigarettes and alcohol, thereby making these rewarding substances harder to obtain, is probably a more effective public policy than health education.

I’ve now described two valid reasons to use neuroscience to better understand adolescent behavior and two questionable ones. I want to add a fifth, which concerns the attributions we make about individuals’ behavior. This particular use of neuroscience is having a tremendous impact on criminal law.

I recently was asked to provide an expert opinion in a Michigan case involving a prison convict named Anthony, who as a 17-year-old was part of a group of teenagers who robbed a small store. During the robbery, one of the teenagers shot and killed the storekeeper. Although the teenagers had planned the robbery, they did not engage in the act with the intention of shooting, much less murdering, someone. But under the state’s criminal law, the crime qualified as felony murder, which in Michigan carries a mandatory sentence of life without the possibility of parole for all members of the group involved in the robbery—including Anthony, who had fled the store before the shooting took place.

At issue now is a challenge by Anthony—who has been in prison for 33 years—to vacate the sentence in light of the Supreme Court’s ruling in Graham v. Florida that life without parole is cruel and unusual punishment for juveniles because they are less mature than adults. The ruling in that case was limited to crimes other than homicide, however. The challenge to Michigan’s law is based on the argument that the logic behind the Graham decision applies to felony murder as well.

I was asked specifically whether a 17-year-old could have anticipated that someone might be killed during the robbery. It is quite clear from the trial transcript that Anthony didn’t anticipate this consequence, but didn’t is not the same as couldn’t. It is known from behavioral research that the average 17-year-old is less likely than the average adult to think ahead, control his impulses, and foresee the consequences of his actions; and clinical evaluations of Anthony revealed that he was a normal 17-year-old. But “less likely” means just that; it doesn’t mean unable, but neither does it mean unwilling. As I will explain, the distinction between didn’t and couldn’t is important under the law. And studies of adolescent brain development might be helpful in distinguishing between the two.

The issue before the Michigan Court is not whether Anthony is guilty. He freely admitted having participated in the robbery, and there was clear evidence that the victim was shot and killed by one of the robbers. So there is no doubt that Anthony is guilty of felony murder. But even when someone is found guilty, many factors can influence the sentence he receives. Individuals who are deemed less than fully responsible are punished less severely than those who are judged to be fully responsible, even if the consequences of the act are identical. Manslaughter is not punished as harshly as premeditated murder, even though both result in the death of another individual. So the question in Anthony’s case, as it was in the Roper and Graham Supreme Court cases, is whether 17-year-olds are fully responsible for their behavior. If they are not, they should not be punished as severely as individuals whose responsibility is not diminished.

In order for something to diminish criminal responsibility, it has to be something that was not the person’s fault—that was outside his control. If someone has an untreatable tumor on his frontal lobe that is thought to make him unable to control aggressive outbursts, he is less than fully responsible for his aggressive behavior as a result of something that isn’t his fault, and the presence of the tumor would be viewed as a mitigating factor if he were being sentenced for a violent crime. On the other hand, if someone with no neurobiological deficit goes into a bar, drinks himself into a state of rage, and commits a violent crime as a result, the fact that he was drunk does not diminish his responsibility for his act. It doesn’t matter whether the mitigating factor is biological, psychological, or environmental. The issue is whether the diminished responsibility is the person’s fault and whether the individual could have compensated for whatever it is that was uncontrollable.

Judgments about mitigation are often difficult to make because most of the time, factors that diminish responsibility fall somewhere between the extremes of things that are obviously beyond an individual’s control, such as brain tumors, and things an individual could have controlled, such as self-inflicted inebriation. In these in-between cases, one must make a judgment call and look for evidence that tips the balance in one direction or the other. Profound mental retardation that compromises foresight is a mitigating condition. A lack of foresight as a result of stupidity that is within the normal range of intelligence is not. Being forced to commit a crime because a gun is pointed at one’s head mitigates criminal responsibility. Committing a crime in order to save face in front of friends who have made a dare does not. Many things can lead a person to act impulsively or without foresight but are not necessarily mitigating. A genetic inclination toward aggression is probably in this category, as is having been raised in a rotten neighborhood. Both are forces outside the individual’s choosing, but society does not see them as so determinative that they automatically diminish personal responsibility.

As I have discussed, studies of adolescent brain anatomy clearly indicate that regions of the brain that regulate such things as foresight, impulse control, and resistance to peer pressure are still developing at age 17. And imaging studies show that immaturity in these regions is linked to adolescents’ poorer performance on tasks that require these capabilities. Evidence that the adolescent brain is less mature than the adult brain in ways that affect some of the behaviors that mitigate criminal responsibility suggests that at least some of adolescents’ irresponsible behavior is not entirely their fault.

The brain science, in and of itself, does not carry the day, but when the results of behavioral science are added to the mix, I think it tips the balance toward viewing adolescent impulsivity, short-sightedness, and susceptibility to peer pressure as developmentally normative phenomena that teenagers cannot fully control. This is why I have argued that adolescents should be viewed as inherently less responsible than adults, and should be punished less harshly than adults, even when the crimes they are convicted of are identical. I do not find persuasive the counterargument that some adolescents can exercise self-control or that some adults are just as impulsive and short-sighted as teenagers. Of course there is variability in brain and behavior among adolescents, and of course there is variability among adults. But the average differences between the age groups are significant, and that is what counts when society uses science to draw age boundaries under the law.

Age ranges for responsibility

Beyond criminal law, how should social policy involving young people take this into account? Society needs to distinguish between people who are ready for the rights and responsibilities of adulthood and those who are not. Science can help in deciding where best to draw the lines. Based on what is now known about brain development—and I say “now known” because new studies are appearing every month—it is reasonable to posit that there is an age range during which adult neurobiological maturity is reached. Framing this as an age range, rather than pinpointing a discrete chronological age, is useful, because doing so accommodates the fact that different brain systems mature along different timetables, and different individuals mature at different ages and different rates. The lower bound of this age range is probably somewhere around 15. By this I mean that if society had an agreed-upon measure of adult neurobiological maturity (which it doesn’t yet have, but may at some point in the future), it would be unlikely that many individuals would have attained this mark before turning 15. The upper bound of the age range is probably somewhere around 22. That is, it would be unlikely that there would be many normally developing individuals who have not reached adult neurobiological maturity by the time they have turned 22.

If society were to choose either of these endpoints as the age of majority, it would be forced to accept many errors of classification, because granting adult status at age 15 would result in treating many immature individuals as adults, which is dangerous, whereas waiting until age 22 would result in treating many mature individuals as children, which is unjust. So what is society to do? I think there are four possible options.

The first option is to pick the mid-point of this range. Yes, this would result in classifying some immature individuals as adults and some mature ones as children. But this would be true no matter what chronological age is picked, and assuming that the age of neurobiological maturity is normally distributed, fewer errors would be made by picking an age near the middle of the range than at either of the extremes. Doing so would place the dividing line somewhere around 18, which, it turns out, is the presumptive age of majority pretty much everywhere around the world. In the vast majority of countries, 18 is the age at which individuals are permitted to vote, drink, drive, and enjoy other adult rights. And just think—the international community arrived at this without the benefit of brain scans.
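To illustrate the classification logic with entirely hypothetical numbers: if the age at which individuals reach neurobiological maturity were normally distributed across the 15-to-22 range described above, a dividing line near the middle of the distribution would minimize the expected mismatch between a person’s legal status and his or her actual maturity. The sketch below assumes an illustrative mean and spread rather than any measured values.

```python
import numpy as np

# Hypothetical distribution of the age of neurobiological maturity,
# spanning roughly the 15-to-22 range posited in the text.
rng = np.random.default_rng(0)
maturity_age = rng.normal(loc=18.5, scale=1.5, size=100_000)

for cutoff in (15, 18, 22):
    # Years each person spends misclassified (treated as an adult too early,
    # or as a child too long), averaged across the simulated population.
    avg_years_misclassified = np.abs(maturity_age - cutoff).mean()
    # Share of people not yet mature when they would be granted adult status.
    share_immature_adults = (maturity_age > cutoff).mean()
    print(f"cutoff {cutoff}: ~{avg_years_misclassified:.1f} years misclassified "
          f"on average; {share_immature_adults:.0%} still immature at the cutoff")
```

In this toy example, moving the cutoff to either end of the range nearly triples the average mismatch relative to a cutoff near the middle.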

A second possibility would be to decide, on an issue-by-issue basis, what it takes to be “mature enough.” Society does this regularly. Although the presumptive age of majority in the United States is 18, the nation deviates from this age more often than not. Consider, for a moment, the different ages mandated for determining when individuals can make independent medical decisions, drive, hold various types of employment, marry, view R-rated movies without an adult chaperone, vote, serve in the military, enter into contracts, buy cigarettes, and purchase alcohol. The age of majority with respect to these matters ranges from 15 to 21, which is surprisingly reasonable, given what science says about brain development. The only deviation I can think of that falls outside this range is the nation’s inexplicable willingness to try people younger than 15 as adults, but this policy, in part because of the influence of brain science, is now being questioned in many jurisdictions.

Although the aforementioned age range may be reasonable, society doesn’t rely on science to link specific ages to specific rights or responsibilities, and some of the nation’s laws are baffling, to say the least, when viewed through the lens of science or public health. How is it possible to rationalize permitting teenagers to drive before they are permitted to see R-rated movies on their own, sentencing juveniles to life without parole before they are old enough to serve on a jury, or sending young people into combat before they can buy beer? The answer is that policies that distinguish between adolescents and adults are made for all sorts of reasons, and science, including neuroscience, is only one of many proper considerations.

A third possibility would be to shift from a binary classification system, in which everyone is legally either a child or an adult, to a regime that uses three legal categories: one for children, one for adolescents, and one for adults. The nation does this for some purposes under the law now, although the age boundaries around the middle category aren’t necessarily scientifically derived. For example, many states have graduated drivers’ licensing, a system in which adolescents are permitted to drive, but are not granted full driving privileges until they reach a certain age. This model also is used in the construction of child labor laws, where adolescents are allowed to work once they’ve reached a certain age, but there are limits on the types of jobs they can hold and the number of hours they can work.

In our book Rethinking Juvenile Justice, Elizabeth Scott and I have argued that this is how the nation should structure the justice system, treating adolescent offenders as an intermediate category, neither as children, whose crimes society excuses, nor as adults, whom society holds fully responsible for their acts. I’ve heard the suggestion that society should apply this model to drinking as well, and permit individuals between 18 and 20 to purchase beer and wine, but not hard liquor, and to face especially stiff punishment for intoxication or wrongdoing under the influence of alcohol. There are some areas of the law, though, where a three-way system would be difficult to imagine, such as voting.

A final possibility is acknowledging that there is variability in brain and behavioral development among people of the same chronological age and making individualized decisions rather than drawing categorical age boundaries at all. This was the stance taken by many of the Supreme Court justices who dissented in the juvenile death penalty and life without parole cases. They argued that instead of treating adolescents as a class of individuals who are too immature to be held fully responsible for their behavior, the policy should be to assess each offender’s maturity to determine his criminal culpability. The justices did not specify what tools would be needed to do this, however, and reliably assessing psychological maturity is easier said than done. There is a big difference between using neuroscience to guide the formulation of policy and using it to determine how individual cases are adjudicated. Although it may be possible to say that, on average, people who are Johnny’s age are typically less mature than adults, we cannot say whether Johnny himself is.

Science may someday have the tools to image an adolescent’s brain and draw conclusions about that individual’s neurobiological maturity relative to established age norms for various aspects of brain structure and function, but such norms do not yet exist, and individualized assessments of neurobiological maturity would be prohibitively expensive. Moreover, it is not clear that society would end up making better decisions using neurobiological assessments than those it makes on the basis of chronological age or than those it might make using behavioral or psychological measures. It makes far more sense to rely on a driving test than a brain scan to determine whether someone is ready to drive. So don’t expect to see brain scanners any time soon at your local taverns or movie theaters.

Accepting the challenges

The study of adolescent brain development has made tremendous progress in the very short period that scientists have been studying the adolescent brain systematically. As the science moves ahead, the big challenge facing those of us who want to apply this research to policy will be understanding the complicated interplay of biological maturation and environmental influence as they jointly shape adolescent behavior. And this can be achieved only through collaboration between neuroscientists and scholars from other disciplines. Brain science should inform the nation’s policy discussions when it is relevant, but society should not make policy decisions on the basis of brain science alone.

Whether the revelation that the adolescent brain may be less mature than scientists had previously thought is ultimately a good thing, a bad thing, or a mixed blessing for young people remains to be seen. Some policymakers will use this evidence to argue in favor of restricting adolescents’ rights, and others will use it to advocate for policies that protect adolescents from harm. In either case, scientists should welcome the opportunity to inform policy discussions with the best available empirical evidence.

The Tunnel at the End of the Light: The Future of the U.S. Semiconductor Industry

Today, as it was 25 years ago, U.S. leadership in the semiconductor industry appears to be in peril, with increasingly robust competition from companies in Europe and Asia that are often subsidized by national governments. Twenty-five years ago, the United States responded vigorously to a Japanese challenge to its leadership. U.S. industry convinced the government, largely for national security reasons, to make investments that helped preserve and sustain U.S. leadership. The main mechanism for this turnaround was an unprecedented industry/government consortium called SEMATECH, which today has attained a near-mythical status.

The world has changed in the past 25 years, however. Today, industry is not clamoring for government help. In a more globalized economy, companies appear to be more concerned with their overall international position than with the relative strength of the U.S.-based segment. Moreover, the United States continues to lead the world in semiconductor R&D. Companies can use breakthroughs derived from that research to develop and manufacture new products anywhere in the world.

Indeed, it appears increasingly likely that most semiconductor manufacturing will no longer be done in the United States. But if this is the case, what are the implications for the U.S. economy? Are the national security concerns that fueled SEMATECH’s creation no longer relevant? Unfortunately, today’s policymakers are not even focusing on these questions. They should be.

We believe that there could be significant ramifications to the end of cutting-edge semiconductor manufacturing in the United States and that government involvement that goes beyond R&D funding may be necessary. But the U.S. government has traditionally been averse to policies supporting commercialization, and the current ideological makeup of Congress argues against anything smacking of industrial policy.

But assuming that more government help is needed, and that Congress is even willing to provide it, what form should it take? In considering this question, we decided to reexamine the SEMATECH experience. We concluded that SEMATECH met the objectives of the U.S. semiconductor companies that established it but was only a partial answer to sustaining U.S. leadership in this critical technology. Moreover, as a consortium that received half of its funds over a decade from the U.S. Department of Defense (DOD) under the rationale of supporting national security, the SEMATECH experience raises some unaddressed policy questions as well as questions about how government should approach vexing issues about future technology leadership.

The origins of SEMATECH

In the late 1970s, U.S. semiconductor firms concluded that collectively they had a competitiveness problem. Japanese companies were aggressively targeting the dynamic random access memory (DRAM) business. U.S. companies believed that the Japanese firms were competing unfairly, aided by various government programs and subsidies. They contended that these arrangements allowed Japanese firms to develop and manufacture DRAMs and then dump them on the market at prices below cost. Initially, U.S. industry responded by forming the Semiconductor Industry Association as a forum for addressing key competitive issues.

In 1987, a Defense Science Board (DSB) Task Force issued a report articulating growing concerns about the competitiveness of the U.S. integrated circuit (IC) industry. The DSB study depicted semiconductor manufacturing as a national security problem and argued that the government should address it. A key recommendation was the creation of the entity that became SEMATECH.

The Reagan administration initially opposed an industry/government consortium, considering it inappropriate industrial policy. But Congress, concerned with what it considered to be the real prospect that the United States would cede the IC manufacturing industry to Japan, approved a bill creating SEMATECH, and President Reagan signed it into law.

From the outset, there were some concerns about SEMATECH. One was the nature of a consortium itself, which is essentially a club with members who pay to join. SEMATECH was made up of about 80% of the leading U.S. semiconductor chip manufacturing firms. But some companies, for various reasons, declined to join, and were critical of SEMATECH for two reasons. First, SEMATECH was criticized for focusing on mainstream technology and thus defining the next generation of technology based on a limited view of the world. SEMATECH decided to focus on silicon complementary metal-oxide–semiconductor (CMOS) ICs. This technology is the basis for memory and other high-volume devices that were targets of Japanese competitors. Cypress Semiconductor, a chief critic that made application-specific integrated circuits (ASICs), believed that SEMATECH supported incumbent mass market technologies, not those of more specialized producers. Second, companies that had declined to participate in the consortium argued that because SEMATECH received half of its funding from the federal government, the results of its efforts should be equally available to all. But SEMATECH adamantly maintained its view that only those who had paid their fair share should reap preferential benefits.

Another concern about the creation of SEMATECH was the limited role given to the DOD despite the national security rationale for SEMATECH and the fact that the DOD would provide 50% of the funding for the consortium. In the enabling legislation for SEMATECH, Congress ensured that the DOD would have little direct input in the project planning and activities of the organization. Congress, following the position of SEMATECH’s commercial participants, concluded simply that industry knew best what to do and how to do it.

But from the start, it was clear that the interests of the government, and especially the DOD, were not the same as those of the commercial IC industry. This is made clear in an Institute for Defense Analyses (IDA) study done for the DOD on technology areas that needed attention from a defense perspective. One highlighted area was ASIC technology, because of the DOD’s need for affordable, low-volume specialty ICs. Although ASIC technology had great commercial potential, it was unlikely that SEMATECH would pursue that technology because of its business model.

The IDA study also emphasized the need to invest in manufacturing tools, especially lithography technology, for future generations of ICs. Although industry participants in SEMATECH did not object to government investment in advanced lithography, they saw this type of effort as separate from SEMATECH. However, this raised concern about how longer-term DOD-sponsored lithography R&D would integrate with the near-term SEMATECH focus on developing technology to improve processing yields.

A third area emphasized was the need to develop non-silicon IC technology, especially that based on materials such as gallium arsenide. These technologies are especially important to defense applications that require high-speed signal processing and had, in the view of some, great commercial potential. Indeed, gallium arsenide IC technology became the critical enabling technology for cellular phones.

Thus, it was clear early on that SEMATECH, with its narrow agenda and focus on the survival of its member companies, could not, from the DOD’s perspective, sufficiently serve national security interests in IC development. The DOD needed to develop new types of devices for future capabilities, the processes needed to fabricate them, and a U.S. industrial base with a first-mover advantage, so that the DOD could reap strategic and tactical advantages resulting from the development of the new technologies. Some saw the manufacturing of these other devices as vital to national security, perhaps even more so than the standard commercial CMOS ICs emphasized by SEMATECH.

This longer-term perspective led the DOD to sponsor research in areas SEMATECH was not emphasizing, including the Very High Speed IC program, which focused on analog-to-digital converter ICs. These were not standard CMOS ICs and required different fabrication processes. The DOD also funded research on non–silicon-based ICs, particularly those using gallium arsenide, through the Monolithic Microwave IC program. In addition, under the Defense Advanced Research Projects Agency (DARPA), the Very Large Scale Integration (VLSI) program supported research on advanced IC architecture, design tools, and manufacturing tools, especially lithography.

Diverging interests

The divergent interests of the government and SEMATECH came to the forefront in the 1990s over the issue of lithography technology. Lithography processing tools produce intricate circuit design patterns on semiconductor wafers. Their continued improvement has enabled IC manufacturers to shrink feature size and pack ever-increasing numbers of transistors and functionality into a single chip.

Lithography tools are extremely complex and increasingly expensive. The current leading-edge tools cost more than $50 million, and next-generation tools are projected to cost nearly $125 million. Manufacturers, however, can make only one or two leading-edge tools per month. The highest profit margins for IC products come immediately after an advance occurs in lithography technology. Once the improved lithography tools become more widely available, the ICs they produce become commodity items with thin margins. Thus, the order in which IC manufacturers get access to the most advanced tools is an important component of their profitability. Tool suppliers use this as leverage to reward their largest and most loyal customers.

The DOD, through DARPA, and industry, through SEMATECH, supported the development of advanced lithography tools by two U.S. suppliers: GCA and PerkinElmer (whose lithography business was later acquired by Silicon Valley Group, or SVG). These companies once dominated the global lithography market but were displaced by the Japanese firms Nikon and Canon. With federal and other external funding, the U.S.-developed tools became competitive with the tools offered by Nikon and Canon. The DOD wanted U.S. IC firms to buy and use the U.S. tools, thereby supporting a U.S. semiconductor infrastructure. The leading U.S. IC firms, however, were reluctant and made it increasingly clear that they wanted the U.S.-developed technology to be available to their Japanese lithography tool suppliers. In essence, key U.S. IC firms were happy to have this technology developed but wanted to continue to use Nikon and Canon as their suppliers.

This crystallized the divergent interests of the DOD and some of the major U.S. IC firms. Some in the DOD saw it in the nation’s interest for commercial and defense purposes to have U.S.-based lithography technology capabilities. The industry leaders, on the other hand, emphasized business concerns about the ability of the U.S. lithography firms to scale production and deliver and support the tools. Further, because they had established special relationships with Nikon or Canon, they had good early access to key tools, providing them with a competitive advantage.

From the government standpoint, this raised the following question: Why did companies encourage the government to fund these U.S. tools if they were not going to buy them? SEMATECH members paid for some of the tool development, which gave them the right of first refusal but no obligation to buy. Yet U.S. lithography firms would not survive unless major U.S. companies bought their tools.

As the U.S. lithography toolmakers foundered, the U.S. government faced the question of what, if anything, it could do with the remnants of the U.S. industry. With government acquiescence, SVG acquired PerkinElmer’s lithography business in 1990 and formed Silicon Valley Group Lithography (SVGL). SVGL proceeded to develop PerkinElmer’s breakthrough step-and-scan technology but still struggled to attract a customer base. In 1993, it talked with Canon about sharing the underlying technology, but the U.S. government objected to any such transfer. In 2000, ASML, a Netherlands-based lithography company formed as a joint venture of Philips and ASM, announced its intent to acquire SVGL for $1.6 billion. After an initial objection by the U.S. Business and Industry Council, ASML completed the acquisition of SVGL in 2001, albeit with some very specific strictures to satisfy U.S. security concerns.

ASML, with strong support from the European Union (EU), industrial collaborations through the Belgium-based Interuniversity Microelectronics Center (IMEC), and the technology that it acquired from SVGL, developed a strong customer base among IC manufacturers in Europe, as well as those emerging in Korea and Taiwan that could not gain early access to leading-edge tools from the dominant Japanese providers. By addressing this underserved market, ASML in 2002 became the global leader, with 45% market share. Through a series of technical innovations that solved major problems for the IC manufacturers, it grew to 70% market share in 2011. Canon, meanwhile, lost most of its business.

Thus, it is arguable that U.S. industrial policy in the 1980s and 1990s, coupled with that of Europe, was successful. U.S. government–funded technology helped create a highly capable lithography competitor that offered an alternative to the then-dominant Japanese suppliers, reducing the prospect of an unacceptable concentration of a critical production tool. However, the firm that successfully implemented this technology was a Dutch one, which also received strong support from European firms and technology policies and programs.

Subsequently, another issue arose in the late 1990s between the U.S. government and U.S. IC firms over lithography. For years, IC manufacturers were concerned that optical lithography would reach its technological limits and the industry would no longer be able to continue shrinking features on IC chips. One promising, albeit extremely challenging, solution lay in the development of lithography based on extreme ultraviolet light (EUV), a technology that originated at U.S. Department of Energy (DOE) laboratories.

To access EUV technology, Intel in 1997 formed the EUV LLC, which entered into a cooperative R&D agreement (CRADA) with DOE. As part of this agreement, Intel and its partners would pay $250 million over three years to cover the direct salary costs of government researchers at the national labs and acquire equipment and materials for the labs, as well as cover the costs of its own researchers dedicated to the project. In return, the consortium would have exclusive rights to the technology in the EUV lithography field of use. At the time, it was the largest CRADA ever undertaken.

Once the EUV LLC executed the CRADA, Intel announced that it intended to bring in Nikon and ASML to help develop the technology. This unprecedented access by foreign corporations to U.S. national defense laboratories became an issue for DOE and Congress. In reviewing the available options, DOE rejected the Intel proposal to partner with Nikon but allowed it to set up a partnership with ASML. Among conditions for this access, ASML had to commit to use SVGL’s U.S. facilities for manufacturing.

Originally slated for production in the early 2000s, EUV tools are still not available. Today, ASML is the only company in the world with preproduction EUV tools undergoing evaluation by IC manufacturers. Analysts predict that if ASML successfully delivers commercially viable EUV lithography tools, it will expand its global market share to 80%.

U.S. policy clearly had an effect on the semiconductor industry, but drawing lessons from this experience is not a simple matter. Tensions between commercial and national security goals were never fully resolved. Although some U.S. companies benefited from federal efforts, several foreign companies also reaped benefits, and the overall gains for U.S. interests were not as broad or as long-lasting as hoped. Now the nation’s IC industry faces a new and somewhat different challenge in a different global economic environment. Developing an effective policy response is a challenge that can be informed but not guided by previous efforts.

The government’s role

The DOD’s investments in IC R&D typically aim far ahead of the trajectory of commercial R&D. In that sense, the DOD’s R&D can be considered disruptive because it often leads to technologies that are not embedded in current practice. However, the last thing that the mainstream semiconductor industry wants is anything disruptive. The existing commercial IC industry follows a highly controlled evolutionary roadmap to maintain the Moore’s Law pace of doubling the number of transistors on an IC every 18 to 24 months. Only when industry hits a wall and can no longer proceed along an evolutionary path will it consider radical change.

Today, the industry again faces such a wall. The cost per transistor on an IC, which has been declining at an exponential rate for more than four decades, has leveled off, and continuing progress is simply not economically possible with mainstream technology. Industry is counting on ASML’s EUV lithography tools to restore the pattern of reliable cost reduction, but it is not certain that these tools will be able to meet the technical and economic requirements. There are both huge engineering and basic physics challenges to the development of new technologies. The lithography tool will have to register the successive exposure layers with an accuracy within four nanometers or about 20 atoms, and at this scale even the photon intensity of the light becomes a concern. We are literally reaching the tunnel at the end of the light.

DARPA has continued to fund R&D on next-generation lithography, albeit with an emphasis on technologies that support cost-effective fabrication of the low volumes of the specialty ICs needed by the military. It has funded technologies called nano-imprint lithography and reflected electron beam lithography. DARPA’s program, however, is limited to proving technical feasibility and does not address the investment needed to take the technology to a production-worthy tool.

Taking a tool from the technology feasibility demonstration stage to a production-worthy product is a bet-the-company proposition, similar to the stakes in developing a new commercial airliner. The penalty for missing the market can be devastating. In the past, U.S. lithography toolmakers had the best technology in the world, but failed to commercialize it. Today, the United States does not even have a firm that makes lithography tools.

The U.S. government has repeatedly invested in technology development but has deliberately avoided funding commercialization efforts. The U.S. aversion to policies supporting commercialization is a stark contrast to those of other countries. The EU, for example, through IMEC, Germany’s Fraunhofer Institute, and other institutions, supports applied research and product development. This has helped companies such as ASML navigate the treacherous waters of commercialization. Japan, Korea, Taiwan, China, and others also have government-funded applied technology programs.

As the world of electronics moves into the realm of nanoscale electronics, or nanotronics, the United States has focused on proof-of-concept technology R&D programs at DARPA, including some novel but unproven nanoscale production approaches, such as nanoimprint, dip-pen, and electron beam lithography. Yet today there is no concerted, focused national R&D program addressing the nanotronic manufacturing infrastructure. Corporate consortia and regional centers, such as the Albany Nanotech Complex in New York and the associated IBM-led Semiconductor Research Alliance, have sprung up without federal funding. However, such efforts are corporation-dominated and international in focus. They do not aim at a national agenda of technology leadership.

Without a parallel effort focused on developing the high-volume manufacturing technology and infrastructure necessary to make actual products, the United States will probably not reap the rewards of its investments. U.S.-based companies that develop the technology will have to go elsewhere to manufacture. The President’s Council of Advisors on Science and Technology 2010 report on the National Nanotechnology Initiative emphasized the need to put greater emphasis on manufacturing and commercialization in a nanoelectronics research initiative, stating that “over the next five years, the federal government should double the funding devoted to nanomanufacturing.”

The United States has a long history of funding the underlying technology and manufacturing capability in areas where national security is the primary application. The situation becomes less clear in dual-use situations, in which the technology has both commercial and defense importance. IC technology is clearly dual-use, given its pervasiveness in commercial products and its criticality in most aspects of military strategic and tactical operations.

The question is, given the national security implications of IC technology, should there be a concerted U.S. policy to address nanotronics manufacturing? If the health of the U.S. semiconductor manufacturing industry was so important for national security 25 years ago that it was the rationale for creating SEMATECH, should the United States be doing something similar today? If not, what has changed? And if the nation does undertake such an activity, should the effort be structured in a way that gives the federal government an active role and voice?

The DOD has a strong interest in sustaining U.S. leadership in emerging technologies that will provide needed military capabilities. But the United States also needs to foster these technologies to maintain the health of an industry that has become a key component of a modern economy.

Who would fund this effort? Is this a job for the DOD, as it was in the past? It is certainly in the DOD’s interest to see key technologies mature and for the United States to be a leader in manufacturing. On the other hand, does this warrant a broader research funding agenda to include the Departments of Commerce and Energy? We contend that this is a much broader and more profound issue for national security, encompassing economic, energy, and even environmental security. Without the robust development of nanotronic-based industries, the United States faces the prospect of losing its leading position in the broader information technology sector, with cascading effects on other industries that depend on this technology to continue boosting their productivity.

Our concern is that there is inadequate focus on and discussion of these issues. The United States still has some tremendous advantages, including companies that are leaders in the most advanced IC technologies and their production, some robust tool and equipment firms, and a strong government-funded R&D system. However, other countries see the opportunity to claim the field and are funding national-level R&D programs in manufacturing at the nanoscale. Furthermore, the economic, geopolitical, and security landscape has changed fundamentally since 1985, which adds complexities to assessing the situation and determining potential approaches to address it. Given these dynamics, we conclude that it is time for concerted discussion to determine whether nanotronics manufacturing is an urgent national and economic security issue, and if so, what should be done about it.

Global Lessons for Improving U.S. Education

The middling performance of U.S. students on international achievement tests is by now familiar, so the overall results of the latest Program for International Student Assessment (PISA) study, released in late 2010, came as no surprise. Among the 34 developed democracies that are members of the Organization for Economic Cooperation and Development (OECD), 15-year-olds in the United States ranked 14th in reading, 17th in science, and no better than 25th in mathematics. The new wrinkle in the data was the participation of China’s Shanghai province, whose students took top honors in all three subjects, besting U.S. students by the equivalent of multiple grade levels in each. As the nation’s wealthiest city and a magnet for its most ambitious and talented citizens, Shanghai is hardly representative of China as a whole. Yet its students’ eye-popping performance seemed to highlight new challenges facing the U.S. economy in an age of unprecedented global trade.

The notion that educational competition threatens the future prosperity of the United States had already been a recurrent theme in many quarters. The Obama administration, for example, cited a link between education and national economic competitiveness in making the case for the education funding allocated through the American Recovery and Reinvestment Act, for the state-level policy changes incentivized by the Race to the Top grant competition, and for increased federal support of early childhood education.

However, the relationship between education and international competitiveness remains “a subject rife with myth and misunderstanding,” as even Arne Duncan, secretary of the U.S. Department of Education, has noted. This confusion may stem from the fact that the concept of international competitiveness is notoriously difficult to pin down. Academic economists, for example, have long criticized the view that countries in a globalized economy are engaged in a zero-sum game in which only some can emerge as winners and others will inevitably lose out. All countries can in theory benefit from international trade by specializing in those activities in which they have a comparative advantage. In what sense, then, does it make sense to talk about national economies competing?

These general lessons seem doubly true for education, where the mechanisms by which gains abroad would undermine U.S. prosperity are altogether unclear. Educational improvements in other countries enhance the productivity of their workforces, which in turn should reduce the costs of imports to the United States, benefitting all U.S. residents except perhaps those who compete directly in producing the same goods. At the top end of the education spectrum, growth in the number of graduate degrees awarded in fields such as science and engineering fosters technological advances from which the United States can benefit regardless of where key discoveries are made. For these and other reasons, developments such as Shanghai’s performance on the PISA, although at first glance startling, may in fact represent good news.

This is not to say that the very real educational challenges facing the United States are irrelevant to its future economic performance. On the contrary, the evidence that the quality of a nation’s education system is a key determinant of the future growth of its economy is increasingly strong. Therefore, the United States may benefit by examining past and ongoing research on educational performance across countries and considering the actions that higher-performing nations have taken that have helped their students to succeed.

Hard comparisons

Launched in 2000 as a project of the OECD, the PISA is administered every three years to nationally representative samples of students in each OECD country and in a growing number of partner countries and subnational units such as Shanghai. The 74 education systems that participated in the latest PISA study, conducted during 2009, represented more than 85% of the global economy and included virtually all of the United States’ major trading partners, making it a particularly useful source of information on U.S. students’ relative standing.

U.S. students performed well below the OECD average in math and essentially matched the average in science. In math, the United States trailed 17 OECD countries by a statistically significant margin, its performance was indistinguishable from that of 11 countries, and it significantly outperformed only five countries. In science, the United States significantly trailed 12 countries and outperformed nine. Countries scoring at similar levels to the United States in both subjects include Austria, the Czech Republic, Hungary, Ireland, Poland, Portugal, and Sweden.

The gap in average math and science achievement between the United States and the top-performing national school systems is dramatic. In math, the average U.S. student by age 15 was at least a full year behind the average student in six countries, including Canada, Japan, and the Netherlands. Students in six additional countries, including Australia, Belgium, Estonia, and Germany, outperformed U.S. students by more than half a year.

The second-rate performance of U.S. students is particularly striking given the level of resources the nation devotes to elementary and secondary education. Data on cumulative expenditures per student in public and private schools between ages 6 and 15 confirm that the United States spends more than any other OECD country except Luxembourg. Most of the higher-performing countries spend between $60,000 and $80,000 per student, compared with nearly $105,000 in the United States.

Some observers have speculated that despite the modest performance of its average students, the U.S. education system is characterized by pockets of excellence that can be expected to meet the needs of the knowledge economy. However, there is no clear evidence that educating a subset of students to very high levels is more important for national economic success than raising average achievement levels. Moreover, the United States in fact fares no better in comparisons of the share of students performing at exceptionally high levels. For example, only 9.9% of U.S. students taking the PISA math test achieved at level 5 or 6, the two top performance categories, which, according to test administrators, indicate that students are capable of complex mathematical tasks requiring broad, well-developed thinking and reasoning skills. Twenty-four countries outranked the United States by this metric. The share of students achieving level 5 or 6 exceeded 20% in five countries and exceeded 15% in another 10. In Shanghai, 50.4% of students surpassed this benchmark, more than five times the level in the United States.

Another common response to the disappointing performance of U.S. students has been to emphasize the relative diversity of U.S. students and the wide variation in their socioeconomic status. Family background characteristics and other out-of-school factors clearly have a profound influence on students’ academic achievement. The available international assessments, all of which offer only a snapshot of what students have learned at a single point in time rather than evidence on how much progress they are making from one year to the next, are therefore best viewed as measuring the combined effects of differences in school quality and differences in these contextual factors. The latter are poorly measured across countries, making it difficult to pin down their relative importance.

Even so, it is difficult to attribute the relative ranking of U.S. students to out-of-school factors alone. The share of U.S. students with college-educated parents, a key predictor of school success, actually ranks 8th among the OECD countries. The typical U.S. student is also well above the OECD average, according to PISA’s preferred measure of students’ socioeconomic status.

A record of poor results

U.S. students, however, have never fared well in international comparisons of student achievement. The United States ranked 11th out of 12 countries participating in the first major international study of student achievement, conducted in 1964, and its math and science scores on the 2009 PISA actually reflected modest improvements from the previous test. The United States’ traditional reputation as the world’s educational leader stems instead from the fact that it experienced a far earlier spread of mass secondary education than did most other nations.

In the first half of the 20th century, demand for secondary schooling in the United States surged as technological changes increased the wages available to workers who could follow written instructions, decipher blueprints, and perform basic calculations. The nation’s highly decentralized school system, in which local communities could vote independently to support the creation of a high school, provided a uniquely favorable mechanism to drive increased public investment in schooling. As economic historian Claudia Goldin of Harvard University has documented, by 1955 almost 80% of 15- to 19-year-olds were enrolled full-time in general secondary schooling, more than double the share in any European country.

The United States’ historical advantage in terms of educational attainment has long since eroded, however. U.S. high-school graduation rates peaked in 1970 at roughly 80% and have declined slightly since, a trend often masked in official statistics by the growing number of students receiving alternative credentials, such as a General Educational Development (GED) certificate. Although the share of students enrolling in college has continued to climb, the share completing a college degree has hardly budged. As this pattern suggests, both the time students are taking to complete college degrees and dropout rates among students enrolling in college have increased sharply. This trend seems especially puzzling in light of the fact that the economic returns from completing a postsecondary degree—and the economic costs of dropping out of high school—have grown substantially over the same period.

A policy agenda centered on closing the global achievement gap between U.S. students and those in other developed countries would provide a complementary and arguably more encompassing rationale for education reform than one focused primarily on closing achievement gaps among subpopulations within the United States.

Meanwhile, other developed countries have continued to see steady increases in educational attainment and, in many cases, now have postsecondary completion rates that exceed those in the United States. The U.S. high-school graduation rate now trails the average for European Union countries and ranks no better than 18th among the 26 OECD countries for which comparable data are available. On average across the OECD, postsecondary completion rates have increased steadily from one age cohort to the next. Although only 20% of those aged 55 to 64 have a postsecondary degree, the share among those aged 25 to 34 is up to 35%. The postsecondary completion rate of U.S. residents aged 25 to 34 remains above the OECD average at 42%, but this reflects a decline of one percentage point relative to those aged 35 to 44 and is only marginally higher than the rate registered by older cohorts.

To be sure, in many respects the U.S. higher education system remains the envy of the world. Despite recent concerns about rapidly increasing costs, declining degree completion rates, and the quality of instruction available to undergraduate students, U.S. universities continue to dominate world rankings of research productivity. The 2011 Academic Rankings of World Universities, an annual publication of the Shanghai Jiao Tong University, placed eight U.S. universities within the global top 10, 17 within the top 20, and 151 within the top 500. A 2008 RAND study commissioned by the U.S. Department of Defense found that 63% of the world’s most highly cited academic papers in science and technology were produced by researchers based in the United States. Moreover, the United States remains the top destination for graduate students studying outside of their own countries, attracting 19% of all foreign students in 2008. This rate is nine percentage points higher than the rate of the closest U.S. competitor, the United Kingdom.

Yet surely the most dramatic educational development in recent decades has been the rapid global expansion of higher education. Harvard economist Richard Freeman has estimated that the U.S. share of the total number of postsecondary students worldwide fell from 29% in 1970 to just 12% in 2006, a 60% decline. A portion of this decline reflects the progress of developed countries, but the more important factor by far has been the spectacular expansion of higher education in emerging economies, such as China and India. In China alone, postsecondary enrollments exploded from fewer than 100,000 students in 1970 to 23.4 million in 2006. The increase over the same period in India was from 2.5 million to 12.9 million students. In comparison, just 17.5 million U.S. students were enrolled in postsecondary degree programs in 2006.

Although these enrollment numbers reflect China and India’s sheer size and say nothing about the quality of instruction students receive, several recent reports have nonetheless concluded that the rapidly shifting landscape of higher education threatens the United States’ continued dominance in strategically important fields such as science and technology. Perhaps best known is the 2007 National Academies report Rising Above the Gathering Storm, which warned that “the scientific and technological building blocks critical to our economic leadership are eroding at a time when many other nations are gathering strength.” A follow-up report issued in 2010 by some of the authors of Gathering Storm warned that the storm was “approaching category five.” Although critics claim that the reports exaggerated the degree to which the research coming out of emerging economies is comparable to that produced by scholars based in the United States, it seems safe to conclude that in the future the nation will occupy a much smaller share of a rapidly expanding academic marketplace.

Costs of low-quality education

How concerned should the United States be about these developments? And is it the improvement in educational outcomes abroad that should motivate concern?

After all, until very recently the performance of the U.S. economy had far surpassed that of the industrialized world as a whole, despite the mediocre performance of U.S. students on international tests. Some observers have gone so far as to question the existence of a link between available measures of the performance of national education systems and economic success.

Such skepticism was not entirely misplaced. Economists as far back as Adam Smith have highlighted the theoretical importance of human capital as a source of national economic growth. For technologically advanced countries, highly educated workers are a source of the innovations needed to further enhance labor productivity. For countries far from the frontier, education is necessary to enable workers to adopt new technologies developed elsewhere. Because a given country is likely to be both near and far from the technological frontier in various industries at any given point in time, both of these mechanisms are likely to operate simultaneously. Yet rigorous empirical evidence supporting these common-sense propositions has been notoriously difficult to produce.

One key limitation of early research examining the relationship between education and economic growth is that it was based on crude measures of school enrollment ratios or the average years of schooling completed by the adult population. Although studies taking this approach tend to find a modest positive relationship between schooling and economic growth across countries, years of schooling is an incomplete and potentially quite misleading indicator of the performance of national education systems. Measures of educational attainment implicitly assume that a year of schooling is equally valuable regardless of where it is completed, despite the clear evidence from international assessments that the skills achieved by students of the same age vary widely across countries.

Economists Eric Hanushek of Stanford University and Ludger Woessmann of the University of Munich have addressed this limitation in an important series of papers published since 2008. Their key innovation is the use of 12 international assessments of math and science achievement conducted between 1964 and 2003 to construct a comparable measure of the cognitive skills of secondary school students for a large sample of countries. They went on to analyze the relationship between this measure and economic growth rates between 1960 and 2000 across all 50 countries for which cognitive skills and growth data are available and separately across 24 members of the OECD.

Their work has yielded several notable results. First, after controlling for both a country’s initial gross domestic product (GDP) per capita and the average years of schooling completed in 1960, they found that a one standard deviation increase in test scores is associated with an increase in annual growth rates of nearly 2 percentage points. Taken at face value, this implies that raising the performance of U.S. students in math and science to the level of a top-performing nation would increase the U.S. growth rate by more than a full percentage point over the long run; that is, once students educated to that level of academic accomplishment make up the entire national workforce. Second, they found that both the share of a country’s students performing at a very high level and the share performing above a very low level appear to contribute to economic growth in roughly equal amounts, suggesting that there is no clear economic rationale for policymakers to focus exclusively on improving performance at the top or the bottom of the ability distribution. Third, after controlling for their test-based measure of students’ cognitive skills, they found that the number of years of schooling completed by the average student is no longer predictive of growth rates. This suggests that policies intended to increase the quantity of schooling that students receive will bear economic fruit only if they are accompanied by measurable improvements in students’ cognitive skills.

Although these studies have offered a clear improvement over previous evidence, skeptics may wonder whether the pattern identified linking education quality and economic growth in fact reflects a causal relationship. It is of course possible that there are unidentified factors that enhance both the quality of national education systems and economic growth. Hanushek and Woessmann have performed a series of analyses intended to rule out these concerns. Although none of these tests of causation is definitive on its own, together they strongly suggest that policies that increase education quality would in fact generate a meaningful economic return.

Moreover, the magnitude of the relationship observed is so large that it would remain important even if a substantial portion of it were driven by other factors. Consider the results of a simulation in which it is assumed that the math achievement of U.S. students improves by 0.25 standard deviation gradually over 20 years. This increase would raise U.S. performance to roughly that of some mid-level OECD countries, such as New Zealand and the Netherlands, but not to that of the highest-performing OECD countries. Assuming that the past relationship between test scores and economic growth holds true in the future, the net present value of the resulting increment to GDP over an 80-year horizon would amount to almost $44 trillion. A parallel simulation of the consequences of bringing U.S. students up to the level of the top-performing countries suggests that doing so would yield benefits with a net present value approaching $112 trillion.
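
The arithmetic behind such projections can be made concrete with a simplified sketch in Python. The parameters below (starting GDP, baseline growth rate, phase-in schedule, horizon, and discount rate) are illustrative assumptions rather than those of the original simulation, so the output is meant only to show the mechanics, not to reproduce the $44 trillion or $112 trillion estimates.

```python
# A minimal sketch of a growth-dividend calculation in the spirit of the
# simulations described above. All parameter values are illustrative assumptions.

def npv_of_reform(gdp0=15.0,              # starting GDP in trillions of dollars (assumed)
                  baseline_growth=0.015,  # assumed baseline annual growth rate
                  growth_boost=0.005,     # assumed long-run boost from higher achievement
                  phase_in_years=40,      # boost phases in as reformed cohorts fill the workforce
                  horizon=80,             # evaluation horizon in years
                  discount=0.03):         # assumed real discount rate
    gdp_base, gdp_reform, npv = gdp0, gdp0, 0.0
    for year in range(1, horizon + 1):
        share_reformed = min(year / phase_in_years, 1.0)
        gdp_base *= 1 + baseline_growth
        gdp_reform *= 1 + baseline_growth + growth_boost * share_reformed
        npv += (gdp_reform - gdp_base) / (1 + discount) ** year
    return npv

print(f"Illustrative net present value of the GDP increment: ${npv_of_reform():.1f} trillion")
```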

Yet despite ubiquitous rhetoric about education’s importance for countries competing in the global marketplace, there is no evidence that these potential gains would come at the expense of other nations. Put differently, there is no reason to suspect that U.S. residents are made worse off in absolute terms by the superior performance of students in places such as Finland, Korea, or even Shanghai. At the higher education level, U.S. universities clearly face growing competition in recruiting talented international students and faculty and will probably find it difficult to maintain their current dominance of world rankings. Yet as labor economist Richard Freeman of Harvard University has explained, “the globalization of higher education should benefit the U.S. and the rest of the world by accelerating the rate of technological advance associated with science and engineering and by speeding the adoption of best practices around the world, which will lower the costs of production and prices of goods.”

This is not to say that a continued decline in the relative standing of the U.S. education system would leave the nation’s economy entirely unaffected. As Caroline Hoxby, a labor and public economist at Stanford University, has noted, studies of the “factor content” of U.S. exports and economic growth have long documented their disproportionate reliance on human capital. This pattern suggests that the United States has traditionally had a comparative advantage in the production of goods that depend on skilled labor, which in turn reflects its historical edge in the efficient production of highly educated workers. In recent decades, U.S. companies have increasingly addressed labor shortages in technical fields by “importing” human capital in the form of highly educated immigrants, many of whom received their postsecondary training in the United States. This strategy cannot be a source of comparative advantage in the long run, however, because other countries are by definition able to import talented immigrants at the same cost. The decline in the relative performance of the U.S. educational system may therefore have adverse consequences for the high-tech sectors on which the nation has historically depended to generate overall growth. The ability of the nation’s economy as a whole to adapt in the face of such a disruption is, of course, an open question.

Policy lessons

In short, although there is little indication that education is an area in which countries are engaged in zero-sum global competition for scarce resources, education reform does provide a means to enhance economic growth and, in turn, the nation’s capacity to address its mounting fiscal challenges. Even if that were not the case, the moral argument for addressing the performance of the most dysfunctional U.S. school systems and the inequalities in social outcomes they produce would be overwhelming. What then are the lessons policymakers should draw from the growing body of evidence examining the performance of school systems across various countries?

The first and most straightforward lesson is simply that dramatic improvement is possible and that this is true even of the best-performing state school systems. Not only do many countries perform at markedly higher levels despite being at lower levels of economic development, but several of these countries have improved their performance substantially in the relatively short period since international tests were first widely administered. Nor do the international data suggest that countries face a stark tradeoff between excellence and equity when considering strategies to raise student achievement. In fact, the countries with the highest average test scores tend to exhibit less overall inequality in test scores and, in many cases, weaker dependence of achievement on family background characteristics.

A policy agenda centered on closing the global achievement gap between U.S. students and those in other developed countries would provide a complementary and arguably more encompassing rationale for education reform than one focused primarily on closing achievement gaps among subpopulations within the United States. The urgency of closing domestic achievement gaps is without question, but the current emphasis on this goal may well reinforce the perception among members of the middle class that their schools are performing at acceptable levels. The 2011 PDK-Gallup Poll shows that more than half of all U.S. residents currently assign the public schools in their local community an “A” or “B” grade, while only 17% assign one of those grades to public schools in the nation as a whole. This gap between local and national evaluations has widened considerably over the past decade, and similar data from the 2011 Education Next–PEPG Survey show that well-educated, affluent citizens are particularly likely to rate their local schools favorably. Reporting systems that make it possible to compare the performance of students in specific U.S. school districts to that in top-performing countries internationally could help to alter perceptions and broaden support for reform.

A second lesson is that reform efforts should aim to improve the quality of education available to U.S. students in elementary and secondary schools rather than merely increase the quantity of education they consume. The large economic return from the completion of college and especially graduate degrees suggests that there is considerable demand for workers who have been educated to those levels, and policymakers would be wise to address issues, such as the complexity of financial aid systems, that create obstacles to degree completion for academically prepared students. But increasing educational attainment should not be an end in and of itself. Doing so is unlikely to yield economic benefits without reforms to K-12 schooling that ensure that a growing number of students are equipped for the rigors of postsecondary work.

Another general lesson is that additional financial investment is neither necessary nor sufficient to improve the quality of elementary and secondary education. Current data clearly show that other developed countries have managed to achieve far greater productivity in their school systems, in many cases spending considerably less than the United States while achieving superior results. Nor have countries that have increased spending levels in recent decades experienced gains in their performance on international assessments, a pattern that is consistent with the mixed track record of spending increases in producing improved student outcomes in the United States.

If countries with high-performing elementary and secondary education systems have not spent their way to the top, how have they managed to get there? Unfortunately, using international evidence to draw more specific policy guidance for the United States remains a challenge. Although it is straightforward to document correlations between a given policy and performance across countries, it is much harder to rule out the existence of other factors that could explain the relationship. The vast cultural and contextual differences from one country to the next also imply that policies and practices that work well in one setting may not do so in another. Even so, there are three broad areas in which the consistency of findings across studies using different international tests and country samples bears attention.

Exit exams. Perhaps the best-documented factor is that students perform at higher levels in countries (and in regions within countries) with externally administered, curriculum-based exams at the completion of secondary schooling that carry significant consequences for students of all ability levels. Although many states in the United States now require students to pass an exam in order to receive a high-school diploma, these tests are typically designed to assess minimum competency in math and reading and are all but irrelevant to students elsewhere in the performance distribution. In contrast, exit exams in many European and Asian countries cover a broader swath of the curriculum, play a central role in determining students’ postsecondary options, and carry significant weight in the labor market. As a result, these systems provide strong incentives for student effort and valuable information to parents and other stakeholders about the relative performance of secondary schools. The most rigorous available evidence indicates that math and science achievement is a full grade-level equivalent higher in countries with such an exam system in the relevant subject.

Private-school competition. Countries vary widely in the extent to which they make use of the private sector to provide public education. In countries such as Belgium, the Netherlands, and (more recently) Sweden, for example, private schools receive government subsidies for each student enrolled equivalent to the level of funding received by state-run schools. Because private schools in these countries are more heavily regulated than those in the United States, they more closely resemble U.S. charter schools, although they typically have a distinctive religious character. In theory, government funding for private schools can provide families of all income levels with a broader range of options and subject the state-run school system to increased competition from alternative providers. Rigorous studies confirm that students in countries that for historical reasons have a larger share of students in private schools perform at higher levels on international assessments while spending less on primary and secondary education. Such evidence suggests that competition can spur school productivity. In addition, the achievement gap between socioeconomically disadvantaged and advantaged students is reduced in countries in which private schools receive more government funds.

High-ability teachers. Much attention has recently been devoted to the fact that several of the highest-performing countries internationally draw their teachers disproportionately from the top third of all students completing college degrees. This contrasts sharply with recruitment patterns in the United States. Given the strong evidence that teacher effectiveness is the most important school-based determinant of student achievement, this factor probably plays a decisive role in the success of the highest-performing countries. Unfortunately, as education economist Dan Goldhaber of the University of Washington has pointed out, the differences in teacher policies across countries that have been documented to date “do not point toward a consensus about the types of policies—or even sets of policies—that might ensure a high-quality teacher workforce.”

Although increasing average salaries provides one potential mechanism to attract a more capable teaching workforce, there is no clear relationship between teacher salary levels and student performance among developed countries. Especially given the current strains on district and state budgets, any funds devoted to increasing teacher salaries should be targeted at subjects such as math and science, in which qualified candidates have stronger earnings opportunities in other industries, and at teachers who demonstrate themselves to be effective in the classroom. Intriguingly, the only available study on the latter topic shows that countries that allow teacher salaries to be adjusted based on their performance in the classroom perform at higher levels.

Vital national priority

During the past two decades, state and federal efforts to improve U.S. education have centered on the development of test-based accountability systems that reward and sanction schools based on their students’ performance on state assessments. The evidence is clear that the federal No Child Left Behind Act and its state-level predecessors have improved student achievement, particularly for students at the bottom of the performance distribution. Yet the progress made under these policies falls well short of their ambitious goals. Equally important, the progress appears to have been limited to a one-time increment in performance rather than launching schools on a trajectory of continuous improvement.

International evidence may not yet be capable of providing definitive guidance for closing the global achievement gap between students in the United States and those in the top-performing countries abroad. It does, however, indicate that holding students accountable for their performance, creating competition from alternative providers of schooling, and developing strategies to recruit and retain more capable teachers all have important roles to play in addressing what should be a vital national priority.

Revitalizing U.S. Manufacturing

At a recent Washington, DC, conference on the state of U.S. manufacturing, the head of one prominent economic policy think tank was asked, “How much of its manufacturing sector can the U.S. economy lose and yet still thrive?” The reply: “Really, we could lose all of it and be just fine.” Unfortunately, this view that the U.S. economy can thrive without manufacturing as a postindustrial, knowledge- and services-based economy has become all too prevalent among the Washington economic policy elite. Some even argue that the decline of manufacturing is a sign of U.S. economic strength, because it signals a thorough shift to an advanced services economy. After all, it’s only the laggard nations who still manufacture, they say.

But as explained in The Case for a National Manufacturing Strategy, a report by the Information Technology & Innovation Foundation (ITIF), it’s impossible for large economies to remain competitive without a viable manufacturing sector for five key reasons: (1) manufacturing plays a vital role in helping countries achieve balanced terms of trade; (2) manufacturing provides large numbers of above-average-paying jobs; (3) manufacturing is the principal source of an economy’s R&D and innovation activity; (4) the health of a nation’s manufacturing sector and that of its services sector are complementary and inseparable; and (5) manufacturing is essential to a country’s national security.

An increasing number of U.S. competitors, including Australia, Brazil, Canada, China, Germany, Japan, Korea, and the United Kingdom, have recognized that manufacturing remains vital to their economic competitiveness and that they cannot have a healthy manufacturing sector without a healthy base of small- and medium-sized enterprise (SME) manufacturers. They recognize that because SME manufacturers account for more than 98% of manufacturing firms in almost all economies, they form the backbone of a nation’s industrial supply chain.

Yet despite their importance, SME manufacturers lag larger manufacturers in adopting new technologies, increasing productivity, and exporting. Accordingly, an increasing number of countries have introduced and robustly funded a broad array of agencies, programs, and policy instruments to support the competitiveness, productivity, innovation, and export capacity of their SME manufacturers. These countries understand that supporting SME manufacturers’ adoption of new technologies and manufacturing processes as well as bolstering their R&D, innovation, and new product development activities have become indispensable to being an advanced industrial economy. They know that countries that do not have strategies in place to support their SME manufacturers are simply going to be left behind.

Unfortunately, the United States is lagging badly in these efforts. Lack of support for SMEs was a key factor in the precipitous decline of U.S. manufacturing during the past decade. The United States must step up its efforts to revitalize manufacturing in general and SME manufacturing in particular.

Free fall

Manufacturing’s share of U.S. gross domestic product (GDP) and employment has fallen precipitously during the past decade. Yet many argue that U.S. manufacturing is actually quite healthy and that any job losses are simply a result of superior productivity gains. Others assert that manufacturing is in decline everywhere, so that the relative decline in U.S. manufacturing is not noteworthy.

In contrast to these sanguine views, the reality is that, although U.S. manufacturing output and employment remained relatively healthy up until 2000, during the past decade the United States experienced the deepest industrial decline in world history. Some 54,000 U.S. manufacturers, including 42,000 SMEs, were shuttered. Manufacturing output, when properly measured, actually declined. Manufacturing employment fell by 33%, with the loss of 5.7 million jobs, a steeper decline than even during the Great Depression.

Official government figures suggest that U.S. manufacturing output grew just 5% during the prior decade, even as U.S. GDP grew 18%. However, that figure is inflated because it significantly overstates output from two industries: computers/electronics and petroleum/coal products. Overestimation of the output growth from those two industries masks the fact that, from 2000 to 2009, 15 of 19 aggregate-level U.S. manufacturing sectors, which account for 79% of U.S. manufacturing, experienced absolute declines in output. The vast majority of apparent growth in manufacturing output came from the computers/electronics industry, which, according to official statistics, grew 260.5%. In other words, this one sector, which accounts for just 9% of overall U.S. manufacturing output, accounted for 80% of manufacturing output growth from 2000 to 2009, even though the number of workers in the industry declined from 1.78 million to 1.09 million.
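
The decomposition behind that claim is simple share-weighted arithmetic, sketched below with hypothetical numbers. The shares and growth rates are chosen only to show how a sector with a small share of output can dominate measured growth; they are not the official statistics, which rest on chained indexes that complicate an exact decomposition.

```python
# Aggregate output growth is approximately the share-weighted sum of sector
# growth rates. The shares and growth rates below are hypothetical.

sectors = {
    # name: (initial share of manufacturing output, growth over the period)
    "fast-growing sector":   (0.09, 1.000),   # 9% of output, doubling (hypothetical)
    "rest of manufacturing": (0.91, 0.025),   # 91% of output, up 2.5% (hypothetical)
}

aggregate_growth = sum(share * g for share, g in sectors.values())
print(f"aggregate manufacturing output growth: {aggregate_growth:.1%}")
for name, (share, g) in sectors.items():
    print(f"{name}: {share * g / aggregate_growth:.0%} of aggregate growth")
```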

The reality, as explained in detail in The Case for a National Manufacturing Strategy, is that technical errors afflict official U.S. government measurements of manufacturing output, such that, when calculated accurately, real U.S. manufacturing output actually fell by at least 10% during the prior decade. A major cause of that decline has been a lack of investment in U.S. manufacturing. From 2000 to 2010, capital investment within the United States by U.S. manufacturers declined more than 21%, even as capital investment abroad by U.S. manufacturing firms was on average 16% higher than at home.

Likewise, the notion that manufacturing job losses primarily reflect productivity gains is also mistaken. U.S. manufacturing productivity grew at similar rates between 1990 and 1999 and between 2000 and 2009—56 and 61%, respectively—yet manufacturing employment declined 3% in the former decade but 33% in the latter. Moreover, U.S. manufacturing job losses have been extreme as compared to those experienced in peer countries. Of the 10 countries tracked by the U.S. Bureau of Labor Statistics, no country lost a greater share of its manufacturing jobs than the United States between 1997 and 2009. In fact, if manufacturing output had grown at the same rate as GDP during the prior decade, the United States would have ended the decade with 2.2 million more manufacturing jobs. Given the multiplier effect that manufacturing jobs have on the rest of the economy, which is at least two to one, had U.S. manufacturing not shrunk, there would be perhaps 6 million more Americans working today. In short, the extreme job loss in U.S. manufacturing during the past decade reflects not productivity increases but rather output declines resulting from the lack of U.S. manufacturing competitiveness and the fact that U.S. manufacturers were increasingly offshoring and investing abroad. This is not the picture of a healthy domestic manufacturing sector.
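
The step from 2.2 million manufacturing jobs to roughly 6 million total jobs rests on the multiplier cited above. The sketch below assumes the "two to one" multiplier means at least two additional jobs supported elsewhere in the economy for every manufacturing job, which is one reasonable reading of the figure rather than a detail stated in the text.

```python
# Multiplier arithmetic behind the "perhaps 6 million" figure. Reading the
# multiplier as two *additional* jobs per manufacturing job is an assumption.

direct_manufacturing_jobs = 2.2e6    # counterfactual manufacturing jobs cited above
additional_jobs_per_mfg_job = 2.0    # assumed reading of the "two to one" multiplier

total_employment_effect = direct_manufacturing_jobs * (1 + additional_jobs_per_mfg_job)
print(f"total employment effect: about {total_employment_effect / 1e6:.1f} million jobs")
```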

Finally, the notion that U.S. manufacturing decline is either inevitable or normal is also mistaken, as demonstrated by the fact that manufacturing is growing in many countries, including developed countries. For example, from 2000 to 2008, manufacturing output in constant dollars as a share of GDP increased by 10% in Austria and Switzerland, 14% in Korea, 23% in Finland, 32% in Poland, and 64% in the Slovak Republic. Moreover, from 1970 to 2008, Germany’s and Japan’s shares of world manufacturing output remained stable, even as the U.S. share declined by nearly 11 percentage points, from 28.6 to 17.9%, and China’s share rose 13.4 percentage points, from 3.8 to 17.2%.

The deindustrialization of high-wage economies is not preordained. Competitors such as Germany and Japan have avoided the sharp declines in manufacturing that befell the United States in the last decade. They have done so by remaining committed to manufacturing as a core contributor to their economies and by implementing coherent strategies to boost the productivity, innovation, and competitiveness of their manufacturing sectors, including specific programs and robust funding in support of their SME manufacturers.

The growth of manufacturing extension services

Argentina, Australia, Canada, Germany, Japan, Spain, the United Kingdom, and the United States have each created formal agencies or institutions to provide manufacturing extension services to SME manufacturers. These services provide hands-on outreach mechanisms to stimulate SMEs to acquire or to improve their use of technology and to stimulate innovation. Although other countries, notably Austria, China, Korea, Sweden, Singapore, and Taiwan, don’t have analogous manufacturing extension agencies, they have implemented specific programs to support SME manufacturers.

In the United States, the Hollings Manufacturing Extension Partnership (MEP), located within the Department of Commerce’s National Institute of Standards and Technology, was founded in 1988 to work with SME manufacturers to help them boost productivity, increase profits, and create and retain jobs. MEP’s 1,300 technical experts, operating out of 60 regional centers located in every U.S. state, serve as trusted business advisors focused on solving manufacturers’ challenges and identifying opportunities for growth.

Australia’s Enterprise Connect program, launched in 2008, is a national network of 12 manufacturing centers run by the Department of Innovation, Industry, Science, and Research, which serves as the country’s primary vehicle for delivering firm-level support. Britain’s Manufacturing Advisory Service (MAS), founded in 2002 and modeled after MEP, provides technical information and specialist support to SME manufacturers through a staff of 150 operating out of nine regional centers. Canada’s Industrial Research Assistance Program (IRAP), founded in 1962, supports SME manufacturers with a staff of 230 working out of 150 offices across 90 communities. Japan’s 162 Kohsetsushi Centers, first launched in 1902 and modeled after the U.S. agricultural extension service, have a combined staff of more than 6,000. Relative to the size of its economy, Japan has 15 times as many specialists working with SME manufacturers as does the United States.

Countries support their SME manufacturers for four key reasons. First, they recognize SMEs as key drivers of employment and technology growth. For example, Canada’s SMEs account for 80% of new jobs and 82% of new technologies created in the country. But they also recognize that a number of systemic market failures and externalities affect manufacturing activity in general and SME manufacturers in particular, justifying government intervention. Thus, the second reason governments specifically assist SME manufacturers is that they lag in adopting new technologies that would make them more productive. SMEs are less likely than larger enterprises to implement new technology, to adopt modern manufacturing processes, to invest in worker training, to adopt new forms of work organization, and to deploy improved business practices. Because of this, a substantial productivity gap exists between large and small manufacturers. This gap is apparent in virtually all countries and has been growing over time. For example, on average in the United States, value added per employee in SMEs was about 80% of that of large establishments in the 1960s. By the late 1990s, this figure had fallen to less than 60%. Extension services play a critical role in closing knowledge and best-practices gaps between small and large manufacturers.

The third rationale, as the European Commission’s Study of Business Support Services and Market Failure found, is that several types of market failure afflict the provision of public information and advisory services to SMEs. First, adverse selection issues arise when “inappropriate take-up of business support services occurs” because SMEs lack the scale to know the range of business support services available to them or the experience or knowledge necessary to adequately assess the value of those services or the quality of particular service providers. A second form of business support market failure arises when information services go unprovided because private-sector firms cannot earn a sufficient financial return from supplying them. In fact, the UK’s extension service justifies its role precisely on the basis of addressing these two market failures.

Finally, governments support SME manufacturers because they play critical roles in supporting healthy manufacturing ecosystems, supply chains, and even entire regional economies. As large firms increase their dependence on suppliers for parts and services, the performance and capabilities of small manufacturers become critically important to the competitiveness of all manufacturers. Because the health of an economy’s large manufacturers depends on the strength of the SME suppliers in their value chain, SMEs’ competitiveness or lack thereof has externalities that affect other enterprises throughout an economy.

Traditionally, countries’ manufacturing extension programs focused on helping SME manufacturers realize continuous productivity improvements. This included encouraging new technology adoption, especially manufacturing process technologies, and boosting efficiency not just on the shop floor but throughout the business and supply chain operations. In other words, for many years, manufacturing extension programs were focused on helping SMEs improve the cost side of their business. But while retaining the original focus on boosting productivity, SME support programs have evolved toward helping SMEs move up the value chain in terms of their ability to innovate and commercialize more research- and technology-intensive products. Thus, SME support programs are increasingly focusing on supporting the growth side of the businesses, whether by helping SMEs identify export opportunities, coaching them on innovation skills and methods, or, as in a growing number of countries, directly co-funding SME manufacturers’ innovation, R&D, and new product development activities.

Jayson Myers, the chief executive officer of Canadian Manufacturers and Exporters, explains the transition this way: “Five years ago it was all about lean, quality, Six Sigma, and continuous improvement, but now it is all about innovation and new product development and finding new customers and new markets. A lot of small companies can understand process improvements, but performing R&D, retooling, understanding new customer sensing, designing products for new markets, and understanding standards requirements…these are the new challenges.”

Services provided

SME manufacturing support services can be grouped into three primary categories:

The first category—technology acceleration programs and practices—includes the core SME support functions such as promoting technology adoption by SMEs; conducting audits to identify opportunities for improvements in manufacturing and operational process; supporting technology transfer, diffusion, and commercialization; performing R&D in direct partnership with SMEs and/or providing access to research labs; and including SMEs in collaborative technology research consortia.

The UK’s MAS offers several levels of service to manufacturers. MAS’s Level 2 service is a free, one-day, onsite manufacturing review in which MAS practitioners assess the firm’s manufacturing operations and highlight opportunities to improve operational performance. Level 4 is MAS’s capstone subsidized consultancy support service, called a workout. During workouts, MAS practitioners spend up to two weeks onsite working hands-on with an SME, instilling competitive manufacturing processes, including implementing lean manufacturing processes, codeveloping value stream and process maps, teaching innovation methodologies, improving shop floor layouts and space utilization, and introducing sustainable and energy-efficient manufacturing principles.

Japan’s Kohsetsushi Centers excel at partnering with and undertaking R&D efforts directly alongside SMEs, helping them perform R&D of direct relevance to new technologies and products. For example, Kohsetsushi Center staff spend up to half of their time on research, mainly on applied projects oriented toward, and often undertaken in direct conjunction with, local industries. Small manufacturers often send one or two of their staff members to work on Kohsetsushi Center projects, providing opportunities for company personnel to gain research experience, develop new technical skills, and transfer technology back to their firms.

Austria and Germany excel at engaging SMEs in collaborative, consortia-based, precompetitive research programs organized around specific industrial technologies, such as nanotechnology, robotics, advanced materials and sensors, mechatronics, electromechanical systems, and metallurgy. Germany’s Fraunhofer Institutes and Austria’s Kompetenzzentren (Competence Centers for Excellent Technologies) undertake applied research of direct utility to private enterprise. In effect, they perform applied research that translates emerging technologies into commercializable products.

The second primary area of attention—next-generation manufacturing technical assistance—includes coaching SMEs on innovation and growth skills; providing export assistance and training; promoting energy-efficient manufacturing practices; helping firms understand the importance of design; and providing information about and assistance with acquiring standards and certifications.

For example, MEP has introduced a new training program, the Innovation Engineering Management System, which includes a digital tool set, online collaborative workspace, and formal curriculum to help U.S. manufacturers innovate and grow. MEP designed this program to help U.S. SME manufacturers develop skills at and confidence in commercializing new technologies.

Several countries, including Canada and the United Kingdom, have introduced programs to help SME manufacturers understand the critical importance and role of design methods and principles. For example, the UK’s Designing Demand program gives high–growth-potential SMEs 10 days of design and innovation-focused mentoring, helping them understand the value of design processes and how to specify demand projects and issue design tenders.

Boosting SMEs’ export potential is a central goal of many nations’ programs, and one way of facilitating that is by helping SMEs understand international technical standards. For example, Korea assists its SMEs in improving their reliability and boosting their exports by bearing a portion of the costs related to acquiring international standards certificates.

The third major category of assistance—technology acceleration funding mechanisms—has become an increasingly popular way of supporting SME manufacturers. These mechanisms include direct R&D grants to SMEs, loans to help firms scale and grow, and innovation vouchers that assist SME manufacturers with new product development and innovation efforts.

At least a dozen countries provide innovation-related funding directly to their SME manufacturers, with the United States being one of the exceptions. For example, from 2010 to 2011, Canada’s IRAP will provide a total of $238.9 million in direct innovation support to SME manufacturers. This support is provided as a nonrepayable grant averaging about $110,000 to $115,000, although it can be as large as $1 million to $2 million, for innovation activities including R&D, technical feasibility studies, prototype and process development, and developing/exploiting licensed technology. Germany’s Central Innovation Program (ZIM) supports R&D cooperation projects among SMEs or between SMEs in conjunction with universities and research organizations. ZIM provides grants of up to 50% of the research project cost for SMEs (and up to 100% for participating research institutes) up to $245,000 per R&D project. Since Germany launched the program in July 2008, ZIM has granted 13,000 awards to assist SMEs with technology R&D projects. Similar programs can be found in Austria, China, Japan, Korea, Taiwan, and the United Kingdom.

Another increasingly common support instrument is the innovation voucher. Used in Austria, Canada, Belgium, Denmark, Germany, the Netherlands, Ireland, and Sweden, these vouchers, usually ranging in value from $5,000 to $30,000, enable SMEs to buy expertise from universities, national laboratories, or public research institutes. The intent is to stimulate knowledge transfer from such institutions to SMEs, whether by enrolling them to assist SMEs with particular technical research challenges or helping SMEs implement improved innovation systems.

Funding extension services

The funding models of countries’ manufacturing extension services vary considerably. For example, the U.S. MEP is a cost-share program, whereby the federal government provides one-third of program funding ($110 million), with that contribution matched in equal part by states and recipient firms. In the UK, the national government’s contribution is matched only by recipient firms. In contrast, in Japan, funding of the Kohsetsushi Centers comes entirely from Japan’s prefectures, and the consultative services provided to SMEs are mostly cost-free, although the use of laboratory facilities is cost-shared. In Germany, 30% of funding for the 59 Fraunhofer Institutes is provided by federal and state governments, with the remainder provided by private industry.

What has become increasingly apparent is that the United States is not funding its manufacturing extension service as robustly as it once did, or as aggressively as its competitors now are. In fact, as a share of GDP, the federal government invested 1.28 times more in MEP in 1998 than it did in 2009. But not only has recent federal funding of the program trailed the historical norm, it has begun to fall significantly behind the levels of funding that competitor countries provide their manufacturing extension services. Japan’s Kohsetsushi Centers received $1.67 billion in funding in fiscal year 2009. From 2010 to 2011, Germany’s government will invest $1.83 billion in its ZIM programs and $700 million in its Fraunhofer Institutes. Canada’s government will provide $264.9 million to IRAP in 2010–2011. When analyzed as a share of GDP, each year Japan invests 30 times more, Germany approximately 20 times more, and Canada almost 10 times more than the United States in their principal SME manufacturing support programs.

This is unfortunate, because the impact of countries’ investments in manufacturing extension programs on boosting SME manufacturers’ sales, R&D, and employment activity and thus contributing directly to economic growth is quite evident. For instance, a February 2011 study of MEP found that every $1 of federal investment in MEP generates a return of $32 in economic growth, translating into $3.6 billion in total new sales annually for U.S. SME manufacturers. Moreover, client surveys indicate that MEP centers create or retain one manufacturing job for every $1,570 of federal investment, one of the highest job growth returns out of all federal funds. In fact, 2009 impact data show that the MEP program has created or retained more than 70,000 jobs.
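
Those return figures are internally consistent, as a quick check shows. The sketch assumes that the $110 million annual federal MEP contribution mentioned earlier is the relevant investment base.

```python
# Quick consistency check of the MEP return figures cited above, assuming the
# $110 million annual federal contribution is the relevant investment base.

federal_mep_funding = 110e6   # annual federal contribution to MEP (from the text)
sales_per_dollar = 32         # new sales generated per federal dollar invested
dollars_per_job = 1_570       # federal investment per job created or retained

print(f"implied new sales: ${federal_mep_funding * sales_per_dollar / 1e9:.2f} billion per year")
print(f"implied jobs created or retained: {federal_mep_funding / dollars_per_job:,.0f}")
```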

Similarly, an extensive 2010 review of the United Kingdom’s MAS found it to be one of the British government’s most successful programs, generating $6.20 of additional gross value added for every $1 of public investment between 2002 and 2009. The review also found MAS to be one of Britain’s best-performing programs in terms of job creation per government dollar invested. Likewise, a 2010 review of Canada’s IRAP program found that each $1 of public investment in IRAP resulted in a $12 impact on the Canadian economy. Moreover, a 1% increase in IRAP assistance led to an 11% increase in firm sales, a 14% increase in firm employment, and a 12% increase in firm productivity. Likewise, a 1% increase in IRAP funding to a Canadian SME manufacturer led to a 13% increase in the firm’s R&D spending and a 3% increase in its R&D staff. Taken together, the evidence supports the conclusion that countries’ investments in manufacturing extension services generate impressive returns and contribute strongly to broader economic and employment growth.

Lessons for the United States

Global best practice in SME manufacturing support has shifted from a sole focus on assisting SMEs with process and productivity improvements to supporting their R&D, innovation, and growth efforts. This has meant that countries’ manufacturing extension services themselves have had to innovate and demonstrate adaptive capability to ensure that their service offerings evolve and remain responsive to the unique needs of their country’s SME manufacturing base. The evolution of the U.S. MEP and its creation of new tools such as the Innovation Engineering Management System and the National Innovation Marketplace, a Web portal that allows SMEs to search technologies emerging from U.S. universities and national laboratories while trumpeting information about their own innovative products and technologies, conforms with international trends to assist SMEs’ innovation efforts.

However, MEP must continue to focus on helping SME manufacturers increase their technological intensity. Approximately 60% of U.S. manufacturing occurs in low-technology or medium-low–technology industries (industries in which R&D expenditures are less than 3% of sales). In contrast, about 60% of German and 50% of Japanese manufacturing occurs in medium-high–technology or high-technology industries (industries in which R&D expenditures are 3 to 5% of sales, or greater than 5%, respectively). In part because of this, Germany’s exports of research-intensive products are seven times greater than those of the United States.
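
For reference, the technology-intensity bands used in that comparison follow the R&D-to-sales thresholds given in the text; the short helper below, with hypothetical example values, simply makes the classification explicit.

```python
# Classify an industry by R&D intensity (R&D spending as a share of sales),
# using the thresholds given in the text. Example values are hypothetical.

def technology_intensity(rd_share_of_sales: float) -> str:
    if rd_share_of_sales > 0.05:
        return "high-technology"
    if rd_share_of_sales >= 0.03:
        return "medium-high-technology"
    return "low- or medium-low-technology"

for industry, rd_share in [("hypothetical industry A", 0.012),
                           ("hypothetical industry B", 0.040),
                           ("hypothetical industry C", 0.080)]:
    print(f"{industry}: {technology_intensity(rd_share)}")
```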

As Rainer Jäkel of Germany’s Federal Ministry of Economics and Technology explains, “A key component of Germany’s industrial success is infusing cutting-edge technology, such as nanotechnology or advanced materials, into legacy industries such as steel or textiles. We’re good at integrating high-tech into otherwise low- and medium-tech sectors, allowing SMEs to renew themselves and find profitable niche markets.” Germany achieves this through a sophisticated model of technology creation and diffusion spearheaded by the Fraunhofer Institutes, which bring businesses and universities together to conduct industrially relevant translational research in advanced technology areas, with those advancements made available to all German industries. This is complemented by the ZIM program, which provides direct R&D funding to support the research, product development, and commercialization efforts of SME manufacturers. Thus, the United States must become much more focused on investments in industrially relevant R&D—Germany spends six times more than the United States on industrial production and technology research—and also explore mechanisms to more directly help fund SME manufacturers’ R&D, innovation, and new product development efforts.

Revitalizing U.S. manufacturing

The United States needs to take a multitude of steps to revitalize its manufacturing sectors. It should start by recognizing that a robust manufacturing sector remains vital to the country’s broader economic health. Next, it must acknowledge that a comprehensive national manufacturing strategy with a coherent set of public policies is needed to support U.S. manufacturers, large and small alike. Such policies should be organized around finance, technology investment, trade, tax, and talent.

In this difficult economic climate, many SMEs face severe capital constraints inhibiting their investments in R&D and production expansion. One vehicle Congress could create to assist SME manufacturers is a deferred investment account, which would allow SMEs to set aside pre-tax funds in special accounts and later withdraw them for investments in workforce training or capital equipment. Although that would help SMEs better plan for the future, many SMEs could expand their operations now if they had sufficient access to capital. The Federal Reserve should consider relaxing some of the stringent guidance it has placed on local banks with regard to the liquidity ratios SME manufacturers must meet to be eligible for commercial loans, allowing local banks to better understand and service SMEs’ capital needs, given their particular cash flow constraints.

With regard to technology investment, Congress should at least double MEP funding, to $200 million annually, while retaining the 2-to-1 match. This would help close some of the U.S. SME manufacturing support gap with other countries and allow more SMEs to access MEP services and the extensive benefits they produce. But the United States also needs to significantly increase direct funding support of SME manufacturers’ R&D and innovation efforts, as many competitors have done. Accordingly, Congress should direct the Small Business Administration to devote at least half its portfolio to supporting high-growth-potential, high-tech firms, including a much larger share of manufacturers, with funding specifically supporting SMEs’ innovation and R&D efforts through investments in new capital equipment, machinery, or IT software. Further, Congress should restore long-term authorization of the Small Business Innovation Research program, through which 2.5% of federal agency research budgets is allocated to small businesses, and the Small Business Technology Transfer program, through which 0.3% of federal agency research budgets is allocated to universities or nonprofit research institutions that work in partnership with small businesses.

Policymakers should take steps to build a Fraunhofer-like network of industrially relevant, applied research institutes focused on core emerging technologies in the United States. A first step is NIST’s creation of the Advanced Manufacturing Technology Consortia (AMTech), a new public-private partnership that aims to fill a critical funding gap for early-stage technology development by improving incentives for creation of industry-led consortia supporting precompetitive R&D. But the Obama administration’s 2011 budget provides just $12.3 million in AMTech funding. AMTech’s funding should be ramped up to at least $500 million annually and support precompetitive applied research into 20 key advanced technologies. Congress should also expand support for NSF programs that work closely with industry, including the Engineering Research Center, Industry/University Cooperative Research Center, and Partnerships for Innovation programs, which together currently receive less than 2% of NSF’s budget.

With regard to trade, the United States needs to better support SME manufacturers’ export potential and their ability to compete against foreign manufacturers that are subsidized by their governments. First, Congress should increase the statutory lending authorization of the U.S. Export-Import Bank, which provides export credit financing to U.S. manufacturers, from $100 billion to $160 billion, and direct the bank to raise its statutory goal so that at least 25% of its financing goes to small businesses. This is needed because competitor nations provide far more export credit financing assistance to their manufacturers. In fact, as a share of GDP, competitors such as Brazil, China, India, France, and Germany provide 7 to 10 times more export credit assistance than does the United States. Furthermore, Congress should allow the Export-Import Bank to use $20 billion in unobligated authority to lend directly to domestic manufacturing companies that are in competition with subsidized competitors and can demonstrate that the funds would support expanded manufacturing activity in the United States. At the same time, the Obama administration should expand MEP’s ExporTech export assistance program, which provides global trade and best-practice information to SME manufacturers.

With regard to tax policy, the United States needs to reduce effective corporate taxes—after Japan it has the second-highest rate among countries in the Organisation for Economic Co-operation and Development (OECD)—and simplify the tax code while expanding key incentives. Congress should expand the Alternative Simplified R&D Credit from 14 to 20% because the United States ranks just 17th in the OECD in R&D tax credit generosity. It should also institute a new tax credit for investment in new machinery and equipment, including software.

Finally, with regard to talent, state and federal policymakers could take several steps to boost the pool of talented workers equipped with the skills demanded by SME manufacturers. In particular, Congress should boost support for community colleges, in part by increasing funding for Perkins vocational education and training programs (with states matching those investments) and in part by allowing unemployed workers to collect unemployment insurance if they are in approved training programs. Congress should also expand funding for NSF’s Advanced Technological Education program, which supports partnerships between academic institutions and employers to improve the education of science and engineering technicians in high-technology fields at the undergraduate and secondary school levels. Further, states should increase the credentialing of manufacturing workers by expanding the use of nationally portable, industry-recognized certifications specifically designed for the manufacturing industry.

Manufacturing will never again support 30% of the U.S. workforce, as it did in the 1970s. But it can be significantly expanded to eliminate the $800 billion trade deficit the United States experiences year in and year out. We cannot give up on manufacturing or be indifferent to the needs of small manufacturers trying to move up the value chain to produce higher-value-added products. U.S. manufacturing can remain globally competitive and be a source of millions of above-average-paying jobs for U.S. citizens. But getting there will require a renewed belief in manufacturing and smart public policies that ensure that U.S. manufacturers operate in as cost-efficient an environment as possible, while having access to the world’s best technology, infrastructure, and talent.

Reducing Nuclear Dangers

Ron Rosenbaum wants us to be worried. His book How the End Begins: The Road to a Nuclear World War III is intended as an urgent warning that the terrifying dangers of nuclear weapons did not disappear when the Cold War ended two decades ago. There are still many thousands of nuclear weapons in the world—about 95% of them in the U.S. and Russian arsenals—and thousands of them are constantly poised for launch within minutes.

Rosenbaum dives deep into the many paths by which accidents, false warnings, and misperceptions could lead to a “world holocaust.” And he forces the reader to confront the deeply troubling moral implications of relying for our security on nuclear threats that, if carried out, would have the inevitable result of killing tens or hundreds of millions of innocent people. As Rosenbaum points out, that is a scale of slaughter beyond anything even Hitler ever imagined.

Unfortunately, Rosenbaum’s book is marred by mistakes, passages of pure speculation, and overheated rhetoric. Worse, he exaggerates some dangers, ignores others, and fails to explore some of the most promising pathways to reducing the dangers he describes.

The book opens with several pages of fevered speculation, centering on the October 2007 Israeli raid that destroyed a plutonium production reactor under construction in Syria. In Rosenbaum’s account, there is “little doubt” Russia detected the Israeli jets taking off (actually there is enormous doubt); Russia “may well have” warned Iran and “could easily” have made ambiguous nuclear threats to get Israel to back off; the Israelis “would likely have” relayed such threats to the United States; “and suddenly both nuclear superpowers . . . were on the verge” of being drawn into a nuclear war. This is all at some distance from reality. Syria’s action in building an illicit plutonium production reactor, North Korea’s decision to provide it, and the Israeli move to destroy it all raised troubling issues and dangers, but global thermonuclear war was not among them, contrary to Rosenbaum (and to an anonymous British official whose quote about the world at that moment being on the edge of “the bloody Book of Revelation” sets off Rosenbaum’s chain of imagining). It was quite possible, that night, that Syria might have chosen to strike back, but there was no serious chance that U.S. and Russian nuclear weapons would have been called into play.

The book’s weaknesses are unfortunate, because many of the nuclear dangers Rosenbaum wants us to focus on are very real, and despite outstanding efforts by many people, there are still crucial opportunities to address them that are not being seized.

Two decades after the end of the Cold War, it is simply insane that U.S. and Russian nuclear missiles remain poised for immediate launch, with military plans structured around launching within minutes of detecting an attack underway. Do we really want to rely on decisions made in a few minutes of terror to determine the fate of a significant fraction of the human species, when there is no longer a global struggle to the death (if there ever was) to justify such fearful risks? What if a completely convincing warning, coming from both radars and satellites, turns out to be a training tape, as occurred in 1980? What if the president is drunk or unbalanced at the crucial moment? (Rosenbaum devotes a chapter to a missile launch officer who dared to ask how he could confirm that the president was sane when he gave the launch order.)

Rosenbaum is correct in deploring President Obama’s decision, in his Nuclear Posture Review, to renege on his campaign promise to work with Russia to take nuclear weapons off quick-launch alert. The U.S. and Russian presidents both need to do this, as President George H.W. Bush did with a portion of the U.S. force in the early 1990s, and to direct their militaries and technical experts to work out ways in which each side can be confident that the other has taken steps that would prevent it from being able to launch its missiles for hours or days (and has not secretly undone those steps later). This would be a major step in reducing nuclear dangers and in making nuclear weapons less relevant to the day-to-day conduct of international affairs.

Next, there is much to be done to shore up the global effort to stem the spread of nuclear weapons, in the face of challenges from North Korea, Iran, illicit nuclear technology networks, global terrorist groups, and more. The world needs stronger international nuclear inspections; better controls over black-market nuclear trade; strengthened security measures to keep nuclear weapons and materials out of terrorist hands; and, most important, steps to convince states they do not need nuclear weapons for their own security, including work to tamp down and ultimately resolve conflicts in South Asia, the Korean peninsula, the Middle East, and elsewhere.

This global effort has been surprisingly effective: There has been no net increase in the number of states with nuclear weapons (nine) for the past 25 years, a period that included the chaos following the collapse of the Soviet Union, the A. Q. Khan black-market network that was marketing nuclear weapons technology all over the world, and secret nuclear weapons programs in Iraq, Iran, North Korea, Libya, and Syria. (North Korea added itself to the nuclear-armed list, but South Africa became the first state to dismantle a nuclear arsenal it had built.) Indeed, there are now far more states that started nuclear weapons programs and verifiably gave them up than there are states with nuclear weapons, which means that our efforts to convince countries not to obtain nuclear weapons succeed more often than they fail, even with the few states that start down the nuclear weapons path.

Of course, past performance is no guarantee of future success, and North Korea and Iran in particular pose very serious proliferation dangers. But if we have the political will to undertake genuine political engagement and to offer incentives that convince these states that it is in their national interest to agree, it might still be possible to convince Iran not to build nuclear weapons and North Korea to dismantle or at least cap its small stockpile and avoid exporting it.

Rosenbaum, however, is so convinced that Iran is implacably bent on getting nuclear weapons, and so obsessed with large-scale nuclear war as the only danger to worry about, that he focuses on military strikes as the only way to address the Iranian program, and imagines that the only realistic option is nuclear strikes, an option virtually no one else is seriously discussing. (Rosenbaum seems to have missed the dramatic increases in the lethality of conventional weapons, which have made it possible for them to carry out many missions that once required nuclear weapons.) Rosenbaum touches on the dangers of war between India and Pakistan (the world’s most likely nuclear flashpoint) only briefly, with no proposals for risk reduction, and offers a similarly glancing treatment of North Korea.

Rosenbaum ends his book with a discussion of whether the number of nuclear weapons can ever realistically be reduced to zero. Here he seems genuinely torn, respecting those he interviews who are pushing for zero while also ridiculing the idea. He can only imagine zero after world government (possibly nuclear-armed) has been established and “a new human character” created. But men such as Henry Kissinger, George Shultz, Sam Nunn, and William Perry, along with two-thirds of the living secretaries of State and Defense, would not be pushing for zero if that is what it would take. There are many different visions of what zero might mean, some of which are an easier fit into the world as we know it. One could imagine, for example, a world in which there were zero assembled nuclear weapons, but some states still retained a small number of disassembled weapons under continuous international inspection and ready to be reassembled should anyone try to cheat on the arrangement. The barriers to such a world are large, but “an almost genetic deep change” of human nature would not be needed.

In any case, zero is a long way off, and we need to take steps now that will reduce near-term dangers and begin laying the foundation for more far-reaching actions. In the near term, it would make sense to focus, as Rosenbaum suggests (quoting proposals from Admiral Arleigh Burke, dating to the 1950s) on getting down to small numbers of highly survivable nuclear weapons. In the Fall 2009 Daedalus, Paul Doty, a participant in the Manhattan Project, offered a sensible target: reducing nuclear weapons to the point that they no longer have enough power to destroy civilization.

The events of recent years have made clear just how large the obstacles to such far-reaching change continue to be. After a prolonged struggle, the United States and Russia managed to reach agreement on the New START treaty that reduces nuclear weapons only very modestly, and the Obama administration only barely managed to convince the Senate to ratify it. The Nuclear Posture Review changed U.S. reliance on nuclear weapons only modestly, and Russia is, if anything, more reliant on them than ever. The nuclear programs in Iran and North Korea have become more dangerous every year, with little discernible progress toward stopping them. Pakistan is building up its nuclear stockpile at a furious pace, and India may be poised to follow suit. The U.S. Senate has refused to ratify the Comprehensive Test Ban Treaty, and the Conference on Disarmament remains unable to even begin negotiating a cutoff in the production of fissile materials for weapons. The International Atomic Energy Agency, the global nuclear watchdog, still has too little authority and funding, and too little political support to get much more. The nuclear security summit in Washington in April 2010 has built international support for action on improving nuclear security, but states are still refusing to agree to effective global standards that would cost money to implement, or to any form of independent review to confirm that they are fulfilling their security responsibilities.

In short, far-reaching steps to reduce nuclear dangers face huge obstacles. Rosenbaum has called out a warning, but he would have done an even greater service had he shown the way toward climbing these mountains and reaching a safer nuclear future.

Archives – Winter 2012

LEE BOOT, Insight 1234, Inkjet print of a digital composition, 24 x 36 inches, 2011.

Insight 1234

In 2010, Cultural Programs of the National Academy of Sciences engaged artist Lee Boot and the Imaging Research Center (IRC) at the University of Maryland, Baltimore County, to create an installation that would engage an audience in thinking about intuition and the neural basis of intuitive insight and its role in decisionmaking. The end result was an interactive multimedia installation based on the research team’s findings. The installation, titled seeintuit, was first exhibited on the National Mall in October 2010 and continues to be an imaging research tool for the IRC.

The image depicted here is a reflection by Boot on the concepts and ideas that he and his research team explored during the creation of the seeintuit project.

Improving Spent-Fuel Storage at Nuclear Reactors

The nuclear disaster in Fukushima, Japan, which began with an earthquake in March 2011 and continues today, is casting a spotlight on nuclear reactors in the United States. At the Dai-Ichi nuclear power plant, at least one of the pools used for storing spent nuclear fuel—indeed, the pool holding the largest amount of spent fuel—has leaked and remains vulnerable. Because U.S. nuclear plants also use cooling pools for storing spent fuel, the U.S. Nuclear Regulatory Commission (NRC) formed a task force to assess what happened at the stricken facility and identify lessons for the U.S. nuclear industry. In a July 2011 report, the NRC placed upgrading the safety of storage pools at reactor stations high on its list of recommendations.

But history and scientific evidence suggest that although useful, improving pool safety will not be enough. Efforts are needed to store more spent fuel in dry form, in structures called casks that are less susceptible to damage from industrial accidents, natural disasters, or even terrorist attacks. Fortunately, money is already available to pay for this step, a situation almost unheard of in today’s harsh economic climate. Now it is up to the federal government to develop policies to make this happen, for the safety of the nuclear electric industry and the nation. There is no time to wait. It is estimated that spent-fuel storage pools at U.S. reactors, which are already jammed, will hit maximum capacity by 2015.

History of delay

Since the early days of the nuclear electric industry, the NRC’s regulations regarding storage of spent fuel have assumed that the federal government would open a permanent repository for nuclear wastes in a timely fashion. This goal was codified in the Nuclear Waste Policy Act of 1982. Until such a facility became available, the NRC would allow plant operators to store spent fuel on a temporary basis in on-site cooling pools. However, the goal of permanent nuclear waste disposal remains elusive. As a result, nuclear plant operators are storing spent fuel in cooling pools for longer periods and at higher densities (four to five times higher, on average) than originally intended.

As the owner of the Millstone nuclear reactor in Waterford, Connecticut, observed in a 2001 report, neither the federal government nor utilities anticipated the need to store large amounts of spent fuel at operating sites. “Large-scale commercial reprocessing never materialized in the United States,” the utility, Dominion Power, said. “As a result, operating nuclear sites were required to cope with ever-increasing amounts of irradiated fuel . . . This has become a fact of life for nuclear power stations.”

U.S. reactor stations have collectively produced approximately 65,000 metric tons of spent fuel. Roughly three-quarters of the total is currently stored in pools, and the remainder is stored in dry form in casks, an inherently safer form of storage. The spent fuel stored in pools holds between 5 and 10 times more long-lived radioactivity than the reactor cores themselves hold. Because they were intended to be temporary, the pools do not have the same “defense in depth” features that the NRC requires of reactors. Even after it completed its assessment of the Fukushima disaster, the NRC has continued to allow nuclear operators to rely on cooling pools for storing spent fuel. As a result, spent-fuel pools may be destined to remain a fact of life for the indefinite future. But this possible future can and should be avoided, especially given the recent events in Japan.

Lessons from disaster

In the late afternoon of March 11, 2011, a 9.0 magnitude earthquake, followed by a 46-foot-high tsunami, struck the Dai-Ichi nuclear power site in the Fukushima Prefecture of Japan. The destruction was enormous. In a little more than an hour, offsite power was severed, backup diesel generators were rendered inoperable, and the infrastructure of wiring, pipes, and pumps necessary to maintain cooling for the four reactors and the fuel-storage pools was severely damaged.

Almost immediately, the site’s personnel became alarmed over the storage pools and shifted the remaining cooling capacity to prevent the overheating of spent fuel at reactor No. 2. However, the emergency batteries that were providing power to cool the reactor cores soon ran out. Fuel rods became exposed and began to melt, while generating large amounts of hydrogen from the rapid oxidation of zirconium contained in the cladding surrounding the nuclear fuel. In a matter of days, venting of hydrogen from overpressurized reactor vessels led to large explosions at reactors 1, 2, and 3, which experienced full meltdown. Reactor 4, which had been shut down for maintenance and its irradiated core transferred to a nearby cooling pool, also experienced an explosion that caused structural damage to the pool and leakage.

On June 18, the Japanese government reported that between March 11 and April 5, approximately 4.3 million curies of radioiodine and 410,000 curies of radiocesium had been released to the atmosphere. A more recent study estimated that almost twice as much radiocesium had been released.

In terms of land contamination, aerial radiological surveillance done by the U.S. Department of Energy between April 6 and April 29 indicated that roughly 175 square kilometers had contamination at levels comparable to those in the exclusionary zone around the reactor ruins at Chernobyl, in Ukraine, then part of the Soviet Union. Other researchers have reported that about 600 square kilometers have been contaminated to levels that at Chernobyl required strict radiation controls. A citizens’ group found cesium-137 hot spots in soil in the Tokyo metropolitan area at levels comparable to those in the Chernobyl exclusionary and radiation control zones.

Tokyo Electric Power Company has yet to achieve cold shutdown at the Dai-Ichi site. The Japanese government currently estimates that it may take 30 years to remove and store nuclear and other contaminated material, at an estimated cost of $14 billion. Despite this destruction, spent fuel stored in dry casks at the reactor site was relatively unscathed.

U.S. nuclear portrait

In the United States, 104 commercial nuclear reactors are operating at 65 sites in 31 states. Sixty-nine of them are pressurized-water reactors (PWRs), and 35 are boiling-water reactors (BWRs). Thirty-one of the BWRs are Mark I and Mark II models that are built on the same basic design as those at the Dai-Ichi site. In addition, there are 14 older light-water-cooled reactors in various stages of decommissioning.

These facilities collectively hold in their onsite spent-fuel pools some of the largest concentrations of radioactivity on the planet. The pools, typically rectangular or L-shaped basins about 40 to 50 feet deep, are made of reinforced concrete walls four to five feet thick. Most of them have stainless steel liners. (Basins without steel liners are more susceptible to cracks and corrosion.) At PWRs, pools are partially or fully embedded in the ground, sometimes above tunnels or underground rooms. At BWRs, most pools are housed in reactor buildings several stories above the ground.

Typical 1,000-megawatt PWRs and BWRs have cores that contain about 80 and 155 metric tons of fuel, respectively, and their storage pools contain 400 to 500 metric tons of spent fuel. Nearly 40% of the radioactivity in the spent fuel for both types of reactors is cesium-137, and the pools hold about four to five times more cesium-137 than is contained in the reactor cores. The total amount of cesium-137 stored in all storage pools is roughly 20 times greater than the amount released from all atmospheric nuclear weapons tests combined. With a half-life of 30 years, cesium-137 gives off highly penetrating radiation and is absorbed in the food chain as if it were potassium.
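
To put the 30-year half-life in perspective, the decay arithmetic is worth spelling out. The sketch below is a minimal illustrative calculation, assuming nothing beyond the 30-year half-life cited above; it shows how slowly a given cesium-137 inventory diminishes.

# Illustrative sketch: how slowly a cesium-137 inventory decays,
# using only the 30-year half-life cited in the text.
HALF_LIFE_YEARS = 30.0

def fraction_remaining(years):
    """Fraction of the original cesium-137 still present after the given number of years."""
    return 0.5 ** (years / HALF_LIFE_YEARS)

for t in (30, 60, 100, 300):
    print(f"After {t} years: {fraction_remaining(t):.1%} of the cesium-137 remains")
# After 30 years about 50% remains; after 60 years about 25%;
# after 100 years about 10%; even after 300 years about 0.1% is left.

Even after a century, roughly a tenth of the isotope is still present, which is why storage arrangements that may last for many decades matter so much.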

Many U.S. reactors have larger spent-fuel storage pools than those found elsewhere. For example, the storage pool at the Vermont Yankee Mark I reactor holds nearly three times the amount of spent fuel that was stored in the pool at the crippled Dai-Ichi reactor No. 4.

Permanent storage déjà vu

In January 2010, the Obama administration canceled long-contested plans to develop a permanent spent-fuel disposal site deep within Yucca Mountain in Nevada. Instead, the administration appointed the Blue Ribbon Commission on America’s Nuclear Future to address, once again, the country’s efforts to store and dispose of high-level radioactive wastes. The 15-member commission, which will report to the secretary of Energy, includes representatives from industry, government, and academia; it is co-chaired by Brent Scowcroft, a former national security adviser to two presidents, and former congressman Lee Hamilton. The commission’s charter made it clear that the Yucca Mountain site was not to be considered and that specific site locations were not to be selected. The commission provided interim recommendations in July 2011 and is expected to issue a final report in early 2012.

The challenge facing the commission is well known. In 1957, the National Academy of Sciences (NAS) warned that the “hazard related to radioactive waste is so great that no element of doubt should be allowed to exist regarding safety.” In the same year, the NAS recommended that the federal government establish deep geologic disposal as the best solution to the problem.

For more than two decades, the Atomic Energy Commission (AEC) and its eventual successor, the Department of Energy (DOE), tried and failed to identify one or more sites for geologic disposal that would be acceptable to everyone, including the states where potential sites were located. Congress eventually stepped into the fray in 1982 with the Nuclear Waste Policy Act, which set forth a process for selecting multiple sites at various geographic locations nationwide. Five years later, however, Congress terminated the site selection process, in large part because of opposition by eastern states. Congress amended the law so that Yucca Mountain in Nevada was the only site to be considered. Although Congress set an opening date of January 31, 1998, the project’s schedule kept slipping in the face of technical hurdles and fierce state opposition.

This was the situation when the Obama administration halted the controversial process and appointed the Blue Ribbon Commission. In its interim report, the commission recommended a number of amendments to the Nuclear Waste Policy Act. Among them were the following: The law should authorize a new consent-based process for selecting and evaluating sites and licensing consolidated storage and disposal facilities; allow for multiple storage facilities with adequate capacity to be sited, licensed, and constructed, when needed; and establish a new waste management organization to replace the role of the DOE with an independent, government-chartered corporation focused solely on managing spent fuel and high-level radioactive wastes. The act also should have provisions to promote international engagement to support safe and secure waste management. In this regard, Congress may need to provide policy direction and new legislation for implementing some measures aimed at helping other countries manage radioactive wastes in a safe, secure, and proliferation-resistant manner.

Even assuming that Congress promptly adopts the recommendations, however, it will probably take decades before consolidated storage and disposal sites are established. The commission pointed to the record of the Waste Isolation Pilot Project (WIPP), a waste repository developed by the DOE near Carlsbad, New Mexico, for storing transuranic wastes from defense applications. The repository began operation in 1998, 28 years after being proposed by the AEC. Moreover, WIPP faced less difficult (though still substantial) technical challenges. For example, spent fuel from commercial nuclear reactors will be much hotter than transuranic wastes, and this extra heat potentially can corrode waste containers, enhance waste migration, and affect the geological stability of the disposal site.

There is another hurdle as well. Given the inability of the current Congress to agree on routine government funding because of policy disputes, the prospects in a national election year for enacting legislation to reopen the site selection process for the storage and disposal of high-level radioactive waste are dim. These factors underscore the likelihood of the continued onsite storage of spent power reactor fuel for an indefinite period.

Given this situation, the commission concluded: “Clearly, current at-reactor storage practices and safeguards—particularly with regard to the amount of spent fuel allowed to be stored in spent fuel pools—will have to be scrutinized in light of the lessons that emerge from Fukushima. To that end, the Commission is recommending that the National Academy of Sciences conduct a thorough assessment of lessons learned from Fukushima and their implications for conclusions reached in earlier NAS studies on the safety and security of current storage arrangements for spent nuclear fuel and high-level waste in the United States.”

Emphasis on pool safety

Until the NAS completes its study, if it agrees to do so, the bulk of current attention is focused on the NRC’s analysis of the Fukushima disaster. As in Japan, U.S. spent-fuel pools are not required to have defense-in-depth nuclear safety features. They are not covered by the types of heavy containment structures that cover reactor vessels. Reactor operators are not required to have backup power supplies to circulate water in the pools and keep them cool in the event of onsite power failures. Reactor control rooms rarely have instrumentation keeping track of the pools’ water levels and chemistry. (In one incident at a U.S. reactor, water levels dropped to a potentially dangerous level after operators simply failed to look into the pool area.) Some reactors may not have the necessary capabilities to restore water to pools when needed. Quite simply, spent-fuel pools at nuclear reactors are not required to have the same level of nuclear safety protection as required for reactors, because the assumption was that they would be used only for short-term storage before the rods were removed for reprocessing or permanent storage.

In its interim report, the NRC task force recognized these shortcomings and recommended that the NRC order reactor operators to:

  • “. . . provide sufficient safety-related instrumentation, able to withstand design-basis natural phenomena, to monitor key spent fuel pool parameters (i.e., water level, temperature, and area radiation levels) from the control room.”
  • “. . . revise their technical specifications to address requirements to have one train of onsite emergency electrical power operable for spent fuel pool makeup and spent fuel pool instrumentation when there is irradiated fuel in the spent fuel pool, regardless of the operational mode of the reactor.”
  • “. . . have an installed seismically qualified means to spray water into the spent fuel pools, including an easily accessible connection to supply the water (e.g., using a portable pump or pumper truck) at grade outside the building.”

Improving pool safety is certainly important. For decades, nuclear safety research has consistently warned that severe accidents at spent-fuel pools could have catastrophic consequences. A severe pool fire could render about 188 square miles around the nuclear reactor uninhabitable, cause as many as 28,000 cancer fatalities, and result in $59 billion in damage, according to a 1997 report for the NRC by Brookhaven National Laboratory.

If the fuel were exposed to air and steam, the zirconium cladding around the fuel would react exothermically, catching fire at about 800 degrees Celsius. Particularly worrisome are the large amounts of cesium-137 in spent-fuel pools, because nearly all of this dangerous isotope would be released into the environment in a fire, according to the NRC. Although it is too early to know the full extent of long-term land contamination from the accident at the Dai-Ichi station, fragmentary evidence has been reported of high cesium-137 levels as far away as metropolitan Tokyo. The NRC also has reported that spent-fuel fragments were found a mile away from the reactor site.

The damage from a large release of fission products, particularly cesium-137, was demonstrated at Chernobyl. More than 100,000 residents from 187 settlements were permanently evacuated because of contamination by cesium-137. The total area of this radiation-control zone is huge: more than 6,000 square miles, equal to roughly two-thirds the area of New Jersey. During the following decade, the population of this area declined by almost half because of migration to areas of lower contamination.

In addition to risks from accidents or other untoward events caused by either natural events or human error, another threat looms as well. In 2002, the Institute for Policy Studies helped organize a working group to perform an in-depth study of the vulnerabilities of spent-fuel reactor pools to terrorist attacks. The group included experts from academia, the nuclear industry, and nonprofit research groups, as well as former federal government officials. The group’s report, Reducing the Hazards from Stored Spent Power-Reactor Fuel in the United States, which I coauthored, was published in 2003 in the peer-reviewed journal Science and Global Security. We warned that U.S. spent-fuel pools were vulnerable to acts of terror, and we pointed out that the resulting drainage of a pool might cause a catastrophic radiation fire that could render uninhabitable an area much larger than that affected by the Chernobyl disaster.

Going dry for safety

Our study group recommended that to reduce such safety hazards, all U.S. reactor operators should take steps to store all spent fuel that is more than five years old in dry, hardened storage containers. The casks used in dry storage systems are designed to resist floods, tornadoes, projectiles, fires and other temperature extremes, and other unusual scenarios. A cask typically consists of a sealed metal cylinder that provides leak-tight containment of the spent fuel. Each cylinder is surrounded by additional steel, concrete, or other material to provide radiation shielding to workers and everyone else. Casks can be placed horizontally or set vertically on a concrete pad, with each assembly being exposed to an open channel on at least one side to allow for greater air convection to carry away heat. In hardened dry-cask storage—the safest available design for such systems—the casks are enclosed in a concrete bunker underground.

We also made other recommendations, such as installing emergency spray cooling systems and making advance preparations for repairing holes in spent-fuel pool walls on an emergency basis. The German nuclear industry took these same steps 25 years ago, after several jet crashes and terrorist acts at nonnuclear locations.

The NRC and nuclear industry consultants disputed the paper, and as a result, Congress asked the NAS to sort out the controversy. In 2004, the NAS reported that spent-fuel pools at U.S. reactors were vulnerable to terrorist attack and to catastrophic fires. According to its report: “A loss-of-pool-coolant event resulting from damage or collapse of the pool could have severe consequences . . . It is not prudent to dismiss nuclear plants, including spent fuel storage facilities, as undesirable targets for terrorists . . . Under some conditions, a terrorist attack that partially or completely drained a spent fuel pool could lead to a propagating zirconium cladding fire and release large quantities of radioactive materials to the environment . . . Such fires would create thermal plumes that could potentially transport radioactive aerosols hundreds of miles downwind under appropriate atmospheric conditions.”

The NAS panel also concluded that dry-cask storage offered several advantages over pool storage. Dry-cask storage is a passive system that relies on natural air circulation for cooling, rather than requiring water to be continually pumped into cooling pools to replace water lost to evaporation caused by the hot spent fuel. Also, dry-cask storage divides the inventory of spent fuel among a large number of discrete, robust containers, rather than concentrating it in a relatively small number of pools.

The NRC has at least heard the message. In March 2010, the commission’s chair, Gregory Jaczko, told industry officials at an NRC-sponsored conference that spent fuel should be primarily stored in dry, hardened, and air-cooled casks that will meet safety and security standards for several centuries. Yet today, only 25% of the spent fuel at U.S. reactors is stored in such systems, and the NRC has not taken strong steps to encourage their use. Nuclear reactor owners use dry casks only when there is no longer enough room to put the waste in spent-fuel pools. Without a shift in NRC policy, reactor pools will still hold enormous amounts of radioactivity, far more than provided for in the original designs, for decades to come.

Money at hand

In our original study, we estimated that the removal of spent fuel older than five years could be accomplished with existing cask technology in 10 years and at a cost of $3 billion to $7 billion. The expense would add a marginal increase of approximately 0.4 to 0.8% to the retail price of nuclear-generated electricity.
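
That price estimate can be cross-checked with simple arithmetic. The sketch below is illustrative only; the figures for total U.S. nuclear generation (roughly 800 billion kilowatt-hours per year) and the average retail electricity price (about 10 cents per kilowatt-hour) are assumptions introduced here, not numbers from the study.

# Rough cross-check of the estimated retail-price impact of early transfer
# to dry casks. Assumptions (illustrative, not from the study): about
# 800 billion kWh of U.S. nuclear generation per year and an average
# retail price of about 10 cents per kWh.
NUCLEAR_KWH_PER_YEAR = 800e9
RETAIL_PRICE_CENTS_PER_KWH = 10.0
TRANSFER_YEARS = 10  # the ten-year transfer period cited in the text

for total_cost_dollars in (3e9, 7e9):
    added_cents_per_kwh = (total_cost_dollars * 100 / TRANSFER_YEARS) / NUCLEAR_KWH_PER_YEAR
    share_of_retail = added_cents_per_kwh / RETAIL_PRICE_CENTS_PER_KWH
    print(f"${total_cost_dollars / 1e9:.0f} billion -> "
          f"+{added_cents_per_kwh:.3f} cents/kWh (~{share_of_retail:.1%} of the retail price)")
# Roughly +0.04 cents/kWh (~0.4% of retail) at $3 billion and
# +0.09 cents/kWh (~0.9% of retail) at $7 billion.

Under these assumptions the result lands in essentially the same range as the 0.4 to 0.8% figure above.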

In November 2010, the Electric Power Research Institute (EPRI) released its own analysis of the costs associated with our recommendations. The group concluded that “a requirement to move spent fuel older than five years (post reactor operations) from spent fuel pools into dry storage would cause significant economic . . . impacts while providing no safety benefit to the public.” EPRI concluded that the cost for the early transfer of spent fuel storage into dry storage would be $3.6 billion—a level near the lower end of our estimates. This increase, EPRI said, would be “primarily related to the additional capital costs for new casks and construction costs for the dry storage facilities. The increase in net present value cost is $92-95 million for a representative two-unit pressurized water reactor; $18-20 million for a representative single-unit boiling water reactor; and $22-37 million for a representative single unit new plant.”

EPRI further expressed doubt that the industry would be able to meet the demand for sufficient numbers of new casks, which the group estimated would require a “three- to four-fold increase in dry storage system fabrication capability.” Our study found, however, that two major U.S. manufacturers could increase their combined production capacity within a few years to about 500 casks per year, a level sufficient to meet projected needs.

The EPRI study also argued against our proposal by maintaining that the recommended actions would increase nuclear plant workers’ exposures to radiation. Upon closer examination, EPRI’s estimate amounts to a 4% increase in the collective radiation exposure to workers over the next 88 years. This increase in worker doses is not an insurmountable obstacle if there is greater use of remotely operated technologies in the handling of spent fuel assemblies and casks.

Of course, even though our estimates suggest that the added costs of moving to dry-cask storage will not be overly burdensome, individual reactor owners will need to pay them. Here is where the NRC can play a vital role by adopting policies that will allow for the costs of dry, hardened spent-fuel storage to be taken from the electricity rates paid by consumers of nuclear-generated electricity. The Nuclear Waste Policy Act established a user fee of 0.1 cent per kilowatt-hour to cover the search for and establishment of a high-level radioactive waste repository, but the law did not allow these funds to be used to enhance the safety of onsite spent fuel storage.

As of fiscal year 2010, only $7.3 billion had been spent of the $25.4 billion collected through user fees, leaving $18.1 billion unspent. This sum could more than pay for the dry, hardened storage of spent reactor fuel older than five years at all reactors. Safely securing the spent fuel that is currently in crowded pools at reactors should be a public safety priority of the highest degree. The cost of fixing the nation’s nuclear vulnerabilities may be high, but the price of doing too little is far higher.

Forum – Winter 2012

“Paying for Perennialism” by Sarah Whelchel and Elizabeth Popp Berman (Issues, Fall 2011) calls attention to an important area of research that, if successful in its goals, will enhance agricultural sustainability in the face of the growing world demand for food and the many challenges of a changing climate. Although past research to develop perennial grains has been slow-going, today’s genomics-based tools are enabling breeders to work much faster and attempt ambitious projects not previously possible.

The article understates the role that federal research is playing to move perennialism forward. Since the early 1900s, research has provided new tools to identify ideal traits in plants and more efficiently breed them into crops. The U.S. Department of Agriculture’s (USDA’s) Agricultural Research Service (ARS) has not only had a formative role in the recent developments and applications of agricultural genomic technology, but has contributed to a wide range of other advances, helping keep our farms productive and our food system safe and secure. This research complements and supports related research by public, private, and foundation partners.

Although the USDA is facing the same budget challenges felt across the nation, researchers are certainly not “walking away from work on perennials,” as the article suggests. There are formidable challenges to developing perennial crop types with winter hardiness, substantial yields, pest protection traits, and desired end-use qualities. The technical ability to dissect the genetic basis of perennialism and to apply breeding advances that are working so successfully in annual crops is only now becoming possible. At ARS, we are initiating research on the application of new technologies, including whole genome sequencing, genome-wide association studies, and rapid genetic selection methods, to perennial improvement.

ARS plant geneticists such as Ed Buckler (co-located at Cornell University) are leading the effort to make perennial corn production a reality. Among a broad array of crop genetic improvement projects, Buckler and his team are working to dissect perennialism by exploiting new genomic information and inexpensive DNA sequencing.  They are also initiating experiments with Tripsacum, a genus closely related to the corn genus, Zea, to clone the genes needed for winter tolerance in the U.S. Midwest.

In Raleigh, North Carolina, ARS maize geneticist James Holland is working to design a breeding scheme to develop perennial corn, exploring the possibilities of intercrossing domesticated corn and its perennial relative wild teosinte.

In North Dakota, Fargo-based ARS scientist Jason Farris is discovering domestication genes in wheat, and Brett Hulke is evaluating perennials in the USDA sunflower germplasm collection for disease resistance and working with crop breeders to introduce these genes into cultivated sunflower.

Work on perennials is also happening at other ARS laboratories across the country, in places such as Lincoln, Nebraska; Griffin, Georgia; and Kearneysville, West Virginia. While improving crops through genomics and breeding techniques is the first step in getting new varieties into the hands of producers, research will also be needed to determine how improved perennials respond to different agronomic practices and natural resource management to ensure that the potential represented in genetically improved crops is actually achieved while maintaining sustainable production.

Underlying all crop improvement research are the conserved genetic resources in the National Plant Germplasm System, which includes perennial grain accessions. This extensive USDA collection, which importantly involves our land-grant university partners, is used by plant breeders across the nation and world to enhance the potential of crops, with benefits to our food security, food safety, nutrition, and the environment—far beyond the “well-worn path of agricultural research and production” stereotype of maximizing yields at all costs, as suggested in the article.

USDA research on perennialism is but one part of a much larger portfolio of agricultural science that is taking a multipronged approach to solving critical food, agricultural, natural resource, and sustainability challenges faced by the United States and the world. Conducted and/or supported by ARS and its sister USDA agency, the National Institute of Food and Agriculture, the broad agricultural science portfolio of USDA and its partners is coordinated by USDA Chief Scientist Catherine Woteki.

Within this system, the particular roles of ARS are to conduct long-term, high-reward research supporting a diversity of production system approaches and to engage in precommercial, foundational research where the private sector is not involved.

There is always more that can and must be done to advance this research. In the face of a growing global population and growing demand for food, sustainability will require the kinds of technological innovation that only result from dedication and coordination. Achieving sustainability also requires sustained investment, and now more than ever agricultural research needs continued and enhanced public support. Smart investments in agricultural research today will pay dividends to our world tomorrow.

ED KNIPLING

Administrator

Agricultural Research Service

U.S. Department of Agriculture

Washington, DC


Wes Jackson and I share a common theme, enjoying life in our seventh decade. And we both come from Midwestern farm stock, he from Kansas and I from Iowa. Wes, however, grew up in wheat country, and I was in corn country. Maybe this is why I did not catch Wes’s dream of perennialism earlier in my professional career spanning over five decades of soil science and related fields.

It was only when I moved beyond the halls of peer-reviewed academia to direct the Leopold Center for Sustainable Agriculture that I could see Wes’s forest. Now it is obvious that a perennial agriculture has a place in row crop agriculture, even in Iowa, where almost all the cultivated land is in corn and soybeans, and heaven help the poor farmer who might suggest otherwise at the local coffee shop.

Neither Wes nor I would, I think, advocate a 100% perennial landscape if we are serious about food crop production and maintaining economic viability. But the current barren landscape would greatly benefit from patches of perenniality that will also provide food and income. In fact, that was the agriculture of old, the one of Grant Wood paintings. Pasture dotted the hilly land, alfalfa (a three- to four-year rotation legume perennial) supported dairy and beef herds, and trees lined the streams. Fence rows that kept the neighbor’s cattle from straying were filled with diverse plants harboring beneficial insects.

Except for a few pockets of sanity, those landscapes are gone and probably will not return. Driven by economics, the big industrial farm stranglehold on agriculture has pushed soil erosion over tolerable limits, loaded the streams with sediment and nitrate, and depopulated the countryside. We have created an agriculture so risky that when things go wrong, as they often do in a world of changing climate, agriculture is too big to fail and must be a major part of the federal farm bill.

Perennials could fill a huge void here. They would add diversity, both financial and biological. If used wisely, proper perennial crops would greatly lower erosion. Carbon sequestration would be greatly enhanced. The benefits would far outweigh the costs. But it is obvious that big federal research dollars will not go to perennial crops, no matter how much pleading is done and appeals to common sense are made by well-meaning folks. Still, that does not mean that there are not pockets of excellence out there in the research plots and labs of the land grant universities and the USDA.

Visionaries such as Wes Jackson are needed in this world more than ever. But we are not training visionaries or even allowing them to dream. Instead, they publish or perish, pile up grant dollars to fill an ever-deepening portfolio black hole, and are rewarded with Who’s Who plaques. So who will be the dreamers of the future? The National Science Foundation should try to identify them now and get them headed on the road to saving our world, because academia is not doing it very well.

DENNIS KEENEY

Professor Emeritus, Iowa State University

Senior Fellow, Institute for Agriculture and Trade Policy

Minneapolis, Minnesota


Improving S&T policy

In “Science Policy Tools: Time for an Update” (Issues, Fall 2011), Neal F. Lane is correct in pointing out that our policies for managing science are in need of refurbishment, largely having been designed for a world that existed over a half-century ago. Perhaps the greatest change since that time has been characterized by Francis Cairncross, writing in The Economist, with the words “distance is dead.” Ironically, this phenomenon, also known as globalization, was itself brought about by advancements in science and technology.

Nearly all the major problems confronting the world today depend heavily on science for their solution. These range from ensuring quality health care to the provision of energy; from preserving the environment to defending against terrorism; and from building the economy to supplying food and water to all the world’s citizens.

But in the halls of our nation’s capital, one is far more likely to encounter a lobbyist for the poultry producers of America than anyone involved in science. Further, as Lane notes, the media is much too busy providing entertainment to address long-term pursuits such as science. (Of course, it could be noted that no one has told scientists that they cannot enter the political arena.)

In decades past, U.S. industry supported a considerable portion of the nation’s basic research, but today’s marketplace demand for short-term gains has all but eliminated that commitment (exhibit one: Bell Labs). Similarly, our nation’s universities have in the past been strong underwriters of research; however, these institutions increasingly face alternative financial demands. The federal government has thus become the default source for funding any research endeavor characterized by high risk, long duration, and results that may not accrue to the performer or funder but rather to society as a whole. But with an exploding deficit, federal funds for research are increasingly difficult to find.

A future shortage of scientists (and engineers) in the United States thus appears highly unlikely, but only in part because of the competition for funds. More significant is the fact that we have created a self-fulfilling prophecy. U.S. industry, for example, has discovered that it can move its research abroad, just as it did its manufacturing. That is where most of its customers are going to be anyway. Further, following a parallel philosophy, our universities are now expanding abroad. In the case of industry, with its newly created network of research laboratories around the world, when it finds some aspect of operating in the United States, such as export regulations, too onerous, it can simply bypass the U.S. laboratories and perform the work in its own facilities overseas.

The problem, of course, is that for many decades 50 to 85% of the growth in U.S. gross domestic product has been attributable to advancements in science and engineering. Given that only 4% of the U.S. workforce is composed of scientists and engineers, it can be argued that every 1% of that group is supporting some 15% of the growth of the overall economy and on the order of 10% of the increase in jobs.
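
The arithmetic behind that claim is worth making explicit. The sketch below simply divides the growth attributable to science and engineering by the share of the workforce doing that work, reading “1%” as one percentage point of the overall workforce; it is an illustration using only the figures already cited.

# Back-of-the-envelope version of the argument above: if 50 to 85% of
# U.S. GDP growth is attributable to advances in science and engineering,
# and scientists and engineers make up about 4% of the workforce, then
# each percentage point of the workforce in that group supports a large
# share of overall growth.
growth_share_low, growth_share_high = 0.50, 0.85
workforce_share_points = 4  # scientists and engineers as a percentage of the workforce

low = growth_share_low / workforce_share_points
high = growth_share_high / workforce_share_points
print(f"Each 1% of the workforce in S&E supports roughly {low:.0%} to {high:.0%} of GDP growth")
# Roughly 12% to 21% of GDP growth, bracketing the ~15% figure cited above.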

The National Academies’ Rising Above the Gathering Storm study (I chaired the committee that produced the report) concluded that the two highest priorities to preserve America’s competitiveness are to repair our K-12 education system and to increase our investment in basic research. However, as Lane highlights, our system to do the latter is fragmented, and the former seems immune to all attempts at improvement.

A year ago, when testifying before Congress seeking funds to support these two goals, I was asked if I were unaware that our nation faced a budget problem. My answer was that I had been trained as an aeronautical engineer and that during my career I had worked on a number of new aircraft that were too heavy to fly—but never once did we solve the problem by removing an engine.

Research (and engineering) and education are the engines that drive our economy and promise to solve many of the other challenges we face as well. Neal Lane’s proposals would strengthen the hand of those few individuals now in our government who are trying mightily to strengthen the nation’s research endeavors. We should listen to him.

NORMAN AUGUSTINE

Bethesda, Maryland

The author is the retired chairman and chief executive officer of Lockheed Martin Corporation.


Neal F. Lane puts forth recommendations to the science and technology (S&T) policy community that call for increased integration, innovation, communication, and partnerships. He notes that the model laid out by Vannevar Bush in Science: the Endless Frontier has led to tremendous S&T accomplishments, and he rightly asserts that an update to our traditional S&T policy paradigm is long overdue. It is time to build on Bush’s visionary model in ways that reflect present-day challenges. Lane observes that although the S&T policy community has embraced the realities of increased complexity and rapid change in its discourse, it has not, in any systematic way, taken on the kinds of new thinking and conceptual frameworks that are required to address them.

Lane’s recommendation to more systematically integrate S&T activities across the federal government is a good one. Even within existing frameworks, the importance of this kind of coordination is being recognized. For instance, the National Science and Technology Council’s Committee on Environment, Natural Resources, and Sustainability is looking into mechanisms for increased interagency coordination to advance sustainability goals. The National Academies are also conducting a study entitled “Sustainability Linkages in the Federal Government” to identify linkages across federal S&T domains that are not traditionally incorporated into decision-making. In addition, within the Environmental Protection Agency’s (EPA’s) own research enterprise, steps have been taken to pursue cross-cutting goals, leverage expertise, and break down traditional scientific silos into a small number of integrated, transdisciplinary, sustainability-focused areas.

High-risk, high-reward R&D will also be necessary to reach beyond risk management and incremental improvement toward applicable, sustainable solutions. Transformative, disruptive, leapfrog innovations are critical to advancing scientific progress and competitiveness in the United States. The EPA’s Office of Research and Development has begun to facilitate and incentivize innovative R&D by appointing a chief innovation officer and launching an internal innovation competition, and, along with several other federal agencies, is engaging in open-source innovation challenges that solicit research solutions in exchange for cash awards. In addition, cross-sector partnerships have tremendous potential to bring about positive change through S&T. Lane’s government-university-industry model and other partnership schemes are ideal for incorporating multiple perspectives to increase R&D effectiveness and degrees of freedom in innovative solutions development.

Finally, Lane characterizes the public disconnect with scientific research as perhaps “the greatest threat to the future of the country’s research system.” This is not an overstatement. Members of the public are both the beneficiaries and sponsors of federal R&D. They need and deserve to understand how scientific discoveries affect their quality of life and foster U.S. progress. Although the media play an important role in communicating to the public, the S&T policy community must not sit on the sidelines. It is our job to explain the effects and significance of our research, science, and technology activities. To this end, communication skills should be considered essential for every scientist if the national research endeavor is to continue to thrive.

PAUL T. ANASTAS

Assistant Administrator, Office of Research and Development

Science Advisor, U.S. Environmental Protection Agency

Washington, DC


The globalization of scientific research

Caroline S. Wagner’s “The Shifting Landscape of Science” (Issues, Fall 2011) is to be commended for its recognition of an important and undeniable trend: the globalization of S&T affairs. We have started to shift from a reliance on so-called “national systems of innovation” to an emphasis on a series of new, globally networked systems of knowledge creation and exploitation. The concept of national innovation systems that was developed out of the research of leading scholars such as Richard Nelson, as well as the S&T policy team at the Organisation for Economic Co-operation and Development (OECD), has now become largely obsolete. Yet, as Wagner suggests, despite the fact that the data support the notion of this strategic transformation, the U.S. government refuses to make any significant adjustment in its policy mechanisms to accommodate the new R&D world of the 21st century. In some ways, the situation is even worse than Wagner suggests; as the activities of the world’s leading multinational firms clearly demonstrate, globalization of R&D activity has become a competitive imperative. U.S.-based multinational corporations, in particular, are establishing overseas R&D installations at a rapid pace, and not simply in other OECD nations but in new places such as China and India. According to the latest data from the Chinese government, there are now more than 1,250 foreign R&D centers in operation in the People’s Republic of China. Far from being simply focused on local products and services, many of these R&D units are working on projects tied to the global marketplace.

So what is driving this steadily expanding push to globalize research and build new networked structures across the world? It is the rise of a new, dynamic global talent pool that has shifted the focus of overseas expansion by companies and even universities from a search for cheap labor and lower costs to a desire to harness the growing reservoir of brainpower that is popping up from Jakarta to Mumbai and from Dalian to Singapore. Supported by massive new investments in higher education as well as R&D, many countries that were once S&T laggards are emerging on the international scene as critical partners and collaborators. What gives these efforts at human capital enhancement even more momentum is that they are being complemented by significant financial investments in new facilities and equipment. In addition, many of these nations have benefitted substantially from their ongoing efforts to integrate domestic programs and initiatives with deeper engagement with countries across the international S&T system. The Chinese case is highly illustrative in this regard, as the Chinese government has built a multifaceted array of international S&T cooperation channels and relationships. In spite of lots of verbiage about promoting indigenous innovation, China has been one of the biggest beneficiaries of globalization and shows no signs of closing the door on its highly productive set of foreign S&T relations.

These developments clearly leave the United States in a potentially disadvantaged position vis-à-vis its hungry international counterparts. One of the clearest examples of the U.S. refusal to understand what is occurring in these other countries involves the recent restrictions put on the White House Office of Science and Technology Policy (OSTP) by Congressman Frank Wolf (R-VA), who apparently believes that the United States has gotten too cozy with China’s S&T establishment. Through budgetary legislation engineered by Wolf, OSTP and NASA are currently restricted from fully engaging with China, putting in jeopardy the mutually beneficial S&T cooperation relationship that has been built by the two countries during the past three decades. Wolf’s actions reflect an antiquated perspective that simply ignores the deep level of interdependence that currently exists in S&T affairs as well as in other aspects of the Sino-American relationship. Simply stated, there is no major global S&T problem in place today that will not require close cross-border collaboration between the United States and China, whether it is climate change, the search for new alternative energy supplies, or the efforts to combat threats to the global health system.

It is time for the U.S. government to wake up to the new realities highlighted by Wagner. If we do not reorient our policies and perspectives to these new pressing realities, we are likely to become an also-ran nation in the ever-intensifying race for sustained scientific leadership and technological competitiveness. This will not happen tomorrow or the next day, but it will no doubt be part of a process of long-term decline in the efficacy of America’s once vibrant, highly productive S&T system.

DENIS SIMON

Vice-Provost for International Affairs

University of Oregon

Eugene, Oregon


Caroline Wagner’s article was a thoughtful and forward-looking commentary on the current debate about international competition versus collaboration in S&T. I agree with her thesis that “Science is no longer a national race to the top of the heap: it is a collaborative venture into knowledge creation and diffusion.” Friendly competition is healthy and even desirable for elevating the overall quality standard and advancing the S&T frontiers. An appropriately balanced combination of competition and collaboration will only accelerate the pace of discovery and innovation, especially in the current situation in which total world investment in S&T and the total number of S&T students worldwide are increasing.

Wagner advocates “an explicit U.S. strategy of global knowledge sourcing and collaboration” and suggests creating strategically focused, multilateral government programs. Indeed, the initial step toward that goal is well underway, and the U.S. government is taking the leadership role, as described in an editorial written by Subra Suresh, the director of the National Science Foundation (NSF), in the August 12, 2011 Science. (The views in this letter are my own and do not necessarily represent those of my employer, the NSF.)

Recognizing disparate scientific merit review as a fundamental barrier to multilateral collaborations, the NSF, on behalf of the United States, will host a global summit on merit review in May 2012 to “develop a foundation for international scientific collaboration.” Heads of research councils from approximately 45 countries are expected to attend the summit. It is hoped that this summit will lead to a long-term “virtual Global Research Council to promote the sharing of data and best practices for high-quality collaboration.” Success of the summit will lay a foundation for global knowledge sourcing, which will lead to the kind of multilateral collaborations that Wagner promotes in her article.

As other countries around the world have increased their S&T investments, U.S. dominance appears to be waning in comparison. However, this does not mean that U.S. excellence in S&T is declining in absolute terms. The United States is still the destination of choice for the best and brightest students from countries such as China, India, and South Korea. One of our challenges is to encourage U.S. students to go abroad to acquire global perspectives. To this end, NSF has partnered with funding agencies in Australia, China, Japan, New Zealand, Singapore, South Korea, and Taiwan, and annually supports 200-plus U.S. graduate students in study abroad. These students spend 8 to 10 weeks in one of the seven countries conducting research and building a lifelong network with students and researchers in the host laboratory. Expanding the program to other countries is under consideration.

While strategically participating in international S&T collaboration to leverage intellectual and infrastructural resources across the globe, the United States must continue to invest in fundamental research in order to stay competitive in multilateral collaborations, and more importantly, to ensure new discoveries that will lead to totally new technologies that we cannot even imagine today.

MACHI F. DILWORTH

Director, Office of International Science and Education

National Science Foundation

Arlington, Virginia


Caroline S. Wagner’s article makes useful points about the benefits to the United States of tapping burgeoning sources of foreign scientific knowledge by fostering and participating in more international research collaboration. But some important flaws in her assumptions indicate that Washington will need to pursue these goals more carefully and discriminatingly than her essay appears to recognize.

Although no one could reasonably object to the author’s goal of augmenting U.S. scientific wherewithal with knowhow generated abroad, the payoffs to the United States from this cooperative strategy will surely be more modest than Wagner suggests and the risks significantly greater.

Wagner’s first dubious assumption concerns the role of scientific knowledge in the world of nation-states and their interactions. Economists may view knowledge, in Wagner’s words, as “non-rivalrous because its consumption or use does not reduce its availability or usefulness to others.” But history teaches unquestionably that knowledge is also power. Any collaborative policies must be subordinated to U.S. security and closely related economic interests. Therefore, and especially given the continuing U.S. science edge, collaboration must be tightly limited with mercantilist countries that simply don’t view international commerce as a positive sum (such as virtually all of East Asia and much of Europe), as well as with likely geopolitical rivals (such as China and Russia). This means not only that much critical U.S. knowhow must remain securely under U.S. control, but that Washington cannot allow the temptation to develop an economically rational global division of scientific labor to prevent or stunt the development even of certain national capabilities that duplicate foreign expertise.

Wagner’s assumptions about the United States’ relative global science position and its future also seem vulnerable. If current trends continue, the nation’s East Asian competitors could remain flush with resources to finance expanded science and technology development. But given the still-considerable linkage between these economies and their best final customer—the financially challenged United States—their own continued rapid progress is far from assured. Moreover, the foreseeable future of both private- and public-sector science funding in crisis-ridden Europe—the third big global pool of scientific expertise—appears to be even grimmer than it is in the United States.

Finally, Wagner apparently accepts an assumption about the worldwide proliferation of scientific knowledge that is as widespread as it is incomplete. Obviously, as the author writes, considerable and inevitable foreign catch-up with U.S. capabilities has characterized the post–World War II period. But the scientific rise of China and India in particular has been fueled largely by the policy-induced off-shoring activities of U.S.- and other foreign-owned multinational companies. Eliminating these firms’ incentives to arbitrage foreign subsidies and regulatory vacuums and ultralow costs for even skilled labor, especially in tandem with a raft of better domestic policies, holds much greater potential to bolster the domestic S&T base than collaborative programs that meet the prudence test.

ALAN TONELSON

Research Fellow

U.S. Business and Industry Council

Washington, DC


Catastrophe insurance

Howard Kunreuther and Erwann Michel-Kerjan’s “People Get Ready” (Issues, Fall 2011) reviews a number of critical challenges that undermine progress in making the United States more disaster-resilient, while the nation and the rest of the world seem to be entering an era of increasingly frequent and increasingly devastating catastrophes. Climate change is seen as a key factor in provoking weather-related disasters, particularly hurricanes and severe coastal storms. Just this year, many New England states experienced severe flooding from, of all things, a tropical storm that roared up the Atlantic coast, first manifesting itself as Hurricane Irene.

As noted in the article, recent studies conducted by my own center at Columbia University confirm some of the unrealistic citizen perspectives on disasters that impede the public’s motivation to “get prepared.” For instance, we found that 62% of Americans believe that, in the event of a major disaster, first responders would be on the scene to help them “within a few hours,” and nearly one in three feels that it would take less than an hour.

Dramatic increases in population density in disaster-prone areas, along with fragile, deteriorating infrastructure, have been an inevitable consequence of a rapidly growing population looking for natural beauty and/or economic opportunity. It is perfectly understandable that people are drawn to the normally calm climate and natural appeal of communities such as south Florida, the Carolinas, the Gulf Coast, or the spectacular vistas of northern California’s rocky coastline. The calculus driving where-to-live decisionmaking, however, is rarely much affected by an objective assessment of disaster risk.

Some 80 million Americans live in communities at significant risk from earthquakes. This past May, to test local and regional response capabilities, the federal government conducted a National Level Exercise hypothesizing major seismic activity along the 150-mile New Madrid fault, which runs through the middle of the country and puts some five states at considerable risk. Although the final report has not been completed, my observations revealed substantial challenges in readiness for a large-scale catastrophe.

And besides potential weather and geological calamities, of course, there are myriad risks related to the built environment. Many of the nation’s 104 nuclear power plants lack sufficient safeguards to significantly reduce the risk of Fukushima-like catastrophes. What about trains loaded with dangerous chemicals rolling through unsuspecting communities? And the nation’s infrastructure, from electrical grids to levees and bridges, is increasingly being recognized as disconcertingly fragile, putting many communities at considerable risk. Fixable? Yes, but at a cost estimated by the American Society of Civil Engineers to be in the range of $2.7 trillion.

Still, the fact remains that 310-plus million Americans have to live somewhere, and because it is virtually unavoidable, a substantial percentage of us live in or near an area of definable risk. Kunreuther and Michel-Kerjan’s focus on rethinking how we approach preparedness, risk mitigation, and insurance coverage is relevant and important. But making the United States substantially more disaster-resilient will require more than innovative approaches to insurance reform, mitigation strategies, and building codes. One way or another, citizen engagement across all socioeconomic strata and in every cultural and ethnic community will remain a high priority if disaster resiliency is a central goal.

IRWIN REDLENER

Director, National Center for Disaster Preparedness

Professor of Population and Family Health

Mailman School of Public Health

Columbia University

New York, New York


The article by Howard Kunreuther and Erwann Michel-Kerjan provides evidence from psychology and behavioral economics to help explain why individuals underinsure against catastrophic risk and fail to mitigate against disasters. In response, they recommend multiyear insurance tied to property and multiyear loans for mitigation. Options to encourage or fund more mitigation can be very cost-effective in reducing losses; a Congressional Budget Office (CBO) study found that the Federal Emergency Management Agency’s mitigation program delivered about $3 in benefits per $1 spent. (Although I am a CBO employee, the views in this letter are mine alone and not necessarily those of the CBO.) The authors’ proposals could also potentially reduce the need for federal disaster assistance. Barriers to implementing their recommendations may exist, including state regulation of rates and policies, and the details of the insurance contract will matter, so policymakers may continue to consider additional options to expand private coverage.

In addition to the psychological factors discussed by the authors, government policies may also contribute to underinsurance and too little mitigation against catastrophic losses. At the state level, the regulation of premiums and insurance coverage leads to high-risk policyholders not paying the full cost of their risk, which both reduces incentives to mitigate losses and leads to subsidies from taxpayers and lower-risk policyholders in state residual pools for catastrophic risks. Those policies contribute to overdevelopment in high-risk areas.

At the federal level, the government provides not only flood insurance but also various forms of implicit catastrophic insurance. After a disaster, Congress generally provides extensive federal assistance to individuals, businesses, and state and local governments to help cover uninsured losses and assist in economic recovery. After Hurricane Katrina, the CBO estimated that additional federal spending for hurricane-related assistance, together with various forms of disaster-related tax relief, would cost about $125 billion from 2006 to 2010. These types of assistance reduce financial hardship and help stimulate the economy after a disaster, but they also discourage individuals and businesses from taking steps to mitigate future losses and from seeking private market solutions for financing those losses.

Although they are not directed at natural disasters, federal mortgage guarantees, which covered about 95% of new residential mortgages in the first half of 2011, expose taxpayers to natural disaster risk. Enforcing the existing insurance requirements (including flood coverage) on those mortgages could reduce costs, and policymakers could also weigh the benefits and costs of requiring earthquake insurance for some policyholders.

Christopher Lewis and others have proposed that the federal government auction reinsurance to insurers and state-sponsored programs with the goal of improving their ability to provide coverage. Auctions might reduce the problem of underpricing federal insurance and crowding out private supply, particularly if the contracts were limited to the highest layers of risk. But a reinsurance program would also probably impose costs on taxpayers; the federal government generally has difficulties in efficiently managing insurance risk.

DAVID TORREGROSA

Analyst

Congressional Budget Office

Washington, DC


The article by Howard Kunreuther and Erwann Michel-Kerjan was of great interest to me and our 14,000 members. Not only are disaster costs from natural hazards in the nation increasing, but the risk of those hazards is increasing even faster. It is essential that we help those living at risk to understand the risk, take responsibility for it, and take actions to reduce their risk.

Everyone involved in the issue of fire risk works together so that nearly everyone with a home or structure in the United States has insurance for fire. This includes banks, insurance companies, realtors, and property owners. Yet in a majority of cases, those exposed to natural hazards such as floods and earthquakes, who are much more apt to experience a loss from those events than from fire, do not perceive this risk and do not insure against it or take steps to reduce their risk. In the meantime, as the article points out, we continue to allow development in areas at risk from natural hazards, so the consequences of flood and earthquake events are building rapidly. Risk is not only the probability of an event happening, but the consequences (or costs) if it does. But the banking, insurance, realty, and other development industries do not promote insurance for natural hazards.

A significant factor is pointed out in the article—that the federal taxpayers are picking up more and more of the costs of natural disasters, which means that communities, developers, and other decisionmakers can gain the benefits of at-risk development while letting federal taxpayers pay for the consequences through disaster relief. Disaster relief is not just funding from the Federal Emergency Management Agency, but also from the Department of Transportation to rebuild roads, bridges, etc., and from Housing and Urban Development, the Environmental Protection Agency, U.S. Department of Agriculture, and many other federal agencies that provide funding or grants after a disaster.

The article provides many good suggestions that should be considered to reverse the increased costs and human suffering from natural hazards. The most effective measures to reduce future risk rest with local and state governments through land-use and building codes. All of us need to support such actions and support a sliding cost-share system for disaster relief that rewards communities and states that do more to reduce or prevent the problem, instead of continuing to throw more federal taxpayer money at communities and states that continue to allow massive at-risk development.

LARRY A. LARSON

Executive Director

Association of State Floodplain Managers

Madison, Wisconsin


New approach to cybersecurity

The message in “Cybersecurity in the Private Sector” (Issues, Fall 2011) by Amitai Etzioni is clear: The private sector is evil, and if the federal government would only do its job and regulate, our cyber systems would be secure.

But Etzioni acknowledges that the Department of Homeland Security is not up to the job, although he suggests that’s because they use private-sector equipment and contractors. But one only need recall the WikiLeaks fiasco, when an unsupervised government employee with a Lady Gaga CD accessed masses of classified data and released it on the Internet, to realize that being a government employee doesn’t confer extra morality or security expertise.

The name, blame, shame campaign both misunderstands what we are dealing with and misdirects what we need to do about it. We are well beyond hackers, breaches, and perimeter defense. The serious attacks we face come from highly organized, well-funded, sophisticated, state-supported professionals (mostly from China), who successfully compromise any system they target.

Most cyber systems are substantially overmatched by their modern attackers. The solution is not to blame the victims. Currently, all the incentives favor the attackers. Attacks are relatively cheap, profitable, and difficult to detect. Defense is a generation behind the attackers, it’s hard to show quantified returns on investment from prevention, and successful criminal prosecution is almost nonexistent. This doesn’t mean that we have no defense, but we need to create a new system of defense.

The traditional regulatory model, constructed to deal with the hot technology of two centuries ago—railroads—is a bad fit for this problem. Regulations can’t keep up with rapidly changing technology and attack methods. U.S. regulations only reach U.S. companies, whereas the problems are international. And regulating technology impedes innovation and investment, which we cannot afford.

The Internet Security Alliance (ISA) has suggested an alternate model, the Cyber Security Social Contract, based on the public utilities model, wherein policymakers achieved a social goal (universal service) by providing economic incentives to the private sector (guaranteeing the rate of return on investment).

The ISA model suggests retaining existing regulation for industries where the economics are baked into the regulation (such as utilities). For the non-regulated sectors (information technology, manufacturing, etc.), we create a menu of market incentives (insurance, liability, procurement, etc.) to encourage greater security investment.

This modern, pragmatic, and sustainable approach, which Etzioni ignores, has received broad support. The Executive Summary to President Obama’s Cyber Space Policy Review both begins and concludes by citing the Social Contract. ISA white papers filling out the idea are cited four times more than any other source in the president’s document. These principles were also the primary basis of the House Republican task force report on cybersecurity released in October 2011.

A broad array of private-sector trade associations representing software providers, hardware providers, corporate consumers, and the civil liberties community published a detailed white paper in March 2010 that also endorsed this approach.

It is backward-looking policymakers and think-tankers who are holding progress in cybersecurity hostage to a 19th-century regulatory model that can’t address this 21st-century problem.

LARRY CLINTON

President

Internet Security Alliance

Silver Spring, Maryland


More focus on occupational certificates

Brian Bosworth’s “Expanding Certificate Programs” (Issues, Fall 2011) shines light on an important and often neglected area of labor market preparation. According to the Survey of Income and Program Participation (SIPP), fully 18% of workers have a certificate from a business, vocational, trade, or technical postsecondary program, and a third of these people also have a two- or four-year degree. Of the 20% of associate degree graduates with a certificate, 65% got their certificate first, 7% got it at the same time they got their degree, and 28% got their certificate after getting their associate degree.

As Bosworth shows, certificates are particularly useful for hard-to-serve populations, such as minorities, low-income adults, and young people who didn’t do well in high school. The advantages of these programs include being shorter, offering more-focused learning, and being flexibly scheduled. The programs can also adapt more quickly to changing market demand for specific skills and fields.

As with any education or training program, economic returns vary by field of study. We feel that the earnings of graduates need to be monitored constantly to ensure that students have the best information to align their interests and talents with occupations that are growing and that pay well. Another crucial factor is placement. In our analysis of SIPP data, certificate holders who are in occupations related to their training earn 35% more than those not working in their field.

Bosworth’s presentation of strategies for success gives clear guidelines on how to structure programs to maximize student completion and transition to successful labor market outcomes. There is a lot of talk about the need for more postsecondary educational attainment. All too frequently, people view this as increasing our rate of bachelor’s degree graduates. Although this is a reasonable goal, four-year degrees are not for everyone. The subbaccalaureate programs that result in two-year degrees and/or certificates are an important option for many students and need to be promoted just as much as bachelor’s programs.

ANTHONY P. CARNEVALE

STEPHEN J. ROSE

Georgetown University

Center on Education and the Workforce

Washington, DC


Brian Bosworth’s excellent article is an important contribution to the growing conversation about college completion and the labor market value of postsecondary credentials. He correctly points out that we have failed to recognize the value of certificate programs, particularly in high-value career fields with strong wages, which allow students to gain the credential and enter the workforce in a shorter period of time. This is a timely article as many states grapple with increasing the number of individuals holding some type of postsecondary credential.

Bosworth correctly argues that a certificate with good labor market value is the only ticket for certain populations to a good job and opportunity for a quality life. In Tennessee, as in many states, students come to institutions of higher education underprepared for collegiate work and also often have demands on their lives that prohibit full-time attendance in pursuit of the degree. Many adult students are unable to commit to four to six years of collegiate work in order to complete the degree.

A recent study by the Georgetown University Center on Education and the Workforce underlines the urgency for Tennessee. Between 2008 and 2018, new jobs in Tennessee requiring postsecondary education and training will grow by 194,000, while jobs for high-school graduates and dropouts will grow by 145,000. Between 2008 and 2018, Tennessee will create 967,000 job vacancies representing both new jobs and openings due to retirement; 516,000 of these job vacancies will be for those with postsecondary credentials. Fifty-four percent of all jobs in Tennessee (1.8 million) will require some postsecondary training beyond high school in 2018. The need for Tennesseans with postsecondary credentials is great. Certificates offer a tremendous opportunity.

But, as Bosworth states, not just any certificate will suffice, and certainly not only those delivered in the traditional structure. He argues that how we deliver such certificate programming does much to determine whether adults who are busy with life and have many demands on their time and resources will complete their programs. His recommendations of the use of block scheduling, embedded student support and remediation, and cohort-based models are a major step forward in understanding successful structures for the types of students in our postsecondary institutions today.

His call for action to make this happen at all levels is important. In Tennessee, recent legislation requires the use of block-scheduled, cohort programs in our community colleges as a means to increase the number of those credentialed to obtain employment. We are taking this a step further and focusing some of our work on increasing the number of certificates of a year or longer that are delivered via this strategy. We believe that the data over the next couple of years will support the success of this effort. Of course, students already are telling us that this approach provides the only way that they could ever attend college. That speaks volumes to my mind.

PAULA MYRICK SHORT

Vice Chancellor for Academic Affairs

Tennessee Board of Regents

Nashville, Tennessee


Certificates that demonstrate significant occupation-related competencies and that are valued in the labor market are clearly an underdeveloped aspect of the college completion strategy. As Brian Bosworth points out, postsecondary certificate programs that are a year or longer in duration generally have good labor market payoff, and these longer-term certificates may be an important route to better employment and earnings for many Americans, particularly working adults and low-income and minority youth. Greater attention should certainly be paid by policymakers and opinion leaders to occupational certificates that can be completed fairly efficiently and that respond effectively to local employer needs.

But, as Bosworth notes, several pitfalls must be avoided. First, the goal cannot simply be more certificates: If states generate more short-term certificates requiring less than a year of training, few completers are likely to see any earnings gains. And the trends are troubling. According to the American Association of Community Colleges, in the past 20 years, community college awards of certificates of less than a year’s duration rose by 459%, while awards of certificates of a year or more rose 121%.

Needless to say, if minority and low-income students disproportionately choose or are steered to certificates with less economic payoff, the result may be more completers but little economic value for the graduates or society. Again, the trends give reason for concern. From 1990 through 2010, the percentage increase in short-term credentials earned by minority community college students was almost two times that of whites for blacks (770% compared to 440%) and three times for Hispanics (1,337% versus 440%).

One important policy implication is that states need to track certificate students more carefully, so they have a better idea of who is enrolling in and earning what certificates, and so the labor market outcomes for recipients of different occupational certificates are well documented.

Bosworth ends his article with recommendations for how community colleges can implement evidence-based career programs. The federal government’s commitment of $2 billion in Trade Adjustment Assistance Community College and Career Training grants can give these programs a big boost.

RICHARD KAZIS

Senior Vice President

Jobs for the Future

Boston, Massachusetts

Reducing Oil Use in Transportation

The public’s interest in reducing oil consumption has ebbed and flowed for decades, first prompted by the supply shocks of the 1970s and persisting today because of concerns about the buildup of greenhouse gases (GHGs) in the atmosphere and the cost of securing the world’s oil supplies. Today, the consumption of gasoline, diesel fuel, and other petroleum products in the transportation sector accounts for more than 70% of national oil demand. Yet when policies are pursued to reduce transportation’s petroleum use, the focus is almost exclusively on regulating the energy and emissions performance of cars and trucks. Motivating consumers to care about fuel economy and reduce energy-intensive vehicle use is often talked about but seldom acted on, because it is considered simply too difficult or disruptive.

An unprecedented regulatory effort is under way to boost the fuel efficiency of the nation’s cars and trucks, potentially doubling their fuel economy in terms of miles per gallon (mpg) within two decades. Without comparable increases in the price of gasoline and diesel fuel, mandated improvements in vehicle fuel economy will significantly lower the fuel “price” of driving, perhaps by as much as 50%. It is reasonable to ask whether this effective price decline will further erode consumer interest in purchasing vehicles with even higher fuel economy, and perhaps even encourage people to drive their vehicles farther and more often as operating costs decline.

These risks may be worth taking for the aggregate savings in fuel promised by the early mandated increases in fuel economy, but their repercussions warrant consideration. Vehicle use is sure to go up over time, if for no other reason than increased population and economic growth, and will require increasing investments in the already heavily used road system. In the present budget environment, it is likely that this will entail new ways of funding highway construction. Currently, road and other projects are paid for with revenues generated by taxes on fuel consumption. But these revenues have been declining and will continue to do so because of fast-rising fuel economy. Whether “cheaper” driving will cause even more driving is debatable, but it will certainly do little to encourage interest in alternative modes and may reinforce the pattern of dispersed and decentralized metropolitan development that is so dependent on the automobile.

Because automobiles account for two-thirds of transportation’s oil demand, they must be a target of any meaningful energy policy. But targeting them is much more easily said than done. Automobiles are tightly woven into the fabric of everyday life. The nation’s fleet of nearly 250 million cars and light trucks accounts for 85% of all miles traveled; the average household uses its vehicles to cover more than 20,000 miles per year.

Cars and light trucks dominate local travel. Although commuting to and from work is often considered the principal use, in actuality fewer than one in five trips made by private automobiles is for commuting. More than three-quarters of trips are made for shopping, running errands, and chauffeuring family members. Shopping trips alone account for more person-trips by automobile than the journey to and from work. For longer-distance travel, the car is also dominant, accounting for 95% of person-miles on trips up to 500 miles, and more than 60% of person-miles on trips between 500 and 750 miles. Not until distances from origin to destination exceed 750 miles do airlines account for a higher share of total person-miles of travel.

There are many reasons for the automobile’s supremacy. Not only do cars offer utility in providing door-to-door transportation service, but they also confer schedule flexibility, can carry multiple people and their cargo at little extra cost, and can be used for local travel when arriving at the final destination. The automobile offers the traveler privacy, protection from inclement weather, and a place to temporarily hold and secure belongings. With an automobile, travelers can make multiple stops en route, combining trips to and from work with shopping and other errands. No other mode of transportation comes close to offering such flexibility combined with the ability to cover large areas.

In nearly the same manner as automobiles dominate personal travel, trucks dominate freight movement. For hauling goods locally and over medium distances, trucks are the only practical option. They also provide door-to-door service, which saves on the labor-intensive transfer of cargo from one mode to another. Whereas many bulk and low-value commodities are still moved domestically by rail and water, these modes are used mainly for longer-distance line-haul and container movements. For shipping distances of less than 500 miles, trucks remain dominant for nearly all kinds of cargo. For shipping high-value goods, the reliability and security of trucking are critical attributes irrespective of shipping distance. Moreover, all modes of freight transportation, whether air, rail, or water, rely on trucks for picking up and delivering shipments to their final destinations. Consequently, trucks account for 80% of the petroleum used for freight transportation, representing about 20% of transportation’s total petroleum demand.

Thus, two modes of transportation, cars and trucks, are by far the main consumers of petroleum, accounting for more than 85% of transportation’s total. The next largest transportation user is the domestic airline industry, which uses not quite 10%. All other transportation modes combined, including passenger and freight railroads, public transit, and domestic waterways, account for the remaining 5% of transportation petroleum use. The relatively small amount of fuel used by these modes stems not so much from their energy-efficiency characteristics as from their limited usage. On a passenger-mile basis, for instance, intercity passenger rail (mainly Amtrak) is about 25 to 35% more energy-efficient than its chief competitors, automobiles and airplanes. But passenger railroads serve only about 500 stations nationwide and account for less than 1% of total passenger miles. And their growth potential is limited because there are relatively few passenger-dense travel corridors of 100 to 500 miles, which are the most suitable markets for regular and higher-speed intercity rail.

Likewise, even though public transit is crucial to the functioning of many metropolitan areas, it accounts for less than 3% of all person-trips nationally and less than 1% of total passenger miles. Moreover, fixed-route transit bus operations, which make up most transit service, tend to be fuel-inefficient because these vehicles often run with few riders for a good portion of the day. When filled to near capacity, buses are very efficient. The average transit bus, however, carries fewer than 10 passengers per mile driven, despite being able to accommodate 40. As a result, the average transit bus uses about 25% more energy per passenger mile than the average passenger car.

The dominance of cars and trucks for transportation is often attributed to the fact that most of the country’s social and economic activity now takes place in metropolitan areas that are decentralized and spread out. These spatial patterns are poorly suited to providing the high passenger volumes needed to support fixed-route transit and have made center-city rail stations far less convenient than they once were. Public investments in highways during the past half-century, and particularly the building of the interstate system, are often credited with spurring the suburbanization of the country’s metropolitan areas. Because cities have been spreading out for centuries, this cause-and-effect relationship is often debated. The reality, however, is that the highway system is in place and durable. The dispersed built infrastructure that it serves is extensive and seemingly desired by Americans. Although it may be possible to change this automotive-oriented landscape at the margins, reshaping it fundamentally will take many decades.

Energy policy implications

The scale and scope of U.S. dependence on motor vehicles explain why certain policies to curb transportation’s energy use have proven to be so sustainable and why others can only be described as anathema to policymakers. By far the most significant of the former is the Corporate Average Fuel Economy (CAFE) program, which requires automakers to sell vehicles achieving certain mpg averages. Because it accepts the dominance of the automobile, this longstanding program has been described as an attempt to “civilize” it.

Federal legislation calls for the CAFE standard to reach 35 mpg for cars and light trucks combined by 2020, from current separate levels of 30.2 mpg for cars and 24.1 mpg for light trucks. New federal GHG performance standards are expected to cause the 35 mpg threshold to be reached four years sooner (because most automotive GHG emissions derive from gasoline use), and federal regulators are in the process of planning more aggressive GHG performance standards that will boost the average mpg by more than 4% per year to nearly 55 by 2025. And for the first time, fuel efficiency standards are being put in place for medium- and heavy-duty trucks.

A doubling of automotive fuel economy levels in less than 20 years has no precedent. Whether these regulatory ambitions can be achieved at a reasonable cost and with cars and light trucks that have performance, styling, reliability, and size attributes that are acceptable to consumers remains to be seen. If not, the public may demand changes in the program to slow the rate of increase. It is not a coincidence that for all of the 1990s through much of the 2000s, CAFE standards remained essentially unchanged while the real price of gasoline continued to fall. Although carmakers had the means to increase fuel economy, consumers expressed little interest in it, nor did they demand it from their elected officials making energy policy.

In this regard, it bears noting that as vehicles become more fuel-efficient, each incremental gain in mpg will save less fuel than the last increment. For example, boosting fuel economy from 20 to 30 mpg will save more than 165 gallons of gasoline per year if a vehicle is driven 10,000 miles. Adding another 10 mpg to a car already obtaining 30 mpg will save only 83 gallons per year. From the standpoint of the owner, investing in increasingly more efficient vehicles provides a smaller and smaller fuel-saving return. If the vehicle cost or performance sacrifice to achieve each increment of mpg is greater than the last, the net return becomes even smaller.
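
To make the diminishing-returns arithmetic concrete, here is a minimal sketch in Python, assuming the 10,000 miles of annual driving used above (the function name is illustrative, not from the article):

```python
def annual_gallons(mpg, miles_per_year=10_000):
    """Gallons of fuel consumed per year at a given fuel economy."""
    return miles_per_year / mpg

# Each successive 10-mpg improvement saves fewer gallons than the last.
for low, high in [(20, 30), (30, 40), (40, 50)]:
    saved = annual_gallons(low) - annual_gallons(high)
    print(f"{low} -> {high} mpg saves about {saved:.0f} gallons per year")

# Output:
# 20 -> 30 mpg saves about 167 gallons per year
# 30 -> 40 mpg saves about 83 gallons per year
# 40 -> 50 mpg saves about 50 gallons per year
```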

There is also a longstanding debate about how sensitive drivers are to the aforementioned fuel price of driving. If fuel operating costs go down, motorists can be expected to do at least some additional driving. The direction of this relationship is indisputable and often is referred to as the “rebound” effect, because any added driving will counter some of the expected fuel savings from improved fuel economy. At issue is the size of the rebound effect. As incomes go up, however, the value of time becomes a critical factor in decisions about whether to drive more or less. In fact, there is evidence that this income effect is causing fuel costs to have less effect on miles driven than they once did. Estimates of the rebound effect vary, but most suggest that every 10% reduction in the real fuel cost of driving will cause vehicle miles of travel (VMT) to increase 1 to 3% over the longer term as drivers adjust their behaviors, for instance, by moving even farther away from their workplaces and other destinations.

Still, even if the rebound effect is minimal, demographic-induced growth in VMT will counter some of the fuel-saving impact of efficiency standards. During the last three decades of the 20th century, VMT increased an average of 2 to 3% per year, as the number of households grew dramatically and Baby Boomers and women entered the workforce in droves. No one expects such rapid growth to resume anytime soon, because the demographics of the country have changed. VMT are expected to increase about half as fast during the next two decades, on the order of about 1.5% per year. Even this smaller rate of growth, however, would mean that more than a quarter of the fuel saved from an aggressive 4% annual increase in vehicle standards will be countered by the upward trend in driving.
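
A back-of-the-envelope check, sketched below using the growth rates cited above and an assumed 20-year horizon with normalized figures, reproduces the “more than a quarter” estimate:

```python
# Rough check of how much of the mandated fuel savings is eaten up by
# growth in driving. Assumptions: 20-year horizon, 4%/year gain in fleet
# fuel economy, 1.5%/year growth in vehicle miles traveled (VMT).
years = 20
mpg_factor = 1.04 ** years    # ~2.19x improvement in fuel economy
vmt_factor = 1.015 ** years   # ~1.35x growth in miles driven

baseline_fuel = 1.0                               # normalized current fuel use
fuel_if_vmt_flat = baseline_fuel / mpg_factor     # fuel use with no extra driving
fuel_with_vmt_growth = baseline_fuel * vmt_factor / mpg_factor

saved_potential = baseline_fuel - fuel_if_vmt_flat
saved_actual = baseline_fuel - fuel_with_vmt_growth
offset = (saved_potential - saved_actual) / saved_potential
print(f"Share of potential savings offset by VMT growth: {offset:.0%}")
# -> about 29%, consistent with "more than a quarter"
```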

Difficult policy choices ahead

Given the political durability of the approach, it is difficult to argue against relying on vehicle efficiency standards as a practical and effective means of curtailing transportation’s petroleum use and GHG emissions. It is important to keep in mind, however, that there is no track record of these standards being increased year after year at a fast pace.

The “civilizing” policy of mandated higher fuel economy has thus far proven to be acceptable to consumers when it does not lead to significant sacrifices in the affordability, utility, and performance of vehicles. Maintaining this balance goes a long way toward explaining why CAFE has remained central to federal energy policy for more than 30 years, while at the same time having a history of long periods of little or no change being made in the mpg standards. Based on this history, it is reasonable to question whether too much stock is being put in a policy approach that focuses on requiring the supply of efficient vehicles and not enough on policies to promote their demand.

Apart from whether the mandated mpg and GHG improvements will be sustainable in the absence of a financially motivated consumer, one must ask whether the reductions in energy use and GHG emissions from this policy will be enough. The answer depends on how deep the cuts need to be. If the concern is to control GHG buildup, then the answer is probably no. Scientific analyses and models indicate a need to stabilize atmospheric concentrations of GHGs by the middle of this century, leading to estimates that worldwide GHG emissions will need to be reduced by up to 80% by mid-century. Although it is not possible to ascribe a share of the reductions that must come from U.S. transportation, simply tempering growth in this sector’s emissions will make it necessary to achieve much larger cuts in other energy-using sectors.

The unavoidable reality is that in order to achieve deep reductions in petroleum use and GHG emissions within four decades, growth in driving will need to be slowed even more, or low-carbon energy sources will need to be broadly adopted even sooner. Despite the new GHG performance standards for automobiles, low-carbon fuels are not likely to have significant effects for some time. There are good reasons why petroleum accounts for more than 95% of transportation’s energy consumption. Not only are gasoline, diesel, and other petroleum-based fuels relatively inexpensive to buy, they are dense in energy, as is required for vehicles having limited space to store energy. Many opportunities remain to increase the efficiency of gasoline- and diesel-powered vehicles. It will be hard for any alternative energy sources to compete against the combination of efficiency gains in vehicles powered by internal combustion engines in an environment of cheap gasoline. Although encouraging the development and use of low-carbon energy sources makes sense, it may not pay substantial dividends for several decades.

Yet on the scale of difficulty, significantly curbing growth in VMT does not seem any more promising than greatly reducing the carbon content of transportation’s energy. For years, governments have subsidized public transit, in part to provide a means of mobility to those who do not have access to cars and to alleviate traffic congestion by offering more transportation choices during peak travel periods. The effectiveness of these investments for saving energy has long been debated. As a practical matter, even tripling transit ridership levels across the country would produce only a small dip in national VMT. And motivating more people to use these and other travel alternatives in an automotive-friendly environment will require much more than simply providing the transportation service or infrastructure. Its supply will need to be accompanied by a demand for its use.

Economists have long argued that a financial motivation in the form of higher fuel prices through taxation (in the absence of market-driven fuel price increases) is the most certain way to motivate people to conserve fuel through a varied set of means. In addition to prompting consumer interest in the much more efficient vehicles mandated by regulation, sustained higher fuel prices will induce other fuel-saving behaviors by individuals and businesses over time. Motor carriers, for instance, will reduce travel speeds and truck idling, seek more direct routing, use their vehicle capacity more intensely, and partner with railroads to provide line-haul trailer and container movements. Individuals will be more inclined to combine vehicle trips, carpool, walk, bicycle, ride transit, and forego some low-value travel. Over the longer term, both businesses that ship goods and households will factor higher fuel costs into their decisions about which modes to use; when substitutes for transportation make sense, such as telecommuting and Internet shopping; and where to locate relative to workplaces, customers, suppliers, and other destinations. This multipronged response is what makes higher fuel prices so effective as a means of controlling transportation energy use in a timely and economically efficient manner.

To simply mention fuel taxes, however, is to invite weariness or skepticism at best. Although fuel taxes have long played a key role in financing the nation’s transportation infrastructure, in the United States they are not used for the explicit purpose of fuel conservation. The federal tax on motor fuel is 18.4 cents per gallon. The average state and local tax adds up to about 30 cents per gallon. Although state and local taxes sometimes change, the federal tax has not changed since 1993, when it was increased by 4.3 cents. Meanwhile, the inflation-adjusted value of the tax has declined by more than 20%. In 1993, federal taxes accounted for about 15% of the retail price of gasoline. Today they account for only 5%. As a practical matter, fuel taxes in the United States have a minor effect on fuel conservation and are increasingly becoming inadequate to pay for the basic upkeep of the transportation infrastructure.

One rationale for using fuel taxes to fund highways is that they are an indicator of system use. The more one drives, the more one contributes to the financing of the system. Historically, growth in VMT led to increased fuel use and more fuel tax revenues. The combination of increasing vehicle fuel economy and stable tax levies means that these highway revenues could decline even as user demands on the system continue to rise. And because of public and political resistance to raising taxes, other user fees, such as tolls and fees per mile driven, are being considered to supplement fuel taxes to generate revenue and in some cases to alleviate congestion. In 2007, a report by the congressionally created National Surface Transportation Infrastructure Policy and Revenue Study Commission concluded that revenues required to meet the nation’s highway infrastructure needs were equivalent to $0.60 to $1.00 per gallon of fuel consumed. To help close this gap in needs versus revenues, the commission recommended that the federal motor fuel tax be increased by $0.05 to $0.08 per gallon annually for 5 years and then adjusted regularly for inflation. Two years later, having observed no political interest in recommendations to boost fuel taxes, a second national commission urged the creation of a new transportation finance system that would use more targeted tolling and direct user fees.

Need for innovative policymaking

Fuel taxes should not be cast aside as being forever impractical. To be sure, other forms of road-user pricing, such as tolls and mileage charges, present many of the same implementation challenges. Drivers do not particularly favor paying tolls over fuel taxes, whereas mileage charges introduce concerns associated with their administration and the potential for government monitoring of private travel. In fact, because cars are becoming more fuel-efficient, the time may be right to reconsider the practicality of raising fuel taxes. As fuel economy improves, higher fuel taxes will not necessarily boost the fuel cost of driving but rather hold it steady. A 4% annual increase in fuel prices, for instance, would not increase the fuel cost of driving if fuel economy grows comparably. In return, the revenues to pay for transportation infrastructure would be preserved, and consumer interest in higher fuel economy would be maintained.
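
The arithmetic behind that point is simple: the fuel cost per mile is the pump price divided by fuel economy, so equal percentage growth in both leaves it unchanged. A small numerical illustration follows; the starting price, tax, and mpg values are assumptions for illustration, not figures from the article.

```python
# Illustrative sketch: if fuel taxes push pump prices up ~4% per year while
# fleet fuel economy also improves ~4% per year, the fuel cost per mile of
# driving stays flat and per-mile tax revenue does not erode.
price_per_gallon = 3.50   # assumed pump price, $/gallon
tax_per_gallon = 0.484    # assumed combined federal + state tax, $/gallon
fleet_mpg = 30.0          # assumed fleet-average fuel economy

for year in range(0, 21, 5):
    growth = 1.04 ** year
    fuel_cost_per_mile = (price_per_gallon * growth) / (fleet_mpg * growth)
    tax_revenue_per_mile = (tax_per_gallon * growth) / (fleet_mpg * growth)
    print(f"year {year:2d}: fuel cost/mile = ${fuel_cost_per_mile:.3f}, "
          f"tax/mile = ${tax_revenue_per_mile:.4f}")
# Both values are identical in every row: drivers' per-mile cost is held
# steady rather than increased, while infrastructure revenue keeps pace.
```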

But to exploit this opportunity will require innovative policymaking that provides something tangible to drivers in return. As an example, it will require that consideration be given to ideas such as providing consumers and businesses with rebates of a significant portion of the total revenues generated by fuel taxes. The resulting higher fuel prices will prompt conservation, while the rebates will return money to consumers and the economy. The revenue from higher fuel taxes can also be promoted as a means to offset other taxes or to provide essential government services. And of course, a convincing case must be made that these revenues are needed to pay for the transportation infrastructure that Americans and the national economy require. Indeed, it is difficult to envision a scenario in which policymakers could ever generate public support for higher fuel taxes without offering a very compelling plan for use of the revenue.

The emphasis on vehicle efficiency standards to the exclusion of most other policies is an inherently practical policy choice, and one that will probably pay fuel-saving dividends during the next 20 years and longer if increases can be sustained. Whether this approach is strategically the right choice—one that is aligned with longer-term policy goals—warrants more careful consideration. Fuel taxation has long been the “third rail” of energy policy, and there is no reason to believe this will change dramatically in the near term. But the policy debate should not be allowed to continue along these lines. We argue that fuel economy standards and fuel taxation are in many respects complementary policies. Not only can they be combined, but they will increasingly need to be combined for many practical and strategic reasons.

From the Hill – Winter 2012

R&D funding picture continues to be mixed

On November 18, President Obama signed into law the first two bills governing spending for fiscal year (FY) 2012, which began October 1. As is to be expected in a difficult budget environment, with cuts becoming the norm, the outcome for R&D funding in a number of agencies was mixed. However, there was some good news: During the conference session designed to deal with differences between the House and Senate bills, two agencies—the National Science Foundation (NSF) and the National Institute of Standards and Technology (NIST)— ended up with more money than in either separate bill.

In the Commerce, Justice, Science, and Related Agencies appropriations bill, NSF received $7 billion, $173 million or 2.5% more than in FY 2011. But this funding level was still well short of the president’s request of $7.8 billion, which would have been consistent with his plan for doubling the agency’s budget. NIST was funded at $751 million, a $1 million or 0.1% increase. However, the bill funds only one of the Industrial Technology Services (ITS) programs, the Hollings Manufacturing Extension Partnership. Funding was eliminated for the Advanced Manufacturing Technology Consortium, the Baldrige Performance Excellence Program, and the Technology Innovation Program.

The National Aeronautics and Space Administration received $17.8 billion in the bill, a $648 million or 3.5% decrease. The House bill eliminated funding for the James Webb Space Telescope, but the conference sided with the Senate and funded the project at $530 million.

The National Oceanic and Atmospheric Administration (NOAA) fared the least well in the bill with an estimated R&D investment of $620 million, $28 million or 4.3% less than in FY 2011 and $116 million or 15.8% less than the president’s request. The administration had proposed the establishment of a National Climate Service within NOAA, and although the proposal was supported by the Senate, the conference sided with the House and did not fund the new line office.

The Agriculture, Rural Development, Food and Drug Administration, and Related Agencies appropriations bill provides $17.5 billion for the Department of Agriculture (USDA), which is $72 million or 0.4% less than FY 2011 and $1.8 billion or 9.4% less than the president’s request. The projected USDA R&D investment in the bill is $1.9 billion, $120 million or 6% less than last year and $136 million or 6.7% less than the request. The Agricultural Research Service, the USDA’s intramural funding program, is funded at $1.1 billion, a $39 million or 3.4% cut from FY 2011. The National Institute for Food and Agriculture (NIFA), the USDA’s extramural funding program, received $1.2 billion, a $13 million or 1% cut from last year. The Agriculture and Food Research Initiative, NIFA’s competitive funding program that has seen gains in recent years and was slated for an increase under the president’s request, was funded at $264 million, the same level as FY 2011.

Other appropriations bills, including the one that funds the National Institutes of Health, were still being worked on when this issue went to press.

In October, the Senate Appropriations Committee released the draft text for its Interior and Environment appropriations bill. The bill would fund the Department of the Interior’s U.S. Geological Survey at $1.064 billion, $20 million or 1.8% less than FY 2011. The Environmental Protection Agency’s Science and Technology account would receive $809 million, $4 million or 0.5% less than last year, but $54 million more than the House.

The House Appropriations Committee released a draft Labor, Health and Human Services, and Education appropriations bill in late September that would fund the NIH at $31.8 billion, the level of the president’s request and a $1 billion increase over FY 2011. The bill does not mention the proposed new National Center for Advancing Translational Sciences, but it does contain provisions on the number of new and competing grants that NIH must fund (a minimum of 9,150 across the entire NIH) and the balance between extramural and intramural support (a 90-10 split).

House committee examines implications of Russian rocket crash

On October 12, the Space and Aeronautics Subcommittee of the House Committee on Science, Space, and Technology held a hearing on the recent crash of a Russian rocket and its implications for the future of the International Space Station (ISS) and space flight.

Because of the cancellation of the Constellation program, the National Aeronautics and Space Administration (NASA) will depend for the foreseeable future entirely on Russia to ferry supplies, fuel, and astronauts to and from the ISS. So when an unmanned Russian Soyuz rocket—the same type of rocket slated to be used for future manned flights—crashed into the mountains of Siberia earlier in 2011, it raised concerns about the strength of the Russian space program and its relationship with NASA.

In his opening remarks, Rep. Steven Palazzo (R-MS) inquired about NASA’s involvement in the Russian space program’s investigation of the rocket failure. He also asked about the level of risk this setback introduced for the U.S. astronauts already onboard the ISS and the research they are conducting.

William Gerstenmaier, associate administrator of NASA’s Space Operations Mission Directorate, assured the committee that the Russian space program was extremely cooperative in conducting its investigation, even allowing NASA to conduct its own independent study. The cause of the recent crash was determined to be contamination in the fuel lines, probably introduced during late-stage inspections. This is an issue of quality control, he said, not a system failure that indicates a problem with the Soyuz rocket type, which has been flying successfully for many decades. He said he is confident that the current three-person crew aboard the ISS is safe. In addition, he said that NASA has contingencies in place to return them to Earth and operate the station remotely if necessary. Research is continuing, he said, although a larger crew would allow for additional studies. He concluded, “If we all work together, the ISS will continue to be an amazing facility that yields remarkable results and further benefits for the world.”

Lieutenant General Thomas P. Stafford, chairman of the NASA Advisory Council Task Force on ISS Operational Readiness, echoed Gerstenmaier’s claim that the Russian space program is competent and highly cooperative, posing no threat to the future of U.S. participation in the ISS. Stafford, who has worked closely with the Russian space program since the era of the Soviet Union, said, “I can attest to their thorough and complete approach to problem solving and to their robust manufacturing and test program philosophy.”

The hearing also touched on the future of commercialization of U.S. space flight. Rep. Dana Rohrabacher (R-CA) asked if NASA’s dual focus on partnering with Russia and encouraging domestic commercial projects could create problematic competition in the future. Gerstenmaier insisted that NASA will continue to have both needs, and that he does not expect that commercialization will make NASA’s partnership with Russia redundant.

Senate committee votes to overhaul controversial education law

In a bipartisan 15-7 vote on October 20, the Senate Committee on Health, Education, Labor, and Pensions voted to overhaul the controversial Elementary and Secondary Education Act, often referred to as No Child Left Behind (NCLB).

Above all, the bill would eliminate the contentious adequate yearly progress standard that is used to measure gains in proficiency through the use of standardized test scores. In September, the Obama administration offered states the opportunity to seek waivers for the math and reading standards, and more than 30 states have applied.

Committee Chairman Tom Harkin (D-IA) and Ranking Member Mike Enzi (R-WY) said that the reauthorization was long overdue and that numerous compromises were made in the spirit of advancing a feasible, bipartisan bill.

An amendment submitted by Sen. Jeff Bingaman (D-NM) and cosponsored by Sens. Patty Murray (D-WA) and Richard Blumenthal (D-CT) would provide funding to ensure the integration of new technology into classrooms and the education system in general. Sen. Lisa Murkowski (R-AK) supported the amendment, saying that it helped to address her concern that the education sector is not taking advantage of the predominance of technology in the lives of today’s youth. She said, “We have to deliver education to kids in a way that is relevant and captures their attention.” Despite Enzi’s concerns about potential difficulties in implementing the program in rural schools with small budgets, the amendment was agreed to by a voice vote.

Sen. Michael Bennet (D-CO), along with co-sponsors Blumenthal and Murray, introduced an amendment to establish an Advanced Research Projects Agency – Education (ARPA-ED) within the Department of Education. ARPA-ED would pursue “breakthrough research and development in education technology” and provide “effective use of technology to improve achievement for all students.” The creation of ARPA-ED was first proposed in President Obama’s FY 2012 budget request, which sought $90 million for the new agency. The Bennet amendment would fund ARPA-ED from the already existing Investing in Innovation fund at an amount not to exceed 30% of its budget. The amendment was adopted by voice vote.

The Senate’s comprehensive approach to the legislation is in contrast to the House, which has been approaching reauthorization of the education law in a piecemeal fashion.

House members debate new National Ocean Policy

The House Natural Resources Committee held two hearings in October that focused on President Obama’s new National Ocean Policy (NOP), which was created by an executive order signed on July 19. Republicans on the committee expressed some concerns about the policy, including whether it would increase restrictions on ocean, coastal, and inland economic activities. Committee Democrats largely supported the new policy.

The NOP creates a National Ocean Council tasked with improving the stewardship of the nation’s ocean and coastal resources and streamlining the more than 140 regulations currently in place. Under the Council, regional planning bodies will be created to implement coastal and marine spatial planning (CMSP), which the order defines as “a comprehensive, adaptive, integrated, ecosystem-based, and transparent spatial planning process, based on sound science.”

The main goal of the NOP is to increase the consistency of the decision-making and regulatory processes and allow the United States to manage its ocean and coastal resources to maximize benefit from all of the desired uses. The NOP has 10 objectives ranging from promoting ecosystem health and the use of the best available science in decisionmaking to maintaining maritime cultures and promoting public understanding. It divides the United States into nine regions, with each region having a regional planning body tasked with a CMSP for that region.

At the first hearing, held October 4, a series of criticisms of the NOP, summarized on the committee Web site, were raised by Chairman Doc Hastings (R-WA), witnesses, and representatives from multiple coastal states. Their concerns centered on the lack of congressional approval or legislative authority for the creation of a NOP. The critics were uncomfortable with a perceived underrepresentation of stakeholders and local entities and said that ocean zoning would restrict access to ocean and coastal resources in unacceptable ways.

Witnesses at the October 26 hearing countered these arguments, noting that the NOP invites state, tribal, and local entities to participate in the regional planning bodies. They said that CMSP is distinct from ocean zoning and that, in any event, ocean zoning is not mandated under the NOP.

Ranking Member Ed Markey (D-MA) and Rep. Sam Farr (D-CA) defended the NOP, citing bipartisan support for a comprehensive ocean plan during the past 10 years and the support of 22 coastal states that have expressed a need for a NOP. They argued that the NOP would improve communication and coordination between stakeholders and, by harmonizing regulations, reduce regulatory uncertainty and encourage investment.

At the October 26 hearing, the committee heard from representatives of the administration: Nancy Sutley, chair of the Council on Environmental Quality and co-chair of the National Ocean Council, and Jane Lubchenco, administrator of the National Oceanic and Atmospheric Administration (NOAA).

In their opening remarks, Sutley and Lubchenco both stressed the importance of ocean and coastal resources to the entire nation and the need, recognized by both the current and the previous administrations, for an integrated approach to the management of those resources. Lubchenco pointed out that more than half of the U.S. population lives in coastal regions and 60% of gross domestic product is related to ocean, coastal, and Great Lakes resources. Lubchenco said that the current chaotic regulatory system discourages investment in the development of ocean and coastal resources, saying that “The National Ocean Policy creates order out of chaos.” Sutley noted that members of industry, fishermen, and the U.S. Navy and Coast Guard have supported a CMSP and also emphasized that CMSP does not constitute ocean zoning.

The Republican committee members’ questions largely consisted of demands to know what the administration saw as its statutory authority to implement the NOP. They also resisted acknowledging the distinction between ocean zoning and CMSP. Several committee members aggressively sought assurances that the NOP would not further restrict the use of ocean and coastal resources.

Although both parties on the committee recognize the need for a revised regulatory structure with regard to ocean and coastal resources, strong disagreements remain over whether the NOP is the appropriate way to proceed. Sutley and Lubchenco argued that the NOP would create information-sharing bodies that would better inform decisionmakers.

Federal science and technology in brief

  • On November 2, the Senate Committee on Commerce, Science and Transportation approved several bills, including the Harmful Algal Blooms and Hypoxia Research and Control Amendments Act of 2011, which calls for a national strategy and implementation plan to address harmful algal blooms and hypoxia (inadequate oxygen supply to cells and tissues). Similar legislation was approved by the House Science, Space, and Technology Committee in July.
  • The Food and Drug Administration (FDA) released a blueprint for biomedical innovation featuring seven steps: rebuilding small business outreach services; building infrastructure to drive and support personalized medicine; creating a rapid drug development pathway for targeted therapies; harnessing the potential of data mining and information sharing; improving the medical device review process; training new innovators; and streamlining FDA regulations. “Our innovation blueprint highlights some of the initiatives FDA will be implementing to help ensure that [new scientific] opportunities are translated into safe and effective treatments that can help keep both American patients and American industry healthy and strong,” said FDA Commissioner Margaret Hamburg.
  • The President’s Council on Jobs and Competitiveness released an interim report titled “Taking Action, Building Confidence.” The report groups its recommendations into five initiatives: (1) investing in infrastructure and energy development; (2) encouraging entrepreneurship and accelerating the number and scale of young, small businesses and high-growth firms; (3) fostering investment within the United States; (4) simplifying regulatory review and streamlining project approvals; and (5) ensuring U.S. talent to fill existing job openings as well as to boost future job creation. The Council plans to address the major factors underpinning national competitiveness in its year-end report.
  • The House passed by voice vote legislation that would prohibit U.S. airlines from complying with a European Union (EU) climate change law that will soon subject any airplane flights into or out of an EU airport to the European cap-and-trade restrictions on greenhouse gas emissions. The EU’s high court is examining the legality of the proposal, which is also opposed by other countries. The Senate is not likely to take up the measure.
  • On September 26, the National Institutes of Health released a revised policy on managing conflicts of interest (COI) in the initial peer review of NIH grant and cooperative agreement applications. The revised policy is particularly intended “to facilitate reviews that involve multi-site or multi-component projects, consortia, networks, aggregate data sets, and/or multi-authored publications.” The policy covers both federal employee and non-federal members of scientific review groups, including mail reviewers. Non-federal members, in particular, may not participate in the review of an application if they have “a real COI or an appearance of a COI with [the] application.” Bases for COI can include “employment, financial benefit, personal relationships, professional relationships, or other interests.” The new policy applies to all applications submitted for the September 25, 2011 deadline and thereafter.

“From the Hill” is adapted from the newsletter Science and Technology in Congress, published by the Office of Government Relations of the American Association for the Advancement of Science (www.aaas.org) in Washington, DC.

Better Skills for Better Jobs

In 2008 and 2009, the U.S. economy shed more than 8 million jobs; since 2009, the economy has created only about 2 million. Most economists expect the labor market to continue to recover slowly from the Great Recession over the next several years. But the quantity of jobs is not the only concern. The nation also needs to focus on creating more high-quality jobs and providing workers with the skills necessary to perform those jobs.

Too many U.S. workers lack the education necessary to thrive in a modern economy and the specific skills required to compete for high-paying jobs. In today’s more competitive product and labor markets, employers will create such jobs only if the productivity of their workers can potentially match their higher levels of compensation. Doubtful about the productive potential of their workers, employers often choose to compete based only on low costs rather than on better worker performance.

Instead, the United States should make it easier for employers to create and fill good jobs with highly productive workers. To do so, it needs to create and fund more-coherent and more-effective education and workforce-development systems. These systems should place their primary emphasis on providing more assistance to at-risk youth, both in school and out, and also to adult workers who are disadvantaged. Furthermore, these programs should take advantage of the latest evidence on effective training to maximize their impact.

Rigorous economic research has arrived at a consensus that education and training programs that are clearly targeted toward firms and sectors providing well-paying jobs tend to be successful in raising participant earnings. Studies using randomized controlled trial (RCT) evaluation techniques, the gold standard of empirical evidence, have highlighted the importance of linking training programs with employer and labor market needs. Now is the time to apply this insight to public programs.

To raise the employment and productivity of U.S. workers, I propose a new federal competitive grant program that funds evidence-supported training programs at the state level. At a cost of roughly $2 billion per year, the program would underwrite a range of efforts aimed at educating workers for jobs in firms that pay well and in growing industries. Rather than reinventing the wheel, this program would build on the efforts already made in many states to integrate those states’ education and workforce systems and to target key sectors on the demand side of the labor market more effectively.

Grants would be awarded to partnerships between secondary and postsecondary institutions, employers from key industry sectors, workforce agencies, and intermediaries. The grants would fund a range of evidence-based educational and training activities for workers who have low incomes or limited education. The grants also could help students by funding support systems such as career counseling activities or child care and could be used to provide technical assistance or tax credits to firms that create well-paying jobs and fill them through appropriate workforce strategies.

These activities would not only help generate education and workforce systems that are more effective, but also encourage states to integrate these systems with their economic development activities. These funds would be used to leverage existing and potentially new private and public sources of funding, and would encourage the efficient use of funds in a sustained manner over time. Evidence from rigorous evaluations suggests that such investment could potentially generate benefits several times as high as the investment itself. Although the program is designed primarily to create better-skilled workers and more well-paying jobs over the longer term, it could also help reduce the nation’s current high unemployment rate in the next few years.

Lagging skills

The education and training of the U.S. workforce have not kept pace with the growing demand for skills in the labor market, leading to earnings stagnation and growing inequality. Nearly 25% of today’s young people fail to finish high school, much less obtain a postsecondary credential of some kind. Given the very high return to education in the U.S. labor market, the groups that lag behind in educational attainment suffer low earnings over their entire working lives.

Better skills will pay off only if companies create well-paid positions for highly productive workers. But firms might not choose to create a socially optimal number of high-quality jobs on their own because of a variety of market failures. For one thing, many employers have very limited knowledge of different compensation and human resource options that might generate highly productive workers who are well compensated. Furthermore, the ability of employers to choose the “high road” might be constrained by the quality of workers whom they perceive to be available for hiring, in terms of basic and occupational skills. And employers might be reluctant to invest their own resources in training workers for a variety of reasons, beginning with the reality that the newly trained worker could move to a different company.

Furthermore, the locus of the “good jobs” is changing, with many fewer available in manufacturing and more appearing in professional and financial services, health care, construction, and even the high end of retail trade. Moreover, the decline in good-job availability in manufacturing is concentrated among the least-skilled workers, whose employment there declined dramatically; in contrast, employment in manufacturing for workers in the highest-skilled quintiles has declined only slightly.

Fortunately, the data show that good jobs are not disappearing in general. If anything, the number of jobs in the highest quintile of quality actually grew during the period from 1992 to 2003, but most of the high-paying jobs require a strong set of basic cognitive or communication skills, or both. Except in the professional and financial services sectors, many of these jobs do not require a four-year college diploma, but they generally do require some kind of postsecondary training and certification.

Positions in health care often include a variety of nursing categories as well as technicians. In construction, positions usually include the skilled crafts (electricians, plumbers, carpenters) that can be filled through apprenticeships or other training models. In manufacturing, positions often include not only engineers but also machinists, precision welders, and other highly skilled workers; and in a variety of other sectors, workers with some college or an associate’s degree enjoy relatively high earnings. These positions include managerial and professional jobs, jobs in the STEM fields (science, technology, engineering, and math), health care, sales, and office support jobs, and even jobs in blue-collar fields. Research by Anthony Carnevale and colleagues shows that a significant share of workers with occupational licenses or certificates as well as those with associate’s degrees earn more than the median worker with a bachelor’s degree in key fields.

Despite the value of the skills required for these jobs, certain well-documented problems in the education and workforce systems mean that too few workers make investments that would allow them to fill these well-paying jobs. For example, many students currently attend two-year or four-year institutions but achieve too little there to improve their labor market outcomes. Dropout rates are extremely high, especially in community colleges, where many youth and adults—especially those from minority or low-income communities—are stuck in remedial classes from which they never emerge and are completely separated from the classes that could provide relevant occupational training. As a result, most community college students never earn even an occupational certificate, much less an associate’s degree. Data from the American Association of Community Colleges indicate that 12.4 million students attended community college in the fall of 2008, about 7.4 million for credit, yet fewer than a million associate’s degrees or certificates were awarded in the 2007–2008 school year.

In Germany and elsewhere in Europe, training that helps workers prepare for good labor market opportunities is delivered through high-quality career and technical education (CTE). Such systems have not developed in the United States, at least partly because of historical controversies over “tracking” minority students away from college. But at its best, CTE would not deter students from attending postsecondary institutions and might indeed be structured to better prepare and encourage more students to do so.

Little career guidance, especially guidance based on local or state labor market data, is provided to high-school or community college students. Indeed, it is often not until after entering the labor market and then becoming unemployed that many disadvantaged workers are provided their first valuable career guidance. Such guidance is provided quite cost-effectively to workers at more than 3,000 One-Stop offices around the country, funded through the U.S. Department of Labor’s Workforce Investment Act (WIA) in the form of “core” and “intensive” services plus limited training. However, local workforce boards, which disburse funds provided through WIA, do not always effectively represent the employers with the best-paid jobs and with strong demand in growing industries, and are not always integrated with state and local economic development efforts.

Even when college students know that earnings and labor market demand are strong in certain fields, such as nursing or health technology, they often find only limited instructional capacity in these areas in many colleges. This might be because there are few incentives for institutions to meet labor market demand, or because their per-student subsidies from state governments do not depend on degree or certificate completion rates or on what kinds of credentials students earn.

As a result, not only do too few workers obtain certificates and degrees, but those obtained are often not well matched to labor market demand in key sectors. Under these circumstances, when employers create high-paying jobs at the middle and high ends of the skill spectrum, they often have some difficulty filling them with skilled workers. Indeed, the job vacancy rate has averaged 2.2 to 2.3% over the past year, which is relatively high, given an unemployment rate of more than 9%. Even in sectors such as manufacturing, where vacancy rates are not high overall, the ratio of vacancies to new hires is striking, suggesting that employers have some difficulty filling vacant positions.

All of this suggests that programs designed to improve the skills and productivity of U.S. workers, if they work carefully with targeted employers and industries, could fill some vacant jobs that currently exist and perhaps encourage employers to create more jobs over time. The programs should thus help reduce unemployment and job vacancies in the short term while also raising worker earnings in the longer term.

Evidence of effectiveness

One path to creating good jobs for disadvantaged workers involves raising their skills and productivity to make them more attractive to potential employers. A rigorous body of evidence suggests that certain education and training efforts can be cost-effective for addressing these issues, even when brought to substantial scale. Whereas the overall evidence on the cost-effectiveness of job training for disadvantaged workers in WIA and elsewhere is at least modestly positive, there are some particularly strong examples that demonstrate the effectiveness of education and training that target well-paying jobs on the demand side of the labor market and that are coordinated with employers there. The best studies have demonstrated results from these programs using experimental methods from RCTs, and some fairly persuasive nonexperimental evidence also exists. RCT studies are important because they allow researchers to compare the labor market outcomes of those who receive training to the outcomes of those who do not, to demonstrate the benefits and costs of each intervention.

The most important recent study is the Sectoral Employment Impact Study of three major programs in Boston, New York City, and Milwaukee, Wisconsin, conducted by Public/Private Ventures. The evaluation used random assignment methods to test for program impacts on workers’ subsequent earnings, and it found that three to six months of well-targeted training generated large impacts on earnings in all three programs in the second full year after random assignment. Net impacts on earnings were about $4,500 per participant over the 24-month period after random assignment, with about $4,000 in the second year, once training was completed. Direct costs of the program were estimated to be about $6,000 per worker. Assuming that the large earnings gains persist into the third year, the program is clearly cost-effective.
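
To make the cost-effectiveness claim concrete, a back-of-the-envelope calculation (using only the figures reported above, ignoring discounting, and assuming, as the study’s authors do, that the roughly $4,000 second-year earnings gain persists through a third year) runs as follows:

```latex
% Illustrative benefit-cost arithmetic for the sectoral training programs;
% assumes the ~$4,000 second-year earnings gain persists, undiscounted,
% for one additional year.
\begin{align*}
\text{Benefits} &\approx \underbrace{\$4{,}500}_{\text{years 1--2}} +
                   \underbrace{\$4{,}000}_{\text{year 3 (assumed)}}
                 = \$8{,}500 \text{ per participant} \\
\text{Cost}     &\approx \$6{,}000 \text{ per participant} \\
\text{Benefit-cost ratio} &\approx \$8{,}500 / \$6{,}000 \approx 1.4
\end{align*}
```

Even this conservative sketch, which counts only three years of earnings gains, puts benefits well above program costs.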

The study’s authors have attributed the programs’ success to the close relationships between employers, training providers (which are sometimes but not always community colleges), and the intermediaries who coordinate their efforts. Improved earnings were the results of higher employment rates and higher wages for training participants. Because the three programs evaluated are moderately large, the evaluation demonstrates that effective programs can potentially be brought to scale.

A random assignment study of Year Up, a sectoral training program for out-of-school youth, found that it achieved similar results. The program trained youth from 18 to 24 years old from low-income urban neighborhoods for jobs in the information technology sector in New England, New York City, and elsewhere. The year after the program took place, the treatment group reported earnings that were on average $3,461 higher than the control group due to higher hourly wages.

Several other efforts that provide occupational training plus work experience to students in key sectors have generated impressive estimated results. Regarding a successful example of CTE in high school, a random assignment evaluation of the Career Academies found large increases in the earnings of young men, especially those deemed at risk of dropping out of school, even eight years after random assignment. The Academies focus on particular sectors of the economy, combine high-quality general academic instruction with occupational training, and provide critical work experience in those sectors to students. The Academies did not “track” students away from postsecondary education; the postsecondary enrollment rates of these students were no lower than those of students in the control group.

Thus, we now have rigorous experimental evidence on highly cost-effective programs for in-school youth, out-of-school youth, and disadvantaged adults. All of these programs provide education or training that closely targets well-paying employers or economic sectors, where outreach to and active engagement of employers is a major part of the training process. The experimental findings are reinforced by nonexperimental evidence on effective training programs, which suggests, for instance, that apprenticeships have also increased earnings in some evaluation studies, as have various state-level programs providing incumbent worker training.

It is important to note that all of these relatively successful programs have been in operation for many years and have developed strong curricula and links to the business community that might not be easily replicated in a short time period. Furthermore, they focus on workers who are disadvantaged and who have strong enough basic skills and education credentials to successfully handle moderately technical training. These successes probably can be replicated in other settings over time, but only with appropriate screening of candidates and careful development of their occupational training curricula and ties to employers.

A few other education or employment programs in community colleges and low-income neighborhoods that have undergone evaluation (with varying degrees of rigor) also deserve mention. The strongest evidence, based on RCT research designs, shows positive effects on educational outcomes from the Opening Doors community college interventions, which include merit-based financial aid; the structuring of “learning communities” of students; and certain kinds of mandatory counseling on educational outcomes. A nonexperimental study of a Washington state community college program that integrates developmental (or remedial) education with occupational training, known as Integrated Basic Education and Skills Training (I-BEST), shows positive effects on educational outcomes. And the Youth Opportunities program at the U.S. Department of Labor, which provided grants to 36 low-income communities to develop and coordinate local educational and employment services for youth, generated some positive effects on both educational and employment outcomes in these sites. Thus, a range of studies has demonstrated the potential to improve educational outcomes at community colleges and to build systemic efforts to provide employment services.

From research to practice

Given the strong recent evidence on the efficacy of job training that carefully involves employers and considers labor demand for disadvantaged populations, there is clearly some strong potential to raise the skills of workers. Raising those skills would allow some currently low-skilled workers to fill existing jobs and also would help create new employment opportunities if employers respond to a more productive set of workers by creating more well-paying jobs for them.

The goal is to encourage the creation of more-effective education and workforce systems that include evidence-based training models and are more responsive to employers who create good jobs. Given current and future budget constraints, any new public expenditure should be designed primarily to improve the efficiency of resources already in the system, but some important categories of services would also benefit from greater support. These expenditures should build on encouraging efforts that have been developed in several states and leverage other existing sources of funding.

Accordingly, I propose that the federal government create and fund a new competitive grants program to support the building of education and workforce development systems aimed at filling well-paying jobs in key economic sectors. Grants would go primarily to state-level partnerships, though a small number would also be provided at the federal level to partnerships in some key sectors, such as health care, which would support state-level efforts around the country in these sectors. Some might also go directly to regional efforts at the substate level, although the states would decide how to incorporate regions into their efforts.

The idea for such competitive grants is not new. In fact, a somewhat similar idea has been embodied in legislation that has already passed the U.S. House of Representatives as a potential amendment to WIA and has also been proposed in the Senate. The Strengthening Employment Clusters to Organize Regional Success (SECTORS) Act of 2008, passed by voice vote in the House that year, calls for grants of $4 million to $5 million to be made to industry or sector partnerships, although no new funding for services was provided. Senator Patty Murray (D-WA) has recently proposed the Promoting Innovations to Twenty-First Century Careers Act, which embodies somewhat similar ideas for state and regional partnerships.

The proposal described here, however, would be much broader in scope, would target the states, and would provide new funding for services as well as the organizational infrastructure of “partnerships.” In that way, it might be more like President Obama’s originally proposed American Graduation Initiative for grants to states and community colleges, which now receives a little funding under the Trade Adjustment Assistance Community College and Career Training (TAACCCT) program.

The grants, which would be administered jointly by the U.S. Departments of Education and Labor, would begin in the first several months as planning grants but then would evolve into grants that fund both services and system building within two years of the program’s launch. Overall, the programs should be funded at the level of roughly $2 billion per year for at least five years. Renewal of these grants would be allowed only if the grantee could provide evidence of strong results.

Grants would generally be awarded on the basis of the following mandatory criteria designed to model successful training programs:

  • The inclusion of key partners, including community colleges and other education or training providers, industries or large employers with strong labor demand and good jobs, local workforce development agencies, and intermediary organizations with strong links to employers or industries
  • The targeting of disadvantaged workers
  • Responsiveness to labor market and employer needs
  • Funding of key direct supports and services to students, workers, and employers, as identified below
  • The extent to which other sources of public or private funding are leveraged, as part of efforts that will be sustainable over time
  • The strength of the evidence on which the training and educational models are based
  • The strength and rigor of evaluation plans

Industry and employer partnerships. To begin, states would need to create new or strengthen existing partnerships among postsecondary education institutions (as well as high schools providing high-quality CTE), employers or their associations in key economic sectors, workforce agencies (such as state and local workforce investment boards), and perhaps other nonprofit institutions at the state or local levels that serve as intermediaries in these efforts. The evidence reviewed above suggests that the involvement of employers is critical and that the more successful programs use intermediaries that have long-term relationships with employers.

Key employer and industry partners would be drawn from sectors where jobs generate good pay and benefits per average level of education and where employment growth is projected to be strong over time, using newly available administrative labor market data at the state and local levels. Industry associations would be particularly important partners, because it is hard to build systemic efforts with individual employers. But impressive models in which particular employers have reached out to education providers to build “career pathways” for high-school and college students could be replicated and brought to greater scale. For instance, IBM has recently helped build the Pathways in Technology Early College High School (P-TECH) program in Brooklyn, and Pacific Gas and Electric (PG&E) has started the PowerPathways skill development program in conjunction with local community colleges in California.

Targeted trainees and sectors. During the planning process, states would be required to systematically identify underemployed groups of workers, including but not limited to disadvantaged youth and adults, who might benefit from new sectoral or career pathway models at different levels of skill. States also must identify the sectors where demand is likely to remain strong and likely to generate well-paying firms and jobs. Intermediaries with strong ties to those employment sectors should also be included in the planning stage. These could include community-based nonprofits, associations of employers, and workforce development organizations, among others.

Of course, demand projections often have some degree of error, especially since labor demand can shift in directions that are not easily predicted from recent trends. Therefore, state plans should also indicate the extent to which the education and training provided are general and likely to be portable across sectors if such unanticipated demand shifts occur. The best plans will also include funding or technical assistance for employers who might need modest retraining either for newly hired or incumbent workers who do not exactly fit their current skill needs. Thus, state plans should provide for occupation- and industry-specific training, as well as for mechanisms that generate flexible responses to unanticipated demand shifts.

Broader measures to support employment-based training. The grants would be used to stimulate responsiveness to the labor market at two- or four-year colleges. For instance, the grants could be used to expand high-quality CTE programs in high school and career counseling at colleges, and to encourage educational institutions to expand instructional capacity in high-demand areas, based on labor market data. Indeed, states could be rewarded for tying their subsidies for community colleges to rates of certificate or degree completion, especially in sectors of strong demand. The integration of developmental or remedial education with occupational training could be encouraged, along with other proven efforts to reduce dropout rates.

Some funds would be available to pay for tax credits or technical assistance to well-paying employers that participate in sectoral training programs and other efforts to upgrade their incumbent workers. A model for this technical assistance might be the Manufacturing Extension Partnership program, which helps manufacturers upgrade workplace performance and productivity. More broadly, states should indicate that their education and workforce systems are also part of broader economic development plans to assist industry development and employment growth, especially in geographic areas that are currently underserved.

Funding direct services for trainees. Grants to states would then pay for some direct service provision that is not already available to Pell grantees and other lower- or middle-income postsecondary students. These services could include tuition payments for coursework leading to certification in the relevant fields by both prospective and incumbent employees who are not eligible for Pell Grants; stipends for paid work experience under apprenticeships, internships, and other forms of college work-study in these fields; and supportive services, such as child care for low-income parents. Small federal programs that already provide such funding, such as the Child Care Access Means Parents in School program or the Job Location and Development Program, which provides off-campus paid work to students under the Federal Work Study program, could be effectively expanded and perhaps even incorporated into such efforts.

Promoting sustainability through leveraging of other existing funding sources. States would receive grant money only if they provide better services to students and better incentives to institutions as part of lasting systemic plans to improve the matching of less-educated or disadvantaged workers with good jobs over time. To encourage plans that will be more lasting, states would have to generate plans to sustain their efforts over time, using other public and private sources of funds.

The new program should leverage other recent and current funding efforts, especially if the states can indicate how they are building on the progress generated from those other efforts. For instance, in addition to the TAACCCT program, the proposed fund could complement activities funded by the U.S. Department of Labor through recent competitive grant programs such as the High Growth and Emerging Industries Job Training Initiative and the Workforce Innovations for Regional Economic Development grants to regions. It could also complement the efforts of several national foundations, such as the National Fund for Workforce Solutions, and others aimed at community colleges and states to improve degree completion rates as well as career pathways to local labor markets. Examples of these initiatives include Achieving the Dream, Shifting Gears, and Breaking Through. It would build on activities already begun in many states to more closely link their education and workforce activities (including those funded by WIA) to economic development, and also build on major new workforce initiatives such as the No Worker Left Behind program recently implemented in Michigan. That program provides training funds to displaced workers who are being trained in community colleges for jobs in industries where high future growth is expected.

Most important, the grants could encourage much better use of the enormous sums of federal money that the Obama administration recently invested in the Pell Grant program. They also could promote better use of very large state subsidies to public colleges by raising certificate or degree completion rates among grant recipients that are well matched to good jobs in the labor market. Thus, this program would not duplicate other efforts but build on them. The grants would encourage states to combine currently disparate and uncoordinated funding efforts into more effective education and workforce systems that are better matched to state and local labor market demand.

Proposed plans for grant applications should leverage private funding sources. Indeed, since employers would benefit to some extent from these programs, they should be willing to contribute some modest funding, perhaps through their industry associations or through dedicated funds from state payroll taxes.

Implemented in this fashion, the program could generate the kinds of lasting systemic changes at the state level that apparently have been induced by other federal grant programs recently, such as the Race to the Top fund in K-12 education or the expansions of unemployment insurance eligibility under the Unemployment Insurance Modernization Act provisions in the recent federal stimulus bill.

Evidence base and evaluation. The criteria provided above are in part based on the evidence about what creates a successful training program, but the state plans should explicitly indicate the extent to which their proposals reflect evidence of cost-effectiveness based on rigorous research analysis, such as the best studies cited above.

The capacity to conduct rigorous evaluations of their own programs at both the institutional and state levels would be required for grant applicants to receive funding. Where specific programs are being set up or expanded, experimental evaluations based on RCT would be considered most appropriate. Alternatively, states could also generate nonexperimental evaluations using appropriate methods, either for specific programs and policies or for their overall efforts more broadly. The ability of grant applicants to conduct evaluations should be evaluated by contractors selected by the Departments of Labor and Education. Renewal of these grants would at least partly depend on the extent to which evaluation evidence indicates success in expanding employment opportunities and earnings for the targeted groups.

Caveats

It must be emphasized that any new grant program should not be used to reduce current formula funding for WIA. Given how drastically WIA funds have already been cut in recent years and decades, and how tight those resources are for the cost-effective local employment services and training that they now fund, it is important that these new grants constitute a net addition of resources and not further cannibalize important existing programs.

Another concern is whether the current fiscal environment will allow for even the modest new expenditures that I propose above. On the one hand, with proposals for large cuts in federal discretionary nondefense spending, and in particular for job training, now being advanced, it might not be an auspicious time to propose increases. On the other hand, recent evidence suggests that expenditures in education are not quite as vulnerable to cuts at the federal level as are other discretionary expenditures, and those tied to job creation and employer needs might be less vulnerable to cuts if they enjoy some bipartisan support, especially from major employers and industry associations.

It might be possible to reallocate some of these funds from other employment and training funds. One possible source of funding for new competitive grants is revenues from H-1B visa fees. H-1B visas are granted to immigrant workers with high skills. The federal government through the Department of Labor uses the revenues from these visas for training U.S. workers. If alternative funding is not available, the cost of the program might be scaled back initially and ramped up slowly as successes become more apparent and political support grows over time.

Finally, one cannot ignore the short- and long-term weakness of the U.S. job market. Insufficient aggregate demand and uncertainty seem to be limiting overall job creation and the country’s recovery from the Great Recession, and new technologies and global forces might slow job creation over the longer term. This proposal is not designed to address a broader set of problems that seem to be deterring employers from creating large numbers of jobs, as they did in the 1980s and 1990s.

The need for enhancements of worker skills and of the quality of jobs created remains, however, and perhaps becomes even stronger in a tepid labor market. And the ability of those markets to absorb workers with higher skill levels and higher pay over the longer term should not be doubted, even when aggregate employment outcomes are disappointing.

Blueprint for Advancing High-Performance Homes

Given the economic devastation caused by the housing industry collapse in the United States and the years remaining until full recovery, it may seem out of touch to be talking about meeting 21st-century needs through high-performance homes that can deliver comfort and well-being to the average homeowner at levels rarely reached before. Yet the housing problems that the nation faces are systemic and do not lend themselves to short-term fixes. Carbon emissions from the housing sector are several times sustainable levels. Homes use excessive amounts of water. Average indoor air quality needs to be improved. There are safety and security problems. When homes do not meet the needs of elderly and handicapped people, medical costs increase. Homes are unaffordable to a growing segment of the population.

Furthermore, most homes, unlike electronics or cars, will be here decades from now, because they evolve more often than they are replaced. Thinking about homes holistically over their life cycles is the only way to reach their potential.

Basic knowledge of how to do this is not the problem. Since the 1970s, federally funded and private-sector research programs, along with overseas advancements, have improved understanding of most aspects of building performance. The basic science behind high-performance buildings is well understood, and a few top firms in architecture, engineering, and construction have created high-performance buildings, including homes that come close to economic, environmental, and social sustainability. The problem is moving this knowledge from larger and higher-priced buildings and the building elite to the broader market of moderately priced new home construction and retrofits, where costs are paramount and knowledge of high-performance principles is less prevalent.

Still, tough times are also innovative times, and when past ways of doing business have not worked well, the nation and its people have traditionally looked for change. Today’s economic pause is an ideal time to think about how to optimize the housing sector, by reducing costs through eliminating wasteful practices, by adopting best practices from manufacturing, and by unleashing the power of information technology. This is not an easy task, because the housing industry and the housing stock are quite diverse. But it is not an impossible task, and there are some clear reforms that can move the U.S. housing sector much of the way to achieving its potential.

What is high performance?

High performance is what people should expect from buildings, where we spend about 90% of our lives. The Whole Building Design Guide, a project of the federally chartered National Institute of Building Sciences, defines a high-performance building as being cost-effective over its entire life cycle, environmentally sustainable, safe, secure, functional, accessible, aesthetic, and productive (see www.wbdg.org).

For homes, achieving safety and security means withstanding foreseeable disasters. Functionality requires meeting residents’ expectations and needs. Accessibility means meeting the needs of the elderly and disabled through universal design. Aesthetics relate to a home’s desirability and ease of resale. Productivity depends on appropriate daylighting and a healthy indoor environment.

Cost-effectiveness is measured on a life-cycle basis, taking into account the initial purchase price, operational costs, improvements, and long-term maintenance and repairs. In high-performance construction or renovation, the initial design is done with future enhancement in mind. Building science dictates the most appropriate design, materials, technologies, and site orientation. Precise, cost-effective execution follows through engineering, construction, and operations. Once high performance is achieved, a home becomes as cost-effective to own as it is to purchase, energy costs are minimized if not eliminated, building performance is maximized, and the quality of life for building occupants is enhanced.

For environmental sustainability, maximizing energy efficiency is key. Because some homes will never achieve peak performance in energy efficiency, reaching overall environmental sustainability across the residential sector will require that other homes reach net zero energy (NZE) or become net energy producers. NZE residences use site-generated renewable energy to offset all fossil fuel use. The Home Energy Rating System Index, which measures energy use on site, scores NZE homes at 0. By comparison, the average existing single-family residence scores 120. Among other more energy-efficient home types, a home that meets the 2004 International Energy Conservation Code scores 100, a home that attains an Energy Star rating scores 85 or lower, and a home that meets the current level of the U.S. Department of Energy’s (DOE’s) Builder’s Challenge scores 70.

Relearning lessons

The United States experienced a concerted effort to improve energy efficiency in buildings, including homes, in the years after the 1973 OPEC oil embargo. But in the early 1980s, fossil fuel prices declined, domestic energy research plummeted, and only the visionaries of high performance pushed forward. It took the U.S. Green Building Council’s LEED rating system and certification program in the late 1990s to reengage the public. LEED is a comprehensive rating of a building’s potential environmental impact, but is only a first step toward high performance. It currently treats deep energy efficiency and most nonenvironmental high-performance goals as optional and does not emphasize building performance. Still, LEED is a remarkable success story that has established a U.S. market for better buildings, including homes, and a formula for addressing environmental sustainability and potentially high performance that nearly everyone can understand and undertake.

The Sustainable Buildings Industry Council and various councils established under the National Institute of Building Sciences have been leaders in integrating high-performance residential building attributes and in showing that simultaneously considering all aspects of a building’s performance leads to synergies and savings in new construction and renovation. Some LEED award winners have used advanced manufacturing techniques and quality assurance to move beyond environmental sustainability toward high performance. For instance, one award winner in Maine now offers new NZE homes with other high-performance attributes at only a slight premium over conventional construction. However, homes this high in quality are still the exception, and understanding of how to do a high-performance retrofit trails progress in high performance in new homes worldwide.

In several parts of Europe, new high-performance homes are now moving from niche market to mainstream. There is movement in Japan as well. Europe’s most energy-efficient new homes are near NZE, while taking seriously indoor air quality, water use, and other high-performance values. As Europe learned energy-efficient building design from the United States in the early 1970s, the United States can now learn about how to transition to high-performance–related standards, financing, business practices, and building components from Europe.

Consider the Swiss experience. About 15 years ago, what eventually came to be called the Minergie Association introduced the MINERGIE standard as a trademarked marketing label for low-energy–consumption buildings. Over time, additional voluntary MINERGIE standards were developed for passive solar buildings, along with a green option and an NZE option.

As the Swiss gained experience, they brought design and construction costs for new MINERGIE buildings, including residences, close to the costs of conventional construction. Currently, 25% of new construction across Switzerland meets the standards, and they are now influencing the mandatory building codes in some Swiss cantons. Banks are offering slightly lower rates to homeowners for MINERGIE buildings, consumer demand is increasing, and resale values of such homes now exceed those of conventional buildings of similar size and quality. In the retrofit market, however, MINERGIE has only a 1% share.

Making it happen

High-performance homes are available in the United States when money is not an object. The question is how high-performance can be adapted so that it is available in the rest of the housing market, where price is of utmost importance and buyers cannot afford everything they want. High performance will come when consumers feel they must have it and can afford it. Getting to that point requires streamlined new ways of doing business for the construction industry, for related real estate professionals, for those who service homes, and for residents. These changes must be structured to make high performance in homes easier to achieve and more affordable. Once that happens, those homes should be recognizable as superior and gain prominence in the market.

With this goal in mind, there is a set of eight actions that can accelerate movement toward high performance. They are:

Develop precise methods and analytic tools for estimating present value and life-cycle costs and for achieving high performance over time. A home is a very long-term asset that changes over time. Home purchases are based on quality and price, and existing homes are improved when it makes financial sense to do so. High-performance homes should be inherently more valuable to consumers than conventional homes, because they are designed to improve over time, are focused on user needs, and are cheaper to operate. But current methods of appraising and pricing rarely recognize this value.

Therefore, for homes with high-performance features to reach their market potential, there must be an objective, common methodology and related tools to calculate as precisely as possible a home’s true present value, the effect on that value of specific features and improvements, and the likely costs to own and operate the home. Lenders could then more accurately estimate the effect of lower operating costs on a borrower’s ability to handle a larger mortgage payment, builders could accurately estimate which energy efficiency and renewable energy upgrades make economic sense, and purchasers would have a better understanding of how specific upgrades increase their home’s value. The first generation of software designed to make this analysis is emerging in the banking industry and will soon begin testing for both new construction and retrofits. If successful in the banking industry, this valuation method could be extended to appraisers, architects, engineers, and insurers.
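
As an illustration of the kind of calculation such tools would automate, the sketch below compares the life-cycle cost of a conventional home with that of a higher-priced but cheaper-to-operate home. It is not the banking-industry software described above; the dollar figures, discount rate, 30-year horizon, and function names are hypothetical assumptions chosen only to show the structure of the analysis.

```python
"""Illustrative life-cycle cost comparison for a home purchase.

This is NOT the valuation software described in the article; all numbers,
names, and simplifications (level annual costs, a single discount rate)
are hypothetical assumptions used only to show the shape of the analysis.
"""

def present_value(annual_amount, rate, years):
    """Present value of a level annual cash flow at a given discount rate."""
    return sum(annual_amount / (1 + rate) ** t for t in range(1, years + 1))

def life_cycle_cost(purchase_price, annual_operating, annual_maintenance,
                    rate=0.04, horizon=30):
    """Purchase price plus discounted operating and maintenance costs."""
    return purchase_price + present_value(annual_operating + annual_maintenance,
                                          rate, horizon)

# Hypothetical comparison: conventional home vs. a high-performance home
# with a purchase premium but lower utility and maintenance bills.
conventional = life_cycle_cost(purchase_price=250_000,
                               annual_operating=3_000,
                               annual_maintenance=2_500)
high_performance = life_cycle_cost(purchase_price=265_000,
                                   annual_operating=800,
                                   annual_maintenance=2_000)

print(f"Conventional, 30-year life-cycle cost:     ${conventional:,.0f}")
print(f"High-performance, 30-year life-cycle cost: ${high_performance:,.0f}")
print(f"Difference in favor of high performance:   "
      f"${conventional - high_performance:,.0f}")
```

Real tools would of course also need to model energy price escalation, maintenance schedules, appraisal effects, and the value of staged upgrades, but the underlying discounted-cash-flow logic is the same.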

Lenders’ risk is also reduced when actual performance approaches design performance. Some people in the mortgage industry are considering requiring that borrowers provide them access to real-time energy and water use and other performance data that will be generated by smart home technology, and then conditioning the lowest interest rates on specific performance levels. The same data streams could give real estate professionals, appraisers, and potential buyers a more precise picture of a home’s actual performance and condition to use in pricing and purchase decisions.

Current construction practices create waste and increase the cost of future upgrades when they do not anticipate how a home will be modified over time. In contrast, a “high-performance–ready” strategy can be adopted as the first step toward high performance and would be modified in stages as technology improves and becomes more affordable. The foundation and other parts of the building envelope that cannot be upgraded would be designed initially at a high-performance level, including incorporating passive solar techniques and high R values where appropriate. If active solar is initially too expensive at a specific location, but prices will fall, the building should be properly oriented for solar and wired to permit easy future installation. For components whose replacement is predictable, such as windows, lighting, and certain appliances, the anticipated life of the initial component should be factored into construction budget allocations. If a home addition or an occupant aging in place is foreseeable, preparing for easy expansion or modification in the initial design could reduce waste and make the eventual attainment of high performance easier. A high-performance–ready strategy could lead to repeat customers for production builders if they periodically offer retrofit packages to entire subdivisions at a wholesale price. High-performance–ready principles should also be applied to major retrofits.

Leverage savings from energy efficiency and improved operations to pay for home improvements. Because most of today’s housing stock will exist for the foreseeable future, financing the incremental improvement of existing homes is essential, and redeploying currently wasted funds is the key to this happening. The Energy Information Administration projected that electricity and fossil fuel costs in the residential sector in 2011 would be more than $250 billion. Most of these costs would not be incurred by high-performance homes. Other wasted money can be found in water bills and unnecessarily high home maintenance costs. Collectively, wasted funds make up the largest pool of money now available to incentivize high-performance–ready homes, and they are largely in the hands of homeowners. A McKinsey & Company study in 2009 concluded that if the barriers to using energy savings for reducing building-related carbon emissions could be overcome, energy savings would pay for all cost-effective conversions through 2020. The barriers to actually tapping these funds, however, are formidable and differ depending on whether a home is tenant- or owner-occupied and whether the owner has good credit.

Home improvement loans informed by better analytic tools are the most direct financing method for harvesting the waste, but only if the increased value and reduced operating costs from improvements are allowed to be considered in lending decisions. Bipartisan legislation was recently introduced in the U.S. Senate to permit these changes. Loans work only if homeowners are creditworthy. During the recession, banks and credit unions have cut back on lending for construction, reduced loan-to-value ratio requirements, and approved only two out of five home improvement loan applications, but it can be hoped that these trends will reverse as the economy improves.

Energy savings performance contracts are designed for building owners who want to use a middleman. They enable owners to contract with private-sector energy service companies (ESCOs) that pay capital costs related to energy efficiency improvements in exchange for the contractual right to most of the resultant energy savings for up to 25 years. ESCOs use licensed, certified, and bonded professionals and defined retrofit packages. As of March 2010, energy savings performance contracts were saving the federal government an estimated $11 billion annually, of which the government retained $1.4 billion. ESCOs also work with state and local governments, universities, schools, and hospitals. Only occasionally have energy savings performance contracts been used in communities for homes and small commercial markets. ESCO requirements that all tax credits, depreciation, and any assets provided under the contract belong to the equipment owner reduce the desirability of these contracts to homeowners.

Property assessed clean energy (PACE) programs can reach homeowners who are less creditworthy. Municipalities float bond issues that fund 15- to 20-year energy improvement loans that are repaid through additions to property tax bills. Secured through a lien, the obligation to repay automatically transfers to the buyer when the home is sold. The increased monthly payments generally are offset by reduced energy bills. However, because PACE agreements put the taxing authority ahead of mortgage lenders for repayment, Fannie Mae and Freddie Mac oppose such programs. This has greatly slowed residential PACE programs. Also, homeowners tend to favor home improvement loans over PACE agreements, because PACE interest rates are higher and closing costs are generally at least 5%. Homes with tax or contractor liens are not eligible for PACE.

Some utilities run on-bill financing programs that can be geared to moderate-income homeowners who are unlikely to get bank financing for retrofits. The programs enable energy efficiency purchases on terms similar to energy purchases. Borrowing is done by a state or local government; utilities bill for pre-retrofit levels of service, and the loan balance is reduced by the amount of payment for energy not used. States sometimes subsidize these programs by absorbing any losses from unpaid debts or interest charges they incur. These programs probably would be more prevalent if utilities were permitted to count energy efficiency loans toward renewable portfolio standards that require utilities to satisfy a portion of energy demand with nonfossil energy.
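The on-bill mechanism described above can be made concrete with a small sketch: the utility keeps billing at the pre-retrofit level, and the gap between that bill and what the energy actually cost is credited against the loan balance. The tariff, usage, and loan figures below are assumptions for illustration only, and a single flat electricity rate is assumed for simplicity.

```python
# Minimal sketch of on-bill repayment: the customer keeps paying the
# pre-retrofit bill, and the "energy not used" retires the loan balance.
# Tariff, usage, and loan figures are illustrative assumptions.

rate_per_kwh = 0.13          # assumed flat electricity tariff ($/kWh)
pre_retrofit_kwh = 1_200     # assumed monthly use before the retrofit
post_retrofit_kwh = 850      # assumed monthly use after the retrofit
loan_balance = 6_000.00      # assumed retrofit cost financed on the bill

month = 0
while loan_balance > 0:
    month += 1
    billed = pre_retrofit_kwh * rate_per_kwh        # customer pays the old bill
    actual_cost = post_retrofit_kwh * rate_per_kwh  # what the energy actually cost
    credit = billed - actual_cost                   # payment for energy not used
    loan_balance = max(0.0, loan_balance - credit)  # credit retires the loan

print(f"Loan retired after {month} months ({month / 12:.1f} years) "
      "with no increase in the customer's monthly bill.")
```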

In short, there is a huge amount of money available for financing retrofits from energy savings, but the mechanisms for releasing that potential are not yet widely available. Programs that enable homeowners to participate without becoming energy or finance experts, and that show them why moving toward high performance is in their financial interest, are especially important. Although some progressive communities are expanding financing programs to cover major energy efficiency upgrades, combining these programs with high-performance–ready concepts has not yet occurred.

Bring modern design tools and manufacturing techniques to home construction. Typical home builders cannot be expected to have a research scientist’s knowledge of how to achieve high performance. They need a basic understanding of the principles of high performance, easy-to-use design tools that adapt a standard high-performance or high-performance–ready design to site-specific characteristics and the likely preferences of buyers, and the skills to execute the designs. Retrofits will be more complicated, but developing standardized retrofit packages that produce obvious benefits and can be carried out by workers and even homeowners with modest skills will be key to achieving success. The National Institute of Standards and Technology (NIST), the National Institute of Building Sciences, universities, and DOE’s national laboratories are all potential innovators here.

Current use of manufacturing techniques in housing is far different from the days of cheap housing built under permissive codes established by the Department of Housing and Urban Development. Increasingly, large segments of buildings arrive preassembled at building sites, and high-end builders are experimenting with computer-aided design and computer-aided manufacturing–based housing. As the residential building industry’s use of digital design and information technology becomes more sophisticated, the potential grows for a fully integrated process spanning design, construction, and commissioning. This will mean workforce changes. Because the transportation and installation of factory-built homes and subsystems affect quality, environmental performance, and energy performance, they are factors in achieving optimal design. The skill levels of equipment operators, installers, assemblers, and finishers need to be considered in the planning and design processes.

Computer-aided design and construction is already showing its potential to deliver a cheaper, yet more precisely built product. It improves communication among the client, builder, fabricators, and operators, thereby decreasing mistakes and waste while making it possible to deliver a higher-quality and more profitable product much faster than conventional site construction. Although automated fabrication design and the fabrication of building components currently are concentrated at the high end, “mass customization” of affordable, high-performance homes and performance-based upgrades should eventually lead to higher quality throughout the sector and ease the transition of ideas and technology from lab to marketplace. Many more machine-fabricated homes have been built in Europe than in the United States, but the practice is spreading domestically, especially in custom design of wooden buildings.

The renewal in 2010 of the America COMPETES Act authorized NIST’s Manufacturing Extension Partnership program to bring its manufacturing expertise to the construction sector. This concept is now being piloted in Philadelphia. NIST should make it a priority to use its network of 50 state partnerships to accelerate the adoption of modern manufacturing techniques throughout the building industry. DOE should also work with home improvement stores, which supply and support small contractors, to make sure that the stores’ product selections and the guidance they give contractors are consistent with high-performance–ready concepts.

Encourage experimentation and demonstration in government programs. Local clusters of high-performance and high-performance–ready homes are needed to stimulate the development of regulatory procedures, supply chains for advanced components, improved design tools and codes, markets for creative financing, and businesses that support the operation and maintenance of high performance. Encouraging early high-performance adopters will create models for others and accelerate the time when widespread public demand for high-performance homes will occur. In Europe, change toward highly efficient homes happened first in specific locales and then spread when consumers saw the advantages of advanced housing.

Technologically savvy communities, including university towns, are good candidates for early adoption. Community organizations and nongovernment organizations, such as Habitat for Humanity, can also spread knowledge of best high-performance practices to affordable and low-income housing. At the national level, home builders, bankers, and real estate agents often represent the status quo, but in local communities they can be a force for change. The Department of Housing and Urban Development, Fannie Mae, and Freddie Mac should be reviewed for opportunities to demonstrate and promote high-performance and high-performance–ready building. The Department of Defense, which has hundreds of thousands of residential units, a need to cut costs, and an expressed commitment to energy efficiency, is also a prime candidate for leading cutting-edge demonstrations.

Focus building research on high performance. Robust applied research programs in the public and private sectors are needed to fill current high-performance knowledge gaps, including in such areas as building materials that perform reliably over time, measurements and standards that will back up performance, and tools that more accurately predict and verify building performance initially and over time. DOE, with its multidisciplinary national laboratories and relationships with leading universities, has deep knowledge of energy efficiency and renewable energy systems, materials, and sensors. An important part of the DOE effort is the Builders’ Challenge, in which participating builders aim to be able to cost-effectively produce new NZE homes anywhere in the nation by 2030. Some home builders are already ahead of where they expected to be at this time. NIST has a long history in building and fire research and metrology, advanced manufacturing, and computer software standards, and shares smart grid responsibilities with DOE. NIST also is the main technical backup for building standards and code developers. These and other agencies have important roles to play in advancing the state of the art; staying current on overseas research; advocating for up-to-date building codes and standards; and making high-performance buildings, including homes, easier to construct and operate. Information dissemination through nationwide networks, including NIST’s Manufacturing Extension Partnership program, the Department of Agriculture’s Cooperative Extension Service, and various private-sector organizations, is also necessary.

Keep residential building codes and related standards technologically current and use “reach codes” to articulate a long-term vision. Building codes ensure a minimum level of public health and safety in residential and commercial buildings. The International Code Council (ICC), formed in a 1994 merger of regional model code-writing organizations, draws its membership from a cross-section of industry and government officials related to the building industry. It has issued and regularly updates a compatible set of codes covering various aspects of residential building design and construction, including the International Energy Conservation Code and the International Green Construction Code.

The country’s legally binding residential codes are primarily enactments by state and local governments of ICC codes, modified to local needs. More than 20 states have enacted the most recent code; most other states have earlier versions in place. A handful of states encourage voluntary compliance with ICC codes or have no statewide codes in place. Most areas with no code in place are rural, because cities generally adopt codes even if their states do not. Occasionally, Congress has conditioned aid on having up-to-date codes in place. The most recent code revisions’ requirement of a 30% increase in energy efficiency in new residential construction is proving to be a driver for increased energy efficiency. Some jurisdictions’ building codes go beyond minimum ICC requirements. Some have enacted voluntary reach codes to provide guidance to those aiming for high performance or NZE.

High-performance–ready concepts would be taken more seriously if ICC codes were revised to reflect the best affordable technology and the most current versions were routinely adopted and updated in all jurisdictions. Both building codes and reach codes should cover home performance, including monitoring, inspection, maintenance, and ease of repair, alteration, and upgrade. They also should consider the impact of home construction and modification on neighboring buildings’ attainment of high performance, including solar and renewable access. Reach codes are needed to articulate high-performance end goals and to provide guidance for the first movers toward high performance. Codes and voluntary standards will probably need to be regularly revised as more smart grids and smart homes are introduced.

Codes are developed by consensus, and some regular participants instinctively favor the status quo. The federal government and other advocates for change have increased their participation in recent code revision cycles. This trend toward representation of all interested parties must accelerate for codes to achieve their potential.

Coordinate and measure overall progress toward high performance. Preparing for widespread deployment of high-performance residential buildings is a complicated process that involves many actors. Expertise in the various parameters of high performance is scattered among several agencies. Expansion of President Obama’s October 2009 Executive Order 13514 on federal environmental sustainability to establish a central focus for high-performance policy in the White House would help in making sure that key steps are taken in a timely and coordinated fashion.

The order now requires each federal agency to appoint a designee to carry out the order, to produce a Strategic Sustainability Performance Plan, and to report annually to the Council on Environmental Quality on progress toward high-performance sustainable buildings and NZE buildings. Expanding the order’s definition of sustainable buildings to cover all high-performance attributes and broadening the scope of reporting to cover all federal activities related to high-performance buildings would put in place most of the pieces needed for the federal government to keep track of overall progress toward high-performance new construction and retrofits.

Make achieving high performance in home operations easier. The best-designed building cannot achieve high performance without proper building operation. Because energy use routinely varies by 25% or more in identical homes, occupant education and behavioral change are important. Students from an early age should be taught about natural resources and how their behavior affects those resources’ use. Adults also need aids, such as energy labeling of appliances and homes, to understand the probable operational costs of their purchases, along with smart meters to show the actual contribution of appliances and home features to monthly energy bills. Striving for goals such as reducing energy use by 20% by 2020 can also help spread knowledge of potential and actual energy use.

However, no amount of education will completely overcome human tendencies to avoid hassles and take shortcuts. Automation, annual inspection, third-party vendors, and accountability are needed to guarantee high performance. The buildingSMART Alliance, a council of the National Institute of Building Sciences, envisions Building Information Modeling (BIM) standards becoming a common platform throughout the life cycle of commercial facilities, including multifamily housing (see www.buildingsmartalliance.org). Buildings would be planned, designed, constructed, commissioned, and renovated using open BIM standards that allow data gathered in those phases to be accessed during operation, monitoring, maintenance, or repair. A scaled-down version of BIM would be desirable in the rest of the residential sector as well. In Germany, the Passive House Planning Package already is enabling more “average” home designers to achieve high-performance energy efficiency. Data-driven modifications may extend beyond the individual building and its operation as the existing home site, neighborhood, and community all digitize.

Sophisticated third-party monitoring systems that continuously watch the performance of an entire residence and its individual components for energy use, environmental quality, safety, security, and user preferences are natural technological extensions of the variety of security and maintenance services many homeowners now purchase. Given recent trends, it is expected that home energy applications analogous to phone apps will become an integral part of future homes. Such apps will give alerts, originated by new generations of sensors and remote monitoring systems, when human intervention is necessary for maintenance or repairs, to reduce peak or overall energy use, and to improve quality of life in ways unforeseeable today. It is an important part of the high-performance–ready concept to think about how control and sensor systems will be upgraded as technological advances occur and how predictable upgrades to future home energy uses, such as electric vehicle charging, can be made easily.
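As a purely illustrative sketch of the kind of alert logic such home energy applications might run, the snippet below flags hours in which metered electricity use exceeds an expected baseline by a set margin. The baseline, threshold, and readings are hypothetical and are not drawn from any existing product.

```python
# Illustrative sketch only: flag hours when a home's metered electricity use
# runs well above an expected baseline, the sort of simple rule a future home
# energy app might apply to smart meter data. All numbers are hypothetical.

hourly_baseline_kwh = 1.2     # assumed typical hourly use for this home
alert_threshold = 1.5         # alert when use exceeds 150% of baseline

# Assumed smart meter readings: (hour of day, kWh used in that hour)
readings = [(0, 0.9), (7, 1.4), (12, 1.1), (18, 2.3), (21, 2.0)]

def check_readings(readings, baseline, threshold):
    """Return the (hour, kWh) pairs whose use exceeds baseline * threshold."""
    return [(hour, kwh) for hour, kwh in readings if kwh > baseline * threshold]

for hour, kwh in check_readings(readings, hourly_baseline_kwh, alert_threshold):
    print(f"Alert: {kwh:.1f} kWh used in hour {hour}, vs. an expected "
          f"{hourly_baseline_kwh:.1f} kWh; check HVAC or appliances.")
```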

At the same time, the natural systems that support the built environment should not be overlooked. Site orientation and design strategies that use or block the Sun for passive heating and cooling and prevailing winds for natural ventilation are among the oldest methods for designing higher-performing homes, and they also can be designed not to be dependent on human behavior.

Meeting the challenge

The challenge, then, is to think high-performance and high-performance–ready for new home construction and for retrofits, focusing on what is available and marketable in improvement packages today while laying the groundwork for future complete conversion to high performance when it makes financial sense with or without government subsidies. In this interim period, efforts should center on continuing the research, doing the demonstrations, upgrading the financial and design tools, regularly revising the building codes, thinking strategically, and modernizing construction practices. There also needs to be a sustained effort to understand how performance advances in larger and more costly buildings can be simplified for use in the residential market. Significant pieces of the puzzle are falling into place, but more in high-end and new construction than in the retrofit of the average existing home.

Achieving sustainability will require millions of small steps, so the federal government needs to measure progress and provide continuity. With all parties acting together to lay the proper groundwork, it should be possible that by the end of the Better Building Challenge in 2030, monitoring systems, renewable energy technologies, energy storage systems, and other components will all have come down in price, the smart grid will be operational, and smart appliances and home energy-saving applications will be commonplace. The nation would then be well on the way to sustainability in new housing. Completing the conversion to high-performance homes would then be a deployment exercise that will largely be carried out in the private sector and retrofit market. It will involve customizing knowledge of high performance to the wide variety of existing homes, many at the low end where the economics are more challenging and subsidies are likely to be necessary.

The challenges are great, but so are the benefits, and the consequences of doing nothing are so great that there is no choice but to try.

The Climate Benefits of Better Nitrogen and Phosphorus Management

Nearly four decades have passed since the phrase “global warming” first appeared in a scientific journal. Writing in Science in 1975, geochemist Wallace Broecker warned that rising atmospheric carbon dioxide (CO2) levels would result in a world climate unprecedented in modern human history. Now, as Broecker’s forecast is becoming a reality, we can no longer just debate ways to slow climate change; we must figure out how to live with it. Although much of the work in this area has focused on the carbon cycle, expanding our focus to other elements, especially nitrogen and phosphorus, can make a positive contribution. By providing more fertilizers to farmers in some of the world’s poorest nations and reducing nitrogen and phosphorus losses to the environment in developed and rapidly developing ones, we could reduce some of the risks of a changing climate. At the same time, a more efficient, less polluting relationship with the global nitrogen and phosphorus cycles would mitigate a host of other environmental challenges, increase food security, improve human welfare, lessen some national security concerns, and probably save money.

A century ago, world leaders were asking how they would be able to feed a fast-growing population. At the time, the potential for food growth was constrained by finite reserves of nitrogen and phosphorus that could be readily accessed for crop fertilizers. Only two generations later, the situation was entirely different. Widespread implementation of the Haber-Bosch process—an industrial means for converting the limitless pool of atmospheric N2 into usable forms of nitrogen, including fertilizer—had released much of the world from nitrogen constraints on crop growth. In parallel, the ability to locate and mine reserves of phosphorus rose markedly. In combination with revolutions in plant breeding and genetics, these developments formed the foundation for the Green Revolution, rapidly increasing world food production.

Feeding people is a good thing, but our ability to transform the nitrogen and phosphorus cycles has had startling and unsustainable consequences. In just the past two generations, humans have shifted from modestly to dominantly affecting the global nitrogen and phosphorus cycles. Although N2 in the atmosphere is unreactive and phosphorus in rocks is unavailable to organisms, we now create more reactive nitrogen every year than all natural processes on land combined, and we have tripled the rate at which biologically available phosphorus enters ecosystems. Much though not all of that new nitrogen and phosphorus becomes fertilizer. Substantial amounts are also used for making other industrial goods, such as plastic and nylon. Further, billions of pounds of additional reactive nitrogen are created inadvertently as a byproduct of fossil fuel combustion. These changes represent massive and unprecedented reorganizations of two element cycles on which all life depends.

Not surprisingly, these reorganizations have brought unintended consequences. Excess nitrogen and phosphorus in the environment cause diverse environmental ills, many of which directly affect human health and welfare. Beyond the effects on climate discussed below, these include air pollution, acid rain, marine and freshwater eutrophication, biodiversity loss, and the stimulation of some invasive species. Freshwater eutrophication carries a multibillion-dollar price tag in the United States alone. Some estimates suggest that safe planetary levels of nitrogen and phosphorus have already been exceeded, with long-term consequences for humanity.

Of course, there is a major upside to our domination of the global nitrogen and phosphorus cycles. Billions of people depend on the ability to make and disseminate mineral fertilizers, and a reasonable future for humanity must include the continued creation of such fertilizers at substantial rates. However, we are a long way from achieving an equitable, efficient, and sustainable use of nitrogen and phosphorus in agriculture, and we are not close to reducing nitrogen and phosphorus pollution to tolerable levels.

In wealthier countries with modern forms of intensive agriculture, large fractions of nitrogen and phosphorus applied to fields, often more than half, never make it into the crop itself. Concentrated animal feeding operations create additional inefficiencies as nitrogen and phosphorus in animal feed are transported to feed lots, but the nitrogen and phosphorus in animal excreta are not returned to crops. The explosion of the biofuel industry has caused a similar redistribution of these elements to areas near refineries in a way that has nothing to do with food production. Inefficient nitrogen and phosphorus use in agriculture, along with industrial pollution, underpin the environmental challenges listed above.

Although the sources of excess nitrogen and phosphorus in the environment are similar, phosphorus, unlike nitrogen, remains a finite, diminishing, and irreplaceable resource, and one that is concentrated in just a few countries. Although the extent of readily accessible phosphorus reserves remains debated, it is clear that in a business-as-usual scenario, the United States will become increasingly dependent on foreign sources of phosphorus, many of which lie in nations that may be unstable and/or pose challenges to foreign policy and national security. Sooner or later, the United States and the world will need to become far more efficient in their use of phosphorus or lose the ability to maintain high rates of food production at reasonable cost.

Thus, even without the prospect of climate change, we need to shift the ways we interact with and manage the nitrogen and phosphorus cycles. Climate change multiplies this concern, because many of the environmental effects of excess nitrogen and phosphorus, as well as food insecurity in poorer countries, are likely to worsen under a rapidly changing climate. Fortunately, maintaining the benefits of our nitrogen and phosphorus use while greatly reducing the unwanted consequences does not require phantom technologies or massive social upheaval. We can begin to improve agricultural nutrient-use efficiencies and reduce industrial forms of nitrogen and phosphorus with current knowledge and technology and without suffering major economic blows. Similarly, we know that increased access to nitrogen and phosphorus fertilizers in regions such as sub-Saharan Africa can lessen food scarcity and initiate cascading social, economic, and environmental benefits.

Climate effects

Pursuing more equitable and efficient nitrogen and phosphorus use has clear environmental, socioeconomic, and national security benefits. Could improving our management of the nitrogen and phosphorus cycles also contribute to climate change mitigation or adaptation? For the most part, climate mitigation is a question about nitrogen: Because reactive nitrogen exists in many atmospheric forms, it has multiple and counteracting effects on the radiative balance of the atmosphere. The major warming effect is via increased emissions of nitrous oxide (N2O), a greenhouse gas that is 300 times more potent than CO2. On the cooling side, human-created reactive nitrogen can form aerosols that reflect the Sun’s energy back to space. Moreover, airborne nitrogen compounds that are produced by agriculture, transportation, and other industrial sectors can fertilize nearby forests, thereby removing CO2 from the atmosphere.

Nitrogen’s cooling effects have prompted some observers to say that human acceleration of the nitrogen cycle may be beneficial. However, when all of the warming and cooling effects of nitrogen are calculated, they appear to largely cancel each other out in the short term. At best, recent estimates suggest a small net cooling effect, but such effects will diminish as any boost in forest production saturates with time, and because the effective contribution of N2O to climate warming is forecast to double or more by 2100. Thus, continued release of excess nitrogen to the environment will probably accelerate climate change with time and will also contribute to further depletion of stratospheric ozone in this century.

However, this focus on the rate of climate change misses the larger picture. Because climate change is already a reality and is certain to continue under any scenario, an assessment of its risks must include not only the pace of change but the inevitable effects. When viewed in this fashion, excess nitrogen and phosphorus in the environment add to the risks and clearly provide opportunities for mitigation and adaptation.

Consider air pollution. Tropospheric ozone (O3, or smog) is a pollutant with widespread negative consequences for human health and crop production; it is also a greenhouse gas. Atmospheric concentrations of nitrogen oxides (NOx) regulate the formation of tropospheric O3, and so does temperature. Using business-as-usual scenarios for reactive nitrogen creation and CO2 emissions, several projections suggest that O3-related human mortality and crop damage will rise sharply in the next few decades, especially in tropical and subtropical regions where rising temperatures and rising NOx concentrations will interact synergistically to produce more O3.

But what if we reduced NOx emissions? U.S. experience shows this can be done. During the past decade, NOx concentrations have fallen, largely because of Clean Air Act regulation of industrial and transportation emissions. So far, those reductions have reduced but not eliminated O3 risks in the United States, because NOx is emitted by other sources and because rising temperatures have erased some of the gains associated with NOx reductions. But if we can continue to reduce NOx—by targeting industrial emissions and improving agricultural efficiencies—then at some point the effect of temperature won’t matter, because high O3 levels cannot occur in the absence of substantial NOx concentrations. Thus, in the case of O3, the best way to reduce or remove the threat that warming-enhanced O3 poses to human health—its climate change risk—is almost certainly via the mitigation of nitrogen pollution.

Managing the nitrogen cycle to reduce smog mitigates some climate forcing and reduces the risk that climate change will worsen air quality. Other examples primarily involve risk reduction. For example, excess nitrogen in the atmosphere also forms another regulated class of air pollution known as fine particulate matter (PM). Not only do the chemical reactions that lead to PM formation go faster at higher temperatures, but PM’s lifetime in the air also depends on rainfall. Shift toward a drier climate, as is forecast for significant portions of the United States, including most of those with the highest rates of population growth, and PM risks will worsen. In this case, nitrogen is not the only contributor to PM formation, but reducing nitrogen emissions could help. And although dollars are a poor metric for evaluating the benefits of improved health, they still offer perspective: In 2002 terms, the annual costs of nitrogen-related air pollution in the United States were conservatively estimated at $17 billion, with most of the cost attributed to a shortening of human lives.

Move from air to water and similar examples can be found. Nitrogen- and phosphorus-driven freshwater and marine eutrophication has major socioeconomic consequences that include lost livelihoods, reduced property values, damage to fisheries, loss of recreational opportunities, and several health risks. As with air pollution, evidence suggests that human-driven climate change will, on average, worsen eutrophication in freshwater and marine systems. The reasons are complex and system-specific, but in general a warmer climate means increased stratification of water bodies, decreased oxygen-holding capacities, higher potential loading of nitrogen and phosphorus, greater concentration of nitrogen and phosphorus in regions that will become hotter and drier, shifts in biological processes that can elevate the risks, or all of these. Thus, without changing current trajectories, the effects of eutrophication will spread and worsen in the coming decades. But widespread eutrophication cannot occur without enough nitrogen and phosphorus loading to aquatic systems. Lower that input and another climate risk is reduced or removed, even with the climate warming rapidly.

The potential benefits extend to food security. The link between ground-level ozone and crop damage mentioned above is one example, but there are many others. Not everything about a warmer world will be negative; in agriculture, for example, there are likely to be winners and losers across nations and regions. However, most forecasts suggest that the biggest hazards affect those who can least afford increased risk: developing nations in tropical and subtropical regions that are already struggling to secure an adequate food supply. Here, a concerted effort to extend the benefits of the Green Revolution to those who have missed out—Africa being the most notable example—could be a rapid and substantial counterweight to the growing threat of climate change. That goal is not just a moral imperative. The Defense Department, the State Department, and other U.S. entities focused on foreign policy and national security list climate change as a growing concern. Climate-related threats to basic human needs such as clean water and food can interact with social unrest and conflict, with consequences that spread well beyond the borders of the affected nations. Enhancing access to nitrogen and phosphorus fertilizers in Africa and other regions that do not now have enough is one step toward increasing food security and reducing the risk of social disruption. Although too much nitrogen and phosphorus poses threats to environments and society in much of the world, not enough nitrogen and phosphorus is a major threat and a significant offset to climate risk in the poorest countries.

In addition, a warming climate poses threats to biodiversity, clean water, and the health of coral reefs and other near-shore marine ecosystems, as well as accelerating the spread of parasitic and infectious diseases. Pollution with nitrogen, phosphorus, or both also carries risks for all of these sectors. Unlike ground-level O3 or eutrophication, nitrogen and phosphorus are generally not the major agents of risk, but lowering their release to the environment would lessen the multiple stresses that alter ecosystems and affect human well-being.

Overall, the connections between nitrogen, phosphorus, and climate are not just about net effects on rising temperatures or changing precipitation patterns. When expanded to consider the need to adapt to a changing climate, it is clear that business as usual with nitrogen and phosphorus will enhance our risks and make adaptation to climate change more difficult. However, concerted efforts to reduce nitrogen and phosphorus pollution from industry, improve the efficiency of their use in agriculture, and enhance their availability for use in fertilizer in food-insecure regions would have multiple benefits, including a reduction of climate risks. We believe that the second scenario is feasible and would provide multiple benefits to society, from local to global scales.

Admittedly, the threats and opportunities of altered nitrogen and phosphorus cycles pose some unique challenges for human society. Unlike the risks from fossil fuel CO2, where it is possible and ultimately necessary to envision a shift to energy systems that are carbon-free, food production requires nitrogen and phosphorus, and we must enhance natural supplies of these nutrients to meet world food demands. Thus, managing the nitrogen and phosphorus cycles sustainably becomes a classic optimization problem, one that emphasizes waste reduction while enhancing food quality, quantity, and accessibility.

Fortunately, the opportunities for doing so abound and in many cases are already under way. In the case of U.S. agriculture, yields have continued to rise in recent decades while fertilizer use has remained steady; the efficiency of nitrogen and phosphorus use has improved. Better efficiencies have been achieved in multiple ways, ranging from the use of precision agriculture technology that times fertilizer additions to match crop demand, to comparatively low-tech solutions such as the use of cover crops that reduce nutrient losses. Still more can be achieved. Using known management and technology solutions, nitrogen and phosphorus losses could be cut dramatically in some U.S. food systems without altering yields. Reaching very high efficiencies would not be easy, because it would require a combination of strategies that mix better on-field management with changes in incentive structures, crop types, and dietary preferences. However, loss reductions in the range of 30 to 50% could be achieved in many systems without these more significant transformations in the food system. That level of cuts would make a substantial dent in the downstream and downwind effects of excess nitrogen and phosphorus. The opportunities for improvement are even greater in rapidly developing economies such as China, which now uses much more nitrogen and phosphorus fertilizer much less efficiently than either the United States or Europe, and at a much higher cost in pollution and human health. As fertilizer, especially phosphorus fertilizer, becomes scarce, greater efficiency in its use will only sharpen a country’s competitive edge in the global economy.

On the industrial side, the Clean Air Act demonstrates that regulatory policies can reduce pollution without any compelling evidence for the kinds of economic trauma sometimes anticipated. The need to regulate NOx and other pollutants spawned the development of new technologies that can scrub emissions at a higher rate and lower cost. In some cases, combining regulatory with market-based solutions may achieve even greater reductions, but regardless of the instruments used, the mechanisms for further improvement clearly exist and should be pursued. Given the well-demonstrated health consequences of nitrogen-related air pollution, along with the additional risks posed by a changing climate, now is not the time to relax emission controls.

In Africa, work by the Millennium Villages Project (MVP) and others has shown that improving access to nitrogen and phosphorus fertilizers can make a substantial difference in human well-being. Perhaps most notably, the village-scale improvements first demonstrated by the MVP catalyzed the government of Malawi to enhance access to nitrogen fertilizer and improved seed varieties in a policy targeted at its poorest farmers. The result was a jump in food production that took Malawi from years of food shortage to being a net exporter of grain. The United States, other countries of means, and private entities should focus more of their food security efforts on helping such policies become widespread.

But progress in nitrogen and phosphorus management will need to go beyond just implementing already known policies. This is an interdisciplinary challenge that requires better communication among natural and social scientists, economists, engineers, policymakers, and a host of stakeholders. Those who understand the issues best also must do a better job of educating the public. Many of the impediments to progress on nitrogen and phosphorus issues come from a lack of public understanding. There is a substantial role for personal choice, but without effective communication, we can’t expect informed choices.

Overall, we suggest that improving the management of nitrogen and phosphorus will bring multiple benefits to humanity. Climate change can provide an additional incentive for improving management. At the same time, focusing on the multiple compelling reasons for improving nitrogen and phosphorus management may represent an opportunity to make progress on climate policy in ways that are less politically divisive. Particularly in the United States, movement on either climate mitigation or adaptation has been notoriously difficult, especially when framed as a response to climate change alone. However, when other more tangible benefits exist, ranging from economic to national security to other forms of environmental protection or repair, the barriers to progress may be less daunting. Whether progress comes under a climate change banner, or whether the climate benefits ride along behind other incentives, the United States and the world must move down this path.

California’s Pioneering Transportation Strategy

No place in the world is more closely associated with the romance of the automobile and the tragedy of its side effects than California. Having faced the problem of traffic-damaged air quality, the state became a leader in policies to reduce auto emissions. Now that transportation is the source of 40% of the state’s contribution to climate change, California has become a pioneer in the quest to shrink its transportation footprint and a possible trailblazer for national policy.

Two political circumstances favor California’s climate policy leadership. First, it has unique authority and political flexibility. Because California suffered unusually severe air quality problems as early as the 1940s and adopted requirements for vehicles and fuels before Congress was moved to act, the U.S. Congress in 1970 preserved the state’s authority over vehicle emissions, as long as its rules were at least as strong as the federal ones. California has continued in a leadership role for over 40 years, launching many of the world’s first emission controls on internal combustion engine vehicles, reformulated gasoline, and zero-emission vehicles. Since the 1977 amendments to the U.S. Clean Air Act, other states have enjoyed the option of following the more stringent California standards instead of the federal standards. The California legislature took advantage of this authority in 2002 when it directed the California Air Resources Board (CARB) to adopt limits on vehicular emissions of greenhouse gases (GHGs), designating these emissions as a form of air pollution.

Second, California has been able to act in advance of the national government because it has more political space to maneuver. The Detroit car companies have relatively small investments in California, and coal companies are absent. California is home to leading research universities, innovators, and entrepreneurs, as well as a diverse base of solar, wind, ocean, and geothermal energy resources. The state is also home to the world’s largest venture capital industry, which favors clean energy policy. California politicians feel freer to pursue aggressive energy and climate policies than do their counterparts in many other states.

In 2005, Governor Schwarzenegger issued an executive order requiring the state to reduce GHG emissions to 80% below 1990 levels by 2050. This goal has also been adopted by the European Union and many other governments. By acting early, California has launched a policy experiment that could produce valuable lessons for the United States and other countries.

The 80% goal cannot be met without dramatic change in driver behavior and transportation technology. Researchers and companies have made rapid progress in recent years in improving both conventional and advanced vehicle technologies. Performance-based regulations for gasoline-powered cars are expected to double fuel economy between 2010 and 2025, and rapid advances are being made with advanced lithium batteries and vehicular fuel cells. With greater emphasis on energy efficiency and low-carbon technologies, dramatic reductions in oil use and GHG emissions are achievable. A key ingredient in reaching this goal will be government policy to stimulate innovation, encourage consumer behavior changes, and direct society toward large reductions in oil use and GHG emissions.

Emphasize regulation

The California strategy departs from the common approach to climate change in two notable ways: It does not depend on international agreements, and although it incorporates market instruments, it relies primarily on performance-based regulatory actions. Both elements are critical to its success.

Although climate change is a global problem that will require global action, transportation is essentially a local concern. International cooperation will be necessary to resolve problems in maritime and air transport, but action on cars and trucks can be taken at a national or state level.

In addition, although many experts say that the solution to our energy and climate problems is sending the correct price signals to industry and consumers, the transport sector’s behavior is highly inelastic in that it does not change significantly in response to changes in fuel prices, at least in the range that is politically acceptable. Europe has gasoline taxes over $4 per gallon and still finds the need to adopt aggressive performance standards for cars to reduce GHGs and oil use. These high fuel taxes certainly have an effect in reducing the average size and power of vehicles and leading people to drive less, but the resulting reductions in fuel use and GHGs still fall far short of the climate goals.

Large carbon (and fuel) taxes are efficient in an economic sense, but their effect on vehicles, fuels, and driving is modest. The European experience suggests that huge taxes would be needed to motivate significant changes in investments and consumer behavior, but U.S. public opinion is hostile to even small energy tax increases. Moreover, the energy market is distorted by a number of factors, including the failure to internalize the total cost of pollution and climate change, the market power of the OPEC cartel, technology lock-in, and the fact that many energy users such as apartment renters and drivers of company cars are insulated from the price of energy because they do not pay the bills.

We are not saying that getting the prices right and adopting international climate agreements and carbon taxes are irrelevant or unimportant. But we are saying that much progress can, and probably will, be made in the transport sector in the next decade without international agreements and without getting the prices right. California is leading the way with policies that address three critical elements of the transportation system: vehicles, fuels, and mobility.

Vehicles

Americans like their cars big and powerful. U.S. fuel economy standards remained stagnant for 30 years, until 2010, while Japan, Europe, and even China adopted increasingly aggressive standards to reduce oil use and GHGs. California played a leadership role in breaking the paralysis in U.S. efficiency standards. In 2002, California passed the so-called Pavley law, which required a roughly 40% reduction in vehicle GHG emissions by 2016. The car companies filed lawsuits against California and the states that followed California’s lead. When those lawsuits failed, the Bush administration refused to grant California a waiver to proceed, even though waivers had been granted routinely for California’s previous vehicle emissions regulations. In 2009, President Obama not only agreed to grant a waiver but committed the entire country to the aggressive California standards.

And then in August 2011, at the request of President Obama, the Department of Transportation, Environmental Protection Agency, and CARB announced an agreement with the major automakers to sharply reduce fuel consumption and GHG emissions by another 4 to 5% per year from 2017 to 2025. California was recognized as playing an instrumental role by threatening to adopt its own more stringent rules if the federal government and automakers did not agree to strong rules. CARB expects to adopt these rules in January, with the federal government following suit in summer 2012.

These regulations requiring automakers to reduce oil consumption and GHG emissions are central to California’s GHG reduction efforts and are expected to elicit larger reductions than any other policy or rule, including carbon cap and trade. The reductions are also expected to be the most cost-effective, with consumers earning back at least twice as much from fuel savings over the life of their vehicles as they would pay for the added cost of the efficiency improvements, even after discounting future fuel cost savings.
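A rough version of that comparison can be sketched as a discounted cash flow: discount each year’s fuel savings and compare the total with the up-front cost of the efficiency technology. The vehicle lifetime, savings, cost, and discount rate below are assumptions chosen for illustration, not CARB’s or EPA’s actual regulatory estimates.

```python
# Rough sketch of the fuel-savings-vs.-technology-cost comparison described
# above. All inputs are illustrative assumptions, not regulatory figures.

added_technology_cost = 1_800.00   # assumed extra purchase price for efficiency gear
annual_fuel_savings = 450.00       # assumed yearly fuel cost avoided
vehicle_lifetime_years = 15
discount_rate = 0.05               # assumed consumer discount rate

# Present value of a constant annual savings stream over the vehicle's life.
pv_savings = sum(
    annual_fuel_savings / (1.0 + discount_rate) ** year
    for year in range(1, vehicle_lifetime_years + 1)
)

print(f"Discounted lifetime fuel savings: ${pv_savings:,.0f}")
print(f"Added technology cost:            ${added_technology_cost:,.0f}")
print(f"Savings-to-cost ratio:            {pv_savings / added_technology_cost:.1f}x")
# With these assumed inputs the discounted savings come to roughly 2.5 times
# the added cost, consistent with the "at least twice as much" claim above.
```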

The federal government has recently asserted leadership in supporting the commercialization of electric vehicles (EVs), with the Obama administration offering tax credits of $7,500 per car and billions of dollars in loan guarantees and grants to EV and battery manufacturers. In addition, in 2009 the federal government adopted vehicle GHG standards that provide strong incentives to automakers to sell EVs.

But California has a much more ambitious long-term policy commitment to EVs. In 1990, California adopted a zero-emission vehicle (ZEV) requirement, mandating that the seven largest automotive companies in California “make available for sale” an increasing number of vehicles with zero tailpipe emissions. The initial sales requirement was 2% of car sales in 1998 (representing about 20,000 vehicles at the time), increasing to 5% in 2001 and 10% in 2003.

The intent was to accelerate the commercialization of electric (and other advanced) technology, but batteries and fuel cells did not advance as fast as regulators hoped. The ZEV rule, after surviving industry litigation and multiple adjustments to reflect the uneven progress of hybrid, fuel cell, and battery technologies, now bears little resemblance to the original. Although some consider the ZEV mandate a policy failure, others credit it with launching a revolution in clean automotive technology.

The actual numbers of vehicles sold to consumers as a result of the ZEV program are certainly not what CARB originally expected. Only a few thousand EVs were sold in the United States in the first decade of this century, most of them by start-ups such as Tesla. But 2011 could mark a breakthrough because for the first time major automakers have made firm commercial commitments to the technology. Nissan began selling its all-electric Leaf, and General Motors its plug-in hybrid EV, the very first commitment of major car companies to mass-produce plug-in vehicles in over a century. Sales of the two vehicle models amounted to fewer than 20,000 worldwide in 2011 (about half of which were in California), but both companies are expanding factory capacity in anticipation of each selling 50,000 or more in 2012, and virtually all major car companies have plans to sell plug-in vehicles in the next couple of years.

Could another policy have accomplished the same at less cost with less conflict? Who knows? What’s certain is that the ZEV program accelerated worldwide investment in electric-drive vehicle technology. The benefits of those accelerated investments continue to sprout throughout the automotive world, and California policy was the catalyst. In addition to the ZEV mandate, California has enacted various other incentives in recent years to support the introduction of fuel-efficient and low-GHG vehicles, including allowing access to carpool lanes and providing rebates to buyers of EVs.

Fuels

California has also taken steps to encourage the development and use of low-carbon alternative fuels, and the federal government has followed with its own aggressive actions. The federal Renewable Fuel Standard (RFS) requires the production of 36 billion gallons of biofuels by 2022, and Congress and President Obama have enacted a series of provisions that promote EVs. But these efforts have serious shortcomings.

The RFS biofuels mandate has led to the annual production of more than 12 billion gallons of corn-based ethanol, but almost no low-carbon, non–food-based biofuels. Corn ethanol is roughly similar to gasoline in terms of life-cycle carbon emissions. The EPA has repeatedly given waivers to oil companies that allow them to defer investments in lower-carbon advanced biofuels.

California has gone further in pioneering a regulation that provides a durable framework for the transition to low-carbon fuel alternatives. Its low-carbon fuel standard (LCFS), adopted in 2009 and taking effect in 2011, applies to all fuel alternatives, unlike the biofuels-only RFS, and it allows oil companies to trade credits among themselves and with other suppliers such as electric utilities. Also, unlike the federal RFS, it provides incentives to make each step in the energy pathway, from the growing of biomass to the processing of oil sands in Canada, more efficient and less carbon-intensive. The LCFS provides a framework for all alternatives to compete. Versions of California’s LCFS are being enacted in other places, including British Columbia and the European Union, and many states are in the advanced stages of review and design of an LCFS.
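The credit-and-deficit accounting at the heart of an LCFS can be sketched simply: each fuel generates credits or deficits in proportion to how far its life-cycle carbon intensity falls below or above the annual standard, scaled by the energy it supplies. The carbon-intensity values and volumes below are hypothetical, chosen only to show the bookkeeping; they are not actual CARB figures.

```python
# Minimal sketch of LCFS-style credit accounting: credits (or deficits) scale
# with how far each fuel's life-cycle carbon intensity (CI) falls below (or
# above) the annual standard. CI values and volumes are hypothetical.

standard_ci = 95.0   # assumed annual CI standard, in gCO2e per MJ of fuel energy

# (fuel name, life-cycle CI in gCO2e/MJ, energy supplied in millions of MJ) -- assumed
fuels = [
    ("gasoline blendstock", 99.0, 500.0),
    ("corn ethanol",        90.0,  60.0),
    ("cellulosic ethanol",  30.0,   5.0),
    ("grid electricity",    40.0,  10.0),
]

total = 0.0
for name, ci, energy_mmj in fuels:
    # (gCO2e/MJ difference) x (millions of MJ) = metric tons of CO2e.
    # Positive values are credits; negative values are deficits.
    credits = (standard_ci - ci) * energy_mmj
    total += credits
    print(f"{name:20s} {credits:+10.0f} t CO2e")

print(f"{'net position':20s} {total:+10.0f} t CO2e "
      "(a net deficit must be covered by buying credits from other suppliers)")
```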

Because the LCFS is novel, casts such a wide net, and requires major investments in low-carbon alternative fuels, it has been controversial. Economists argue that a carbon tax would be more economically efficient. Energy security advocates and producers of high-carbon petroleum, such as that from the Canadian oil sands, are concerned that it will discourage investments in unconventional energy sources and technologies that could extend the world’s supply of oil. Oil companies correctly argue that the imposition of the LCFS in one state will encourage the shuffling of high-carbon ethanol and petroleum to regions that don’t discourage those fuels. And corn ethanol producers complain about the details of how emissions are calculated. Moreover, administering this seemingly simple rule requires vast amounts of technical information and great transparency in the calculation of life-cycle emissions.

The LCFS is a powerful policy instrument that is already stimulating innovation. Oil company executives in Europe and North America acknowledge privately that the LCFS has motivated their companies to reduce the carbon footprint of their investments and to reassess their long-term commitment to high-carbon fuels such as fuel from oil sands. But to realize the full benefits of an LCFS policy, more governments must adopt similar policies to minimize fuel shuffling. Also, as with low-carbon vehicles, additional complementary policies are needed to target the many market failures and market conditions that inhibit the transition to low-carbon fuels. For example, investments in hydrogen stations are needed to reassure car companies and early buyers of hydrogen fuel cell cars that fuel will be available. It is a classic chicken-and-egg dilemma. California is considering a requirement that oil companies build a certain number of hydrogen stations in accordance with the number of hydrogen cars sold.

Mobility

The third major factor in transportation is the vehicle user. GHG emissions will be reduced if people drive less, and people can be nudged to drive less by cities that reduce urban sprawl, enhance public transportation, and raise the price of travel to incorporate the externalities of carbon emissions, pollution, and energy security. Still other user-related strategies to reduce GHG emissions include better driving habits, keeping tires properly inflated, and removing unneeded roof racks that increase wind resistance. Better road maintenance and traffic management can also reduce energy waste and excess emissions.

Efforts to alter vehicle use have enjoyed little success. Indeed, vehicle use has increased substantially, despite decades of federal initiatives such as “Transportation System Management,” “Transportation Control Measures,” and “Transportation Demand Management,” as well as the construction of networks of carpool lanes and increased subsidies for public transportation. After all these efforts, the number of vehicles per licensed driver has increased to 1.15, public transport has shrunk to less than 3% of passenger miles, carpooling has also shrunk, and vehicle miles per capita have steadily increased. Cars have vanquished competitors and become ever more central to daily life. Reversing this trend, while providing a high level of access to work, school, health care, and other services, is a daunting challenge. It requires a vast swath of changes related to the imposition and disbursement of sales and property taxes, land-use zoning, transportation funding formulas, parking supply, innovative mobility services (such as demand-responsive transit and smart car sharing), pricing of vehicle use, and much more.

As noted, California pioneered car-dependent cities and lifestyles and took them to an extreme, creating a highly expensive and resource-intensive transportation system. It has overindulged. Most of the world has followed California’s car-dependent path, but none has gone as far as California. Other countries have been far more innovative and determined in restraining vehicle use. But perhaps because it has gone so far to the extreme, California is now showing policy leadership in reversing the pattern.

In 2008, California passed the Sustainable Communities law, known as SB375, to reduce sprawl and vehicle use. The law created a new policy framework to guide cities’ transition to a less resource- and car-intensive future, and it provides a more robust, performance-based approach than previous efforts to reduce vehicle use. It is just the beginning, but it offers a good policy model for others.

In implementing the law, CARB established distinct targets for each metropolitan area in the state. The targets call for reductions in per capita GHG emissions of 6 to 8% for the major regions by 2020 and 13 to 16% by 2035. They are applied to regional associations of governments, which then work with individual cities and counties within their regions to attain them. One strength of SB375 is that local governments are free to choose the strategies and mechanisms that will work best in their situation.

The downside of SB375 is that it imposes no penalties for noncompliance and only weak incentives and rewards. The rationale for the absence of penalties is that the responsible parties are cities, most of which are in desperate financial straits. The challenge is to provide incentives that are compelling enough for the cities to assert themselves. Two options under consideration are diverting cap-and-trade revenues to cities that comply with reduction targets and restructuring transport funding formulas to reward complying cities. Current formulas are tied primarily to population and vehicle use, with the result that having more vehicles earns cities more money. The incentives should be just the opposite.

One lesson from the early implementation of the program and the development of the GHG targets was that local politicians and transportation managers came to support the targets once they realized that the strategies needed to achieve them were the same ones they were already pursuing for other reasons, such as infrastructure cost reduction, livability, and public health. In fact, having a formal policy framework aids their efforts in governing their cities. But whatever the motivation, behavioral change is difficult.

Carbon cap and trade

Perhaps surprisingly, California’s adoption of a carbon cap-and-trade rule as the capstone of its plan for meeting the goals of AB32, the state’s overarching climate law, will not have much impact on transportation. A cap-and-trade program imposes shrinking carbon caps on factories, oil refineries, cement producers, electricity-generating facilities, and other large GHG sources. If companies cannot or choose not to shrink their emissions, they can purchase “allowances” from companies that are overperforming. With carbon trading, a market is created for carbon reductions, with carbon gaining a market value. The carbon price will be low if everyone is successful in reducing their emissions and no one needs to buy allowances from others, but it will be high if they are not successful. When carbon has a market value, polluters know exactly how much it costs them to pollute and can make economically rational decisions about how to reduce GHG emissions.

The European Union preceded California by a few years in implementing a cap-and-trade system, and the northeastern and mid-Atlantic states followed Europe in instituting a carbon cap-and-trade program for their electric utilities. But California’s policy is broader than the eastern utilities program by including all large industrial and electricity-generation facilities, and broader than the European program by capping transport fuels.

The cap-and-trade program is valuable in creating a price for carbon, but it is not central to reducing transportation emissions. The California cap-and-trade program covers oil refineries, and beginning in 2015 the carbon content of the fuels themselves. The program is designed with floor and ceiling prices of $10 and $70 per ton of carbon through 2020. Although $70 is likely to motivate large changes in electricity generation, the effect will be far less for transportation, where $70 per ton translates into $0.70 per gallon of gasoline. That is not enough to motivate oil companies to switch to alternative fuels or to induce consumers to significantly reduce their oil consumption, but it is still important to establish the principle of placing a price on carbon.
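As a rough back-of-the-envelope check on that conversion (the emissions factor here is an assumption on my part, roughly 10 kilograms of CO2-equivalent per gallon once upstream emissions are included, not a figure taken from the program itself):

\[
\$70 \text{ per metric ton} \times 0.010 \text{ ton CO}_2\text{e per gallon} \approx \$0.70 \text{ per gallon.}
\]

Using combustion emissions alone, which are closer to 9 kilograms per gallon, would put the ceiling price nearer \$0.60 per gallon, the same order of magnitude.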

Replicable?

California has put in place a unique, comprehensive, and largely coherent set of policies to reduce GHGs and oil use in transportation. Although it includes a carbon cap-and-trade policy that injects a price of carbon into the economy, more important is the mix of policy instruments that target specific vehicle, fuel, and mobility activities. Most of these policies are regulatory, though they are largely performance-based and many, such as the LCFS and its credit-trading component, have a pricing component to them.

This California model has the benefit of minimal cost to taxpayers, extensive use of performance-based standards, and some harnessing of market forces. Most important of all, it has survived political challenge. Even in the midst of a severe recession and 12% unemployment, California voters defeated an initiative measure to suspend implementation of the program.

The plan does suffer from some theoretical and practical defects. One concern is that many of the policies shield consumers from price increases and will thus slow the behavioral response. One future option might be to impose a system of feebates for vehicles, whereby car buyers pay a fee on models that consume more oil and produce more GHGs and receive a rebate on models that consume and emit less. A feebate reconciles regulations with market signals. Another way to create more transparency and boost the effectiveness of the price signal might be to convert the carbon cap imposed on fuels into a fee or carbon tax.
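A minimal sketch of how a feebate schedule is usually structured may be useful here; the rate and pivot values below are illustrative assumptions, not a proposed California design:

\[
\text{payment} = r\,(e - e_0),
\]

where \(e\) is a vehicle's certified emission rate (say, grams of CO2 per mile), \(e_0\) is a pivot point chosen to keep the program roughly revenue-neutral, and \(r\) is the feebate rate in dollars per gram per mile. A positive payment is a fee; a negative payment is a rebate. With \(r = \$20\) per gram per mile and \(e_0 = 300\) grams per mile, for example, a 400-gram-per-mile vehicle would carry a \$2,000 fee and a 200-gram-per-mile vehicle would earn a \$2,000 rebate.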

Another major weakness is the absence of policies addressing most air, maritime, and freight activities, leaving significant chunks of the economy untouched by carbon policy. These activities can be addressed much more effectively at the federal level. Emissions leakage and fuel shuffling—whereby fuel suppliers send their "good" fuel to California and their high-carbon fuel elsewhere—are a particular challenge for California and for any small jurisdiction, whether its policies are based on market or regulatory instruments.

In a broad sense, perhaps the biggest challenge is the complex interplay of the many regulations and incentives and the involvement of various government bodies. For example, large-scale adoption of EVs depends on whether the design of the cap-and-trade program by CARB and the Public Utilities Commission encourages electricity generation that replaces high-carbon petroleum in the transportation sector. The Public Utilities Commission also enacts rules regarding who can or cannot sell electricity to vehicles. Meanwhile, the federal government and CARB determine how much credit EVs receive as part of vehicle performance standards. Are full upstream emissions from utilities considered, even though they are not for petroleum-fueled vehicles? And should automakers be given more or less credit for EVs relative to fuel cell vehicles in the ZEV mandate? It is important to make sure that the many rules are aligned and send consistent signals. This will be a challenging task, exacerbated by the involvement of numerous government agencies and legislative bodies.

One might argue that California has no business in pioneering climate policy, that it contributes a small part of the world’s total GHG emissions, and that it is a global problem that should be left to global agreements. Although it is true that California contributes only about 2% of the world’s total GHG emissions, there are few entities with larger shares. More important, although it is clear that top-down approaches contained in international treaties and even national rules will be required to achieve substantial climate change mitigation, a bottom-up approach that more directly engages individuals and businesses is also needed. California is providing the bottom-up model for others to follow.

Toward a Common Wireless Market

Imagine a world where you can buy a mobile phone, and if you have problems with service—say, poor reception at home—you can move to a different service provider that offers better reception simply by making a call and asking for service to be transferred. No need to change your phone; no need to figure out if your new service provider supports your current device; no need to get your old provider's permission to leave. Nor would there be a cost linked to user choice. Currently, users who want to move from one service provider to another cannot do so without paying a substantial penalty, and they also have to absorb the sunk cost represented by their current cell phone and bear the cost of purchasing a new one.

The key to this imaginary world is that all service providers would have networks that operate on a specified set of frequencies and provide mobile devices that are technologically standardized—very much not the situation in today’s wireless environment in the United States. Along with promoting user flexibility, adopting “interoperable” networks and devices would bring other benefits as well. Cell phone users, for example, would enjoy better call quality in places where their service had been spotty, as carriers tapped into competitors’ networks to roam in areas where their own coverage was poor. Prices would be lower, as carriers would be relieved of some of the cost of building expansive new wireless networks. Innovation in the device market would be encouraged, as mobile device manufacturers and developers (and especially smaller companies) were able to focus on designing new products that have to support only one standard.

This world is possible, and the nation is at a critical point for making it happen, as the wireless industry is in the early stage of converting to the next generation of wireless communications. Unfortunately, the industry seems to be making a hash of this shift, with individual carriers working to achieve their own advantage at the expense of interoperability. It is time for the federal government, through the Federal Communications Commission (FCC), to bring order to the process.

A jumble of standards

Currently, the mobile wireless industry in the United States is a hodgepodge of incompatible wireless standards, with different operators deploying various wireless technology standards on widely variable frequencies. The largest four wireless providers are a sufficient sample to make this point. Currently, AT&T and T-Mobile use networks based on GSM (Global System for Mobile Communications) standards, whereas Sprint and Verizon use networks based on CDMA (Code Division Multiple Access) standards. As a further complication, these networks run on varying frequencies across the radio spectrum, so even in the rare cases where the underlying technologies are compatible, spectrum incompatibilities prevent full interoperability.

This level of confusion in the marketplace is not a result of any inherent technological limitations, but rather of FCC policy decisions. The mobile wireless market began with a common standard when, building on work begun at Bell Labs, the Telecommunications Industry Association proposed the Advanced Mobile Phone System (AMPS) as a standard for mobile wireless phones, and the FCC adopted the standard in the early 1980s. AMPS was relatively successful for its time, although it was an analog system and therefore noisy and relatively insecure. The AMPS standard provided, by and large, a unified frequency band plan. This led to decreased infrastructure costs for operators and lower barriers to marketplace entry.

By the mid-1980s, researchers had developed second-generation (2G) digital technologies that overcame the analog problems. In Europe, coincident with the formation of the European Union, the European Conference of Postal and Telecommunications Administrations decided to resolve the incipient problem of incompatible digital wireless systems across the nascent union. The organization initiated the process of creating a single standard for the region, but soon handed over responsibility to the newly founded European Telecommunications Standards Institute. The result was development and adoption of the GSM standard, and by 1993 more than a million people were using mobile devices tailored to GSM.

GSM was a political and economic success, and it quickly spread beyond Europe with its adoption by Telstra of Australia. GSM has since become the de facto global standard for mobile wireless communication, with over 3 billion connections and an installed base that is virtually global in reach. This assured interoperability led to increased competition, lower prices, and the rapid adoption of mobile wireless technology around the world, as economies of scale pushed the cost of infrastructure deployment to affordable levels.

The United States, however, followed a different path. In keeping with the prevailing governmental tenor of the times, the FCC put aside its designation of a unified frequency band plan, as called for under the AMPS standard, and chose to rely on market competition to determine technical standards for 2G wireless systems. This shift effectively allowed individual operators to determine which technological standards to deploy, perhaps in an effort to stimulate technological innovation.

The result was a proliferation of multiple standards, with each operator attempting to select standards that would give it a technological advantage. Over time, two standards became dominant: CDMA, which was promoted by a leading U.S. wireless technology company, Qualcomm; and GSM (though implemented with different frequency allocations than the European frequency band plan). It is worth pointing out that at the time, CDMA systems, which enabled multiple parties to transmit on similar frequencies simultaneously, were more efficient than GSM systems, which relied on allocating small fractions of time for each device to transmit.

Still, the FCC’s decision had the effect of encouraging the industry to adopt incompatible standards. Also, the FCC continued its historical practice of handing out licenses to local markets, failing to recognize the mobile wireless industry as a truly nationwide resource. This focus on local allocation has led to situations in which individual operators have different frequency allocations in different markets around the country, leading to even more fragmentation.

Changes also were continuing in Europe. With the increased adoption of GSM standards by non-European nations, the European Telecommunications Standards Institute opened up its standards development process, creating the Third Generation Partnership Project (3GPP) to guide work on wireless technologies. The 3G wireless systems being developed were driven predominantly by the trend toward providing data (Internet) access on mobile devices. These new smartphones combined voice and data services and created a need for greater bandwidth on mobile wireless networks. Under 3GPP, researchers in various nations continued work on the GSM standards and incorporated some of the technological elements of CDMA, erasing some of the earlier criticism of GSM’s technological inferiority.

In the United States, the industry largely continued along the path it had settled into during its 2G transition, with the FCC not doing much to change the status quo. The two technological standards that had become dominant in the United States—CDMA and GSM—continued development into the 3G environment. The industry remained divided and insufficiently competitive.

The 4G future

The wireless world continued to change, however, increasingly moving to fourth-generation (4G) technologies as mobile wireless transitioned from providing voice-based services to providing ubiquitous Internet access, where “voice” is just one of the many applications that can be run on mobile devices. The 3GPP process has been a huge international success, with participation by a large portion of the global wireless industry. Also, the two largest mobile wireless operators in the United States (AT&T and Verizon Wireless) have been involved and have decided, along with a number of other U.S. service providers, to adopt the 4G GSM-based standard. The adoption of 4G GSM, known as Long Term Evolution (LTE), promises to revolutionize mobile communications.

It is fitting, then, that LTE is the arena in which the future of mobile wireless communications in the United States is currently being decided. Although there are operational LTE networks, much of the standards development is still in process, and there is still time for the FCC to act to engender a more competitive future.

There are mixed signs, however, as to the FCC's intent. On the one hand, it has recently identified interoperability as an important consideration for lowering costs, increasing economies of scale, and improving competition in the wireless technology sector. Yet it also appears to be moving toward LTE standards that would allow companies to operate in separate frequency bands. Under that approach, mobile wireless networks would be interoperable in theory but would remain isolated networks in practice.

The primary advantage of using a common standard is being defeated because the wireless operators have predictably balkanized the LTE standard, ensuring that their individual spectrum assets (frequency bands) are written into it in a way that preserves the status quo, so that devices built for one frequency band are unable to function on others. All of this appears to be aimed at locking in subscribers by making the cost of shifting from one service provider to another prohibitive. Operators are already protected from subscribers walking away with subsidized mobile phones by early-termination fees that ensure that subscribers bear the full cost of their phones if they leave before the contract period has elapsed. This balkanization defeats the rationale of standardization and retains the price barriers and user lock-in of the status quo.

Guide to action

All is not lost, however. LTE is still a standard very much in development, and the FCC can still bring about the truly competitive mobile wireless market of the future. Fundamentally, the FCC needs to make promoting interoperability an active policy stance and to take a leading role with industry in the process of setting standards for LTE. The FCC needs to commit to working toward interoperability in two major areas: technical-standards compatibility and mobile wireless spectrum interoperability. In other words, the FCC has to ensure that everyone speaks the same language (technical standards) and that they all speak in understandable accents (frequency compatibility).

The first job is already under way, as illustrated by the expressed desires of many U.S. mobile wireless operators to use the same LTE standard (same language) as the technical basis of their 4G systems—in effect, merging the GSM and CDMA tracks. For job two, the FCC should immediately begin to promote interoperability in the design of these new 4G systems and ensure that the new standard is written in such a way as to enable spectrum-based interoperability (same accents). To support such efforts, the FCC needs to use its license-granting authority to ensure that future spectrum allocations are carried out in such a way that interoperability is required from service providers serving the mobile wireless market.

One place where the FCC can begin establishing a common marketplace is in the 700-megahertz (MHz) band of the spectrum, which the government periodically auctions off to private mobile service providers. The 700-MHz band is particularly attractive for wireless applications, because its excellent propagation characteristics allow for superior coverage, enabling signals to penetrate buildings and other architectural infrastructure. (The FCC also has declared its intention to build the next-generation public safety network in this band.) However, although most private holders of rights within the 700-MHz spectrum band have adopted LTE as their technological standard, they have also worked to ensure that the standard is written in such a way as to prevent interoperability, locking in their customer base and increasing infrastructure costs for smaller operators with adjacent spectrum holdings. The FCC cannot allow such actions to persist.

With different policies and a focus on interoperability, the FCC can move the wireless industry toward a single interoperable market in which consumers have real choice and flexibility. This truly competitive market is achievable in the near future, and it can be reached with minimal financial and logistical impact on mobile wireless operators, because current infrastructure will continue to operate as usual. By acting now, at the transition from 3G to 4G/LTE systems, the FCC will ensure minimal disruptions to the highly productive mobile wireless sector, as well as a freer and more competitive wireless marketplace for mobile users.


Tolu Odumosu is a postdoctoral research fellow in the Science, Technology and Public Policy Program at the Harvard Kennedy School and at Harvard's School of Engineering and Applied Sciences. Venkatesh "Venky" Narayanamurti ([email protected]) is the Benjamin Peirce Professor of Technology and Public Policy and professor of physics at Harvard University and the director of the Science, Technology, and Public Policy Program at the Harvard Kennedy School.

A Course Adjustment for Climate Talks

With little hope that the United Nations Framework Convention on Climate Change (UNFCCC) process will produce an effective treaty, at least for the next several years and perhaps longer, are there other paths that could lead to near-term reductions in greenhouse gas emissions? One approach, forcefully articulated by Richard Benedick (Issues, Winter 2007), would replace the seemingly fruitless quest for a unified global compact with a mix of separate efforts that might have a better chance of resulting in action. Benedick builds on his Montreal Protocol experience in reducing emissions of ozone-destroying chemicals, in which discussions among a small group of countries and chemical companies provided the impetus for successful and rapid global action.

Nevertheless, almost five years and four more UNFCCC Conferences have passed since Benedick’s suggestions, with no obvious progress toward breaking what he characterized as “predictable” patterns, “trivial protocol debates and ritualistic ministerial speeches exhorting complicated and unrealistic actions,” that routinely end in “embarrassingly meager” results. The persistence of the stalemate and the urgency of the problem demand that we consider a different strategy.

The all-or-nothing UNFCCC strategy is too easily derailed by the views or actions of one or two countries, which then become the rationale for other countries to refuse to act. Perhaps the real antidote is to open more lines of discussion and communication and to show that smaller measures of progress can be achieved as stepping stones toward bigger ones. Like Benedick, a number of analysts, including David Victor and Robert Keohane, have begun to urge alternative approaches, among them focusing on smaller pieces of the climate change problem.

The argument for segmenting issues and addressing them opportunistically finds further support in the progress that has been made in global negotiations to control nuclear weapons. Just as the experience of the Montreal Protocol demonstrates that significant global progress can begin with actions by a few key countries, the history of arms control illustrates that progress can be made even without the participation of some of the major players.

The virtues of pragmatism

The process of controlling weapons of mass destruction began with a similarly intractable, potentially catastrophic long-term problem and was initially dominated by high-minded speeches outlining visionary goals for "general and complete disarmament." The UN was the focal point for most of the action, starting with the 1946 Acheson-Lilienthal proposal to place all the world's nuclear materials and facilities under UN control.

The Cuban Missile Crisis demonstrated that nuclear war was not just a hypothetical danger. As the seriousness and extent of the danger became more apparent, discussions of discrete issues began. Pairs and small groups of nations initiated multiple approaches that were aimed at different forms of agreements with the goal of identifying aspects of the armaments competition on which governments might find common ground. The first significant result was the 1963 Limited Test Ban Treaty (LTBT), prohibiting nuclear testing in the atmosphere, in space, and under the seas.

The rapid negotiation of the LTBT, accomplished largely through the forceful leadership of the United States, the Soviet Union, and the United Kingdom, demonstrates the benefits of breaking out specific elements of the problem that are ripe for resolution. LTBT shows that when the ultimate goal is beyond reach, intermediate steps can be achievable. Key nations were willing to accept a less-than-ideal goal. The LTBT also was consistent with the existing technical capability, which could not verify a ban on underground nuclear tests. Stopping nuclear tests in the atmosphere did not end the nuclear arms race, but it ended the human health damage from radiation released by nuclear explosions. It took another 33 years to complete a comprehensive test ban treaty.

Negotiations to limit weapons of mass destruction now proceed in multiple forums, involving varying groupings of nations and differing approaches. The Non-Proliferation Treaty (NPT), a keystone to limiting the spread of nuclear weapons, was negotiated in a multinational forum starting in 1958. Initially, only 18 countries participated, but the talks became more serious when the United States and the Soviet Union decided that an agreement was necessary after China’s first nuclear test in 1964.

Although key nuclear nations such as France and China refused to sign the treaty when it was completed in 1968, it still came into force two years later, and eventually they joined. Today, nuclear-armed India, Israel, North Korea, and Pakistan remain outside the treaty, but it otherwise has universal participation and is widely considered to have been successful in limiting proliferation. Before completion of the NPT, it was thought that there would be 25 nuclear powers by the end of the 20th century; today, there are only nine.

The role of the UN has shifted over time, beginning with moving disarmament talks away from the UN General Assembly to a specialized body now called the Conference on Disarmament (CD). The 1972 Biological Weapons Convention, the 1992 Chemical Weapons Convention, and the 1996 Comprehensive Nuclear-Test-Ban Treaty were negotiated in the CD.

Eventually, the CD itself became paralyzed. Using the UN’s consensus rule that gives a single nation the ability to block the will of all other nations, Pakistan has for 12 years prevented the CD from taking the next logical step of a negotiated ban on the production of fissile materials.

Negotiating fora have also multiplied, including regional configurations. The 1967 Treaty of Tlatelolco created a nuclear weapon–free zone in Latin America and the Caribbean. Initially, the key nations of Argentina, Brazil, and Cuba remained outside the treaty, but all the region's nations have now ratified the agreement. Similar nuclear weapon–free zones have been negotiated for Antarctica, Africa, Central Asia, Southeast Asia, and the South Pacific and are slowly taking effect.

The United States and the Soviet Union/Russia, which own 95% of the world’s nuclear weapons, bear special obligations for ending the threat of a nuclear holocaust, just as special responsibilities might reasonably be expected for major greenhouse gas emitters. Over the past 40 years, the two nuclear superpowers have conducted bilateral negotiations to reduce the size of their arsenals and reduce the risk of nuclear war. Combined with reciprocal, unilateral actions taken at opportune times, these negotiations have reduced U.S. and Russian stocks of nuclear weapons from more than 25,000 each at the height of the Cold War to roughly 5,000 and 8,000, respectively, today.

Numerous other agreements between the United States and Russia have improved communication channels during crises, avoided incidents between their armed forces, outlawed an entire class of missiles, and for a limited time prohibited the deployment of missile defenses.

The weapons experience holds several lessons about the intricacies of interlocking agreements. Authority for certain kinds of functions can be split out and placed in an independent multinational agency specifically designated to carry them out. The separate agency can carry out the narrow mandate on which there is general agreement, and over time, as it develops experience and trust, it can expand that mission. This happened with the International Atomic Energy Agency's (IAEA's) assignment to establish and monitor safeguards on declared peaceful nuclear facilities, ensuring that they and the fissile materials they use are not diverted to weapons programs. The IAEA's demonstrated competence has allowed it to gain the confidence of countries and thereby to increase the reach of its inspection function. The agency eventually engaged 103 countries in "additional protocols" that give it further powers to conduct inspections of undeclared, suspected facilities on a challenge basis.

A second example is the Comprehensive Test Ban Treaty Organization, which was created on a provisional basis even though the treaty itself has yet to come into legal force. The organization built a global network of sensors capable of detecting even very small nuclear tests, which has aided ratification efforts by showing that the agreement can be verified with great confidence. The network also serves additional purposes such as pinpointing earthquakes.

Related to that has been the trend toward greater intrusiveness and voluntary concessions of national sovereignty for verification of commitments. For U.S.-Soviet/Russian nuclear limitations, such arrangements have progressed from strict dependence on national means (intelligence satellites), to limited onsite inspections, to the maintenance of a permanent presence at production facilities in each other’s territory, to the current Strategic Arms Reduction Treaty (START), which allows inspectors to actually peer into missile silos and count warheads.

The result has not yet been the complete elimination of nuclear weapons, and progress sometimes has been painfully slow. But the nuclear balance has stabilized at far lower numbers, weapon technologies have spread to far fewer countries, and other types of weapons of mass destruction have been outlawed and destroyed by virtually all nations, greatly reducing the risk of catastrophic wars. This has happened without halting negotiations to wait for reluctant major players.

Don’t shoot the messenger, just replace him

A final lesson from the weapons history is the significance of who negotiates. Environment has long been a foreign policy stepchild. Foreign ministries, including the U.S. Department of State, implicitly treat political and security concerns as their primary role; economic issues occupy a much lower second tier; human rights receive attention now because they are written into many national statutes and sometimes rise to prominence in a crisis; and the environment occupies a still lower tier, receiving attention only when a high-ranking individual takes a particular interest or when an emergency briefly occupies the headlines. Convincing senior officials that climate change, like arms control, is a security challenge could provide the leverage necessary to pursue a different strategy.

Drawing negotiating lessons from the weapons world might help shift the conversation to an arena that these more powerful officials better understand. They know and respect issues with geopolitical consequences and great potential dangers, but they generally do not believe that the environment falls into that category. They probably do not remember when controlling nuclear weapons was a similarly fuzzy subject largely promoted by do-gooders. Indeed, Polish Prime Minister Donald Tusk may have spoken for the power politics crowd when he was reported to have remarked privately, after opening the 14th Conference of Parties (COP) in Poznan, on his surprise at not seeing a room of ragged environmental activists. An encouraging sign that environmental concerns are rising in visibility is that the U.S. Department of Defense has quietly recognized that climate change could strain resources in ways that affect national security.

Attracting new voices to the discussion could help break down the insularity of the climate change community. Although the isolation of this group is probably no greater than that of weapons or trade negotiators, any group of people who meet continuously over decades trying to solve defined problems develops a common language, a common culture, and its own reference points. Its members lose perspective and become blind to their assumptions. Climate change is such an important threat that the time has come for the equivalent of an intervention.

Climate negotiations have largely taken place in the environmental backwaters. Some parts of the climate world resisted the potential political boost that might have emerged from the 15th COP at Copenhagen, which attracted 120 world leaders. The commitments by world leaders to hold warming to 2 degrees Celsius and to record individual country mitigation actions did not fit the expectations of climate experts for a so-called legally binding result. The opportunity to take advantage of what looked like the beginning of high-level support for action was lost. Instead, everything went into a freeze for almost 12 months; then pieces of the Copenhagen agreement were incorporated into the work agenda that came out of Cancun.

Thus, leading up to Durban, negotiators were expected to convert the Kyoto Protocol targets of the existing Annex I countries (the developed world) into actual binding commitments; ensure no gap between the first and second commitment periods of the protocol; develop accounting rules for forest management emissions and removals; clarify assumptions underlying emission reduction targets, including those related to land use, land-use change and forestry, and offsets; establish a Green Climate Fund to manage support for the mitigation and adaptation needs of developing countries; and salvage the various emissions-trading mechanisms for meeting Annex I targets. Since then, the future of the protocol has fallen into considerable doubt, the emissions-trading markets are in disarray, and full funding of the Green Climate Fund is in question, as are basic understandings about its terms and conditions. Throwing this set of issues back to the environmental experts is not a prescription for success.

Application to climate challenge

The culture of climate negotiations has not favored separating issues for independent resolution. Supporters of the current negotiations might argue that none of this history is relevant to the climate threat, that the negotiating pathway is dictated by the accelerating climate threat, and that there simply is no time to try experiments and alternative pathways.

It is true that UNFCCC negotiations involve a bigger basket of issues than most multinational talks. Any one of the problems to be solved—weaning whole economies from coal and oil consumption, disappearing forests, novel stresses affecting agriculture and disease, and the future of small-island states that will probably be inundated within a few decades, to name just a few—would by itself constitute an unusual test of human problem-solving skills. But isn't this all the more reason for trying new alternatives; changing the scenery, players, or scope of discussion; or attempting iterative actions and venues? The number and variety of the issues is itself an argument for tackling some of them separately.

A critical question is whether it continues to be productive to invest solely in a single exclusive UNFCCC forum for negotiation of a wide variety of issues. The UNFCCC clearly sets out a vision and formula for the organization of responsibilities among countries. But evidence from the weapons regime demonstrates alternative ways to achieve that central vision. Common agreement about the overall goals and objectives can trigger a series of interactions to resolve narrow issues and challenges. When complete agreement is not possible, partial limits can still be beneficial and a starting point for more comprehensive action. Venues can be developed where consensus action can be taken while debate on related topics continues.

Within the umbrella of the UNFCCC, multiple interactions could take place that would make it possible for a small group of countries to agree on particular actions or for the global community to reach agreement on some narrowly defined questions. Alternatively, entirely separate configurations of interested parties could be developed for specific purposes. They could continue to operate independently or eventually converge back into an umbrella agreement.

The fact that each specific agreement might resolve only a part of the overall climate challenge need not be seen as a liability. More narrow initial agreements on topics such as technology dissemination and financing, forest preservation and restoration, or other pieces of the climate puzzle could advance the ball.

The case for engaging more powerful ministries and government officials in achieving resolution of the many complex climate issues is consistent with many of the negotiating configurations suggested by the arms history. Some defense experts have already reframed climate in the more accurate “threat multiplier” analysis commonly used by the military to explain how extra stresses can turn hazards into disasters. The time might be right to make a refocused plea to top foreign policy leaders in a different set of words. Climate change is too big to be confined to the environmental ghetto; it is not a national security sideshow but increasingly the main event. And if we can understand climate change as a national security concern, it makes sense to look for lessons in the handling of other national security issues, such as arms control.

MIT scientists warn that business-as-usual greenhouse gas emissions amount to a very bad bet on humanity's future; they calculate a 50% chance that the global average surface temperature will increase by at least 9.2 degrees Fahrenheit by 2100. No sane person would board a plane with a 50% chance of crashing, yet humankind is entrusting the entire resolution of the climate threat to an ineffectual annual round of negotiations under a UN umbrella, conducted largely by politically powerless environmental officials. As our GPS devices tell us when we go off course, it is time to recalculate.


Ruth Greenspan Bell ([email protected]) is a public policy scholar at the Woodrow Wilson International Center for Scholars in Washington, DC. Barry Blechman is the cofounder of and a distinguished fellow at the Stimson Center in Washington, DC.

Expanding Certificate Programs

The United States faces a decline in the educational attainment of the labor force that threatens to reduce economic growth and limit national and personal prosperity. Reversing this decline will require a national commitment, supported by comprehensive actions, to encourage and enable two particular groups, minority and low-income youth and working adults, to obtain college credentials that are valued in the labor market. For many individuals, the most practical route toward this goal will be obtaining not an academic degree but a certificate that signifies completion of a rigorous, occupationally focused program of study.

This shift will pose a big challenge to postsecondary education, but there is good news here. First, it seems feasible to quickly ramp up certificate programs. Some colleges in some states are showing the way, boosting enrollment in these programs and issuing certificates to large numbers of people. Second, there is evidence that completion rates in some of the best certificate programs are significantly higher than in degree-granting programs. Third, there is evidence that certificate programs can be economically efficient for state and federal governments, just as they are for students. Finally, there are strong indications that minority and low-income youth and working adults can find in certificate programs the success that has been so elusive in degree programs.

Such potential benefits do not imply that these groups should automatically be tracked into certificate programs as a final goal. Rather, good certificate programs can serve as stepping stones to degree programs, from which long-term economic and social returns may be even greater (for the relatively few who manage to complete them). But for many people, certificate programs by themselves can be a valuable and manageable path to good jobs.

Challenges in the labor market

During the past several decades, rising educational attainment in a rapidly growing labor force contributed significantly to productivity, economic growth, and national competitiveness in an increasingly global economy. A 2000 Joint Economic Committee report found several estimates of the effect of human capital gains on economic growth in the range of 15 to 25%. That review and other studies also have underscored the indirect contribution of educational advances in fueling innovation and new technology adoption.

From 1960 to 2000, the workforce more than doubled, to about 141 million workers. The number of workers in their prime productive years, ages 25 to 54, increased by more than 130% during this period. This growth was accompanied by huge gains in educational attainment. In 1960, just 41% of the population over the age of 25 had completed high school, but by 2000 this figure had reached roughly 80%. College attainment grew at an even faster pace. In 1960, only 7.7% of adults over the age of 25 had a bachelor's degree or higher, but by 2000 the figure was 24.4%. Especially from 1970 to 2000, workers entering their prime working years had much higher levels of education than those aging out of the prime age group or leaving the workforce altogether.

But these advantageous trends have fully played out. It is projected that between 2000 and 2040, the workforce will not grow nearly as fast as it did during the previous 40 years. The Bureau of Labor Statistics (BLS) projects overall labor force growth of only 29% by 2040 and only 16% among prime-age workers.

Slow growth is only half the story. Given current trends, the nation can expect little gain in the educational attainment of the workforce by 2040, at least as a consequence of young adults moving into and through the labor force. Older workers (ages 35 to 54) are now as well educated as younger workers (ages 25 to 34), especially in the percentage with at least a high-school degree, but also in the percentage with some postsecondary attainment. Thus, there will be no automatic attainment gain over the next several decades as current workers age and older workers leave the labor force. In fact, without some big changes in educational patterns, it is probable that the newer workers entering the workforce will have lower levels of attainment than the older workers leaving. Workforce attainment levels will stagnate or decline, and future economic growth will slow as a consequence.

In the face of these trends, President Obama proposed to Congress in 2009 that “by 2020, America will once again have the highest proportion of college graduates in the world.” (Russia and Canada now surpass the United States, although there is debate about the validity and value of the comparisons.) According to evaluations led by the National Center for Higher Education Management Systems, retaking international leadership would require U.S. college attainment rates to reach 60% in the cohort of adults ages 25 to 40. But in 2008, only 37.8% of this age group had degrees at the associate’s level or higher, and at present rates of growth, this figure would increase to only 41.9% by 2020. Closing the gap will require a 4.2% increase in degree production every year between 2008 and 2020.
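A back-of-the-envelope compounding check (my arithmetic, not a figure from the NCHEMS analysis) shows what sustaining that rate implies:

\[
(1.042)^{12} \approx 1.64,
\]

so by 2020 the nation would need to be awarding roughly 64% more degrees per year than it did in 2008.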

The White House subsequently added two complementary goals: adding 5 million community college graduates by 2020, and providing all citizens with a year of credentialed education or training beyond high school.

Meeting these goals will be a huge challenge. Even with the most optimistic assumptions about high-school graduation, college continuation, and degree completion, there simply are not enough traditional students to meet the goals within existing patterns of attainment. A realistic appraisal of demographic trends and historic attainment patterns can only lead to the conclusion that increasing workforce attainment, or even maintaining current levels, will require big changes in postsecondary enrollment and completion among working adults and minority and low-income youth.

Workers of younger ages are more racially and ethnically diverse than adults now in the labor force, with greater representation from groups, including Hispanics and blacks, that historically have not been well served in K-12 or postsecondary education. According to numerous projections, the proportion of the labor force made up by Hispanics and blacks will grow rapidly, reaching 24 and 15%, respectively, by 2050, whereas the share made up by whites will shrink to 53%.

Black and Hispanic students are now far less likely than white students to complete high school, attend college, or complete a postsecondary credentialing program. According to data compiled by the College Board, enrollment rates for recent black high-school graduates increased from just 40% in 1975 to 56% in 2008, and the rates for Hispanics increased from 53 to 62%. But these gains failed to keep pace with gains among whites, whose direct–from–high-school enrollment rates increased from 49 to 70% during the same period. Nor is the college completion gap between whites and these groups getting any smaller. A study of students beginning their enrollment in 2004 found that 66.9% of white students had obtained a degree or were still enrolled five years later, whereas the rate for Hispanics was 57.9% and for blacks 56.6%.

Trends among older workers show similar challenges. There are about 62 million adults age 25 or older in the labor force who do not have postsecondary credentials of any kind. More and more of them have been reading the signals of the labor market and have enrolled in college. The percentage of credential-seeking undergraduates in postsecondary institutions who are age 24 and older increased from about 27% in 1970 to about 40% by 2000. But working adults have high levels of attrition from college before completion, as compared with traditional students. An analysis of students of all ages who began their postsecondary education in 2004 revealed that by 2009, 49.4% had completed a credential and an additional 15% were still working on one. The remaining 35.6% had dropped out along the way. However, of those who were between the ages of 24 and 29 when they enrolled, only 34.9% had completed any sort of credential, and 14.2% were still working on one, whereas more than 50% had dropped out. Students who were above the age of 30 when they enrolled had even lower rates of completion.

Benefits of certificate programs

Certificate programs take a variety of forms nationwide. They are offered by two-year community colleges, by four-year colleges, and, increasingly, by for-profit organizations. Programs vary in duration, falling into three general categories: some require less than one academic year of work, some at least one but less than two academic years, and some two to four years. The programs collectively awarded approximately 800,000 certificates in 2009, more than two and a half times the roughly 300,000 certificates awarded in 1994. Across all programs, awards are heavily skewed toward health care, which accounted for 44.1% of all certificates awarded in 2009. According to data compiled from the Integrated Postsecondary Education Data System (IPEDS), which surveys all postsecondary institutions participating in federal student financial aid programs, certificate production has grown significantly faster than degree production: between 1994 and 2009, the number of associate's degrees awarded grew 53.2% and the number of bachelor's degrees grew 38.3%.

Although data are limited and often difficult to compare, there is some evidence that certificate programs have a higher success rate than degree programs in graduating students, and that certificate programs are probably more economically efficient than associate’s degree–oriented programs.

Data on the economic returns to students are clearer, although still incomplete. For example, research drawing on a number of national surveys is generally consistent in reporting that individuals who have one year of study after high school earn 5 to 10% more than individuals who have no postsecondary education or training, and this earnings advantage increases with additional training beyond one year. This research also indicates, however, that postsecondary participation of less than one year seems to produce little earnings return.

Further, research at the state level more consistently finds a significant earnings advantage for certificates from programs of one year or more. Most of this research rests on matching student records against wage data available through state-maintained unemployment insurance records. Not all states routinely make these earnings comparisons, and some that do choose not to make the information publicly accessible. However, enough do to support the conclusion that certificates from programs of at least one year of study almost always offer good labor market returns and provide recipients a platform for career entry and advancement in occupations paying family-supporting wages.

A study of educational and employment outcomes for students in Florida also has suggested that certificate programs, in addition to leading generally to good economic outcomes for completers, may have particular advantages for students from low-income families. The study drew from a longitudinal student record system that integrates data from students’ high-school, college, and employment experience. It followed two cohorts of public-school students who entered the ninth grade in 1995 and 1996.

The research suggested that strong earnings effects of degree attainment (associate’s, bachelor’s, and advanced) were largely confined to students who had performed well in high school. They were continuing in postsecondary study a trajectory of success apparent in high school. However, the research found that obtaining a certificate from a two-year college significantly increased the earnings of students who did not necessarily perform well in high school, relative to those who attended college but did not obtain a credential. These students were finding new success in certificate programs, changing the trajectory of their high-school years. Moreover, the study confirmed other research that found strong returns to completion of good certificate programs, even relative to associate’s degree completers.

Across all certificate programs, the field of study is an important predictor of earnings outcomes. In some fields, individuals who complete long-term certificates make as much money, on average, as those who complete associate’s degree programs. This seems to be because certificate completers pursue and earn awards in fields with relatively high labor market returns and then take jobs where they can realize those returns. Many individuals who gain associate’s degrees do not go on to higher attainment, and a significant number of them hold majors in areas that offer limited labor market prospects for job seekers with less than a bachelor’s degree.

There also is sound evidence of a significant and immediate labor demand for increased awards for completion of long-term certificate programs. The Center on Education and the Workforce at Georgetown University forecasts that the nation’s economy will create 47 million jobs between 2008 and 2018. Nearly two-thirds of them will require at least some postsecondary education. Of that component, half of the jobs will be accessible to individuals with an associate’s degree or long-term certificate. In fact, projections by the BLS suggest that jobs requiring only an associate’s degree or a postsecondary vocational award (a certificate) will grow slightly faster than occupations requiring a bachelor’s degree or more.

Keys to program success

Given the accumulating evidence about demand, earnings, and relative efficiency, it seems feasible and desirable to ramp up certificate offerings and aim them directly at low-income and minority youth and working adults who are not having much success in traditional pathways to degrees. These two groups are already finding some success in certificate programs and, with a more intentional approach to the design and expansion of long-term certificate awards, they could find still more success.

Tennessee provides a clear example of what is possible and of what works. Tennessee has a statewide system of 27 postsecondary institutions that offer certificate-level programs serving almost exclusively nontraditional students. The Tennessee Technology Centers began as secondary-level, multidistrict, vocational technical schools in the 1960s under the supervision of the State Board of Education and began to serve adults in the 1970s. In most states, analogous institutions were merged into community- or technical-college systems, but in Tennessee (as in a few other states) they continue to operate as discrete non–degree-granting postsecondary institutions.

The technology centers award diplomas for programs that exceed one year in length, as well as certificates for shorter programs. Diploma programs average about 1,400 hours and some extend to more than 2,000 hours. They are designed to lead immediately to employment in a specific occupation. In 2008–2009, the centers enrolled roughly 12,100 students, and they awarded 4,696 diplomas and 2,066 certificates. Collectively, the centers offer about 60 programs, some just at the shorter-term certificate level but most at the longer-term diploma level. Some of the more popular diploma programs are Practical Nursing, Business Systems Technology, Computer Operations, Electronics Technology, Automotive Service and Repair, CAD Technology, and Industrial Maintenance.

Most students in the centers are low-income, with nearly 70% coming from households with annual income of less than $24,000 and 45% from households with annual income of less than $12,000. Thus, most students enrolling in full-time and part-time programs qualify for federal Pell Grants, and many of them receive support channeled through the state under the federal Workforce Investment Act. The percentage of students who are black or Hispanic is greater than the total percentage of minorities in the state’s population. The average age of the students is 32 years, and all of the centers report a mix of new high-school graduates, young adults getting serious about career development, and older adult workers seeking the postsecondary credentials they decided not to pursue when they were younger.

According to 2007 IPEDS data, 70% of full-time, first-time students in the centers graduated within 150% of the normal time required to complete their program. In each of the past several years, at least 80%, and sometimes as many as 90%, of students who completed a program found jobs within 12 months in a related field. The Council on Occupational Education accredits the centers, and one of its requirements is that institutions maintain annual job placement rates of at least 75%. Surveys by the centers find that program completers consistently report high earnings compared with average wages in their industry or occupation.

A growing consensus in Tennessee holds that the key explanation for the centers’ high completion rates can be found in the program structure. The centers operate on a fixed schedule (usually from 8:00 a.m. to 2:30 p.m., Monday through Friday) that is consistent from term to term, and there is a clearly defined time to degree based on hours of instruction. The full set of competencies for each program is prescribed up front; students enroll in a single block-scheduled program, not individual courses. The programs are advertised, priced, and delivered to students as integral programs of instruction, not as separate courses. Progression through the program is based not on seat time, but on the self-paced mastery of specific occupational competencies.

Clearly, this approach discourages part-time attendance. It asks students to commit to an intensive program of full-time instruction. But it consolidates the classroom time into a fixed period each day, providing a clear and predictable timetable that enables students to work part-time and to meet family responsibilities. Transparency about tuition, duration, success rates, and job placement outcomes (published clearly in college brochures and Web sites) apparently enables students to assess costs and benefits, see the reasons for continued attendance, and make the sacrifices necessary to achieve program goals.

The centers also build necessary remedial education into the programs, enabling students to start right away in the occupational program they came to college to pursue, building their basic math and language skills as they go, and using the program itself as a context for basic skill improvement. Getting immediately into the desired program skills seems to strengthen students’ motivation and encourage persistence and completion. While the students are held to a common and rigorous basic skill and workforce readiness standard, connecting basic skills development to technical skills demonstrates relevancy and seems to promote success.

This program structure differs markedly from programs offered in other states. Most community colleges and many non–degree-granting institutions do not offer certificate programs with such a completion-focused structure. Students seeking an occupationally oriented certificate at most community colleges pursue a traditional collegiate pathway to the credential that is similar to degree pathways. Generally, they must complete 10 to 12 separate courses, each typically counting for three credit hours. Courses usually meet for 60 to 90 minutes twice a week for 16 weeks during the semester. Many courses have prerequisites, so taking the right courses in the proper sequence is critical (and some courses are not offered every semester).

Just as in degree programs, many newly enrolled students in certificate programs are required to take developmental education courses (over one, two, or even three semesters) to build their math and language skills before they can even enroll in the program-level math and English courses that often represent a gateway into their field of study. These courses are credit-bearing but do not count toward the certificate requirements. Piecing together a coherent academic pathway to a credential from an array of individual courses that sometimes are awkwardly and inconsistently scheduled in small chunks during 16-week semesters is hard for students who are often not well prepared, typically face severe and immediate financial pressures, frequently have family responsibilities, and do not have academic advisers to help guide them through the multiple choices required by complex, conventional academic systems. Most students respond to these scheduling challenges by attending only part-time, trying to squeeze in one or at most two courses each semester and occasionally dropping out for a full semester. The pathway to a certificate, especially one that represents completion of a program of at least one year, is long and choppy; things go wrong and students simply drop out.

Actions needed at all levels

Boosting certificate programs for working adults and low-income and minority youth will not happen without purposeful action by national, state, and college leaders. The trajectory of increase in long-term certificate awards is positive but gradual, and it has slowed during the past several years even as, on a long-term basis, certificate growth has outstripped gains in degree awards.

At the national level, officials in the White House and at the Departments of Education and Labor can play an important policy leadership role by promoting certificate attainment above the threshold of one-year programs as a viable component of national postsecondary attainment planning and as a valuable outcome of postsecondary participation. Important needs include better tools for the Census Bureau for tracking changes in attainment, more rigorous reporting requirements for the IPEDS, more critical research on certificates by the National Center for Education Statistics, and more careful work by the Department of Labor to relate certificate pathways to occupational outcomes.

National and regional accrediting bodies should also step up to greater responsibility in their oversight of long-term certificate programs. That means, among other things, acknowledging the importance of block programs that base progress on demonstrated competency rather than on course-by-course seat time requirements; supporting, not discouraging, the compression of classroom time through improved course design; and promoting the effective use of applied math, English, and general education content.

National employer groups should encourage their affiliates to pay sharper attention to certificates as a measure of postsecondary attainment. Of special importance, these groups need to help employers see the advantages of long-term versus short-term certificates for current and prospective employees. There is inevitable tension between the logical desire of most employers to squeeze postsecondary education and training of current employees into short work-related chunks and their shared interest in developing a more highly skilled workforce with the higher competencies and platform skills typically associated with longer-term credentials. National employer groups can help promote the importance and legitimacy of long-term certificates as a strategy to pull underprepared youth and adults to postsecondary attainment.

At the state level, higher education authorities should ensure that the financial and regulatory framework for public postsecondary education encourages enrollment and success in long-term certificate programs, especially in their community colleges. They should encourage their community colleges to build out certificate programs with labor market payoff. They should also work with statewide and regional employer groups (general business and sector-specific) to promote the advantages to employers and working adults of high-value certificate programs.

State agencies involved in workforce development and higher education have a special responsibility, which few are now meeting, to measure the labor market returns of certificates, as well as of occupationally oriented programs at the associate’s degree level. State agencies should routinely match postsecondary student records against the administrative records of state unemployment insurance programs. Ideally, states would assess earnings outcomes for completers versus noncompleters in every program area and compare earnings of those with postsecondary credentials to a sample of those without such credentials in all occupational categories. Importantly, this information should be made widely available to students, prospective students, and their employers.

In some states, public postsecondary institutions have left the certificate marketplace to the for-profit sector. This is not a sound strategy for the long haul. It works for the for-profits as long as federal tuition subsidies are generously available, but it drives them toward high-margin programs and toward students willing to incur high levels of debt. Some proprietary institutions have better success in getting students to completion than do most community colleges, but many have poor graduation rates.

At the institutional level, most of the hard work in developing the promise of high-value certificate programs needs to be done by staff and faculty who have a shared interest in promoting better success. In a few states, non–degree-granting one- and two-year institutions can be major players in this work, but in most states, community colleges must take the lead.

The first step is to examine the scale and scope of existing certificate programs with a view toward expanding them in high-value occupational fields and boosting their enrollments, especially of working adults and low-income and minority youth. If there is a single state model to hold up for comparison, it may be Arizona, where the community colleges have built an impressive array of certificate programs that have some consistency statewide but also are able to respond to regional labor markets. The state’s community colleges also offer a strong example of aggressive outreach to build the participation of working adults and low-income and minority youth.

But for colleges, the issues are not just scale and scope and expanding access. The Tennessee Technology Center model demonstrates the importance of program structures that promote success and completion. Many community colleges see the completion advantage of certificate programs exclusively in their relatively short length, but that alone is not an adequate foundation for strong certificate programs with high labor market relevance and good earnings returns. Time to credential is important and usually one of the reasons that students enter certificate pathways, because they see them as shorter and less daunting than degree offerings. But good programs are often nearly as long as degree programs, and merely limiting credit or clock hours will not always be feasible and by itself will not necessarily build success.

Strategies for success

There are several interrelated educational strategies and practices frequently associated with high completion rates in both certificate and degree programs. These strategies and practices should not be seen as a menu from which colleges might pick and choose. Rather, they should be viewed as a flexible recipe for building new programs and rebuilding existing ones in ways that directly promote student success and credential completion. These strategies and practices include:

Integrated program design. The full set of competencies for each program should be prescribed up front, and students should enroll in a single coherent program rather than individual unconnected courses. Students would not be required to navigate through complex choices or worry about unnecessary detours. Instructors would share accountability for helping the students successfully complete the whole program.

Compressed classroom instruction. Instruction outside of the classroom, using contemporary technologies, should be used to supplement traditional classroom instruction to compress seat-time requirements and strengthen the curriculum.

Block schedules. Programs would operate on a fixed classroom-meeting schedule, consistent from term to term. The students would know their full schedule before they begin, and they would know when they would be done.

Cohort enrollment. Students would be grouped as cohorts in the same prescribed sequence of classroom and nonclassroom instruction. This would promote the emergence of in-person and online learning communities widely acknowledged as an effective strategy for improving student outcomes.

Embedded remediation. Most remediation would be embedded into the program curriculum, supplemented as necessary through instruction that is parallel and simultaneous to the program, rather than preceding it. Students would develop stronger math and English skills as they build program competencies, using the program as context. There would be clear expectations related to developing competency in basic skills, with rigorous assessment.

Transparency, accountability, and labor market relevance. Programs should be advertised, priced, and delivered as high-value programs tightly connected to regional employers and leading to clearly defined credentials and jobs. Clear and consistent information about tuition, duration, success rates, and job placement outcomes would enable students to assess costs and benefits, see the reasons for continued attendance, and make the sacrifices necessary to achieve program goals. Programs would be held accountable to rigorous and consistent national accreditation standards.

Program-based student support services. Even as these changes in the fundamental structure of certificate programs accelerate persistence to completion, it also should be anticipated that many students will require support services to overcome problems of transportation, child care, and other personal, family, and economic pressures. Ideally, these supports would be embedded into the programs themselves, with faculty helping to identify student needs and supporting resources and using technology and partnerships with employers and community-based organizations to supplement traditional support services.

The building blocks, then, are known, and some successful examples are in place. The challenge is for educational institutions, especially community colleges, to seize the opportunities. If they expand their certificate offerings to all high-demand, good-wage jobs in their regional economies, and if they adopt the strategies and practices associated with high rates of completion, they can help ensure that certificate awards and attainment levels will increase much more rapidly than degrees. If the nation does them right, certificate programs can be a vitally important national strategy in boosting postsecondary attainment and maintaining the advances in labor force skills that have helped drive national economic growth.

Taming globalization

Harvard’s Dani Rodrik is one of the most perceptive U.S. commentators on international trade and investment. For more than a dozen years, as signaled by the title of one of his earlier books, Has Globalization Gone Too Far?, Rodrik has questioned antiseptic depictions by other academic economists of a global economy bound to flourish if composed of minimally regulated national economies. Events have proved him right to do so.

The title of another of Rodrik’s books, One Economics, Many Recipes, summarizes equally succinctly the perspective underlying The Globalization Paradox: Countries with economic development policies as different as those of China, Germany, and South Korea have been successful in boosting productivity, wages, and living standards over considerable time periods. The observation would be banal coming from anyone but an economist.

At the analytical core of The Globalization Paradox, Rodrik identifies irreconcilable tensions among national sovereignty, democracy, and hyperglobalization, this last term signifying an international economy deeply integrated and lightly regulated. Rodrik believes that any two of these can coexist, but not all three. Domestic politics in sovereign states leads to restrictions on trade and investment as capital and labor maneuver to protect market positions and jobs. Sovereign states can accommodate deep globalization only by attenuating the free play of domestic politics. The world can have both democracy and hyperglobalization only if states give up a measure of their sovereignty to supranational authorities.

Rodrik concludes that the only palatable combination of these three goals is globalization fettered to some extent by restrictive trade policies, capital controls, and economic regulations, a description that fits the world economy during most of the 20th century. He would like to find a way back to some such regime, while minimizing the excesses of the 1920s and 1930s and the negative consequences of trade restrictions and anticompetitive regulations for productivity, efficiency, and growth.

The path to globalization

Rodrik argues that the “trilemma” involving national sovereignty, democracy, and hyperglobalization stems from the vacuum created, beginning in the 1970s, by the crumbling of the Bretton Woods arrangements that emerged at the end of World War II and created the World Bank and International Monetary Fund (IMF). The General Agreement on Tariffs and Trade (GATT) emerged several years later as the intended bridge to a more comprehensive International Trade Organization. In the United States, however, Senate support for treaty approval could not be found, even for membership in the GATT. The United States participated only as a signatory until the World Trade Organization (WTO) finally superseded GATT in the mid-1990s.

Because the trade regime that emerged from Bretton Woods was initially so weak, globalization was slow to emerge. For several decades after World War II, national governments had considerable freedom to manage, or mismanage, trade because GATT was at first full of loopholes and exceptions. Countries could easily channel investment to favored sectors or even to individual firms, and otherwise implement industrial policies. Japan did so, as did rapidly developing economies such as Taiwan and South Korea, each in its own way.

Over time, successive rounds of multilateral trade negotiations under GATT auspices tightened or closed off many of the early exceptions to open trade, nibbling away at the policy tools available to member governments. Many multinational businesses favored this agenda. Economists supplied rhetorical support, theoretical justification, and applause. So did the subset of political scientists who took the view that government failure was more common and more consequential than market failure. Global economic integration deepened.

Deregulation had begun as World War II ended, with the United States and other governments relaxing or removing wartime controls over production, prices, and the labor market. The deregulatory push continued once it became plain that no return to the Great Depression of the 1930s was imminent, and after the mid-1970s, it strengthened markedly. In the United States, wartime production miracles had burnished the credibility of the big industrial corporations that dominated the economy. Business leaders resumed the insistent push for freedom of action and absence of accountability that had been part of the ethos of capitalism from its beginnings, set aside only temporarily during depression and war. At the same time, technological change accelerated, spurred by military innovations motivated by cold war competition.

The 1970s opened with the Nixon administration decoupling the dollar from gold, allowing it to depreciate so as to improve the competitiveness of U.S. exports, thus undermining the Bretton Woods agreement. The decade ended with completion of the Tokyo Round of trade negotiations. Earlier GATT rounds having cut tariffs to generally low levels, the Tokyo Round addressed nontariff barriers, such as the negotiated quotas that from 1969 to 1974 limited U.S. imports of steel from Japan and the European Economic Community. The Uruguay Round followed, with negotiators turning to trade in services, including financial services, even though these were poorly understood compared to trade in goods and many nonfinancial services. The waves of financial innovation that followed meant that the opacity of many financial markets increased even as greater effort went into exploring their workings. The Uruguay Round also led, finally, to the formation of the WTO, which incorporated the more comprehensive structure for the governance of trade and investment envisioned in the 1940s.

Already the infamous tuna-dolphin case had brought the implications of globalization home to many Americans in ways that seemed quite different from restrictions on imports of steel or apparel. When the United States tried to bar imports of tuna that were caught using techniques that were illegal in the United States because they resulted in the death of many dolphins, a GATT panel ruled the U.S. action an unfair trade practice. The case reified for environmentalists and many others the challenge to democratic decisionmaking and sovereignty posed by the emerging trade regime. At the same time, more Americans were beginning to see in wage stagnation and “deindustrialization” unadvertised consequences of free trade and globalization, while Europeans worried over jobless growth, and many developing countries continued to regard international economic affairs as a game rigged to their disadvantage.

A new framework?

In Rodrik’s view, the world continues to grapple, blindly, with the stresses and strains of the unresolved trilemma he describes. The post-2007 slump showed hyperglobalization to be a recipe for instability and in many countries for growing inequality. Today, for instance, the 17 members of the European Union (EU) that adopted the euro find themselves trying to cope with still-spreading disarray triggered by Greece, which piled up debt and now cannot devalue its currency as it once would have. Yet the EU and the eurozone bloc within it remain anomalous. Rather than consolidating, states continue to fragment as a result of conflict among national groups. The number of sovereign states is increasing, and more of them merit the appellation nation-state. If all politics is local, the notion of some sort of globalized democracy must seem nonsensical; there are too many billions of people with too many local interests. If hyperglobalization implies economic instability (or perhaps more precisely, a set of dynamics embodying a greater number of possible metastable suboptimal equilibria), and if no one wants to pull back from national democratic politics and sovereignty (except perhaps a few remnant idealists and prototycoons bent on monopolization), then it is hyperglobalization that must give way. The question becomes: What sort of new framework to seek?

Rodrik’s answer envisions a world of sovereign democratic states coupled loosely through networks of international agreements on discrete areas such as environmental protection, labor standards, and immigration. No longer would an unaccountable body such as the WTO, with its legalistic panels and proceedings and a staff of true believers, be delegated to decide tradeoffs between conflicting goals such as free trade and the protection of dolphins. Within agreed limits, states could set their own rules and require that imports meet them. Poor nations would be free to claim that, for them, child labor represented a lesser evil than large numbers of young people neither attending school nor working, but they could not expect the world to be fully open to goods so produced because importing countries would be free to bar the products of child labor. National governments would have a freer hand in adopting and implementing economic and industrial policies, irrespective of IMF and World Bank strictures.

If much of this seems rather schematic, it is because Rodrik leaves two nodes in his tripartite analysis schematic. He delves deeply into international economics, but not into sovereignty or democracy. Both of these, and especially democracy, come in shades and degrees that he does not explore. As a result, readers get little help in trying to puzzle through the implications of some of his proposals.

Similarly, in Rodrik’s discussion of global governance, one searches vainly for meaningful alternatives other than some form of one-adult, one-vote “democracy” on a worldwide basis and an anarchic global system of the sort depicted by realist students of international relations, in which sovereign states pursue mandarin-defined interests in the absence of supranational authority, or even widely accepted norms of behavior. Yet the actual world system today is one in which sovereignty for all but a few states depends on adherence to some such set of behavioral norms. After all, if the economic and military power of the United States were to be controlled by a less predictable government, only a bare handful of other countries could have reasonable confidence in their existential security. Yet U.S. behavior is predictable, relatively speaking (and not excepting the 2003 invasion of Iraq), because of a carefully crafted constitutional design that serves as a guarantor of weak central authority, making militaristic takeover of foreign policy nearly inconceivable.

Rodrik would no doubt acknowledge that there are many recipes for democracy, but in contrast to his extensive and well-documented treatment of international trade and investment, he has little to say on the subject. Nor does he always tell us where he himself comes down on questions such as how the power of business groups in the United States and elsewhere has shaped the globalization agenda, or how to remedy the impotence of the International Labor Organization.

Readers who value Rodrik’s insights into globalization may hope that in his next book he will dig more deeply into notions of sovereignty and democracy and how they might evolve in the world system of the future.

Green Urban Revival

In The Agile City, James S. Russell offers a blueprint for restructuring the settlement system of the United States to reduce greenhouse gas emissions and adapt to climate change. Russell wants to put an end to the “growth machine” of suburban expansion that pushes development away from metropolitan cores, generating a large carbon footprint in the process. In its place, he calls for urban intensification, advocating increasing density through ecofriendly building projects in established cities and suburbs. As the inherent efficiencies of proximity are realized, environmental stress would decline and adaptability would increase; economic, cultural, and social gains would follow.

Russell’s thesis is powerful, his reasoning tight, and his evidence persuasive. All told, The Agile City is one of the most compelling environmental treatises to appear in recent decades. It puts forward a realistic, potentially achievable plan for responding not only to the challenge of climate change, but also to the economic dilemmas posed by mounting fuel prices and unsustainable real estate practices. If Russell’s advice were to be followed, it could lead to an urban renaissance and a green revival of the U.S. economy. If anything, Russell tends to undersell his thesis, bypassing many of the positive economic consequences of urban revitalization. But he also underestimates the opposition that his scheme will encounter, particularly from antidevelopment activists posing as environmentalists.

Although highlighting urban places, The Agile City ranges over a broad terrain, moving from cities to inner suburbs, outer sprawl, the exurban fringe, and beyond. Russell’s thematic concerns are equally wide, encompassing the history of land speculation and development, the emergence of the national transportation grid, and the installation of municipal and regional waterworks. Such an expansive scope reflects the author’s contention that cities cannot be understood or managed as isolated entities. Environmental and financial considerations demand regionally based planning that takes into account the road, rail, and resource links between cities and their hinterlands.

Russell’s varied concerns come together in his dissection of the growth machine that has molded U.S. development since the mid-20th century. Subsidies for freeways and mortgages encouraged mass suburbanization, a process further fueled by procrustean zoning and antiurban rhetoric. As the development industry realized large and steady profits by subdividing cheap greenfield sites, it turned away from smaller and more idiosyncratic projects in established urban areas. As builders lost interest, so too did bankers, making it difficult to obtain financing for infill developments. Disinvestment in the core further propelled the frontier outward. Young householders were told to “drive to qualify” for mortgages, trading long commutes for supposedly bucolic lives on quarter-acre lots. As builders and their financiers focused on short-term profits, construction standards slipped, ensuring the rapid obsolescence of new housing tracts. As the suburban fringe burgeoned, road networks strained to keep pace. Transportation slowdowns coupled with tax incentives encouraged firms to follow their employees into the outskirts, giving rise to isolated office parks and corporate campuses accessible only by car.

This self-perpetuating suburbanization machine is already encountering limits. It relies heavily on cheap gasoline, which appears to be a thing of the past. The ongoing real estate debacle of 2008 illustrates the perversity of the underlying dynamic, as stressed consumers can no longer afford either their commutes or the spacious houses they acquired in the artificial boom of the early 2000s. Corporations are reconsidering their own drive to the periphery, as the costs of pedestrian-hostile office parks become evident. Despite the rise of electronic networks and social media, the face-to-face interactions fostered by high density still return dividends. The most creative and valuable employees of topline firms, moreover, often reject the suburban lifestyle, favoring city-based employers. Although occurring too recently to be mentioned in the book, the move by financial giant UBS from Stamford, Connecticut, to midtown Manhattan is emblematic of this process.

Urban revival

The current breakdown of the growth machine, as Russell shows, opens new opportunities for urban revitalization. City living is less ecologically demanding than suburban existence, because of transport efficiencies and the ease of heating and provisioning large, compact buildings. New environmentally friendly construction techniques promise to make dense urban and inner suburban clusters even more benign. Careful attention to planning and design can further enhance the livability of cityscapes, creating pleasing neighborhoods of appropriate scale. High-density projects can even enhance urban natural habitats, as parking lots are replaced by parks as well as buildings, while storm drains yield to bioswales. As Russell demonstrates, a number of innovative projects have proven economically and environmentally successful. In Vancouver, Canada, sought-after residential towers are ingeniously designed so that they do not block views and do not cast perpetual shadows on their neighboring structures. In Hamburg, Germany, the $10 billion HafenCity redevelopment project is creating an environment where cars “will be largely superfluous” and in which flood zones are transformed into public parks and plazas. Although a few of the ventures showcased are located in the United States, more are in Europe and Canada, where the bias against high-density urbanism is less pronounced. If the United States is to adequately confront the challenge of global warming, Russell contends, it must be willing to learn from techniques pioneered in other countries.

Although The Agile City calls for a regional perspective, most of its case studies focus on specific building projects. Such localized attention is not surprising, considering the author’s background as the architectural critic of Bloomberg News. Russell seems most at home discussing green design. Many low-tech strategies, such as cross-ventilation coupled with high ceilings, were standard before the spread of mass air conditioning made possible by cheap energy. Other favored approaches entail ingenious techniques of moderate sophistication, such as the use of geothermal heat pumps that take advantage of the constant temperatures found 20 feet below the earth’s surface. The integration of energy provisioning, waste minimization, and water conservation figures prominently in a number of the examples offered. As the author shows, runoff and wastewater can be transformed from detriments to amenities, filling pleasant waterways and irrigating pockets of park-like vegetation. In all cases, place-sensitive design is stressed, drawing on the accumulated wisdom of local traditions and taking into account climatic and topographic specificities. Whereas the old growth machine produced standardized, commodity-like buildings and complexes, the new urbanism aims for unique places of individual character.

Throughout the book, Russell emphasizes the need to guide development. But he calls for a new approach to planning, one that is flexible and proactive rather than rigid and reactive. He is impatient not only with cumbersome regulations, but also with some of the key tools of establishment environmentalism. The author associates environmental impact statements, for example, with a kind of tunnel vision that can thwart even the most ecologically sensible projects. Rigorous top-down planning, he contends, can lock in place inefficient approaches, forestalling innovation. The Agile City thus calls for a “loose-fit” strategy that actively lubricates innovative development. Russell also pushes voluntary methods of achieving environmental sustainability, particularly the U.S. Green Building Council’s Leadership in Energy and Environmental Design (LEED) rating system, which uses crowd-sourcing to promote progressive practices. Russell puts much store in the demonstration effect, implicitly contending that well-formulated ecodevelopment will sell itself if given the chance.

The Agile City advocates a pragmatic approach to environmental policy, eschewing utopianism. As is increasingly common in the environmental mainstream, Russell has no problem with the profit motive and certainly has no objection to development per se. He even foresees a continuing role for the suburban fringe, admitting that many people will remain committed to the lifestyle of the distant commuter. What he proposes is steering the growth process, by persuasion more than fiat, into an environmentally favorable mode. The key is to reclaim and intensify urban environments, transforming empty lots and abandoned factories into vibrant neighborhoods. As the property markets of pedestrian-friendly and mass transit–oriented cities such as New York, Boston, Washington, and San Francisco show, many people crave city life and are willing to pay a premium to live in safe metro neighborhoods. If the right kinds of development were promoted, concentrated communities could multiply and spread. Decaying central places could be revitalized, suburbs on transit lines could be transformed into genuine cities, and increasing numbers of people could be accommodated in attractive, affordable urban neighborhoods.

Obstacles to intensification

As Russell shows, environmentally responsible development is occurring in a number of U.S. cities. Portland, Oregon, in particular has embraced an anti-sprawl agenda that has generally proven successful. Overall, however, the scale of urban ecodevelopment in the United States remains modest, inadequate to spark the kind of metropolitan renaissance that The Agile City envisages. Russell tends to sidestep this issue, preferring to emphasize achievements over blockages. He does, however, acknowledge some of the obstacles that would be faced by any effort to apply his model in wholesale form. These range from the simple inertia of a risk-averse real estate industry to the ensconced incentive structures maintained by growth-machine politics and the continued existence of “brain-dead regulatory regimes.” In a few passages, he also concedes the impediments posed by self-proclaimed environmental activists seeking to maintain their own neighborhoods and lifestyles. NIMBY (not in my backyard) opposition, Russell allows, is most pronounced in affluent coastal metropolitan areas, especially those of California, that best fit the urban deepening program that he advances. Such communities, he argues, “may preserve only what they know, in a very limited way, usually at high cost.”

Although Russell acknowledges the NIMBY challenge to his vision, I am not convinced that he does so to an adequate extent. Growth-averse urban communities may not merely preserve what they know, but may also forestall all of the changes needed to create genuinely agile cities. Russell writes as if persuasion, demonstration, and good design will be enough to weaken the stranglehold of antidevelopment activists on the planning process in affluent U.S. cities and suburbs. In the book’s epilogue, he champions community workshops where local citizens will discover that “developers are not ogres” and build consensus around the ideals of ecodevelopment. Elsewhere he contends that “you cannot fault people for abhorring high-intensity development when it comes in the form of . . . ill-proportioned buildings.”

If only it were that simple. My own perspective is no doubt colored by living in one of the most development-averse towns in the country, Palo Alto, California, but I am not convinced that any amount of evidence or any kind of argument will convince neighborhood activists of the need to increase the density of their own cities. Palo Alto leaders have long acknowledged their responsibility to add affordable housing, especially along the commuter rail line that bisects the city, but they are perennially stymied by the vehemence of property owners. The much-mocked “Palo Alto process” ensures that everything from a minor modification of a single house to the erection of a small condominium complex moves along at a snail’s pace, precluding the integrated ecodevelopment projects that Russell champions. Wrapping themselves in a green mantle, quality-of-life stalwarts equate any additional housing units with amplified traffic and other assorted socioenvironmental ills, and do everything they can to stymie development. In the process, what they most effectively protect are their own property values; in Palo Alto, shoddy 1,500-square-foot tract homes from the 1960s sell for more than $1 million, maintaining their valuation even during the recent housing bust.

The blockages sketched above would be of little matter if they were limited to one small town. Unfortunately, the same attitudes are widely held throughout the Bay Area and are encountered as well in prosperous cities and inner suburbs elsewhere in the country. A prime case is the proposed redevelopment of the abandoned Hunters Point naval shipyard in southeastern San Francisco. For more than a decade, city officials have been working on plans with a private firm to cleanse the area of its environmental contaminants and construct a new community of some 10,500 homes. Opposition from environmental and social groups, however, has stalled the project. In 2008, two-thirds of San Francisco residents voted to allow development to proceed, but pending lawsuits ensured that nothing would happen. A July 2011 Superior Court ruling finally held in favor of the construction plan, with minor modifications. As a result, a new community will perhaps begin to emerge within a few years. But during the span of time since this and other urban initiatives were proposed, most new development in the region has been shunted outside of the Bay Area into formerly agricultural lands of the Central Valley, generating traffic jams of monumental proportions across the intervening hills. Today, the “instant city” of Mountain House in inland San Joaquin County has one of the highest foreclosure rates in the state, standing as a monument to the maladaptive practices of the suburban growth machine.

Before the agile cities that Russell envisages are to emerge, stakeholders in the current system must first acknowledge that many of the prime candidates for urban intensification are not merely unresponsive at present but positively sclerotic. Alongside Russell’s growth machine of the ever-expanding suburban frontier stands an equally perverse antigrowth machine in the affluent cores. Although those who push the levers of the antidevelopment mechanism claim to be protecting the environment, their actions have extraordinarily negative environmental consequences when examined at a broad scale. Unfortunately, many members of the green community have enthusiastically embraced only the second part of the 1960s credo, “think globally, act locally.” When it comes to urban intensification, acting locally sometimes seems to preclude thinking systematically, let alone globally.

In the U.S. economic crisis of 2008–2009, the suburban growth machine collapsed, brought to ruin by its own financial excesses along with rising fuel prices. The resulting blow to the U.S. economy as a whole attests to the centrality of the housing sector. As yet, few signs of recovery are visible, and a significant proportion of the population remains in desperate straits. Reigniting economic growth is thus a pressing need, but any attempt to do so by restarting the suburbanization apparatus would be a foolish if not impossible gambit. Fortunately, another option is now on the table, laid out in detail by Russell. If all unreasonable obstacles to development were removed, favored metro areas would immediately see major building booms; if the guidelines proposed by Russell were followed, the resulting growth would not only deliver major environmental benefits but would simultaneously help jump-start the national economy.

Cybersecurity in the Private Sector

The United States is facing major cyber attacks by criminals and agents of foreign governments, with attacks penetrating the military establishment and the private sector alike. The need to better protect military systems is well recognized. But protecting the private sector has drawn less attention, and even some resistance. Yet protecting the private sector is increasingly critical, because the United States, more than most if not all other nations, draws heavily on private corporations for ensuring national security. Corporations manufacture most of the nation’s arms. Corporations produce most of the software and hardware for the computers the government uses. And corporations, under contract with the government, carry out many critical security functions, including the collection and processing of intelligence and the conduct of covert operations.

The heavy reliance on the private sector for security, including cybersecurity, was accentuated during the Bush administration, which contracted out significant parts of missions that previously were carried out in-house. This trend has been only slightly scaled back during the Obama administration. In short, it is now almost impossible to imagine a secure United States in which security is provided only to the computers and Internet used by the public sector.

At first blush, it might seem that the private sector would strongly support new measures that enhance cybersecurity. Many of the crimes committed in cyberspace, such as electronic monetary theft, impose considerable costs on private companies. The same holds for industrial espionage, especially from other countries, which deprives U.S. corporations of the fruits of long investments in R&D and grants major advantages to unfair competitors. In addition, if cyber warfare were to break out, many of the assets that would probably be damaged belong to private corporations. And not to be overlooked, businesses are operated by individuals who, one assumes, have a vested interest in the nation’s security.

Businesses, however, have not displayed a strong commitment to cybersecurity, to put it mildly. One reason is philosophical. Many corporate leaders, and the think tanks that are associated with the corporate world, maintain one version or another of a libertarian or conservative laissez-faire approach, basically holding that they are best left alone, not regulated, free to follow their own courses. They further hold that their main duty is to their shareholders, who own the corporations, and not to the common good.

In addition to such philosophical arguments, however, there are a number of more practical barriers that have limited, and continue to limit, efforts to improve private sector security.

Missing ingredients

Some security experts argue that current incentives for corporations to better secure their computer systems are not aligned in ways that promote voluntary actions. The credit card system is often cited as an example where incentives are correctly aligned, dating from the 1970s when the government placed limitations on consumer liability for fraudulent charges. This change in liability motivated the industry to develop needed security measures.

No such realignment has occurred in cyberspace, however. Despite the rapid rise of Internet bank theft, for example, companies often deem the costs of adding security measures to be higher than the losses from cyber theft. Also, the effects of industrial espionage are often not in evidence for several years, beyond the horizons of many CEOs who are concerned primarily with the short-term profits and stock prices of their corporations. In order to prevent what corporate officials call “negative publicity or shareholder response,” companies regularly have absorbed losses incurred by security breaches rather than reveal weaknesses in cybersecurity systems, all in the name of protecting reputations and shareholder values.

Fred H. Cate, the director of the Center for Applied Cybersecurity Research at Indiana University and a member of a number of government-appointed information-security advisory boards, has pointed out that cybersecurity is desperately in need of better incentives. According to Cate: “Although it’s often preferable to let markets create appropriate incentives for desired behaviors, in some instances, government intervention is necessary. Information security is one of those instances. The threats are too broad, the actors too numerous, the knowledge levels too unequal, the risks too easy to avoid internalizing, the free-rider problem too prevalent, and the stakes too great to believe that markets alone will be adequate to create the right incentives or outcomes.”

Other experts point to a need for increased regulatory control, done wisely. Phillip Bond, president and CEO of TechAmerica, a technology industry association, has said that “it is crucial that Congress act and pass national legislation addressing security and data breach.” Black Hat, an international conference series of experts on information security, advocates for an approach called “smart regulation,” which articulates an end state and allows the regulated to figure out how best to get to it.

Providing cybersecurity via regulations, however, has encountered resistance by many private-sector representatives who hold that forcing companies to comply will harm their flexibility and ability to innovate. Further, businesses consider it unfair and inappropriate to demand a task of private industry—securing critical national assets—that is essentially a public-sector responsibility. Some in the private sector regard security requirements imposed by the government as unfunded mandates, as a form of taking, and demand that the government cover the costs involved. Still others believe that the government might be exaggerating the cybersecurity threats.

For such reasons, corporations have been slow to act, and may be slowing even more. For example, according to Lieberman Software’s 2009 survey of information-technology (IT) executives in the private sector, the limited cybersecurity measures that corporations have created have been largely motivated by cost savings, with minimal concern for the protection of information. The survey also found that the majority of private-sector IT budgets are decreasing, with many corporate employees citing the financial effects of the recession.

Costs of inaction

The bottom line is that incentives have not been changed much, few regulations have been enacted, and no major public funds for private security have been made available. The net result is that cybersecurity is weak for work carried out in and by the private sector, and public security is paying the price.

The costs can be seen in the major security breaches in recent years, including at major defense contractors such as General Dynamics, Boeing, Raytheon, and Northrop Grumman. Examples include a theft in which top-secret plans for the F-35 Joint Strike Fighter were stolen by hackers, presumed to be Chinese. According to the report of the House of Representatives’ Select Committee on U.S. National Security and Military/Commercial Concerns with the People’s Republic of China, known widely as the Cox Commission report, China “has stolen classified information on all of the United States’ most advanced thermonuclear warheads, and several of the associated reentry vehicles.”

Indeed, China often comes under suspicion. Richard Clarke, who served as special adviser to the White House on cybersecurity during the early 2000s, reported in his 2010 book Cyber War: The Next Threat to National Security and What to Do About It, that Chinese hackers targeting U.S. corporations have stolen “secrets behind everything from pharmaceutical formulas to bioengineering designs, to nanotechnology, to weapons systems, to everyday industrial products.”

The defense establishment also has fallen victim to a number of high-profile instances of cyber espionage. In 2008, foreign intruders managed to break into the secure computers of the U.S. Central Command, which oversees the wars in Iraq and Afghanistan. William J. Lynn, deputy secretary of Defense, described the attack as “a network administrator’s worst fear” and “the most significant breach of U.S. military computers ever.” And in 2007, unknown attackers, probably working for a foreign government, stole several terabytes of information from the Departments of Defense and State. The amount stolen was nearly equal to the amount of information in the Library of Congress.

Clearly, the military’s own computers—produced by the private sector, run on software from the private sector, and often maintained and serviced by the private sector—are not well protected. The networks of the Department of Homeland Security (DHS) also are poorly protected. In a typical incident, a private firm that was contracted in 2007 to build, secure, and manage DHS networks failed to properly complete the job, and for months DHS was left unaware as hackers, probably based in China, stole information from its computers.

Richard Clarke described another revealing instance in his book. Before the 1990s, the Pentagon relied primarily on expensive, but highly secure, specialized software designed by in-house programmers and a few select defense contractors. However, Microsoft, a major donor to both political parties since 1998, convinced government officials that in order to reduce costs and improve interoperability, the military should use off-the-shelf commercial software, particularly Microsoft software. The transition to Microsoft’s software, some of it manufactured in China, greatly weakened the security of the military computers. Moreover, in one telling incident, the U.S.S. Yorktown, a Ticonderoga-class cruiser, became inoperable after the Windows NT system administering its computers crashed.

After this and what Clarke called a “legion of other failures of Windows-based systems,” the Pentagon considered a shift to free, open-source operating systems, such as Linux. The code of open-source software can be adapted by the user, and so the government would be free to tailor the system to the particular security needs of various agencies. Microsoft has refused to allow many federal agencies and corporations to view or edit its source code, thereby limiting agencies’ ability to fix security flaws and system vulnerabilities. However, a switch to Linux would have greatly reduced Microsoft’s business with the government. The company was already fiercely opposed to regulation of its products’ security features. Microsoft “went on the warpath,” pouring money into lobbying Congress against regulations, Clarke recalled, adding that “Microsoft’s software is still being bought by most federal agencies, even though Linux is free.”

James Lewis, a cybersecurity expert at the Center for Strategic & International Studies, has summed up the situation by declaring that the nation’s digital networks are “easily” accessed by foreigners, both competitors and opponents. In a report titled Innovation and Cybersecurity Regulation, published by the center in 2009, Lewis flatly stated that “the market has failed to secure cyberspace. A ten-year experiment in faith-based cybersecurity has proven this beyond question.”

The government is not scoring much better. As Richard Clarke has asked, “Now, who’s defending us? Who’s defending those pipelines and the railroads and the banks? The Obama administration’s answer is pretty much, ‘You’re on your own,’ that Cyber Command will defend our military, Homeland Security will someday have the capability to defend the rest of the civilian government—it doesn’t today—but everybody else will have to do their own defense. That is a formula that will not work in the face of sophisticated threats.”

Government resistance

During his tenure at the White House, Clarke attempted to implement an ambitious regulatory regime, but his plan was largely blocked by antiregulation forces within the administration of George W. Bush. According to Stewart A. Baker, who served as first assistant secretary of homeland security for policy at the time, the proposed strategy “sidled up toward new mandates for industry,” would have required the formation of a security research fund that would draw on contributions from technology companies, and would have increased pressure on Internet companies to provide security technology with their products. These requirements were viewed as too onerous for business by many within the administration, and ultimately “anything that could offend industry, anything that hinted at government mandates, was stripped out,” Baker recalled.

Many corporations shy away from cybersecurity responsibility. As Terry Zink, program manager for Microsoft Forefront Online Security, has pointed out, Internet service providers (ISPs) and individual users “don’t have the expertise or financial motivation required to do it. Government can recruit bright individuals to create a program of cyber-health monitoring and they have access to the resources necessary to implement such a program. …And let’s face it, government doesn’t have to have a profit motive to support something. The government supports lots of programs that otherwise lose money in the name of the public good.”

Moreover, it is unclear who is responsible for maintaining the security of many critical assets. Currently, DHS is working to secure the “.gov” domain, but not critical infrastructure. As President Obama stated in 2009 when unveiling his administration’s cybersecurity policy review, “Let me be very clear: My administration will not dictate security standards for private companies.” This is a statement of considerable import, given that many of the missions carried out in other nations by the military (or by companies owned and managed by the state) are carried out in the United States by the private sector. It might be argued that the president merely said he will not “dictate” which security standards must be followed but will find some other way of compelling or persuading the private sector to adhere to these standards. However, the president did not declare or follow such a course, keeping instead within the custom of previous administrations.

Modest proposals

Several commissions have studied what must be done to enhance cybersecurity in cooperation with the private sector. Their reports tend to follow the optimal design approach: They list what ought to be done in a world free from ideological biases and political capture, and thus read like the plans of someone who is designing a building to be erected on a heretofore empty lot. Moreover, the reports typically do not examine the costs of the recommended measures, as if there were no difficulties in attaining public funds or imposing costs on the private sector. It is hence not surprising that the recommendations have been largely ignored, although after considerable delay the government did create a Cyber Command within the U.S. Strategic Command.

Even rather elementary cybersecurity measures have not been introduced. To provide but one example, Richard Clarke, recognizing the limits of what can be done, argued for at least one low-cost, high-yield measure: introducing filters at the major “backbone” Internet service providers, run by the biggest private Internet companies, through which nearly all Internet traffic passes at one point or another. Filters could be set on the main ISPs to scan for malware and cyber attacks with no noticeable delay in the speed of Web surfing. This would help secure the vast majority of information transmitted on the Internet. But business interests and privacy concerns made the idea controversial and prevented its implementation. Finally, in May 2011, two and a half years into the Obama administration, after new cyber attacks that penetrated the personal accounts of numerous public officials, the National Security Agency began to work with ISPs on a program—on a trial basis and with voluntary participation—to protect against such attacks.

Another needed measure calls for separating critical infrastructure, such as the electrical grid, from the Internet. This is a basic security measure that would significantly enhance the nation’s protection against potential cyber threats without exacting high financial costs, or any privacy costs. Clarke argued that such a step has not been taken because it cannot be done without additional federal regulation, which butts up against the stance of industry officials that they should be left largely unregulated with regard to cybersecurity. Corporations have taken this stand despite the fact that cybersecurity experts have easily been able to access power grid controls from public Internet sites.

Indeed, federal policy is currently moving in the opposite direction, toward greater connectivity for the nation’s energy grid. The “smart grid” initiative advanced by President Obama is designed, in the administration’s view, to save money and update an aging energy grid by integrating various power suppliers into one system using a digital network. But research shows that a smart grid will introduce new problems, notably greater vulnerability to cyber attack as power grid resources become increasingly linked to the Internet.

The government could significantly enhance its protection from cyber threats by working toward greater security for computer-component supply chains. The individuals who led the Obama administration’s cybersecurity review—Jack Goldsmith, a former assistant attorney general, and Melissa Hathaway, a cybersecurity expert—warned of the “excessive security vulnerabilities” that result from “the use of commercial off-the-shelf software produced in a global supply chain in which malicious code can be embedded by stealth.” However, the government continues to use generic software and hardware, including some produced overseas.

Needed actions

After major online breaches in 2011 of the CIA, the U.S. Senate, and the International Monetary Fund, among many others, the Obama administration unveiled several proposals to enhance cybersecurity. In May 2011, it presented a proposal that seeks to knit together a “security infrastructure” encompassing the public and private sectors, with actions proposed at the state, federal, and international levels.

The plan features a new national data-breach reporting policy that would require private institutions to report security breaches to the affected individuals and the Federal Trade Commission (FTC) within 60 days. The FTC would be responsible for enforcing penalties against violators, and DHS would have a regulatory role over the cybersecurity of critical infrastructure, which would include defense firms and major telecommunication and banking institutions. The plan also seeks to introduce mandatory minimum sentences for cyber criminals. On the international level, the proposal resolved to work with “like-minded states” to create an international standard for cybersecurity.

The proposal has encountered some resistance from the private sector. Larry Clinton, president of the Internet Security Alliance, told a House Homeland Security panel studying the plan that it creates “counter-incentives” by requiring businesses to publicly disclose their security status. He argued that if corporations feel they may be “named and shamed for finding [security breaches], we’ve created exactly the wrong incentives.” It should be noted, however, that the proposal would protect companies from liability if they voluntarily share threat information with DHS for cyber investigations. The libertarian response can be summed up by the headline of an article in the August/September 2011 issue of Reason magazine: “The Cybersecurity-Industrial Complex: The feds erect a bureaucracy to combat a questionable threat.”

Republicans intend to formally respond to the proposed plan in October 2011, after deliberations by a party task force in the House. But the proposal has already been met with concerns about “regulation for regulation’s sake,” as Representative Bob Goodlatte (R-VA) put it. The plan has found some measured support from Senator Susan Collins (R-ME), who has worked extensively on the issue alongside Senators Joseph Lieberman (I-CT) and Tom Carper (D-DE). Indeed, there is at least hope that security threats can foster bipartisan cooperation, as happened when Senators John McCain (R-AZ) and John Kerry (D-MA) joined forces to support U.S. actions as part of the international intervention in Libya.

Given the escalating cyber threats and a reinvigorated White House drive, cybersecurity may now gain more attention. However, increased attention—or, even better, firm government action—is far from a secure bet.

From the Hill – Fall 2011

Applied research facing deep cuts in FY 2012 budget

The funding picture for most R&D agencies in fiscal year (FY) 2012 is relatively bleak, as it is for most government functions. In actions taken thus far, basic research has generally been supported, whereas applied research programs would see deep cuts, in some cases of more than 30%.

Congress is not expected to complete work on the FY 2012 budget before the new fiscal year begins on October 1. Debates over spending for this budget will focus on its composition, not its size, because the Budget Control Act of 2011, passed on August 2 to allow the U.S. debt ceiling to rise, set total discretionary spending at $1.043 trillion, down 0.7% or $7 billion from FY 2011.

Although the budget situation for R&D in the FY 2012 budget looks bad, the situation for the following fiscal year could be even worse. Office of Management and Budget Director Jacob Lew sent a memo to department and agency heads dated August 17 providing guidance on the preparation of their FY 2013 budget requests. The memo directs agencies to submit requests totaling at least 5% below FY 2011 enacted discretionary appropriations and to identify additional reductions that would bring the total request to at least 10% below FY 2011 enacted discretionary appropriations.

The enactment of the Budget Control Act will not end the bitter controversies about the size and role of federal spending. The act requires $1.2 trillion in budget cuts during the next 10 years and calls for a 12-member congressional commission to find additional savings of up to $1.5 trillion by December 2011. The additional cuts can come from any combination of sources: reductions in discretionary spending, changes in entitlement programs, or revenue increases. However, if the commission can’t agree on a package of reductions, automatic cuts will occur. These cuts, which would have the greatest effect on discretionary spending, would occur on January 2, 2013.

In the meantime, work continues on the FY 2012 budget. As of August 31, the House had approved 9 of the 12 appropriations bills, the Senate only 1. Here are some highlights.

The House-passed Defense Appropriations Bill would increase funding for basic research by 4.3% and cut applied research by 3.2%.

In the House-passed bill, Department of Energy (DOE) R&D funding is set at $10.4 billion, $166 million less than FY 2011 and $2.6 billion less than the president’s request. The Office of Science, which sponsors most of DOE’s basic research, is funded at $4.8 billion, a 0.9% cut from FY 2011 and $616 million or 11.4% less than the president’s request. Applied research programs face much larger cuts. The Energy Efficiency and Renewable Energy (EERE) program is funded at $1.3 billion, a $527 million or 40.6% cut and $1.9 billion or 59.4% less than the president’s request. The Fossil Energy R&D program is facing a cut of 22.5%.

In the House-passed bill funding the Department of Agriculture (USDA), R&D funding is $1.7 billion, $350 million or 17.3% less than the president’s request and a $334 million or 16.6% decrease from last year. The Agricultural Research Service, the USDA’s intramural funding program, would receive $988 million, down 12.8%, and the National Institute of Food and Agriculture, the extramural funding program, would receive $1.01 billion, down 16.7%.

In the House Appropriations Committee–approved Commerce, Justice, Science, and Related Agencies Appropriations Act, the National Oceanic and Atmospheric Administration (NOAA) and the National Aeronautics and Space Administration (NASA) face large cuts, whereas the National Science Foundation (NSF) and the National Institute of Standards and Technology (NIST) fare better. Because of a cut to NOAA’s Operations, Research, and Facilities account, NOAA’s R&D spending will be down 9.2%. NASA is funded at $16.8 billion, down $1.6 billion or 8.9% from last year, with the largest cuts occurring in Space Operations, because of the end of the Space Shuttle program, and the Science Directorate, because of the cancellation of the James Webb Space Telescope. The bill funds NIST at $701 million, a $49 million or 6.6% decrease. NSF is funded at $6.8 billion, the same as in FY 2011. Decreases in the Major Research Equipment and Facilities account and other R&D equipment and facilities investments will be the main contributors to a small decrease in NSF’s R&D investment of $5 million or 0.1%.

The House-passed bill for the Department of Homeland Security (DHS) funds R&D at $416 million, down $296 million or 41.5% from last year.

The House Appropriations Committee–approved bill for the Department of the Interior increases R&D spending overall by $36 million or 4.7%. However, R&D would decline by $36 million or 6.3% at the Environmental Protection Agency (EPA) and by $30 million or 2.8% at the U.S. Geological Survey. The EPA’s Science and Technology Programs would be cut by 7.2% to $755 million, while the entire agency faces a 17.7% or $1.5 billion cut to $7.15 billion. The bill also includes a number of policy riders, many of which would limit the regulatory authority of the EPA.

The House passed the Military Construction and Veterans Affairs and Related Agencies Appropriations Act, 2012 (H.R. 2055) on June 14, and the Senate passed its version on July 20. The total R&D investment in the Senate bill is estimated at $1.16 billion, $144 million or 14.2% more than the president’s request and $2 million or 0.2% more than FY 2011, whereas the House would spend $1.06 billion, $44 million or 4.3% more than the president’s request and $98 million or 8.4% less than in FY 2011. Veterans Affairs (VA) also performs R&D for other federal agencies and nonfederal organizations, which is estimated at $720 million for FY 2012. Adding this non–VA-funded R&D brings the total for VA-performed R&D to $1.87 billion in the Senate bill and $1.77 billion in the House bill.

House approves patent reform bill

On June 23, the House approved by a vote of 307 to 117 a patent reform bill that would give inventors a better chance of obtaining patents in a timely manner and bring the U.S. patent system into line with those of other industrialized countries. The bill would also provide greater funding for the U.S. Patent and Trademark Office (PTO) to allow it to hire more examiners to deal with a backlog of more than 700,000 applications.

The Senate passed its own reform bill in March by a 95 to 5 vote, and differences between the two bills must be reconciled, most importantly the provisions on funding.

The Senate bill would deal with the underfunding of the patent office by allowing the PTO to set its own user fees and keep the proceeds instead of returning some of them to the U.S. Treasury. The House bill originally contained this provision, but it was changed during floor debate after Appropriations Committee Chairman Hal Rogers (R-KY) and Budget Committee Chairman Paul Ryan (R-WI) argued that the provision would limit congressional oversight of the patent office by circumventing the appropriations process. The bill was changed so that excess user fees would be placed into a PTO-dedicated fund that appropriators would direct back to the PTO. Some members of the Senate have objected to this change, arguing that it would jeopardize more funding because in the past, appropriators often have not spent funds that were supposed to be dedicated for a specific purpose. Despite these concerns, the Senate is expected to approve the House change.

The House and Senate bills would align the United States with international practice by granting patents to the first person to file an application. Currently, patents are awarded to the first to invent a product or idea.

The change in the application system was favored by large technology and pharmaceutical companies, which argued that it would put the United States in sync with other national patent offices around the world and make it easier to settle disputes about who has the right to a certain innovation.

Many smaller companies and inventors opposed the change, however, arguing that it favored companies that could hire legions of lawyers to quickly file applications for new permutations in manufacturing or product design.

The Obama administration said it supported the House-passed bill as long as the “final legislative action [ensures] that fee collections fully support the nation’s patent and trademark system.”

Climate adaptation programs under fire

Republicans in Congress, having already blocked any legislation to mitigate climate change, are now aiming at programs dealing with climate change adaptation. Several of the appropriation bills being considered in the House would bar the use of funds for climate programs. Meanwhile, members of the House Science, Space and Technology Committee are fighting an effort by NOAA to create a National Climate Service, which would consolidate the majority of climate programs into a single office to achieve efficiencies.

Rep. John Carter (R-TX) has sponsored an amendment to the DHS spending bill that would prohibit the department from participating in the administration’s Interagency Task Force on Climate Change Adaptation. Carter said participation is unnecessary because NOAA and the EPA already have climate programs.

Rep. Steve Scalise (R-LA) has proposed an amendment to the House Agriculture Committee appropriations bill that would prohibit funding for implementing the June 3, 2011, USDA regulation on climate change adaptation. Scalise’s staff said the congressman was concerned that the adaptation policy could lead the department to introduce greenhouse gas restrictions for farmers. The regulation calls for the USDA to “analyze how climate change may affect the ability of the agency or office to achieve its mission and its policy, program, and operational objectives by reviewing existing programs, operations, policies, and authorities.” It notes that “Through adaptation planning, USDA will develop, prioritize, implement, and evaluate actions to minimize climate risks and exploit new opportunities that climate change may bring. By integrating climate change adaptation strategies into USDA’s programs and operations, USDA better ensures that taxpayer resources are invested wisely and that USDA services and operations remain effective in current and future climate conditions.”

The spending bill for DOE would make a 10.6% cut in a program that includes climate research. In a statement, the House Energy Committee said that “The Climate and Environmental Sciences program devotes the majority of its funding to areas not directly related to the core mandate of science and technology research leading to energy innovations. Further, climate research at the Department of Energy is closely related to activities carried out in other federal agencies and may be better carried out by those organizations. The Department proposes to eliminate medical research focused on human applications in order to direct limited funds to on-mission purposes, and the Department should apply the same principles to climate and atmospheric research.”

At a June 22 hearing, members of the House Science, Space and Technology Committee criticized NOAA’s proposed National Climate Service. Chairman Ralph Hall (R-TX) expressed concern that NOAA was implementing the service without congressional approval and questioned the service’s impact on existing research.

NOAA Administrator Jane Lubchenco testified that the service had not yet been established and that, when it was, it would allow NOAA to meet increased demand for information needed to address drought, floods, and national security while strengthening science. She said, “This proposal does not grow government, it is not regulatory in nature, nor does it cost the American taxpayer any additional money. This is a proposal to do the job that Congress and the American public have asked us to do, only better.”

Robert Winokur, deputy oceanographer of the Navy, testified that although he could not comment on the structure of a climate service, the Navy needed actionable climate information focused on readiness and adaptation and that the current structure makes it difficult to obtain the needed information.

Several members, including Rep. Dana Rohrabacher (R-CA), reiterated Hall’s concern that NOAA was moving ahead with the climate service despite a provision in the FY 2011 appropriations bill that prohibits using funds for it. Rep. Paul Broun (R-GA) accused Lubchenco of “breaking the law” by still working to establish the climate service.

Role of government in social science research funding questioned

On June 2, the House Science, Space and Technology Subcommittee on Research and Science Education held a hearing to explore the government’s role in funding social, behavioral, and economic (SBE) science research. Chairman Mo Brooks (R-AL) said the goal of the hearing was not to question the merits of the SBE sciences, but to ask whether the government should support these “soft sciences.”

Ranking Member Daniel Lipinski (D-IL) said that support for NSF’s Directorate for Social, Behavioral, and Economic Sciences must continue, because the research funded is critical to programs such as disaster relief, benefits multiple government agencies and society, and is not funded elsewhere.

Myron Gutmann, assistant director of the SBE directorate, and Hillary Anger Elfenbein, associate professor at Washington University in St. Louis, supported Lipinski’s statement by touting the social and fiscal value of various directorate grants. Gutmann pointed to a study of auction mechanisms that was used by the Federal Communications Commission in developing auctions of spectrum, which he said ultimately netted the U.S. Treasury $54 billion. Gutmann also cited a National Institutes of Health (NIH) study on economic matching theory that led to better matching of organ donors and recipients, resulting in an increase in the number of organs available for transplant and saving lives. He argued that if funding is cut for SBE research, society will be deprived of solutions to its problems.

Elfenbein stressed that the application of basic research within the directorate’s purview is often unknown and can take years to be realized. She argued that grants should not be singled out for termination based solely on the title of the grant application. She said that in 2007, a member of Congress singled out her grant to be cut because of its title, at about the same time as the U.S. military contacted her about how the research could be applied in fighting the wars in Iraq and Afghanistan.

Peter Wood, president of the National Association of Scholars, supported the vast majority of SBE research, but said that a small portion of the research is politicized and should be eliminated. In response, Elfenbein noted that the peer-review process greatly diminishes the politicization of science.

Diana Furchtgott-Roth, a senior fellow at the Hudson Institute, stated that the majority of the NSF directorate’s research could be carried out by other organizations. She cited the economic research done by Adam Smith as proof that researchers can be successful without government funds and at low cost. Elfenbein and Gutmann responded that although other organizations fund SBE research, they typically fund only a small portion of the needed research, and each organization primarily targets applied research that fits its mission needs. This means, they said, that some fields of science would remain unexplored.

When asked by Rep. Brooks how the government should cut funds if it were forced to do so, Elfenbein argued that the peer-review process should determine which projects get funded. She added that large cuts would turn away future Ph.D. candidates from the field. Gutmann said that even in a fiscally constrained period, it was important not to cut seed corn.

Science and technology policy in brief

• On July 27, U.S. District Judge Royce Lamberth ruled in favor of the Obama administration policy allowing the National Institutes of Health to conduct research on human embryonic stem cells. He dismissed a suit in which plaintiffs claimed that federal law forbids the use of government funds for the destruction of embryos. Meanwhile, Rep. Diana DeGette (D-CO) vowed to continue to push her bill that would codify into law the rules permitting ethical human embryonic stem cell research. DeGette reintroduced the Stem Cell Research Advancement Act (H.R. 2376) with a new Republican lead cosponsor, Rep. Charlie Dent of Pennsylvania. The bill would allow federal funding for research on stem cells obtained from donated embryos left over from fertility treatments, as long as the donations meet certain ethical criteria.

• On July 26, the House Natural Resources Subcommittee on Fisheries, Wildlife, Oceans and Insular Affairs held a hearing to examine how the National Oceanic and Atmospheric Administration’s (NOAA) fishery research affects the economies of coastal communities that rely on commercial or recreational fisheries. NOAA’s fishery research and management are performed under the Magnuson-Stevens Act, which requires the use of “best available science” in establishing catch limits. However, several representatives at the hearing shared the concern of constituents working in the fishing industry who do not believe that best available science is being used because many stock assessments are old, incomplete, or missing. Recommendations to improve NOAA’s regulatory decision-making included more partnerships with universities and other research institutions, greater transparency, and improved stakeholder involvement in data collection and standard setting.

• The Department of Commerce Economic and Statistics Administration issued a report on women in the science, technology, engineering, and math (STEM) workforce. The report, Women in STEM: A Gender Gap to Innovation, found that women continue to be “vastly underrepresented” and hold fewer than 25% of STEM jobs. On a brighter note, women in STEM jobs earned 33% more than comparable women in non-STEM jobs, the report said.


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science (www.aaas.org/spp) in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

People Get Ready

In recent years, we have witnessed a dramatic increase in the economic cost and human impact from hurricanes, earthquakes, floods, and other natural disasters worldwide. Economic losses from these catastrophic events increased from $528 billion (1981–1990) to more than $1.2 trillion over the period 2001–2010.

Although 2011 is not yet over, an exceptional number of very severe natural catastrophes, notably the March 2011 Japan earthquake and tsunami, will make it a record year for economic losses. In the United States, the southern and midwestern states were hit by an extremely severe series of tornadoes in April and May, and at about the same time, heavy snowmelt, saturated soils, and over 20 inches of rain in a month led to the worst flooding of the lower Mississippi River since 1927. Hurricane Irene in August caused significant flooding in the Northeast and was responsible for at least 46 deaths in the United States. Global reinsurance broker Aon Benfield reports that U.S. losses from Irene could reach as high as $6.6 billion; Caribbean losses from Irene are estimated at nearly $1.5 billion.

Given the increasing losses from natural disasters in recent years, it is surprising how few property owners in hazard-prone areas have purchased adequate disaster insurance. For example, although it is well known that California is highly exposed to seismic risk, 90% of Californians do not have earthquake insurance today. This is also true for floods. After the flood in August 1998 that damaged property in northern Vermont, the Federal Emergency Management Agency (FEMA) found that 84% of the homeowners in flood-prone areas did not have insurance, even though 45% of these individuals were required to purchase this coverage because they had a federally backed mortgage. In the Louisiana parishes affected by Hurricane Katrina in 2005, the percentage of homeowners with flood insurance ranged from 57.7% in St. Bernard Parish to 7.3% in Tangipahoa when the hurricane hit. Only 40% of the residents in Orleans Parish had flood insurance.

Similarly, relatively few homeowners invest in loss-reduction measures. Even after the series of devastating hurricanes that hit the Gulf Coast states in 2004 and 2005, a May 2006 survey of 1,100 adults living in areas subject to these storms revealed that 83% of the respondents had taken no steps to fortify their home and 68% had no hurricane survival kit.

For reasons we will explain in this article, many homeowners are reluctant to undertake mitigation measures for reducing losses from future disasters. This lack of resiliency has made the United States not only very vulnerable to future large-scale disasters but also highly exposed financially. Given the current level of government financial stress, it is natural to wonder who will pay to repair the damage caused by the next major hurricane, flood, or earthquake.

To alleviate this problem, we propose a comprehensive program that creates an incentive structure that will encourage property owners in high-risk areas to purchase insurance to protect themselves financially should they suffer losses from these events and to undertake measures to reduce property damage and the accompanying injuries and fatalities from future disasters.

Why are losses increasing?

Two principal socioeconomic factors directly influence the level of economic losses due to catastrophic events: exposed population and value at risk. The economic development of Florida highlights this point. Florida’s population has increased significantly over the past 50 years: from 2.8 million inhabitants in 1950 to 6.8 million in 1970, 13 million in 1990, and 18.8 million in 2010. A significant portion of that population lives in the high-hazard areas along the coast.

Increased population and development in Florida and other hurricane-prone regions means an increased likelihood of severe economic and insured losses unless cost-effective mitigation measures are implemented. Due to new construction, the damage from Hurricane Andrew, which hit Miami in 1992, would have been more than twice as great if it had occurred in 2005. The hurricane that hit Miami in 1926 would have been almost twice as costly as Hurricane Katrina had it occurred in 2005, and the Galveston hurricane of 1900 would have had total direct economic costs as high as those from Katrina. This means that independent of any possible change in weather patterns, we are very likely to see even more devastating disasters in the coming years because of the growth in property values in risk-prone areas. In addition, recent climate studies indicate that the United States should expect more extreme weather-related events in the future.

Table 1 depicts the 15 most costly catastrophes for the insurance industry between 1970 and 2010. Many of these truly devastating events occurred in recent years. Moreover, two-thirds of them affected the United States.

Increasing role of federal disaster assistance

Not surprisingly, the disasters that occurred in now much more populated areas of the United States have led to higher levels of insurance claim payments as well as a surge in the number of presidential disaster declarations. Wind coverage is typically included in U.S. homeowners’ insurance policies; protection from floods and earthquakes is not.

The questions that need to be addressed directly by Congress, the White House, and other interested parties are:

• Who will pay for these massive losses?

• What actions need to be taken now to make the country more resilient when these disasters occur, as they certainly will?

In an article published this summer in Science about reforming the federally run National Flood Insurance Program (NFIP), we showed that the number of major disaster declarations increased from 252 over the period 1981–1990, to 476 (1991–2000), to 597 (2001–2010). In 2010 alone there were 81 such major disaster declarations.

This more pronounced role of the federal government in assisting disaster victims can also be seen by examining several major disasters that occurred during the past 60 years as shown in Table 2. Each new massive government disaster relief program creates a precedent for the future. When a disaster strikes, there is an expectation by those in the affected area that government assistance is on the way. To gain politically from their actions, members of Congress are likely to support bills that authorize more aid than for past disasters. If residents of hazard-prone areas expect more federal relief after future disasters, they then have less economic incentive to reduce their own exposure and/or purchase insurance.

TABLE 1
15 most costly catastrophe insurance losses, 1970–2010 (in 2011 U.S. dollars)

Cost ($ billion) | Event | Victims (dead or missing) | Year | Area of primary damage
48.6 | Hurricane Katrina | 1,836 | 2005 | USA, Gulf of Mexico, et al.
37.0 | 9/11 Attacks | 3,025 | 2001 | USA
24.8 | Hurricane Andrew | 43 | 1992 | USA, Bahamas
20.6 | Northridge Earthquake | 61 | 1994 | USA
17.9 | Hurricane Ike | 348 | 2008 | USA, Caribbean, et al.
14.8 | Hurricane Ivan | 124 | 2004 | USA, Caribbean, et al.
14.0 | Hurricane Wilma | 35 | 2005 | USA, Gulf of Mexico, et al.
11.3 | Hurricane Rita | 34 | 2005 | USA, Gulf of Mexico, et al.
9.3 | Hurricane Charley | 24 | 2004 | USA, Caribbean, et al.
9.0 | Typhoon Mireille | 51 | 1991 | Japan
8.0 | Maule earthquake (Mw 8.8) | 562 | 2010 | Chile
8.0 | Hurricane Hugo | 71 | 1989 | Puerto Rico, USA, et al.
7.8 | Winter Storm Daria | 95 | 1990 | France, UK, et al.
7.6 | Winter Storm Lothar | 110 | 1999 | France, Switzerland, et al.
6.4 | Winter Storm Kyrill | 54 | 2007 | Germany, UK, Netherlands, France

Reducing exposure to losses from disasters

Today, thanks to developments in science and technology, we can more accurately estimate the risks that different communities and regions face from natural hazards. We can also identify mitigation measures that should be undertaken to reduce losses, injuries, and deaths from future disasters, and can specify regions where property should be insured. Yet many residents in hazard-prone areas are still unprotected against earthquakes, floods, hurricanes, and tornados.

We address the following question: How can we provide short-term incentives for those living in high-risk areas to invest in mitigation measures and purchase insurance?

We first focus on why many residents in hazard-prone areas do not protect themselves against disasters (a behavioral perspective). We then propose a course of action that overcomes these challenges (a policy perspective). Specifically, we believe that multiyear disaster insurance contracts tied to the property and combined with loans to encourage investment in risk-reduction measures will lead individuals in harm’s way to invest in protection and therefore be in a much better financial position to recover on their own after the next disaster. The proposed program should thus reduce the need for disaster assistance and be a win-win situation for all the relevant stakeholders as compared to the status quo.

Empirical evidence from psychology and behavioral economics reveals that many decisionmakers ignore the potential consequences of large-scale disasters for the following reasons:

Misperceptions of the risk. We often underestimate the likelihood of natural disasters by treating them as below our threshold level of concern. For many people, a 50-year or 25-year storm is simply not worth thinking about. Because they do not perceive a plausible risk, they have no interest in undertaking protective actions such as purchasing insurance or investing in loss-reduction measures.

Ambiguity of experts. Experts often differ in their estimates of the likelihood and consequences of low-probability events because of limited historical data, scientific uncertainty, changing environmental conditions, and/or the use of different risk models. The variance in risk estimates creates confusion among the general public, government entities, and businesses as to whether the risk merits attention. Often, decisionmakers simply use estimates from their favorite experts that provide justifications for their proposed actions. We recently conducted an empirical study of 70 insurance companies and found that insurers are likely to charge higher premiums when faced with ambiguity than when the probability of a loss is well specified. Furthermore, they tend to charge more when there is conflict among experts than when experts agree on the uncertainty associated with the risk of flood and hurricane hazards.

Short horizons for valuing protective measures. Many households and small businesses project only a few years ahead (if not just months) when deciding whether to spend money on loss-reduction measures, such as well-anchored connections where the roof meets the walls and the walls meet the foundation to reduce hurricane damage. This myopic approach prevents homeowners from undertaking protective measures that can be justified from an economic perspective after 5 or 10 years. This short-sighted behavior can be partly explained by decisionmakers wanting to recoup their upfront costs in the next year or two even though they are aware that the benefits from investing in such measures will accrue over the life of the property.

Procrastination. If given an option to postpone an investment for a month or a year, there will be a tendency to delay the outlay of funds. When viewed from a long time perspective the investment will always seem worthwhile, but when one approaches the designated date to undertake the work, a slight delay always seems more attractive. Moreover, the less certain one is about a correct course of action, the more likely one is to choose inaction. There is a tendency to favor the status quo.

TABLE 2
Examples of federal aid as a percentage of total disaster losses

Disaster | Federal aid as % of total damage
Hurricane Ike (2008) | 69%
Hurricane Katrina (2005) | 50%
Hurricane Hugo (1989) | 23%
Hurricane Diane (1955) | 6%

Source: Michel-Kerjan and Volkman-Wise (2011)

Mistakenly treating insurance as an investment. Individuals often do not buy insurance until after a disaster occurs and then cancel their policies several years later because they have not collected on their policy. They perceive insurance to be a bad investment by not appreciating the adage that the “best return on an insurance policy is no return at all.”

Failure to learn from past disasters. There is a tendency to discount past unpleasant experiences. Emotions run high when experiencing a catastrophic event or even viewing it on TV or the Internet. But those feelings fade rapidly, making it difficult to recapture these concerns about the event as time passes.

Mimetic blindness. Decisionmakers often imitate the behavior of others without analyzing whether the action is appropriate for them. By looking at what other firms in their industry do, or following the example of their friends and neighbors, decisionmakers can avoid having to think independently.

In addition to these behavioral biases, there are economically rational reasons why individuals and firms in hazard-prone areas do not undertake risk-reduction measures voluntarily. Consider the hypothetical Safelee firm in an industry in which its competitors do not invest in loss-prevention measures. Safelee might understand that the investment can be justified when considering its ability to reduce the risks and consequences of a future disaster. But the firm might decide that it cannot now afford to be at a competitive disadvantage against others in the industry that do not invest in loss prevention. The behavior of many banks in the years preceding the financial crisis of 2008–2009 is illustrative of such a dynamic.

Families considering whether to invest in disaster prevention may also find the outlay to be unattractive financially if they plan on moving in a few years and believe that potential buyers will not take into account the lower risk of a disaster loss when deciding how much they are willing to offer for the property. More generally, homeowners might have other rational reasons for not purchasing disaster coverage or investing in risk-reduction measures when this expense competes with immediate needs and living expenses within their limited budget. This aspect has more significance today given the current economic situation the country faces and the high level of unemployment.

Reconciling the short and long term

The above examples demonstrate that individuals and businesses focus on short-term incentives. Their reluctance to invest in loss-prevention measures can largely be explained by the upfront costs far exceeding the short-run benefits, even though the investment can be justified in the long run. Only after a catastrophe occurs do the decisionmakers express their regret at not undertaking the appropriate safety or protective measures.

But it does not have to be that way. We need to reorient our thinking and actions so that future catastrophes are perceived as an issue that demands attention now.

Knowing that myopia is a human tendency, we believe that leaders concerned with managing extreme events need to recognize the importance of providing short-term economic incentives to encourage long-term planning. We offer the following two concepts that could change the above-mentioned attitudes.

Extend financial responsibility over a multiyear period. Decisionmakers need an economic incentive to undertake preventive measures today, knowing that their investments can be justified over the long term. The extended financial responsibility and reward could take the form of multiyear contracts, contingent or delayed bonuses, reduced taxes, or subsidies.

The public sector should develop well-enforced regulations and standards to create level playing fields. Government agencies and legislative bodies need to develop well-enforced regulations and standards, coupled with short-term economic incentives to encourage individuals and the private sector to adopt cost-effective risk-management strategies. All firms in a given industry will then have good reasons to adopt sound risk-management practices without becoming less competitive in the short run.

Insurance mechanisms can play a central role in encouraging more responsible behavior in three ways. First, if priced appropriately, insurance provides a signal of the risk that an individual or firm faces. Second, insurance can encourage property owners in hazard-prone areas to invest in mitigation measures by providing them with premium reductions to reflect the expected decrease in losses from future disasters. Third, insurance supports economic resiliency. After a disaster, insured individuals and firms can make a claim to obtain funds from their insurance company, rather than relying solely on federal relief, which comes at the expense of taxpayers.

A multiyear approach

We propose that insurance and other protective measures be tied to the property rather than the property owner as currently is the case. We recommend the following features of such a program:

Required insurance. Since individuals tend to treat insurance as an investment rather than a protective mechanism, it may have to be a requirement for property located in hazard-prone areas, given the large number of individuals who do not have coverage today.

Vouchers for those needing special treatment. We recommend a new disaster insurance voucher program that addresses issues of equity and affordability. This program would complement the strategy of risk-based premiums for all. Property owners currently residing in a risky area who require special treatment would receive a voucher from FEMA or the U.S. Department of Housing and Urban Development as part of its budget or through a special appropriation. This program would be similar to the Supplemental Nutrition Assistance Program (food stamps) and the Low Income Home Energy Assistance Program, which enable millions of low-income households in the United States to meet their food and energy needs every year. The size of the voucher would be determined through a means test in much the same way that the distribution of food stamps is determined today.

Multiyear insurance tied to property. Rather than the normal one-year insurance contract, individuals and business owners should have an opportunity to purchase a multiyear insurance contract (for example, five years) at a fixed annual premium that reflects the risk. At the end of the multiyear contract, the premium could be revised to reflect changes in the risk.

Multiyear loans for mitigation. To encourage adoption of loss-reduction measures, state or federal government or commercial banks could issue property improvement loans to spread the costs over time. For instance, a property owner may be reluctant to incur an upfront cost of $1,500 to make his home more disaster-resistant but would be willing to pay the $145 annual cost of a 20-year loan (calculated here at a high 10% annual interest rate). In many cases, the reduction in the annual insurance premium due to reduced expected losses from future disasters for those property owners investing in mitigation measures will be greater than their annual loan costs, making this investment financially attractive.
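The loan arithmetic here is easy to check. Below is a minimal sketch, in Python, of a standard fixed-payment (annuity) amortization together with the break-even comparison against an annual premium reduction. The $1,500 principal, 10% rate, and 20-year term follow the example above; the $220 premium discount is a purely hypothetical figure, and the exact annual payment depends on the amortization convention assumed, so it need not reproduce the dollar amount quoted in the text.

```python
# Minimal sketch of the loan arithmetic discussed above (illustrative only).
# annual_payment() is the standard fixed-payment (annuity) formula; the
# premium_reduction value is a hypothetical assumption used for comparison.

def annual_payment(principal: float, annual_rate: float, years: int) -> float:
    """Fixed annual payment that fully amortizes `principal` over `years`."""
    if annual_rate == 0:
        return principal / years
    growth = (1 + annual_rate) ** years
    return principal * annual_rate * growth / (growth - 1)

if __name__ == "__main__":
    principal = 1_500.0        # upfront cost of the mitigation measure
    rate = 0.10                # assumed annual interest rate
    years = 20                 # loan term
    payment = annual_payment(principal, rate, years)

    premium_reduction = 220.0  # hypothetical annual insurance premium discount

    print(f"Annual loan payment:      ${payment:,.2f}")
    print(f"Annual premium reduction: ${premium_reduction:,.2f}")
    if premium_reduction > payment:
        print("The premium discount more than covers the loan payment.")
    else:
        print("The premium discount does not cover the loan payment.")
```

When the premium reduction exceeds the annual payment, the mitigation investment pays for itself each year, which is the condition the proposal relies on.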

Well-enforced building codes. Given the reluctance of property owners to invest in mitigation measures voluntarily, building codes should be designed to reduce future disaster losses and be well enforced through third-party inspections or audits.

Modifying the National Flood Insurance Program

The National Flood Insurance Program (NFIP) was established in 1968 and covers more than $1.2 trillion in assets today. The federally run program is set to expire at the end of September 2011, and options for reforms are being discussed. We believe that revising the program offers an opportunity to take a positive step in implementing our above-mentioned proposal.

We recently undertook an analysis of all new flood insurance policies issued by the NFIP over the period January 1, 2001, to December 31, 2009. We found that the median length of time before these new policies lapsed was three to four years. On average, only 74% of new policies were still in force one year after they were purchased; after five years, only 36% were still in force. The lapse rate is high even after correcting for migration and does not vary much across different flood zones. We thus propose replacing standard one-year insurance policies with multiyear insurance contracts of 5 or 10 years attached to the property itself, not the individual. If the property is sold, then the multiyear flood insurance contract would be transferred to the new owner.
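For readers who want to see how such retention figures can be computed, here is an illustrative sketch in Python. It is not the authors' actual analysis, and the policy records are made up; it simply tallies the share of hypothetical policies still in force one and five years after purchase.

```python
# Illustrative sketch (not the authors' actual analysis): compute the share of
# flood policies still in force N years after purchase from hypothetical
# (purchase_date, lapse_date) records, where lapse_date=None means still active.
from datetime import date

policies = [
    (date(2001, 3, 1), date(2004, 2, 28)),
    (date(2002, 6, 15), None),
    (date(2003, 1, 10), date(2005, 1, 9)),
]

def share_in_force(records, years, as_of=date(2009, 12, 31)):
    """Fraction of policies that survived at least `years` after purchase."""
    eligible = survived = 0
    for start, end in records:
        horizon = date(start.year + years, start.month, start.day)
        if horizon > as_of:            # full horizon not yet observable
            continue
        eligible += 1
        if end is None or end >= horizon:
            survived += 1
    return survived / eligible if eligible else float("nan")

print(f"Still in force after 1 year:  {share_in_force(policies, 1):.0%}")
print(f"Still in force after 5 years: {share_in_force(policies, 5):.0%}")
```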

Premiums for such multiyear insurance policies should accurately reflect risk and be lower for properties that have loss-reduction features. This would encourage owners to invest in cost-effective risk-reduction measures, such as storm shutters to reduce hurricane damage. If financial institutions or the federal government provide home improvement loans to cover the upfront costs of these measures, the premium reduction earned by making the structure more resistant to damage is likely to exceed the annual payment on the loan.

A bank would have a financial incentive to make such a home improvement loan because it would have a lower risk of catastrophic loss to the property that could lead to a mortgage default. The NFIP would have lower claims payments due to the reduced damage from a major disaster. And the general public would be less likely to have large amounts of their tax dollars going for disaster relief, as was the case with the $89 billion paid in federal relief after the 2004 and 2005 hurricane seasons and resulting floods. A win-win-win-win situation for all!

A governmental program that has some similarities to our proposal is the Property Assessed Clean Energy (PACE) program, which has been adopted by 27 states to promote energy efficiency. PACE provides short-term rewards to encourage investments in technologies that will have long-term benefits. It provides long-term funding from private capital markets at low cost and needs no government subsidies or taxes. It increases property values by making heating and cooling less expensive, and it enjoys broad bipartisan support nationwide at state and local levels. Several features of the program that encourage property owners to make their homes more energy-efficient parallel the features that would encourage property owners to make their homes more disaster-resistant:

Multiyear financing. Interested property owners opt in to receive financing for improvements that is repaid through an assessment on their property taxes for up to 20 years. PACE financing spreads the cost of energy improvements such as weather sealing, energy-efficient boilers and cooling systems, and solar installations over the expected life of these measures and allows for the repayment obligation to transfer automatically to the next property owner if the property is sold. PACE solves two key barriers to increased adoption of energy efficiency and small-scale renewable energy: high upfront costs and fear that project costs won’t be recovered before a future sale of the property.

Annual savings. Because basic energy-efficiency measures can cut energy costs by up to 35%, annual energy savings will typically exceed the cost of PACE assessments. The up-front cost barrier actually turns into improved cash flow for owners in much the same way that the reduction of annual insurance premiums could exceed the annual loan costs.

Transfer to new property owner. Like all property-based assessments, PACE assessments stay with a property after sale until they are fully repaid by future owners, who continue to benefit from the improvement measures. The multiyear insurance and mitigation contracts we propose would operate in the same way.

Now is the time

The nation has entered a new era of catastrophes. Exposure is growing, and the damage from disasters over the next few years is likely to exceed what we have experienced during this past decade. When the next catastrophe occurs, the federal government will very likely come to the rescue—again. If the public sector’s response to recent disasters is an indicator of its future behavior, new records will be set with respect to federal assistance.

In order to avoid this outcome, we recommend that the appropriate governmental bodies undertake an economic analysis of the benefits and costs of the proposed multiyear insurance and risk-reduction loan programs compared to the current system of private and public insurance and federal disaster assistance.

We need bold leadership for developing long-term strategies for dealing with low-probability, high-consequence events. If Congress authorizes a study that examines these and other proposals when the NFIP comes up for renewal in September, it will be a major step forward in setting a tone for addressing the challenges of managing catastrophic risks. The United States is at war against natural hazards and other extreme events. Winning this war will be possible only if public policy integrates behavioral factors much more systematically into efforts to find sustainable solutions. As we have indicated, taking these steps will be difficult because of human reluctance to change. But we know what steps need to be taken. All it takes is the courage to act and the initiative to do so now.

Science Policy Tools: Time for an Update

All of us involved in science and technology (S&T) policy are fond of commenting on the increasing pace of change, the upheavals caused by novel technologies and expanded scientific understanding, and the unprecedented challenges to Earth’s resources and natural systems. Yet we typically find ourselves responding to these developments within the conceptual framework established by Vannevar Bush more than 60 years ago. From time to time, we need to step back from the specific challenges we face to reflect on the effectiveness of the assumptions, the strategies, and the institutions that shape our responses. Does the policy framework that underlies the U.S. S&T enterprise need to be updated?

My short answer is that many things need to change if the United States is to continue to be a leader in S&T and ensure that the American people are the beneficiaries. Government agencies and many other institutions are of a different era and ill-equipped to function well in today’s world. But to that I would add several caveats: Positive change is very hard to bring about in this system and usually comes slowly; there are fundamental elements of the Vannevar Bush philosophy that should be protected; and in the U.S. political system, especially at this moment in time, we should be careful what we ask for! With those provisos in mind, I will offer a few suggestions for policy changes that are feasible and would help move the country forward.

The federal government is in dire need of an enhanced interagency mechanism to coordinate S&T-related activities, share information, and work with Congress to obtain more flexibility in funding interagency activities.

First, a word about Vannevar Bush and his 1945 report Science—the Endless Frontier. So much has been written and said about Vannevar Bush and his report that I have to be reminded occasionally to actually read it again. It really is an amazing document, for its content and foresight as well as its brevity (about 33 pages plus appendices in the National Science Foundation’s 1990 reprinted version).

Bush argued that science and the federal R&D system that proved to be so successful during World War II would be important to the nation’s progress in peacetime, which turned out to be dominated by the Cold War and arms race with the Soviet Union. Bush made three main points:

“Scientific progress is essential.” It is needed to meet the nation’s needs: the war against disease, national security, and public welfare. To accomplish this, he recommended federal support of basic research in universities and medical schools; strengthening applied research in federal agencies, guided by a Science Advisory Board reporting to both the executive and legislative branches; and creating incentives for industry to fund research.

“We must renew our scientific talent.” He recommended a program of federal support for scholarships and research fellowships, with special immediate attention given to those returning from the war.

“The lid must be lifted.” He recommended the formation of a board of civilian scientists and military officials to review all secret government scientific information and release, as quickly as possible, everything that did not have to be kept secret.

And to implement these recommendations, Bush put forward a plan of action that included the creation of a new civilian federal agency, the National Research Foundation (NRF), to take on the task of funding (basic) research and education in universities in all fields, including health, medicine, and long-range military research.

These three points and plan of action made up Bush’s vision and strategy to ensure that the federal government would continue its investment in science in the postwar years.

One further comment should be made about Bush’s report. He has often been criticized for oversimplifying his arguments for a robust federal research investment by accepting a linear model of progress: Basic research should be carried out without an application in mind; basic and applied research are at opposite poles; all technological advances are the result of research; and the nation that does the research will reap most of the benefits. To some extent, these notions are as much a reflection of how the public and policymakers thought about the role of science in World War II as they were statements of fact by Bush. It can be argued that none of these is entirely correct. Indeed, Bush himself would have agreed. But I offer a word of caution. Although scientists are comfortable engaging in “nonlinear thinking,” the same is not true for the general public and most policymakers. So although it is useful to revisit Bush’s assumptions in an effort to craft the most effective means to argue for the importance (perhaps even unique importance) of S&T, we should proceed with caution, lest we find that the message that is received is not the message intended.

The Bush effect

Much of Vannevar Bush’s vision has come to pass, even if not entirely as he intended. But today’s world is very different from that at the end of World War II, and Bush could not have been expected to foresee developments such as globalization and the rise of multinational corporations, the collapse of the Soviet Union and the rise of terrorism, Moore’s Law and the information revolution, erratic swings in U.S. politics, and other factors that have placed S&T in a precarious place in 21st-century U.S. society.

Bush’s notion that “scientific progress is essential” to meet the nation’s health, national security, and public welfare needs has become accepted by policymakers of all political stripes and by the majority of voters. That said, the genuine need to address immediate national issues, such as unemployment, the lack of affordable health care, inadequate K-12 education for most Americans, and many others, tends to crowd out important long-range goals, including investments in basic research. But even for short-term objectives, the uncoordinated federal support structure for R&D is ineffective in aligning more-applied R&D with urgent national needs.

The public tends to support the investment of taxpayer dollars in scientific research but has little understanding of science, particularly the nature of research or how results are translated into things people need. Aside from medicine, it is not easy for the public to see the connections between research and the things that are most important in their lives. Moreover, deep partisan divides on almost all issues and a media focused on entertainment rather than news make it almost impossible for the public to actually know what is going on in the country, let alone the rest of the world, in S&T or anything else. This disconnect with the public is perhaps the greatest threat to the future of the country’s research system.

Bush’s advice “to renew our science, engineering and technical talent” remains a priority (at least it is a subject of much study and political rhetoric), but the nation’s efforts to attract homegrown boys and girls to these careers, as well as to improve science, technology, engineering, and mathematics education, have been disappointing. The United States has been fortunate in attracting many of the brightest young women and men from other parts of the world to study and establish their careers here. But current U.S. policies and practices on visas and export controls are making the country less attractive as a place to study and work. Increasingly, bright young people are finding attractive opportunities elsewhere.

Bush’s recommendation that the “lid be lifted” on classified information was influential, at least initially. But there remain issues of overclassification and ambiguous categories such as “sensitive but unclassified.” In spite of laws designed to shine light on government, some federal agencies are inclined to hold back information that might be inconvenient or embarrassing. In addition, the imperative to make all data resulting from federally supported research available to researchers who want to confirm or refute various scientific claims has become especially challenging, in part because of the enormous volume of data involved, the cost of making it available to others, the need for software to interpret the data, and other factors. Yet the integrity of the scientific process depends on this kind of openness. The National Academies have focused on this issue and made recommendations in the 2009 National Research Council report Ensuring the Integrity, Accessibility, and Stewardship of Research Data in the Digital Age.

Bush’s “plan of action” to establish the NRF did not happen, at least not as he proposed. The National Science Foundation (NSF), with its presidentially appointed director and National Science Board, was established in 1950, but it was a very different agency with a narrower mission and a much smaller budget than Bush envisioned. It is doubtful that Bush’s model of the NRF would have been successful. Had it been established, pieces would probably have spun off as fields evolved and generated their own interested constituencies.

Bush’s so-called linear arguments, I believe, continue to underpin political support for federal funding, even if the processes leading to discovery, engineering design, innovation, and application are recognized to be more complex than Bush had indicated. Finding the right language to explain to the public how scientific discoveries make their way to applications and markets remains a work in progress.

Finally, Bush’s government-university (GU) partnership, in which federal agencies provide funding for academic research and for the construction and operation of experimental facilities at national and international laboratories that are open to university researchers, remains in place, at least for now. This partnership is, perhaps, the most important outcome of Bush’s report. With the federal agencies doing their best to select the “best people with the best ideas” as determined by expert peer review, the standards have been kept high, and the political manipulation of the research through congressional earmarking has been minimized. The resulting integration of research and education in the classrooms and laboratories of universities across the country has enabled the United States to build the most highly respected system of higher education in the world.

Another strength of the GU partnership, on the government side, is the plurality and diversity of federal agencies, each with a different mission and structure, that support academic research. It’s not what Bush intended, but it has benefits. Academic researchers can propose their ideas to several agencies, looking for the best fit and timing as well as sympathetic reviewers. Also, the agencies can focus their support on areas of science and engineering that are most relevant to their missions.

In recent decades, especially after passage of the Bayh-Dole Act in 1980, universities have established partnerships with companies, both as a means of providing their students with better access to industry and future jobs, and as a possible generator of revenues from the intellectual property created by faculty researchers. Thus, the two-way GU partnership has evolved into a more complex government-university-industry (GUI) partnership, and that trend is likely to continue. Industry’s support has grown steadily but still constitutes only 6% of total academic research funding, compared with 60% from the federal government, 19% from university funds, 7% from state and local governments, and 8% from other sources.

Thus, although much has changed at home and around the world, Bush’s GU (now GUI) partnership remains in place and is perhaps the most important outcome of Bush’s report. But now all three sectors are under stress, pressed to meet rising expectations with limited resources, and in the case of industry, faced with intense competition from abroad. The partnership is in trouble.

A troubled partnership

Universities are going through a difficult period of transition as the costs of higher education and, consequently, tuition continue to rise, and governors and legislators demand greater accountability while cutting their states’ contributions to public universities. It is not clear that the term “public university” should be applied to institutions that receive only 20% or less of their budgets from the state government. Private universities are less affected by state politics, but all universities face the burden of complying with a growing body of uncoordinated federal regulations and other reporting requirements related to faculty research that add more to the institutions’ costs than the 26% maximum overhead reimbursement for administrative costs that comes with government grants.

Universities with major investments in biomedical research, especially those with medical schools, face special challenges in planning and operations. When National Institutes of Health (NIH) budgets are going up, institutions hire more researchers (non–tenure-track faculty and postdoctoral researchers) and borrow money to build new buildings, with the expectation that the overhead on future grants will support the salaries and pay off the loans. When NIH budgets fail to match expectations, the institutions are left to cover the costs. The model makes sense only if NIH budgets continue to rise at a predictable rate, indefinitely. Large swings in NIH funding since the 1990s have exacerbated the situation, and thousands of bright young biomedical researchers have ended up in long-term postdoctoral positions or have left the field. As former National Academy of Sciences President Bruce Alberts has observed, it appears that the GU partnership in this important field is not sustainable.

One more point about the university side of the GU partnership. The nation is producing more Ph.D.s in some fields than there are jobs, or at least jobs that graduates want and are being trained for, and the imbalance is likely to get worse. This may be especially true for biomedical research, but it is a problem for some other fields as well. The nation may need more scientists and engineers, even Ph.D.s, but not in all areas. There are policy options that could be employed, but they are not easy. Given the rapid pace of change in this age of technology, it is simply not possible to predict what the specific needs will be 20 years from now. The ability of the United States to continue to be a world leader in technological innovation will depend not only on having the necessary technical talent, homegrown and from abroad, but also being able to retrain and redirect that talent in response to new developments. At the very least, universities need to ensure that their graduate programs include curricula and mentoring that will adequately prepare their students for careers very different from those of their professors. Some of the professional master’s programs do a good job and should be expanded so that all students pursuing graduate study have the option to earn a professional master’s degree, even if the Ph.D. is their ultimate objective.

Federal agencies have their own problems. Because the agencies’ budgets have not grown at a rate that matches the expansion of research opportunities, NSF, NIH, NASA, the Department of Energy (DOE), and other agencies are unable to provide adequate support for even the most meritorious research proposals and the necessary experimental facilities. Most researchers must apply for and manage multiple research grants to support a viable program and thus end up spending much of their time dealing with administration rather than science. The multiple grant applications and reviews also add to the administrative costs of the agencies, cutting into the efficiency of their operations. The agencies must try to plan with little certainty about their budgets for the next fiscal year or even the current fiscal year.

Every president has coherent goals and priorities when the budget request is put together, but Congress often does not share the president’s priorities, yet has no plan to offer as an alternative. It becomes painfully clear in the subsequent budget sausage-making that there is no consensus on national priorities or goals, no process to decide on optimum funding allocations, and no mechanism to provide stability in funding. It’s up to each agency to fight for the resources it needs to do its job, as required by law, regardless of how politically unpopular that job may be for a particular administration or Congress or, more accurately, the House and Senate appropriations subcommittees. The agencies complain that their operations are micromanaged by the subcommittees, and their decisions are often attacked by members of Congress who object to a grant with “sex” in the title or to a whole field such as climate science because they are unhappy with what the research reveals.

Congress has no mechanism for a serious discussion about S&T. The congressional committee structure, at least as regards S&T, makes little sense. Neither the House nor the Senate has an authorization or an appropriations committee that takes a broad overview of the entire federal S&T portfolio. The House Committee on Science, Space, and Technology is the closest thing, but its jurisdiction does not include NIH. In addition, Congress does not have any S&T advisory committees, at least not any that are visible. The authorization legislation for the defunded Office of Technology Assessment is still in place, and it would be a step in the right direction for Congress to again appropriate funds for it.

One research funding agency that has enjoyed favorable treatment by Congress over several decades, at least through 2003, is NIH. As a result, the NIH budget is roughly half of all federal research funding. But NIH has had a history of boom-bust budget fluctuations. The budget doubled between 1998 and 2003, remained flat through 2008, received a $10 billion infusion as part of the 2009 stimulus package, and has remained flat since. NSF and the DOE also have had to manage the stimulus bump. Managing such rapid ups and downs is difficult. The impact on universities, as has already been noted, can be severe. NIH Director Francis Collins has been fairly candid with his observations and cautions.

There is also the larger question of balance. Should the nation be devoting 50% of its research funding to biomedical research? It was just above 30% in the early 1990s. That might be the proper share, and voters are not complaining. But progress has been slow in many areas, and medical costs have been taking a steadily rising share of the gross national product. The problem is that the U.S. political system lacks a mechanism to even discuss balance or priorities in research funding.

Several agencies have to deal with other issues of balance in the federal R&D portfolio and the GU partnership. I’ll use the DOE national laboratories as an example. Clearly, the DOE labs have important functions. Because universities must provide an open environment for study and research, they are not appropriate sites for the type of classified weapons work being conducted at Los Alamos, Lawrence Livermore, and Sandia. Nor can universities afford to build and maintain the large experimental facilities of Fermilab, Brookhaven, Argonne, Jefferson, and others. The national labs also have the capability, at least in principle, of responding quickly to national needs. But there are some troubling issues with regard to all national labs: The roles of the labs were clear during World War II and the early Cold War years, but that is no longer the case. The labs cope with mixed signals from Washington and the ever-shifting political winds as agency heads come and go and White House and congressional priorities change. The nation probably does not need so many national labs with overlapping missions competing with one another for resources. But closing a lab can cause great hardship for states and communities. The process would require a research lab closing commission, and it could get very ugly. Science could become even further politicized. What could add substantial value to the federal R&D investment would be a much stronger research collaboration between university researchers and federal laboratories, not only those that harbor large experimental facilities but the other general-purpose laboratories as well. Accomplishing that would require significant changes in how the agencies fund R&D and how they manage their national labs. It might be worth running a few pilot programs to explore the possibilities.

Beyond the matters of balance, a number of other issues are troublesome for the agencies, including trends toward short-term focus, demands for deliverables, and increased accountability (assessment, milestones, roadmaps, etc.); a conservative peer-review system that is risk-averse; contentious issues of cost-sharing and overhead; challenges of planning, cost, and management of large research facilities; and political barriers to international collaboration. Some of these matters have been discussed by the National Academies, the National Science Board, and more recently by the American Academy of Arts and Sciences.

One further comment about the federal agencies. Just as the academic researchers supported by the government are expected to hold to the highest standards of performance in carrying out research and disseminating the results for the public good, the federal government, in turn, should be expected to operate in a manner that is open, transparent, fair, and honest. In other words, it should manifest integrity. Abuses can occur in both the executive and legislative branches. We have seen, not too long ago, that any science that seems to violate someone’s special interests (religious, ideological, or financial) is fair game for attacks, including amendments offered on the floor of the House of Representatives to kill specific NIH grants that are judged by some members to be offensive or wasteful of money. The integrity guidelines laid out by President Obama will help ensure that federal agencies do their part. There is no corresponding commitment on the part of Congress. The integrity of the GU partnership requires responsible behavior on all sides. To the extent that the partnership lacks integrity, the American people are denied the benefits and can, in some cases, be harmed.

With both sides of the GU partnership having problems, it should be no surprise to find that the partnership is in trouble. The risk of doing nothing to address the problems is substantial. The National Academies report Rising Above the Gathering Storm and its recent update point out that clouds are gathering that threaten the nation’s S&T enterprise and its standing in the world. And although the Gathering Storm stresses the threat to the competitiveness of U.S. industry and the related matter of quality jobs for Americans, the arguments also apply to other national needs such as national security, health and safety, environmental protection, energy, and many others that also depend on the nation’s strength in S&T and its science and engineering workforce. It is likely that Vannevar Bush would see the need for another path-breaking report to address the question of whether science in the United States is still “the endless frontier.”

A way forward

I will pass on the option of trying to be the next Vannevar Bush by proposing a new government science policy structure, but I will suggest three areas of possible policy reform that do not require reorganizing the federal government or challenging congressional authority. None of these are fully developed proposals, but they could be useful as a stimulus to discussion.

First, the federal government is in dire need of an enhanced interagency mechanism to coordinate S&T-related activities, share information, and work with Congress to obtain more flexibility in funding interagency activities. The whole of the federal S&T effort should be significantly greater than the sum of its parts. The National Science and Technology Council (NSTC) and its coordinating committees have done good work, for example, in helping to organize the National Nanotechnology Initiative in the Clinton administration, but the NSTC needs more clout. The White House and Congress should consider authorizing the NSTC and providing a line of funding in the White House Office of Science and Technology Policy budget for NSTC staffing and activities such as reports, workshops, and seed funding for interagency cooperative R&D efforts.

Second, the federal R&D agencies should be encouraged to experiment with new structures modeled on the Department of Defense’s Advanced Research Projects Agency (DARPA or ARPA at different times) that can invest in higher-risk, potentially transformative R&D and respond quickly to new opportunities. The DOE is trying such an experiment with ARPA-Energy, which was launched with funds from the stimulus package and is included in the president’s fiscal year 2012 budget request. Examples of other new initiatives are DOE’s Energy Innovation Hubs and the National Oceanic and Atmospheric Administration’s Climate Service. Political inertia is difficult to overcome, so initiatives of this kind will gain traction only with leadership by the president and S&T champions in Congress.

The third is more of a stretch. The nation may have arrived at a time in its history when it needs a new kind of policy-oriented, nonpartisan organization: a GUI policy partnership among the federal government, universities, and industry, with funding from all three, that could address important areas of U.S. S&T policy such as the conduct of research and mechanisms for the translation of research into applications. This organization would be a place where knowledgeable individuals who have experience in the relevant GUI sectors and who have a stake in the health of the nation’s S&T enterprise would have access to the relevant data and policy analysis and could engage in serious discussions about a range of policy issues related to S&T.

Such a GUI policy organization would support policy research in areas of strategic importance, collect and analyze relevant information about the state of S&T in the nation and the world, and perhaps develop policy options for consideration by decisionmakers. This organization might be able to fill the pivotal role that Roger Pielke Jr. calls the “honest broker.” It would not go beyond defining policy options, leaving the final choice of direction to elected officials or whoever is responsible. Were it to make recommendations for a specific course of action, it would soon find its independence and integrity challenged as competing interests sought to influence its decisions. But even without advocating specific actions, an organization that is respected for the integrity of its data and analysis and the transparency of its operations would be of enormous value. Its credibility and political clout would derive from its grounding in the three critical sectors. There are many excellent nongovernment policy centers and other organizations that carry out policy research and issue reports, and their important work should continue. But there is no mechanism to follow through, to make sure someone is paying attention, to ask if any of the recommendations are being considered, and to explain to a largely uninformed public the implications of various policy options and report on subsequent decisions in a way that the public can understand.

The new GUI policy organization could take on many of the issues mentioned above, especially the problems the federal funding agencies are facing. Which of those are the most serious and might lend themselves to solutions short of reorganizing government? What are the most important policy barriers to cooperation between universities and industry and how might those be resolved? Are there ways to make a rational judgment about the various balance issues with regard to research funding? One task that such a new organization might take on is an analysis of trends in the respective roles of the federal government, research universities, and industry in the process of innovation. And here I mean innovation in both commercial products and processes, as well as how the federal government addresses various national needs.

Commercial innovation has been advanced as one of the prime rationales for increasing the federal investment in R&D and science education. Certainly, commercial innovation is vital to the nation’s future, but so is innovation in applying discoveries and inventions to national security, human health and food safety, energy security and environmental protection, transportation, and the many other societal needs that require new ideas and new technologies. In particular, the federal regulatory agencies have the task of complying with federal law by issuing rules that are consistent with the best scientific evidence, even when the evidence is not clear. Too often, the process, at least as portrayed by the media, looks more like a shootout between the affected industries and activists of various kinds than it does an evidence-based deliberative process. There are many policy issues relating to the regulatory process that could benefit by having the attention of an unbiased organization that is respected by all interested parties.

The new GUI policy organization should not attempt to duplicate the important work of the National Academies and National Research Council or the American Academy of Arts and Sciences or any other organizations. Nor would it replace the many outstanding policy institutes and centers around the country. NSF is authorized by Congress to collect and disseminate information about S&T, and the National Science Board publishes updated summaries in its Science and Engineering Indicators. The American Association for the Advancement of Science (AAAS) also is an invaluable source of S&T policy information, particularly R&D funding data. These efforts should continue. Indeed, one could imagine an alliance of organizations, governmental and nongovernmental, with common goals in support of a more rational national S&T policy. The kind of organization I am suggesting is not a Department of Science and Technology or a reinvention of Bush’s NRF. The present federal R&D funding agencies will remain in place, hopefully making improvements in their structures and operations. The latter is more likely if the agencies have a source of sound analysis and advice and, at least as important, public support for the changes they need to make.

As for the option of reorganizing the federal government, we could each devise an “ideal” structure that would do all the things we think need to be done. The only problem is that we would have to ignore the political realities. In the U.S. system of governance, structural change is very difficult, and no matter how elegant the proposal, what emerges from the political process is likely to be a disappointing, if not disastrous, caricature of what was proposed. Those of us who have struggled in the labyrinth of S&T policymaking have often dreamed that some wise reorganization will come along, but waiting for that to happen is not a likely path to progress. However, the failure of that dream solution is not a reason to abandon hope for more targeted incremental reform. There are many paths that could move the country in the direction of a more rational and inclusive approach to S&T policymaking.

I see the potential to take a few initial steps by generating synergy among some existing efforts. The National Academies have the Government-University-Industry Research Roundtable that meets regularly to discuss issues at the GUI interface. The Council on Competitiveness is an important forum for discussions about S&T’s role in commercial innovation. The Association of American Universities also focuses on the partnership, including federal research funding. All the major disciplinary societies have policy committees that deal with matters relevant to their memberships. AAAS has enormous convening power. Many other organizations pay close attention to policy matters that affect the GUI partnership. Perhaps some of these would entertain discussions about such a GUI policy initiative. It might even be a good agenda item for the President’s Council of Advisors on Science and Technology. And although I recognize that Congress is locked in an ideological battle that makes coherent action on any topic seem unlikely, perhaps one or more of the relevant congressional committees might give the idea some thought and perhaps discover some elusive common ground.

Asian Women in STEM Careers: An Invisible Minority in a Double Bind

In the effort to increase the participation of women and people of color in science, technology, engineering, and math (STEM) careers, a common assumption is that Asian men and women are doing fine, that they are well represented in STEM and have no difficulty excelling in STEM careers. This belief is supported by the easy visibility of Asian faces on campuses, in STEM workplaces, and in government laboratories. Indeed, Asians are generally considered to be overrepresented. Data from the 2009 Survey of Earned Doctorates from U.S. universities show that 22% of the 2009 doctoral recipients planning to work in the United States were individuals of Asian descent. With so many entering the workforce, it is easy to assume that Asian women are progressing nicely and that they can be found at the highest levels of STEM industry, academia, and government institutions. The data tell a different story.

The advancement of Asian female scientists and engineers in STEM careers lags behind not only that of men but also that of white women and of women from other underrepresented groups. Very small numbers of Asian women scientists and engineers are advancing to become full professors, deans, or university presidents in academia; to serve on corporate boards or become managers in industry; or to reach managerial positions in government. Instead, in academia 80% of this population can be found in non-faculty positions, such as postdocs, researchers, and lab assistants, or in nontenured faculty positions; 95% of those employed in industry and over 70% of those employed in government are in nonmanagerial positions. In earning power they lag behind their male counterparts as well as women of other races/ethnicities in STEM careers.

The challenges faced by women of color in STEM fields were clearly articulated 35 years ago, when the term double bind was first used to describe the challenges unique to the intersection of gender and race/ethnicity faced by women of color in those fields. These challenges were then, and still are, commonly thought to apply less to Asian women than to black, Latina, and Native American women.

The data presented here point to the existence of a double bind for Asian women, who face both a bamboo ceiling because of Asian stereotyping and a glass ceiling because of implicit gender bias. The scarcity of Asian women in upper management and leadership positions merits greater attention, more targeted programmatic efforts, and inclusion in the national discussion of the STEM workforce.

Academic faculty

The percentage of Asian women employed by colleges and universities who are tenured or who are full professors is the smallest of any race/ethnicity and gender.

Percentage of doctoral scientists and engineers employed in universities and 4-year colleges (S&E occupations) who are tenured, by race/ethnicity and gender (2008)

Source. National Science Foundation, Division of Science Resources Statistics, Survey of Doctorate Recipients: 2008. Table 9-26, “Employed doctoral scientists and engineers in 4-year educational institutions, by broad occupation, sex, race/ethnicity, and tenure status: 2008.” Accessed July 16, 2011.

Note: Data for American Indian/Alaska Native and Native Hawaiian/Other Pacific Islander are suppressed for data confidentiality reasons.

Percentage of doctoral scientists and engineers employed in universities and 4-year colleges (S&E occupations) who are full professors, by race/ethnicity and sex (2008)

Source. National Science Foundation, Division of Science Resources Statistics, Survey of Doctorate Recipients: 2008. Table 9-25, “Employed doctoral scientists and engineers in 4-year educational institutions, by broad occupation, sex, race/ethnicity, and faculty rank: 2008.” Accessed July 16, 2011.

Note: Data for American Indian/Alaska Native and Native Hawaiian/Other Pacific Islander are suppressed for data confidentiality reasons.

Academic leadership

A 2006-7 survey of 2,148 presidents of two-year and four-year public and private colleges published by the American Council on Education (“The Spectrum Initiative: Advancing Diversity in the College Presidency”) found that only 0.9% of all college presidents were Asian. By comparison, 5.8% were black and 4.6% were Hispanic.

Asians holding science and engineering (S&E) doctorates comprise 34% of postdocs but only 7% of deans and department chairs. A similar bamboo ceiling emerges in Table 2 when the data are disaggregated by academic rank: the higher the rank, the smaller the percentage of Asians in the position. The largest proportion of Asians falls in the “rank not available” group, which includes mostly postdocs but also non-faculty researchers and staff or administrators who do not hold a faculty rank.

Percentage of S&E doctorate holders employed in universities and 4-year colleges who are Asian, by type of academic position (2008)

Academic position | Total employees | Non-Asians | Asians | Percentage Asian
Postdoc | 18,500 | 12,200 | 6,300 | 34.1%
Teaching faculty | 179,600 | 157,700 | 21,900 | 12.2%
Research faculty | 115,200 | 96,900 | 18,300 | 15.9%
Dean, department head, chair | 28,700 | 26,700 | 2,000 | 7.0%
President, provost, chancellor | 3,300 | Over 3,200** | D* | N/A

Source. National Science Foundation. Women, Minorities, and Persons with Disabilities in Science and Engineering. Table 9-22 “S&E doctorate holders employed in universities and 4-year colleges, by type of academic position, sex, race/ethnicity, and disability status: 2008” Accessed July 16, 2011.

Notes: * Refers to data suppressed for data confidentiality reasons. **Includes 2,900 White, 200 Black, and 100 Hispanic.

The same pattern is found among Asian females. For Asians in S&E occupations, the percentage of females steadily decreases from 35% of assistant professors to 28% of associate professors to 12% of full professors. Furthermore, at each of these professorial ranks, the percentage of females in the Asian population is consistently lower than the percentage of females in the non-Asian population. (This is true for all occupations and S&E occupations.)

S&E doctoral holders employed in universities and 4-year colleges, by broad occupation, sex, and rank, for Asians and non-Asians (2008)

S&E occupations | Total | Total Asians | Asians: Female/Total | Total Non-Asians | Non-Asians: Female/Total
Total | 210,700 | 32,400 | 29.9% | 178,300 | 31.1%
Rank not available | 38,200 | 9,800 | 39.8% | 28,400 | 39.1%
Other faculty | 10,400 | 1,300 | 46.2% | 9,100 | 45.1%
Assistant professor | 44,000 | 8,100 | 34.6% | 35,900 | 40.4%
Associate professor | 46,200 | 5,800 | 27.6% | 40,400 | 34.4%
Professor | 71,800 | 7,500 | 12.0% | 64,300 | 18.4%

Source. National Science Foundation. Women, Minorities, and Persons with Disabilities in Science and Engineering. Table 9-25 “S&E doctorate holders employed in universities and 4-year colleges, by broad occupation, sex, race/ethnicity, and faculty rank: 2008”. Accessed July 16, 2011.

Government

Disaggregating NSF government workforce data by gender and race/ethnicity reveals the same pattern of underrepresentation of Asian women in management positions. American Indian/Alaska Native women are less well represented in management.

Percentage of scientists and engineers employed in government who are managers, by race/ethnicity and sex (2006)

Source. National Science Foundation. Women, Minorities, and Persons with Disabilities in Science and Engineering. Table H 32, with additional detailed data provided by Joan Burrelli, “Scientists and engineers employed in government, by managerial status, age, sex, race/ethnicity, and disability status: 2006.” Accessed December 5, 2009.

Percentage of scientists and engineers holding doctorate degrees employed in government who are managers, by race/ethnicity and sex (2006)

Source. National Science Foundation. Women, Minorities, and Persons with Disabilities in Science and Engineering. Table H 32, with additional detailed data provided by Joan Burrelli, “Scientists and engineers employed in government, by managerial status, age, sex, race/ethnicity, and disability status: 2006.” Accessed December 5, 2009.

Note: Data for American Indian/Alaska Native women are not available.

Industry

According to the 2003 report Advancing Asian Women in the Workplace by Catalyst, a nonprofit research and advisory organization working to advance women in business and the professions, Asian-American women in industry are most likely to have graduate education but least likely to hold a position within three levels of the CEO. Among the more than 10,000 corporate officers in Fortune 500 companies, there were about 1,600 women of whom 30 were Asian.

This trend has been borne out for scientists and engineers employed in industry and business as well. Disaggregating NSF industry workforce data by gender and race/ethnicity, we see that the percentage of Asian women scientists and engineers, including those with PhDs, who are S&E managers is the smallest of any race/ethnicity and gender.

Percentage of scientists and engineers employed in business or industry who are S&E managers, by race/ethnicity and gender (2006)

Source. National Science Foundation. Women, Minorities, and Persons with Disabilities in Science and Engineering. Table H 34, “Scientists and engineers employed in business or industry, by managerial occupation, sex, race/ethnicity, and disability status: 2006.” Accessed February 13, 2010. Note: Data for American Indian/Alaska Native women are not available.

Percentage of doctorate-holding scientists and engineers employed in business or industry who are S&E managers, by race/ethnicity and sex (2006)

Source. National Science Foundation. Women, Minorities, and Persons with Disabilities in Science and Engineering. Table H 34, with additional detailed data provided by Joan Burrelli, “Scientists and engineers employed in business or industry, by managerial occupation, sex, race/ethnicity, and disability status: 2006.” Accessed February 13, 2010. Note: Data for Hispanic women and American Indian/Alaska Native women are not available.

Industry leadership

The Leadership Education for Asian Pacifics, Inc. (LEAP) reported that in 2010, among the Fortune 500 companies, there were only ten Asians, three of them women, holding the position of chair, president, or CEO; of the 5,250 board members, only 2.08% were Asians or Pacific Islanders; and 80.4% of the companies had no Asian or Pacific Islander board members.

A review of NSF data on the science and engineering business and industry workforce reveals a surprising underrepresentation of Asians at the managerial level. Only 6% of Asian scientists and engineers are managers, and only 2% are S&E managers. Again, Asians are outpaced by all other racial/ethnic groups.

For the Asian scientists and engineers employed in industry, although women comprise 37% of the non-managers in this group, they are only 23% of the managers and 16% of the S&E managers. As in the other sectors, among all scientists and engineers who are employed in industry at the manager rank, the percentage of Asian females is consistently lower than the percentage of black and Hispanic females.

Scientists and engineers employed in business or industry, by managerial status, sex, and race/ethnicity (2006)

All scientists and engineers | Non-managers: Total | Non-managers: Female/Total | Managers: Total | Managers: Female/Total | S&E managers: Total | S&E managers: Female/Total
Total | 9,024,000 | 35.60% | 954,000 | 19.00% | 241,000 | 21.60%
White | 6,780,000 | 34.50% | 790,000 | 17.60% | 191,000 | 20.90%
Asian | 1,179,000 | 36.60% | 77,000 | 23.40% | 25,000 | 16.00%
Black | 407,000 | 47.70% | 33,000 | 42.40% | 11,000 | 45.50%
Hispanic | 467,000 | 38.10% | 38,000 | 23.70% | 11,000 | 18.20%
American Indian/Alaska Native | 26,000 | 42.30% | 3,000 | N/A | 1,000 | N/A

Source. National Science Foundation. Women, Minorities, and Persons with Disabilities in Science and Engineering. Table H 34, with additional detailed data provided by Joan Burrelli, “Scientists and engineers employed in business or industry, by managerial occupation, sex, race/ethnicity, and disability status: 2006.” Accessed February 13, 2010.

Note. D = suppressed for data confidentiality reasons; * = estimate less than 100.

The Shifting Landscape of Science

Citations to U.S. science, in the aggregate, have been flat over the past 30 years, whereas citations of research papers from the rest of the world have been rising steadily. Although the relative decline in U.S. scientific prowess is perceived by many to be unalloyed bad news, the spectacular rise in scientific capacity around the world should be viewed as an opportunity. If the nation is willing to shift to a strategy of tapping global knowledge and integrating it into critical local know-how, it can continue to be a world research leader. Science is no longer a national race to the top of the heap; it is a collaborative venture into knowledge creation and diffusion.

We all know the story of the recent scientific past. Since the middle of the 20th century, the United States has led the world rankings in scientific research in terms of quantity and quality. U.S. output accounted for more than 20% of the world’s papers in 2009. U.S. research institutions have topped most lists of quality research institutions since 1950. The United States vastly outproduces most other countries or regions in patents filed. This privileged status was partly due to the historical anomaly at the end of World War II when the United States had a newly developed and expanding scientific system, whereas most of the rest of the industrialized world had to rebuild their war-torn science systems. The United States then capitalized on its advantage by rapidly expanding government support for research.

Many governments around the world, responding to the perceived significance of science to economic growth, have increased R&D spending. In 1990, six countries were responsible for 90% of R&D spending; by 2008, that number had grown to 13 countries. According to the United Nations Educational, Scientific, and Cultural Organization (UNESCO), since the beginning of the 21st century, global spending on R&D has nearly doubled to almost a trillion dollars, accounting for 2% of global gross domestic product. Developing countries have more than doubled their R&D spending during the same period.


The number of scientific papers worldwide has grown as R&D spending has increased. The number of scientific articles registered in the catalog services managed by Thomson-Reuters and Elsevier has increased from 1.1 million in 2002 to 1.6 million in 2007 (see Figure 1). And Thomson-Reuters indexes only about 5% of all scientific or technical publications, so there is much more science done and shared than what we can see in these numbers. The UNESCO report documents that the global population of researchers has increased from 5.7 million in 2002 to 7.1 million in 2007. The distribution of talent is spread more widely, and the quality of contributions from new entrants has increased.

FIGURE 1
Number of papers, 1980-2009

The number of papers has been computed using fractional counting at the level of addresses. For example, a paper with authors from two Canadian institutions and three U.S. institutions would register as 0.4 papers for Canada and 0.6 papers for the United States.
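To make the fractional-counting rule concrete, the short sketch below (in Python, with hypothetical data and function names of my own, not the authors’ actual code) assigns each paper’s credit to countries in proportion to its author addresses:

from collections import Counter, defaultdict

def fractional_counts(papers):
    """Credit each paper to countries in proportion to its author addresses.

    `papers` is a list of per-paper address lists, e.g. ["CA", "CA", "US", "US", "US"]
    for a paper with two Canadian and three U.S. institutional addresses.
    """
    totals = defaultdict(float)
    for addresses in papers:
        per_country = Counter(addresses)      # addresses per country on this paper
        n = len(addresses)                    # all addresses on this paper
        for country, count in per_country.items():
            totals[country] += count / n      # fractional credit for this paper
    return dict(totals)

# The example from the note: 0.4 papers for Canada, 0.6 for the United States
print(fractional_counts([["CA", "CA", "US", "US", "US"]]))

Summing these fractional credits over all papers published in a year yields the country totals plotted in Figure 1.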

From 1980 to 2010, the growth in the output of scientific articles has resulted in a shift in the relative position of many countries. An explicit policy in the European Union countries plus Switzerland to close the quality gap with the United States has produced results as measured by citation counts. Switzerland surpassed the United States in citation quality measures in 1985, albeit based on a small number of publications, and Denmark, the Netherlands, Belgium, the United Kingdom, Germany, Sweden, and Austria have moved ahead of the United States in the past decade (see Figure 2).

FIGURE 2
Average of Relative Impact Factors (ARIF), 1981-2009


Asia lags behind the United States and Europe in quality indicators of scientific output, though some of the gap might be explained by the fact that Asian journals are not well represented in the indexing services. Nevertheless, Singapore is leading an Asian surge in the rankings. If current trends continue, Singapore will rank fifth in the world in quality by 2015.

One explanation offered for the slide in the U.S. quality ranking is that a growing number of U.S. papers now include non-U.S. coauthors who share the credit for high-quality research. A second explanation is that the United States is producing output at a maximum level of efficiency, so that adding additional resources would not improve quality. A third is that other countries and regions have made a concerted effort to enhance the quality of their R&D, and they have seen good results. All three of these explanations may be factors.

As other parts of the world have enhanced their science bases, the U.S. percentage shares of all aspects of the knowledge system are giving way to a broader representation of countries. China and South Korea, two countries that are exponentially increasing their investment as well as the quantity and quality of their output, are rapidly taking leadership positions in scientific output. Between 1996 and 2008, the United States dropped 20% in relative terms in its share of global publications as other nations have increasingly placed quality scientific publications in journals cataloged by Thomson-Reuters and/or Elsevier.

The sustained rate of growth of China has caught the attention of many who track global science. Its rise may be due to the increasing availability of human capital at Chinese universities and research institutions. In addition, the Chinese Academy of Sciences is providing incentives for researchers to publish in cataloged journals. Chinese scientists who have been living abroad have been encouraged to return to China or to collaborate with their colleagues in China. These changes have increased the number of Chinese scientists who seek to publish in the cataloged journals, contributing to the growth in overall numbers in the Science Citation Index Expanded and the drop in percentage share of other leaders. At the same time that Asian countries have supported exponential growth in scientific publications, the United States and other scientifically advanced countries have maintained slow growth.

To view the relative position of national outputs in a way that normalizes for the size of the workforce and for each discipline’s propensity to publish and cite, Eric Archambault and Gregoire Coté of Science-Metrix calculated the average of relative citations (ARC) by paper and by address of each author across 30 years of publication data (see Figure 3). The Science-Metrix ARC index is obtained by counting the number of citations received by each paper during the year in which the paper is published and the two subsequent years. To account for different citation patterns across fields and subfields of science (for example, there are more citations in biomedical research than in mathematics), each paper’s citation count is divided by the average number of citations received by papers in the same field during the same time period. An ARC value above 1.0 means that a country’s publications are cited more than the world average, and below 1.0, less than average. Counts are aggregated from the paper level by field up to the country level.
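A minimal sketch of the ARC calculation as just described (Python; the field definitions, the three-year citation window, and the simple averaging are assumptions drawn from the text, not Science-Metrix’s published code):

from collections import defaultdict

def arc_by_country(papers, field_average_citations):
    """Average of Relative Citations (ARC), aggregated to the country level.

    Each paper is a dict with 'country', 'field', and 'citations_3yr', the
    citations received in the publication year plus the two following years.
    `field_average_citations` maps each field to the world-average 3-year
    citation count for papers in that field and period.
    """
    sums = defaultdict(float)
    counts = defaultdict(int)
    for paper in papers:
        # Normalize each paper's count by its field's world average
        relative = paper["citations_3yr"] / field_average_citations[paper["field"]]
        sums[paper["country"]] += relative
        counts[paper["country"]] += 1
    # Above 1.0: cited more than the world average in its fields; below 1.0: less
    return {country: sums[country] / counts[country] for country in sums}

papers = [
    {"country": "US", "field": "biomedicine", "citations_3yr": 12},
    {"country": "US", "field": "mathematics", "citations_3yr": 2},
    {"country": "CH", "field": "biomedicine", "citations_3yr": 15},
]
print(arc_by_country(papers, {"biomedicine": 10.0, "mathematics": 1.5}))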

FIGURE 3
Average of Relative Citations (ARC), 1980-2008


FIGURE 4
International collaboration, 1980-2009

Note: The percentage of international collaboration is calculated by dividing the number of papers co-authored with at least one foreign institution by the total number of papers.
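The collaboration measure itself is simple arithmetic; as a sketch (Python, hypothetical data), a paper counts as internationally co-authored whenever its institutional addresses span more than one country:

def international_share(papers):
    """Share of papers with at least one foreign co-author institution.

    `papers` is a list of per-paper address lists (country codes).
    """
    international = sum(1 for addresses in papers if len(set(addresses)) > 1)
    return international / len(papers)

# Two of these three papers involve institutions from more than one country
print(international_share([["US", "US"], ["US", "CA"], ["DE", "CH", "DE"]]))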

The policy challenge

More knowledge worldwide can be a net gain for the United States. Gathered from afar and reintegrated locally, knowledge developed elsewhere can be tapped to stoke U.S. innovation. Despite this fact, the shifts in the global science and technology (S&T) landscape have been viewed with alarm by many U.S. observers. A number of groups have expressed concern that the rise of foreign capacity could undermine U.S. economic competitiveness. The 2010 update of the National Academy of Sciences’ Rising Above the Gathering Storm report observed that “the unanimous view of the committee members . . . is that our nation’s outlook has worsened” since the first report was issued in 2005, due in part to the rising scientific profile of many other countries. This reflects a nation-centered view of science, one that overlooks the global dynamic of collaboration and knowledge exchange that characterizes science now.

As new researchers and new knowledge creators arise around the globe, those people, centers, and places that are in a position to access and absorb information will benefit. Unlike some economic resources, such as factories or commodities, knowledge is what economists call nonrival because its consumption or use does not reduce its availability or usefulness to others. In fact, scientific knowledge increases in value as it is used, just as some technologies become more valuable through the network effect as more people adopt them.

As centers of excellence emerge in new places, the United States can enhance efficiency through collaboration. This already takes place in some fields such as astrophysics and climate science, where the cost to any one nation of investing in S&T is too great. A more aggressive strategy of collaboration and networking can draw in knowledge in ways that free up U.S. scientists to produce more specialized or cutting-edge work. This could leverage investments at the national level to focus on more critical capacity-building, an approach called global knowledge sourcing. In business parlance, global knowledge sourcing means the integration and coordination of common materials, processes, designs, technologies, and suppliers across worldwide operating locations. Applying a similar vision to national-level investments could result in significant efficiencies for the United States at a time when budget cuts are pressuring R&D budgets.

Although the U.S. research system remains the world’s largest and among the best, it is clear that a new era is rapidly emerging. With preparation and strategic policymaking, the United States can use these changes to its advantage. Because the U.S. research output is among the least internationalized in the world, it has enormous potential to expand its effectiveness and productivity through cooperation with scientists in other countries.

Only about 6% of U.S. federal R&D spending goes to international collaboration. This could be increased by pursuing a number of opportunities: from large planned and targeted research projects to small investigator-initiated efforts and from work in centralized locations such as the Large Hadron Collider in Geneva to virtual collaborations organized through the Internet. Most federal research support is aimed at work done by U.S. scientists at U.S. facilities under the assumption that this is the best way to ensure that the benefits of the research are reaped at home. But expanded participation in international efforts could make it possible for the United States to benefit from research funded and performed elsewhere.

U.S. policy currently lacks a strategy for encouraging and using global knowledge sourcing. Up until now, the size of the U.S. system has enabled it to thrive in relative isolation. Meanwhile, smaller scientifically advanced nations such as the Netherlands, Denmark, and Switzerland have been forced by budgetary realities to seek collaborative opportunities and to update policies. These nations have made strategic decisions to fund excellence in selected fields and to collaborate in others. This may account in part for the rise in their quality measures. An explicit U.S. strategy of global knowledge sourcing and collaboration would require restructuring of S&T policy to identify those areas where linking globally makes the most sense. The initial steps in that direction would include creating a government program to identify and track centers of research excellence around the globe, paying attention to science funding priorities in other countries so that U.S. spending avoids duplication and takes advantage of synergies, and supporting more research in which U.S. scientists work in collaboration with researchers in other countries.

One recent example of movement in the direction of global knowledge sourcing is the U.S. government participation with other governments in the Interdisciplinary Program on Application Software toward Exascale Computing for Global Scale Issues. After the 2008 Group of 8 meeting of research directors in Kyoto, an agreement was reached to initiate a pilot collaboration in multilateral research. The participating agencies are the U.S. National Science Foundation, the Canadian Natural Sciences and Engineering Research Council, the French Agence Nationale de la Recherche, the German Deutsche Forschungsgemeinschaft, the Japan Society for the Promotion of Science, the Russian Foundation for Basic Research, and the United Kingdom Research Councils. These agencies will support competitive grants for collaborative research projects that are composed of researchers from at least three of the partner countries, a model similar to the one used by the European Commission. Proposals will be jointly reviewed by the participating funding organizations, and successful projects are required to demonstrate added value through multilateral collaboration. Support for U.S.-based researchers will be provided through awards made by the National Science Foundation. It would be useful to begin discussions about the metrics of success of these types of activities.

Tapping the best and brightest minds in S&T and gathering the most useful information anywhere in the world would greatly serve the economy and social welfare. Looking for the opportunity to collaborate with the best place in any field is prudent, since the expansion of research capacity around the globe seems likely to continue and it is extremely unlikely that the United States will dramatically increase its research funding and regain its dominance. Moreover, it may be that the marginal benefit of additional domestic research spending is not as great as the potential of tapping talent around the world. Thus, seeking and integrating knowledge from elsewhere is a very rational and efficient strategy, requiring global engagement and an accompanying shift in culture. Leadership at the policy level is needed to speed this cultural shift from a national to a global focus.