Forum – Spring 2012
More and better U.S. manufacturing
Stephen Ezell’s “Revitalizing U.S. Manufacturing” (Issues, Winter 2012) makes two vital points: The United States must not discount the importance of manufacturing to our economy, and we need to employ the types of policies being used in other industrialized nations to nurture small and medium-sized businesses in that sector. The widely held notion that U.S. manufacturing’s decline is due to cheap labor in Third World countries is flawed and too fatalistic. That is only part of the story. Indeed, we can’t win the “race to the bottom” on who can churn out the cheapest commodity. We need to look instead at what’s being done in countries such as Germany, Japan, Great Britain, and Canada to develop manufacturing and create jobs.
For one, we need to move up the value chain, with less manufacturing based on price-sensitive goods and more on profitable, specialized niche products. Our company, Marlin Steel, formerly produced baskets to supply bagel bakeries. Precision wasn’t so critical, but we were getting killed on price competition from China. We shifted into specialized wire and sheet metal products to serve industries such as aerospace, automotive, and pharmaceuticals. Since then, we’ve been much more resilient, profitable, and able to treat our employees better. It wasn’t easy—we had to reengineer our entire operation. Making a wire container that was plus or minus a bagel width was no longer good enough. Now our accuracy is measured in micrometers, aided by robots and lasers. About 60% of U.S. manufacturing is still low-tech. That’s too high.
Second, exports are critical to the future of U.S. manufacturing. About 11% of U.S. industrial output is exported. That compares to 28% in Canada (roughly our percentage at Marlin now) and 41% in Germany. I think the reason for the relatively low percentage of exports in the United States is simply that it’s always been easier to sell your product to Ohio and Massachusetts than to Istanbul and China. Bridging different languages, customs, and currencies takes greater effort, but operating in the global economy is crucial to growing U.S. manufacturing. We can do this: President Obama set a goal two years ago of doubling exports by 2015. We’re on pace to meet that target, but it’s only a start.
Third, we need to focus more on education and training. The National Association of Manufacturers teamed up with ACT (formerly the American College Testing Program) to create a Manufacturing Skills Certification System. This certificate can help manufacturers identify potential job candidates who’ve demonstrated skills such as mastery of basic math and the ability to read a blueprint. We’ve had applicants with high-school diplomas who couldn’t perform fifth-grade math. About 30 states have adopted this certification system. Others need to.
We’re still the top manufacturer in the world by a factor of 2 over China. But, as Ezell notes, we need to bolster our commitment and shift direction to create more opportunity and jobs. Manufacturing remains essential to the prosperity and security of our nation.
Stephen Ezell makes a compelling case for federal initiatives to retool U.S. manufacturing, particularly those aimed at small and mid-sized enterprises. If federal programs are to have a meaningful effect, mechanisms must be in place to cost-effectively link the federal intent to the local manufacturing realities. The ultimate measure of success is the willingness of industry to co-invest with federal and state entities.
Manufacturing is a broad sector with technical, financial, and regional differences. The ability of any federal manufacturing initiative to deliver game-changing results will depend heavily on the delivery mechanism. Public-private partnerships run by enlightened manufacturing personnel and seeded with federal funding can be extremely effective provided that there are operative feedback mechanisms from the local level to the federal level. The local need must be able to rapidly influence the federal strategy, while the federal managers focus on program integrity, national strategy, and performance.
As policymakers compare and contrast the effectiveness of our value chains relative to those in countries with more centralized federal governments, we need to consider legislative changes that will improve the effectiveness of public-private partnerships. Whether in procurement law, intellectual property law, or in aspects of antitrust legislation, the nation needs value chains that can compete with those employed by those who are manufacturing our iPhones (New York Times, “How the United States Lost Out on iPhone Work,” January 22, 2012). If the federal government is to play an effective role in revitalizing our manufacturing value chains, the role of public-private partnerships in developing those value chains should be redefined and expanded.
Stephen Ezell’s article reveals that the U.S. manufacturing decline is a core structural obstacle to ending the recession in the United States. I say structural because it is not a cyclical problem. He lays to rest the erroneous claim that our economy can be prosperous with a predominantly service-sector approach.
Ezell clearly shows how successful countries support and maintain manufacturing through institutional strategies that are absent here in the United States. However, I would add that successful trading and manufacturing countries also use strategic mercantilism and protectionism—up to and including state capitalism—as an essential component of their strategies.
The so-called Washington Consensus on the issue of trade is that if we lower tariff barriers, others will follow and we will all be better off. As we removed barriers to foreign state-owned and private companies investing and selling in the United States, we expected to have market opportunities elsewhere. In practice, this policy has resulted in U.S. unilateral disarmament (or, if you like, nonreciprocity), because other countries replaced tariffs with other, harder-to-address barriers. However, tariffs are about 10% of the issue, not 100% as some nontrade specialists believe.
The point is that successful manufacturing and trading countries such as Japan, China, Brazil, South Korea, and Germany combine Ezell’s manufacturing strategies with a host of mercantilist tools. They protect their existing and cutting-edge industries while working hard to penetrate the largest, richest market in the world: the U.S. consumer market.
The tools are many, but here are a few. Currency manipulation was key to Japan’s and Korea’s 1970s and 1980s growth and to China’s more modern growth. Value-added taxes are a legitimate tax tool used by other countries, but act as a global 18% tariff on U.S. exports, which pay that tax at foreign borders. State-owned companies are becoming more, not less, prevalent in China. The state capitalism model is not responsive to the market and is massively subsidized with inputs of low-cost credit, energy, technology, and guaranteed domestic sales to drastically enhance export predation of the U.S. market.
From the 1800s (Alexander Hamilton) through World War II, which was America’s fastest growth period, the United States focused its domestic and trade programs strategically on building a broad, complementary host of massive industries. We have forgotten those lessons, but others have learned from us. The basic point is that trade policy is a key, not-to-be-ignored component of other countries’ manufacturing agenda. If we are to compete, we have to plan accordingly, without the comfort of ideological free-trade bromides.
Not only does Stephen Ezell recognize the critical role that manufacturing plays in supporting a middle-class economy, but he asks the right question: Why is the United States alone among top industrialized nations in that it has no plan to strengthen its manufacturing base?
Ezell is correct that the United States finds itself at a serious crossroads. The National Science Board (NSB), the governing board of the National Science Foundation, recently issued an alarming report finding that the United States has lost 28% of its high-technology manufacturing jobs over the past decade and is losing its lead in science and technology in the global marketplace. The NSB says that one of the most dramatic signs of this trend is the loss of 687,000 high-technology manufacturing jobs since 2000.
The United States urgently needs to implement a cohesive national strategy to rebuild U.S. manufacturing and to restore the nation’s innovative edge. Ezell rightly cites a recent charter adopted by the Information Technology and Innovation Foundation (ITIF). After combining the input of a host of business and labor organizations, the ITIF assembled bipartisan agreement into a core set of policy actions that must be undertaken to renew U.S. manufacturing. Their subsequent Charter for Revitalizing American Manufacturing suggests some key steps, including the following: expanded worker skills and training; a renewed focus on R&D and innovation; a stronger trade policy to combat mercantilist dumping, subsidies, and currency manipulation; and an effective tax policy to support U.S. manufacturing.
Manufacturing’s critical role in our country’s economic future should be beyond dispute. But what’s not clear is whether our elected officials will move swiftly to implement a set of policies to revitalize U.S. manufacturing. Key steps such as those identified by the ITIF offer an important starting point.
Stephen Ezell, basing his article on the work he did at the Information Technology and Innovation Foundation with Robert Atkinson, is looking at a crucial issue: what needs to be done in order to ensure that the United States keeps a competitive edge in innovation-based manufacturing. As Ezell notes, from the national point of view, to paraphrase the work by John Zysman and Stephen Cohen, manufacturing matters. It matters, as Ezell explains, for five reasons: (1) its role in achieving balanced terms of trade; (2) it is a key supplier of a large number of above-average-paying jobs, specifically for mid- and low-skilled labor, the section of our society that suffers most from the current financial debacle; (3) it is a principal source of an economy’s R&D and innovation activities; (4) the health of a nation’s manufacturing and service sectors are complementary; and, last but certainly not least, (5) manufacturing is essential to a country’s national security.
Indeed, a rarely spoken about but important facet of the aftermath of the financial crisis is that the wealthy economies that fared the best are those that tend to excel in advanced manufacturing, such as Germany, Denmark, and Finland. Furthermore, the bedrock of these countries’ success is their small and medium-sized enterprises (SMEs). It is therefore crucial that the United States think about ways to enhance, upgrade, and maintain the R&D intensity and competitiveness of our manufacturing SMEs. There are two key issues that make the case for supporting SMEs even more urgent than Ezell’s already urgent call to arms.
In the joint work that several of my colleagues and I are conducting with the Connect Innovation Institute of San Diego, we emphasize that one of the reasons why a healthy ecosystem of SMEs is more important than ever before is the fact that the production of products, components, and services now occurs in discrete stages around the world. In the past, a manufacturing company such as Ford Motor Company conducted almost every part of its production, from R&D to final assembly, including the manufacturing of many components, in house. Today, most of these activities are done in well-defined stages, by multiple companies, in many different locations. One of the least understood effects of this production fragmentation is that in order to excel, it is not enough for a single firm to be superior; it must be part of an ecosystem of superb collaborators, within which much of the innovation occurs, in particular the critical innovation that allows these products to be made, improved, and continuously differentiated. Hence, in order to excel as a country, we must develop and sustain more than ever before the ability of our manufacturing SMEs to innovate, grow, and prosper. It also means, however, that we should not think solely in terms of market failure but in terms of what my colleagues Josh Whitford and Andrew Schrank have termed network failure: systematic failures that prevent a whole network of companies from coming to market with superior products that retain U.S. competitiveness and terms of trade while increasing U.S. employment.
This system-wide failure of U.S. manufacturing led Ezell, and rightly so, to focus on the role of government, especially giving detailed and superb suggestions for how to expand and upgrade the current Manufacturing Extension Partnership. Although this is crucial, it does not solve the other obstacle that manufacturing SMEs face—the scarcity of financial firms whose aim and specialty are to profitably invest in production innovation in the United States. Our system for financing innovation, with its overreliance on winner-takes-all, high-stakes, short-term venture capital financing, is the world’s best at coming up with new products and technologies. However, it is one of the world’s worst at financing innovation in unglamorous areas that provide stable, solid growth over many years, but not instantaneous winning lottery tickets. Successful policy must solve not only the government side but also the capital market side: We must have a financial system in which U.S. investors are able to profitably invest in production within U.S. borders.
How to store spent nuclear fuel
In “Improving Spent-Fuel Storage at Nuclear Reactors” (Issues, Winter 2012), Robert Alvarez describes a lesson relearned at the Fukushima Daiichi plant in Japan. The reactors at Fukushima and roughly one-third of our reactors have spent-fuel pools located inside the same building surrounding the containment that houses all the emergency pumps providing reactor core cooling and makeup.
This cohabitation allows reactor accidents to cascade into spent-fuel pool accidents and spent-fuel pool accidents to in turn trigger reactor accidents. For example, the radiation levels inside this building during a reactor accident can prevent workers from entering to restore cooling of, or makeup water to, the spent-fuel pool. Conversely, the water escaping from a boiling spent-fuel pool can condense and drain down to the basements, disabling all the emergency pumps by submergence—if the elevated temperature and humidity conditions have not already done so.
Alvarez does more than merely describe a safety problem. He defines its ready solution. Five years after discharge from reactor cores, spent fuel can and should be transferred to dry storage. The accelerated transfer will result in more spent fuel being in dry storage, which translates into an increased dry storage risk. But that risk increase is more than offset by the risk reduction achieved in the spent-fuel pool. As Alvarez states, the typical spent-fuel pool for this type of reactor contains 400 to 500 metric tons of irradiated fuel. A single dry cask contains only 15 to 20 metric tons. Thus, unless something causes many dry casks to nearly simultaneously fail, the radioactivity emitted from a spent-fuel pool accident is significantly greater than from a dry cask accident.
The relatively higher hazard from irradiated fuel in spent-fuel pools as compared to dry casks has been known for many years. After our 9/11 tragedy, the Nuclear Regulatory Commission (NRC) issued a series of orders requiring plant owners to upgrade security measures. The first orders went out for greater protection of the reactors, followed by orders seeking protection of spent-fuel pools, and trailed months later by orders for better security of dry storage facilities. The NRC knows the relative hazards.
Just as the Fukushima Daiichi tragedy rediscovered this spent-fuel storage problem, it also revisited its solution. There were nearly 400 irradiated fuel assemblies in dry storage at Fukushima when it encountered the one-two punch from the earthquake and tsunami. The tsunami’s water partially flooded the dry storage facility, temporarily replacing the normal cooling by air convection with water cooling. When the floodwaters receded, the normal cooling process restarted automatically. There was no need for helicopters to drop water or truck-mounted water cannons to spray water to prevent damage to irradiated fuel in dry storage.
If irradiated fuel in a spent-fuel pool causes an accident or increases one’s severity, shame on us. We know the problem and its solution. We have no excuse for failing to implement the known fix.
Fuel-use reduction strategies
There are three basic ways to reduce fuel use in the transportation sector in the United States. The first is corporate average fuel economy (CAFE) standards, with which the U.S. government tells the automakers what cars to sell. The second is gas taxes, with which the U.S. government raises taxes on motorists. The third is what I would call the OPEC option, in which foreign governments raise fuel prices because they can, or because they need more money for domestic needs, or because they start running short of oil supply.
The fine article by Emil Frankel and Thomas Menzies (“Reducing Oil Use in Transportation,” Issues, Winter 2012) reinforces the obvious point that the U.S. public strongly prefers the first strategy to the second one: “Don’t tax me, don’t tax thee, tax the man behind the tree.” The authors also explicate more subtly that the CAFE strategy may not prove all it’s cracked up to be. But then, regulations seldom are. The automakers have made Swiss cheese out of the first two words in CAFE: “corporate” and “average.”
I share the authors’ weariness and skepticism about the stale debate over raising fuel taxes. It’s exactly what we should be doing for a host of reasons, not least being national security and our massive infrastructure and budget deficits. But the state of our politics today dooms any self-evident strategy to oblivion.
I do think the article falls short in not providing enough attention to the third option: OPEC. After all, petroleum is a finite natural resource, the Middle East is as politically unstable as it’s been in generations, China and India continue to drive worldwide demand for oil, and the list goes on. The question on our minds should be this: Are we ready for a sharp and sustained rise in fuel prices?
I would answer that question in the negative and suggest at least two policy prescriptions we ought to pursue in response. First, I think Frankel and Menzies are too quick to dismiss the importance of public transit as an alternative to auto travel, especially if gasoline costs $7 per gallon instead of $3.50. Second, before we give the gas tax a proper burial, let’s consider converting the existing excise tax to a sales tax on gasoline, initially on a revenue-neutral basis. If average gas prices rise over the long term, as they are expected to do, such a sales tax on gasoline could not only hold its own against inflation but raise substantial new sums for infrastructure or other public purposes.
Emil Frankel and Thomas Menzies make a strong case as to why the United States should complement more rigorous fuel economy standards with an increase in fuel taxes, in order to reward people for driving less and ultimately reduce how much oil we use for transportation. They also acknowledge a major obstacle: the lack of political will. “Indeed,” they write, “it is difficult to envision a scenario in which policymakers could ever generate public support for higher fuel taxes without offering a compelling plan for use of the revenue.”
That specific plan is the critical missing piece. Americans are tired of being asked to give more of their hard-earned dollars for . . . what? With no clear vision for our nation’s transportation network and no performance metrics in place to measure return on investments, it’s no wonder taxpayers are leery of increasing the gas tax.
State and regional transportation decisionmakers are proving that if they articulate the goals, plans, and criteria for measuring return on investment, voters are willing to share the cost of maintaining and expanding roads and transit. Examples abound:
- The Illinois Tollway Board approved a 35-cent toll hike late last year to fund a $12 billion, 15-year capital plan for infrastructure improvements throughout the Chicago region. Predictably, motorists weren’t jumping for joy, but the media reported quite a few drivers who said things like, “Since it means better roads, it will be a plus for me.”
- Despite being faced, as many cities have been, with reduced state funding, Oklahoma City has been able to maintain and expand infrastructure using revenues from a temporary, 1-cent increase in the sales tax. How did they build the political will to levy this tax? City leaders including Mayor Mick Cornett credit their success to explaining to voters what they planned to do with the funding, doing it (without incurring additional debt), and then ending that sales tax as promised. City voters are so bought into these new investments that they’ve agreed on multiple occasions to resurrect the 1-penny tax to fund new projects.
- Minnesota converted nine miles of carpool lanes along I-394 into toll lanes, guaranteeing that drivers can travel at about 45 mph nearly 95% of the time. More than 60% of residents in the Twin Cities area support the program, and more than 90% of toll lane users maintain a very high level of satisfaction. Because of the success of I-394’s conversion, the federal government provided Minnesota with a $133 million grant to expand the program.
To fight gridlock and keep our cities and regions competitive, the United States needs a new approach to transportation planning and investment, one that maximizes the use of existing infrastructure, evaluates and captures the value of new investments, and taps creative financing tools. The public is ready to be leveled with and is willing to invest in solutions that deliver results. It’s time we provided a strong vision and plan they can get behind.
Emil Frankel and Thomas Menzies offer an incongruous jump from a realistic and well-reasoned analysis of the importance of oil to both U.S. transportation needs and historic U.S. economic growth to a completely contradictory and contraindicated conclusion that we must reduce the use of oil in transportation!
There can be little doubt that energy use, and particularly the use of oil for transportation, has been instrumental in achieving substantial economic growth in the United States and now also in the larger developing nations: China and India, together representing 37% of the world’s population, are recognizing that they can lift their people out of poverty by using more oil.
Cross-plotting gross domestic product (GDP) per capita of different nations versus either energy or oil use per capita yields a remarkably strong correlation. In the case of oil, the correlation becomes even more pronounced if one incorporates population density: The sparsely populated nations such as the United States, Canada, Australia, and Norway use disproportionately more oil per capita than the more densely populated European states or Japan for roughly the same GDP per capita. One might expect that result because more-densely populated regions need less energy for transportation. Similarly, plotting GDP growth rate versus oil consumption growth rate yields an even stronger correlation.
From a different perspective, if we look at GDP per unit of either energy or oil use, developed nations, including the United States, are creating more wealth per unit of energy consumption than the less developed ones. Or to put it another way, the less developed nations are more wasteful of energy, in terms of creating wealth, than are the developed nations. From an overall global economic perspective, should we not be encouraging the developed nations, and particularly the United States, to use more oil rather than less?
But too many people worldwide have accepted, without much independent thought or analysis, the view of the United Nations Intergovernmental Panel on Climate Change that carbon dioxide emissions from fossil fuel use will cause catastrophic global warming by the end of this century. The lesser developed nations, whose populations outnumber those of the developed ones by about 6 to 1, support this view, and particularly the notion that developed nations should reduce energy use and transfer wealth to the lesser developed nations, as “punishment” for alleged past contributions to global warming!
An important new and very perceptive book (The Great Stagnation, by Tyler Cowen) notes that a significant drop in the U.S. economic growth rate, as measured by median U.S. family income, occurred around 1973, falling by roughly 75%, from 2.7% per year (1949 to 1973) to 0.6% per year (1973 to 2006).
Coincidentally, 1973 was the year the Yom Kippur War and the Arab Oil Embargo triggered a large oil price increase. Oil prices had actually decreased 1.5% per year from 1949 to 1970 in real terms, but after a big jump (of 75% between 1970 and 1975) have since increased by 4.2% per year. This triggered a change in the U.S. rate of growth of oil consumption from 3.1% per year to a negative 0.4% per year, a total decrease of 3.5 percentage points, clearly contributing significantly to the “Great Stagnation”!
So some serious economic realism about U.S. oil consumption is desperately needed, but the Frankel and Menzies article fails to provide it, adhering instead to the latter-day shibboleth that the United States must reduce oil use.
The home of the future
U.S. buildings are responsible for approximately 10% of global greenhouse gas emissions. Improving them presents one of the most direct and cost-effective opportunities to reduce those emissions. Our buildings in general are dreadful energy wasters, neither designed nor operated to minimize their draw on precious energy resources. “Blueprint for Advancing High-Performance Homes” (Issues, Winter 2012) by James H. Turner Jr. and Ellen Larson Vaughn makes utterly clear that it does not have to be this way. We know how to build homes and office buildings with a net zero consumption of energy, have done so in climates as inhospitable as Minneapolis, and watch our European counterparts do it routinely. We just don’t do it or require it. Why not?
The authors do a good job of explaining the resources and practices that we need to employ, describing eight actions that would help tilt the market in favor of high-performance buildings. They do not explain, however, why we are so stubbornly wasteful when it comes to our buildings. They assert that “High performance will come when consumers feel they must have it and can afford it. Getting to that point requires streamlined new ways of doing business for the construction industry, for related real estate professionals, for those who service homes, and for residents.”
The reality is that the construction industry, hammered by the recession and housing collapse, is hypersensitive to costs, and fears that the costs of meeting new standards, using better materials, or providing more worker training cannot be recovered from tight-fisted home-buyers. The major home-builder association therefore opposes energy-efficient building codes that would allow their customers to save the incremental cost of the building’s better performance perhaps 20 times over during its life. They take no responsibility for engendering enthusiasm among their customers for energy efficiency. Ironically, builders of high-performance homes have reportedly continued to stay busy through the worst housing market collapse in U.S. history, but there aren’t many and they don’t build in volume for the average family.
The real estate industry has actively opposed measures that would allow buyers to accurately understand the energy usage of one existing home versus another, believing that older homes—the ones they principally sell—would inevitably be seen in a negative light. Perhaps we should blindfold prospective homebuyers, so we wouldn’t have to worry about appearance biasing their decisions either. The service industry offers a myriad of small companies, most without the financial clout to stand behind the promises motivating a major efficiency upgrade, in part because they cannot bet their business on consumers’ behavioral patterns that lead to much of the waste. And the owners and occupants of U.S. buildings are blissfully clueless in most cases about the options they have to save energy, and generally take their buildings’ energy performance as a given over which they have little control.
All eight of the steps the authors prescribe are good ideas, and a number are at some stage of implementation. But each of them individually, and probably all of them together, are unlikely to get us to the point where we truly recognize that “the consequences of doing nothing are so great that there is no choice but to try.” The key driver that would really make us pay attention to low-cost ways of achieving greater energy efficiency is higher energy costs. However, with the new abundance of natural gas, our principal home-heating fuel, that signal is currently pointing the wrong way. A meaningful national consensus on the calamities to come from climate change could also motivate policies to require efficient energy use even in the absence of a strong price signal. However, that consensus is not only absent but diminishing with partisan divides on the validity of climate science.
So the question remains: What can be done to convince our builders, realtors, service companies, and building owners and occupants that saving energy is a critical priority to them and to their nation and world? Unfortunately, knowing how to achieve that efficiency is the easy part.
This is a timely and provocative article that addresses an issue critical to our nation’s long-term sustainability: high-performance homes. Although this may appear to be a back-burner issue for many, given today’s challenging economic climate, James H. Turner Jr. and Ellen Larson Vaughn make a compelling case that now—when “business as usual” has proven inadequate in so many market sectors—is precisely the time to envision and begin to create a preferred future.
Two aspects of the article stood out to me as particularly interesting and forward-looking. First is the recognition that high performance is an end state that can and should be considered over a longer time frame than has traditionally been the case. Historically, high performance has been considered a goal that needs to be achieved quickly and completely in order to have maximum impact and benefit. If you’re going to build a new, high-performance home, do as much as you can now, because you won’t get another chance that allows you this much freedom. All well and good, if you have the commitment and resources, but more often than not, people don’t. The result: medium to medium-high performance as the end state, with no built-in capacity to add performance features over time. Put simply, people settle for and are proud to have achieved something less than full high-performance, but that’s as far as they can go. They have closed down their ability to add future high-performance upgrades because they haven’t planned for them.
The authors acknowledge this dilemma by promoting what they refer to as “high-performance–ready” strategies that deliberately anticipate and plan for future upgrades as an explicit component of any new construction and/or renovation project. The idea is not new, but it has rarely been applied in new construction, let alone in the renovation/remodeling market. Yet it offers the potential for steady, incremental performance improvements over time, a process that allows upgrades to occur as and when homeowners have the resources to afford them and, importantly, facilitates the incorporation of new technologies as and when they become available. To their credit, the authors make this important concept a central part of their argument.
A second component of the article that seems particularly intriguing is the concept that the energy cost differential between high-performance homes and their more common, code-minimum counterparts can be considered “wasted funds,” not, as would be the case in more traditional analyses, as “savings potential.” By introducing this new nomenclature, the article helps move us conceptually away from a context focused on “saving” to one focused on “not wasting,” a potentially powerful new motivational construct for individual homeowners, given today’s economic climate, and for policymakers, given the huge scale of the potential “waste” to be avoided.
This is a compelling article that tackles a tough but highly promising challenge: converting all the homes in the United States to high performance. The authors recognize that this will be a decades-long process, requiring “millions of small steps,” and that the time for the first step is now.
James H. Turner Jr. and Ellen Larson Vaughan’s article picks up where many travelers in the past three decades have trod: identifying the disconnect between the many R&D efforts to improve the physical performance of our nation’s housing on one hand, and the often overwhelming institutional, behavioral, financial, and cultural realities that have muted a complete diffusion of these designs and technologies on the other.
In this particular effort, the authors hope to take us on this familiar journey, but in the new terrain of the post-recession housing environment. They identify key landmarks among diffusion barriers, such as financial products that neglect savings from energy and water reductions in the borrower’s capacity to pay, sporadic R&D funding, and fluctuating energy and material prices that perpetuate the status quo. Yet, in making this journey, there is little time to consider the view. The authors occasionally conflate residential with non-residential construction. They misread trails that might be good for new construction but are more difficult for the traveler focused on retrofits. They describe journeys in other nations without contextualizing those housing market landscapes. The occasional platitude (“High performance will come when consumers feel they must have it and can afford it,” for example), unfortunate causal statements (“It took the U.S. Green Building Council’s LEED rating system… to reengage the public”), and missing citations cause the traveler to stumble.
The authors motion us in the general direction of our journey’s destination through eight routes. Some of these routes have been well-traveled. In particular, the authors are on solid ground when surveying various public incentives for home improvements, but could have explored studies of the incentives’ effectiveness. The case for continuing public investments in focused R&D and coordinating national benchmarks for their effects is strong. A few additional maps provided by other scholars, practitioners, and advocates may prove useful in the authors’ pursuit.
Other routes the authors suggest are more treacherous. Life-cycle costing methods are advancing, but are not necessarily applicable to consumer financial products. Advanced manufacturing techniques do not necessarily yield higher performance on key attributes, such as energy use. Calls for building codes that “should cover home performance” overlook the fact that codes, by definition, can only cover design and construction—and only in new construction—and not operations. Better knowledge of the topography of the housing industry and related research topics might help, including: consumer behavior; lending underwriting practices; the home building and remodeling industry’s composition and capacity; and the economic effects of public research funding.
A final group of routes may be on ground that is too unstable. Applying well-tested high-performance products and techniques in subsidized housing can benefit its occupants, but overlooks the cost burdens and may open the door to less-than-beneficial experimentation. Ultimately, too, no lessons about how these routes might be negotiated for the current economic and political times—or descriptions of how they differ from past voyages—are provided.
In our guides’ defense, any inaccurate deciphering of the landscape is not intentional, nor is it even avoidable in some cases. The sociology and economics of housing technology is still virgin territory. Taken as a whole, housing design and construction is a technological subject with significant social and cultural connections, both obvious and complex. The authors correctly try to point us down the path of exploring these connections in ways that benefit home occupants, owners, buyers, and builders. Regardless of how far the authors go in this article, the journey is always worth taking.
Transportation: the California model
In “California’s Pioneering Transportation Strategy” (Issues, Winter 2012), two of the field’s leading lights, Professor Daniel Sperling and California Air Resources Board Chair Mary Nichols, highlight many of the key aspects of why it is not only possible, but logical, for California, which “only” emits about 2% of global greenhouse gas emissions, to set aggressive local goals to protect the climate.
First, climate protection is not a luxury but a necessity, for any entity, from individual to municipality to nation to region, that looks at the long-term economic and social viability of its community. The impacts we are already facing, and the damages we risk through inaction, are simply too great. Second, as the authors note, “the 80% goal cannot be met without dramatic change in driver behavior and transportation technology.” California’s approach is to recognize and then work to implement two key guiding principles: (1) Greenhouse gas emissions are waste that is currently largely unpriced, so clever sectoral strategies to reduce climate pollution can often produce economic and social savings and benefits that we should look to capture; and (2) energy use and greenhouse gas emissions are not limited to transportation and the electricity sector, so an integrated set of policies across the entire economy both makes the most economic sense and builds a foundation for innovative, job-creating, waste-reducing industries that can become central to a new model of economic growth. In my laboratory at the University of California, Berkeley, we see this every day through the job-creation potential of energy efficiency and renewable energy (http://rael.berkeley.edu/greenjobs). We also observe that when consumers are armed with tools to cut waste, financial savings and carbon savings often go together in unexpected ways (http://coolclimate.berkeley.edu).
Transportation was for many years thought to be the tough nut to crack in this story, because “Americans like their cars big and powerful.” Several years ago, before the current renaissance of electric vehicles, Alan Alda and I hosted a track race at the Infineon Raceway in northern California, where electric cars raced and won against a series of gasoline-powered sports cars. The electric roadster won that day because the race was a sprint, and electric motors outaccelerate gasoline engines. Now, thanks in part to California’s commitment to an integrated climate strategy, electric vehicles that could win both the sprint and the long haul are entering the market. What is needed is large-scale production to bring the cost of these vehicles down.
For innovators there is always a next challenge, and for California it is how to decarbonize the driving we must do and avoid the driving we do not need or want to do. Here, too, California’s integrated strategy is useful. In a new high-resolution model of the electricity system across not just California but the U.S., Canadian, and Mexican west, my laboratory is looking at the value of not only electric vehicles to meet climate targets but of the distributed network of batteries that these vehicles represent, as a resource to put power back into the grid at times of high-cost power or when emergency power is needed. Although this work is ongoing, it already provides a clear lesson about the need and value of integrated planning.
Daniel Sperling and Mary Nichols outline California’s more than 50 years of innovative pollution-reduction measures for cars and trucks to protect public health. The state’s efforts are a model for other states to follow and a reason why Americans today breathe cleaner air.
California’s ability to create pollution-reduction standards without sacrificing mobility is a model for the nation. Recently adopted clean car standards, when fully phased in by 2025, will result in new cars that emit half the carbon pollution, use half the gasoline, and emit only a quarter of the smog-forming gases of today’s new cars.
The technologies developed by major U.S. automakers to halve carbon pollution will be available in all U.S.- and foreign-brand cars. This is because two national clean car agreements, brokered by the Obama administration, brought together California, the U.S. Environmental Protection Agency, the National Highway Traffic Safety Administration, and 13 automakers to coordinate a harmonized set of national standards to reduce carbon pollution and increase fuel economy.
One of the most important lessons from the California experience is that good climate policy is also good for the economy, consumers, and public health. Creating this win-win result was fundamental to gaining widespread support for efforts such as the Sustainable Communities and Climate Protection law (known as SB 375), which will reduce sprawl and the need to drive. In this case, cutting carbon pollution also means developing more-livable and -walkable neighborhoods and saving cities money on costly road infrastructure. When the business community, builders, chambers of commerce, local governments, and environmentalists jointly support climate policy, it creates a model for other local and state governments to adopt.
California has proven time and again that there isn’t one solution to fight climate change, but rather an arsenal of smart policies and innovative technologies. The main ingredients for success are political leadership and courage. In California, there is plenty of both, including enough to challenge the multitrillion-dollar oil industry to start phasing out investments in dirtier fuels, such as tar sands, while phasing in cleaner fuels under the state’s Low Carbon Fuel Standard. The state has recognized that energy and climate security will remain at risk if the oil industry continues spending less than half a penny on alternative fuels for every dollar invested in oil. That is a system designed to keep America addicted to polluting fossil fuels instead of clean energy of the future.
These are just a few of the many reasons why the Natural Resources Defense Council, with its 1.3 million members and online activists, advocates for climate policies in California, including sponsoring AB 1493, Senator Pavley’s Clean Car bill; and AB 32, California’s Global Warming Solutions Act. California’s pioneering efforts continue to serve as a model for others going forward, proving once again that thinking globally means acting locally.
Reducing nitrogen emissions
We were pleased to read “The Climate Benefits of Better Nitrogen and Phosphorus Management” by Alan R. Townsend, Peter M. Vitousek, and Benjamin Z. Houlton in your Winter 2012 issue. They address the difficult balance we must seek between minimizing environmental effects from reactive nitrogen leakages while ensuring sustainable growth of our food supply, particularly in poorer countries. Greater efficiency of nitrogen use can provide additional benefits through climate change mitigation.
And yet, our recent work on the California Nitrogen Assessment (CNA) shows that both the challenges and the opportunities for improved nitrogen management depend crucially on context. As nitrogen science progresses, it must address the variability of ecosystems, practices, and media, and the array of stakeholders that encompass them.
The CNA, coauthored by me and other members of the University of California, Davis, faculty, is an ongoing project of the Agricultural Sustainability Institute at UC Davis, designed to comprehensively examine existing knowledge about nitrogen science, practice, and policy in California. The results will lay the groundwork for informed discussion and decisionmaking on nitrogen management, including policy options and field-level practices. Following the assessment model of the Intergovernmental Panel on Climate Change and the Millennium Ecosystem Assessment, the CNA began by engaging with more than 300 individuals from 50 organizations.
Engaging stakeholders throughout the process was critical to shape the CNA’s approach and ensure that its outputs are considered useful and legitimate. The CNA involves a rigorous scientific review, currently under way, as well as a subsequent stakeholder review process.
The CNA identifies key drivers of nitrogen use decisions, including global demand for California’s commodities and the prices of fuel and fertilizers, and examines how these drivers influence the statewide mass balance of nitrogen: how much enters the state through new sources and the multiple media through which these compounds enter the environment. We investigate nitrogen’s effects on environmental health and human well-being and examine technological and policy options to minimize nitrogen leakage while sustaining the vitality of agriculture.
We found that the most troublesome nitrogen leakage in California is not climate change forcing by nitrous oxide, but groundwater pollution by nitrates. Whereas about 2.5% of nitrogen inputs in California are emitted as nitrous oxide, nitrate leakage to groundwater accounts for roughly one-fourth of the state’s nitrogen flows, with more than 80% of leakages arising from cropping and livestock production. Unlike eutrophication, the effects of which are readily observed, groundwater contamination is a slow process unfolding over decades and largely hidden from view. Even if nitrate leakage were halted immediately, groundwater pollution would persist as nitrates move through our aquifers.
Although the essential role of nitrogen in food production is well known, the CNA shows that aspects of nitrogen’s negative effects are context-specific. Understanding and addressing climate change forcing, air pollution, contamination of ground- or surface water and coastal zones, crop production costs, and dairy manure management require accounting for distinct ecosystems, stakeholders, and their roles in the nitrogen cycle.
We agree with Townsend, Vitousek, and Houlton that change requires neither “phantom technologies” nor “massive social upheaval.” Technologies available now can lessen nitrogen’s negative effects. But to give that action momentum, whether in science, policy, or practice, the multimedia nature of nitrogen needs to be understood contextually and in collaboration with those most affected.
New directions for climate talks
In “A Course Adjustment for Climate Talks” (Issues, Winter 2012), Ruth Greenspan Bell and Barry Blechman join other analysts in expressing frustration with the United Nations Framework Convention on Climate Change (UNFCCC) process and seek more effective negotiating methods. They draw lessons from the disarmament sphere, although its characteristics appear quite different from those of the climate change process: few countries around the table, evident mutual threats, and consequent scope for reciprocal concessions. Nevertheless, some ideas they advance fit well into the climate change context.
For example, separating issues for independent resolution is an approach that is established under the UNFCCC umbrella (for example, on deforestation emissions) and can be further encouraged (for example, in agriculture and other sectors). Moreover, interest in controlling short-lived climate-warming agents such as black carbon and methane can be pursued alongside a long-term focus on the transition to low–carbon dioxide (CO2) economies.
Several other improvements in the UNFCCC process, some of which are alluded to by the authors, can be contemplated, such as less peripatetic diplomacy, more resident technical expertise, space for conversation among the essential economic heavyweights about their mitigation commitments, and rules for majority voting.
But is it the process that should be the target of frustration? Or do the difficulties of achieving international agreement lie in the climate change phenomenon itself?
The core objective of the UNFCCC is to “prevent dangerous anthropogenic interference with the climate system.” But science cannot define what degree of climate change is dangerous. That is a societal judgment. Thus, in a fragmented global society, the danger of climate change is in the eyes of the beholder, subject to national assessments of vulnerability and economic interests that will vary according to coping capacity, geography, and the valuation of future generations.
This subjectivity tends to disconnect demand and supply in the negotiations: the demand of vulnerable countries for mitigation action and the supply of such action by countries with the capability to provide it. It is difficult to discern in this disconnected dynamic the scope for reciprocal quid pro quo concessions, such as those that occur in negotiations on disarmament or trade.
Can one conclude from these summary observations that (1) there is no inherent tendency to convergence in negotiations on mitigating climate change? (2) The drive for effective, fair, and ambitious outcomes must therefore come from abstract factors: reason, ethics, responsibility, and political will, backed by a long-term profit motive? (3) The key institutional need is for common accounting methods and for building the culture of accountability that is the foundation of trust?
Therein might be an interesting debate.
Beyond these questions is a more familiar stumbling block: the constitutional aversion of the United States to tying its hands in treaties and the role of the U.S. Senate in treaty approval. Perhaps the greatest setback in two decades of climate change negotiations was the inability of the Clinton administration to win approval from the Senate for the U.S.-engineered Kyoto Protocol. Currently, U.S. politics make it very difficult for the United States to engage in multilateral negotiations.
If the authors’ idea of playing up the security dimension of climate change and engaging national security officials in the negotiations is aimed at sensitizing the Senate to the urgent need for action, that is a fair point to make in the U.S. context. However, it should not be generalized indiscriminately beyond the Beltway.
Lead actors outside the United States include powerful ministries for planning (China), economics (Japan), external relations (Brazil), and even petroleum (Saudi Arabia). The positions of the European Union’s (EU’s) environmental standard-bearers are carefully filtered by their economic counterparts. Judged against these facts, the article’s closing shots at the “environmental ghetto” and “politically powerless environmental officials” are off target.
In the end, the fundamental institutional question to be addressed is whether or not a global forum is needed to impart coherence, legitimacy, and collective ambition to efforts to address a quintessentially global problem. The United States answered in the affirmative more than 20 years ago by agreeing to negotiate within the UN framework. Above all, what is needed to make the resulting process work is U.S. political leadership.
Ruth Greenspan Bell and Barry Blechman persuasively argue for a more focused approach to climate change negotiations, one that would draw on the lessons of arms control. Of course, some might argue that the parallel between arms control and climate change control is strained and artificial. The high level of threat and anxiety aroused by nuclear weapons was more conducive to concentrated effort and creativity. The threat posed by climate change is much more distant, and politicians tend to be motivated by short-term considerations such as reelection. But this contrast is overdrawn. The arms control negotiations were primarily aimed at controlling the arms race rather than reducing the risk of nuclear war. Moreover, some techniques that were used in arms control negotiations were drawn from social psychological research on conflict resolution, suggesting that these methods would have broader applicability than arms control.
Experience with arms control suggests the advisability of separating issue areas, the parties to the agreement, and the time frame. Bell and Blechman cite the Limited Test Ban Treaty (LTBT), in which success required distinguishing atmospheric tests, which could be reliably monitored, from underground tests, where verification was questionable. A more limited agreement was more politically feasible. If underground testing had not been excluded, a treaty could not have been ratified in the U.S. Senate, which was concerned about Soviet cheating. With the LTBT, each side could test the other’s intentions at relatively low cost and risk. If the Soviets had violated the treaty, they would not have acquired much of an advantage before their cheating was discovered.
It may also be worthwhile to focus on those states most likely to reach agreement. Large multilateral negotiations increase the role of potential veto players. Neither France nor China favored the LTBT, but both eventually signed the 1996 Comprehensive Test Ban Treaty, largely because of pressure from world opinion. Agreement may also be built up over time, with less costly measures of compliance in the first phase, followed by more onerous obligations at a later date, to provide time for building trust.
Another valuable suggestion made by Blechman and Bell is to politicize the issue so that it can receive high-level political attention. The past history of arms control indicates that presidential involvement was essential to reach important agreements. President Kennedy believed that it was urgent to reach agreement on a test ban to serve as the basis for other agreements and prevent the proliferation of nuclear weapons. President Nixon saw the SALT I treaty as the centerpiece of his détente policy. Finally, President Reagan was motivated to reach agreement with Soviet leader Gorbachev on any measures that would reduce the number of nuclear weapons.
The disadvantage of deconstructing the issue is that smaller agreements may remove the urgency to reach a more comprehensive treaty. But because reaching agreement on climate change in a universal forum appears ever more elusive, this objection carries less weight.
Ruth Greenspan Bell and Barry Blechman rightly note the concern that the UNFCCC process may not meet the challenge of closing the gap between proposed emission reductions and what is needed to keep the world below a 2°C temperature increase. Because of this concern, they outline how a multifaceted approach might be necessary to achieve progress. Without debating the merits of segmentation, it is important to note that in many ways the UNFCCC Durban Platform facilitates a varied approach. It does this by a series of decisions that can encourage a range of actors to move forward, while awaiting future legal agreement.
Some of these key decisions from Durban relate to the Conference of Parties (COP) agreements regarding a wide range of reporting responsibilities, continuing the Kyoto Protocol through a second commitment period, and perhaps most importantly, setting in place a series of incentives for early action. The Durban Platform decisions will further the degree to which the UNFCCC is a repository for crucial data on amounts and trends in greenhouse gases (GHGs), including reporting by sector. These key data will support future decisionmaking and citizen engagement. The new UNFCCC registry may also make it easier for clean-energy and climate-friendly projects to attain sources of finance, whether from the future Green Climate Fund or existing sources.
The decision to establish a second commitment period for the Kyoto Protocol garnered perhaps most media attention for the decisions of Japan, Canada, and Russia to stay out, thereby reducing the number of countries with pledged emission reductions. What received less attention was that Durban made decisions that will make the protocol’s clean-development mechanism function more smoothly and credibly than under the first commitment period, added its first new gas, and agreed that carbon capture and storage could be added as a new eligible technology. The parties recognized that there could be scenarios in which the protocol links with other mechanisms, a key ambition of those who sought to improve its relevance. A positive outcome from these decisions could someday be realized at a greater scale if the parties continue these modifications and make it more attractive for all to reengage.
Perhaps most importantly, Durban went beyond earlier COP decisions in the degree to which it recognized the role of markets, and that market-based approaches could occur at the individual- or joint-country level. The acceptance of markets and their connection to results-based financing should offer public and private actors some confidence that although a global regulatory framework is not yet in place, they may be able to consider mitigation activities today as a form of early action. The Durban Platform also negotiated and then decided to consider whether the critical agriculture and transport sectors could be included to enhance implementation under the convention. This willingness, evident here and in many other aspects of the Durban Platform, can encourage countries and private actors to move forward, with reason to believe that what they undertake may link later to a global system.
Many things are wrong with the UNFCCC process, and the interminable COP meetings appear to go nowhere. Ruth Greenspan Bell and Barry Blechman rightly call for changes in the structure and direction, perhaps even the goals, of the global climate negotiations. However, I believe they err in a number of ways.
The UNFCCC process may be many things, but futile it is not. The convention and COPs have been mainly responsible for highlighting the global character and imminent threat of climate change and also for bringing governments the world over to acknowledge the crisis and work toward its solution. For most governments, especially in developing countries, the UN system has a legitimacy and persuasiveness that other multilateral forums do not. Although it is true that powerful developed nations throw their weight around in UN forums, and that COP decisions too have been distorted as a result, the UN framework gives smaller, weaker nations a voice and weight they lack elsewhere, as is clearly seen by the decisive role of small-island states at Durban. Jettisoning the UN framework at this stage in pursuit of an ephemeral alternative would only turn the clock back two decades, bringing into question every painfully gained point of agreement.
It is not self-evident that the all-or-nothing approach of the Kyoto Protocol, or of any legally binding successor under the UNFCCC, is inimical to a solution and that less stringent and voluntary pathways stand a better chance. The Copenhagen Accord, formalized at Cancun, proposed a bottom-up, pledge-and-review framework and is seen as a more pragmatic approach. Agreement may have been easier, but is the solution any nearer? Estimates by several reputable organizations show that these pledges will result in a temperature rise of 3° or even 4°C, not the stated goal of 2°C maximum.
There are many arguments against the authors’ nuclear arms control analogy, but one stands out. In the Non-Proliferation Treaty (NPT), one could claim partial success if 80% of nuclear weapons were eliminated, but in climate change, getting to 4°C is no success at all, especially if populated islands are submerged or food production drops by 20% in some nations. And the devastation may be even worse. If legally binding promises are not adhered to by some nations, what compliance can be expected with voluntary pledges?
Maybe this is a gestalt problem. Things look very different from a U.S. vantage point. The United States has historically been more comfortable with multilateral agreements than with international treaties and has repeatedly asserted its own sovereignty. Does this mean that the world should not strive toward international agreements? The United States also prefers self-monitored rather than regulated systems, despite its success with a cap-and-trade program to curb sulfur dioxide emissions. In contrast, European nations are far more regulation-driven, and most developing countries are also happier with regulatory frameworks.
The arms control analogy holds at least one lesson, though. The NPT architecture, whatever its weaknesses, has been retained, not abandoned, and strengthened by supplementary multilateral and sectoral agreements. Why can’t the same approach work for the UNFCCC process?
In considering how the global community should move forward in tackling climate change, it is clear that the status quo is not working. Emissions continue to climb, and pledges for reductions from countries are far from what are needed to stay below a 2°C temperature rise in comparison to preindustrial levels. It is time to step back, especially after the last major international climate meeting in Durban, to assess how we can change course to accelerate the pace and scale of change.
In that assessment, we need to think through what the actual underlying problems are that are slowing down progress, consider what each different forum can deliver, and then build a regime that leverages initiatives against each other most effectively. One of those forums is the UNFCCC, and although it is clear that it cannot do the whole job, it is also clear that it plays an important role. It is also important to note that although it is certainly possible to try to engage additional forums, some efforts are already underway. Whether it is the discussions in the Major Economies Forum or the G20, or bilateral initiatives, there are attempts to bring new voices into the climate debate and pull some functions and issues away from the UNFCCC. For example, the EU is now, against significant pressure, implementing an aviation approach regionally. We need to think more deeply about which issues should be tackled in which groups and how they can add up to a greater impact. We need new messengers and players that hold greater sway with their publics.
After the Durban outcome, the importance of having a voice for the most vulnerable countries, which is possible only in the UN system, became more evident. The new Durban Alliance, made up of the EU and the most vulnerable countries, brought a new dynamism and created pressure on all major economies to act, something that had been absent in previous years. Building on that alliance is critical to continue the push into the uncomfortable politics of climate change. That alliance need not be vocal only in the UNFCCC. Having a group of countries that represent the progressive and the vulnerable could be a motor for whichever forum one is operating in.
In fact, one cannot really speak of a change in course without talking about how to change the underlying politics of this issue. Although we must learn from other treaties, such as arms control agreements, we also have to recognize that until the power dynamics around fossil fuels and the interests that represent fossil fuels are changed, it will be very difficult to get anything done in any forum. Ignoring this factor risks pushing old debates into new forums without any results.
Ruth Greenspan Bell and Barry Blechman raise a number of points worthy of serious consideration regarding the structure and dynamics of the current UN-led climate negotiations. They point out how nations made progress in tackling the problem of nuclear nonproliferation by moving forward on smaller targets, such as a partial test ban, and first reaching agreement among the smaller subgroups of countries willing to lead. In his 2011 book Global Warming Gridlock, David Victor similarly pointed to the dynamic of the World Trade Organization (WTO) negotiations, in which countries agree to reduce their trade barriers and abide by a common set of rules in order to enjoy the benefits of WTO membership, as a potentially more productive model for moving international climate cooperation forward.
Countries have made progress in the 20 years since agreeing to the UNFCCC in 1992, but the pace has been slow and insufficient to meet the urgency of rising GHG emissions and our increasing scientific understanding. Seeking agreement among key countries makes sense given that the majority of GHG emissions arise from a handful of players: The top five emitting countries/entities (China, the United States, the EU, Russia, and India) account for 66% of global CO2 emissions, and the top 10 account for nearly 82%. The great majority of these emissions come from burning coal, oil, and natural gas for power, industry, and transportation, and from deforestation. Just two countries, Brazil and Indonesia, account for nearly 60% of CO2 emissions from deforestation.
But how exactly could the climate negotiations be broken down into more actionable pieces, and which countries could take the first key steps for moving each of these pieces forward? Compared to the negotiations for arms control or phasing out ozone-depleting chemicals, which are more discrete problems, climate change touches on the entire mode of production of economies, inasmuch as they depend on fossil fuels for growth. Countries will need to develop new and more efficient non-fossil-fuel-based technologies and growth models to address climate change. Fortunately, the 10 to 15 highest-emitting countries, both developed and developing, are also the ones most likely to play the largest role in developing, building, and distributing the needed technologies.
These countries could strengthen the use of smaller forums such as the G20 or the Major Economies Forum on Energy and Climate to establish more concrete commitments than have been made in the UN climate negotiations and implement concrete plans to act on them. Rather than focusing solely on setting the numerical targets and timetables that have been the focus of the UN negotiations, these commitments should focus on very specific policies and actions, such as funding the Green Climate Fund, phasing out fossil fuel subsidies and inefficient lighting, implementing efficiency and renewable energy incentives and programs, reducing black carbon from diesel and cook stoves, and supporting clean-technology innovation. The input of the private sector (the primary source of most emissions) and nongovernmental organizations (which can help ensure the integrity of the process) is also critical. As Bell and Blechman suggest, the negotiations should not be limited to foreign affairs and environmental officials, but should include representation from countries' energy, finance, and political ministries.
These agreements would inject a welcome boost of momentum into the UN-led negotiations by developing complementary pathways for speedier and more effective agreements among key countries to take action. The UN negotiation process itself could also be made more effective by establishing a tradition of requiring heads of state to participate in the meetings every five years (as they did in Copenhagen) and replacing the rule of decision by consensus with a more realistic rule that attaining 90 or 95% of country votes would be sufficient to pass a COP decision. Improving the dynamics of international climate cooperation could help us to more rapidly take the actions needed to mitigate the effects of climate change.
Rapidly approaching is the 20-year follow-up to the UNFCCC treaty that was signed at the Earth Summit in Rio de Janeiro, making this a propitious time to ask two core questions: How effective has the treaty been? What other options might be pursued for meeting the stabilization goal, set at the 15th COP in Copenhagen, of holding global warming to 2°C?
As virtually every knowledgeable observer knows, the effectiveness of the treaty, because of a lack of enforcement mechanisms as well as political and other structural factors, has been painfully disappointing. This reality magnifies the importance of the second question. What else can be done? The challenge embedded in the question is further magnified by the uncertainty about unknown tipping points in climate systems that could lead to an irrevocably changed global environment.
An important first step in addressing this question is provided by Ruth Greenspan Bell and Barry Blechman. They argue for the augmentation or perhaps even abandonment of the pervasive top-down, unanimous consensus framework of the UNFCCC in favor of other more narrowly focused paths. Although circumscribed in scope and impact, such other paths may be more immediately successful not only in abating GHGs but also in establishing proving grounds of cumulative knowledge useful in other domains and with a broader scope of applicability. To illustrate the effectiveness of taking on parts of a problem one step at a time, Bell and Blechman trace the incremental successes in controlling weapons of mass destruction, especially nuclear weapons. Today, the world has 9 nations with nuclear weapons, a number far below the 25 projected by nuclear experts before the Non-Proliferation Treaty (NPT) was signed—a significant reduction in risk.
How might the Bell-Blechman idea work within a more general conceptual framework? And how might the framework enhance chances for successful policies?
Nobel Laureate Elinor Ostrom addresses the first question in her background paper to the 2010 World Development Report of the World Bank (http://siteresources.worldbank.org/WDR2010_BGpaper_Ostrom.pdf). Taking the failure of top-down global approaches to climate policy as her launching pad, as do Bell and Blechman, Ostrom develops a case for a “polycentric approach” to climate solutions. This framework recognizes that climate drivers and actions as well as climate effects take place at diverse locations and scales, from the global to the local and in between. Governance takes place at all of these scales, too. Hence, it makes perfect sense to address climate issues at these multiple levels.
Another important feature of the polycentric framework is its scholarly foundation. It derives from the considerable amount of knowledge we have about collective action problems and the application of that knowledge to common pool resources such as the global atmosphere. That understanding derives not simply from armchair thinking but from a large body of empirical studies allowing us to distinguish between successful and unsuccessful governance. Hence, the framework is founded on a set of principles for guiding policymaking and for evaluating policy effectiveness.
Neither the Bell-Blechman nor the polycentric framework is the final word on new directions in climate governance, but they do lead us to refocus our efforts along more promising paths.
The loss of momentum in climate change negotiations is undoubtedly worrying, and Ruth Greenspan Bell and Barry Blechman are right to urge the environmental community to look for models elsewhere. There is also much to be said for trying to engage diplomatic heavyweights as well as scientific specialists to get the debate moving, and for breaking up the big environmental issues into manageable proportions. A good rule of thumb on almost any topic is that the fewer parties involved and the sharper and more immediate the focus, the greater the chance of some agreement.
The authors are correct to note that the earliest and most idealistic schemes for complete disarmament were dashed by the Cold War and then an arms race, which meant that nuclear arsenals grew rapidly. Against this background, the move away from the more grandiose schemes for disarmament did not simply reflect a pragmatic determination to make progress with a number of small moves rather than one giant step, but an important conceptual change as well. The new field of arms control developed as a direct challenge to established notions of disarmament. The arms controllers pointed to the stability in East-West relations as a result of a shared fear of nuclear Armageddon. From about 1956, when this view became U.S. policy, an effort was made to clarify the meaning of living with nuclear weapons and to determine how best to stabilize the nuclear age rather than finding a means of escape. From this perspective, the number of weapons was unimportant compared to the risks of first strikes or accidental launches. Certainly, the 1962 Cuban missile crisis shocked Moscow and Washington into looking for ways to improve relations. The intense period of arms control that followed benefitted from the pathbreaking conceptual and policy work that had been undertaken during the previous decade. The aim was more mitigation than elimination.
Moreover, all of the negotiations were conducted within the terms of a hierarchical view of the international system. The deals between the superpowers had to be self-policing, although the more multilateral agreements brought into being international organizations to police them. Although the Non-Proliferation Treaty has been significant in reducing the risk of nuclear war, it has also had the effect of reinforcing the political status quo and has always caused resentment among nonnuclear powers as an unequal treaty. The importance of a U.S.-Soviet understanding during the Cold War also meant that negotiations on matters of substance were regularly entangled with broader signaling about the views being taken in Washington and Moscow about the other’s overall conduct. Arms control negotiations tended to reinforce underlying trends, whether negative or positive, in superpower relations. When relations were tense, which was when agreements were most needed, they were much harder to obtain. None of this is to challenge the core thesis of Bell and Blechman, but it does warn that any agreements require a common conceptual basis and a favorable political context.
International relations scholars have long argued that advocates of strong and meaningful policies to mitigate GHG emissions and facilitate adaptation to the effects of climate change have placed too much faith in the Holy Grail of global cooperation, given the now well-known challenges to reaching and implementing global agreements. Meanwhile, proponents of climate change action and clean-energy transitions have long decried the failures of citizens and politicians to adopt a long-term perspective and act accordingly. It is well past time to take both of these problems seriously and change course in climate change policy advocacy.
It is all well and good to assert that people and states share an interest in avoiding catastrophic climate change. This sentiment is usually accompanied by a further claim that global cooperation is needed to achieve shared interests. In fact, individuals and states have many competing interests, and most of them are more immediate than averting or reducing climate change. Rather than complain about this situation, climate change policy advocates would do well to accept it and get to work. Put simply, if we cannot simultaneously address immediate challenges faced by citizens and states while reducing GHG emissions and enhancing adaptation capacities, we will continue to fail to induce the energy, social, economic, and political transformations necessary to address climate change.
By all means, let’s learn more lessons from the rich history and the accomplishments of arms control negotiations, as suggested by Ruth Greenspan Bell and Barry Blechman. Frankly, there is also much to learn from international trade negotiations and international cooperation to improve airport safety and deliver the mail. But we cannot confine our gaze to instances of successful international cooperation. Even if we start smaller and break up negotiations into more manageable bits, or sing the praises of global volunteerism, as many analysts are doing now, the changes needed are too many and the time available is too short. International cooperation is needed, but it is not enough, and it is probably not the most important venue for action. Instead, advocates of climate change action must work to meet people’s needs, wants, and concerns at every level of authority and community, from the proverbial kitchen table to the UN. We must take seriously the fact that addressing climate change requires action and change at every level of social organization. Taking multilevel climate change governance seriously means that environmental activists and scientific analysts must curb their inclinations to tell citizens and states what they should do and care about, and start asking them what they want and need. How can we solve problems now in a way that helps curb climate change later?
In virtually every wealthier country, public officials are grappling with how to pay for health care and education, how to create jobs and reduce deficits, and how to address worrying dependence on others for energy and other resources, even as food and energy prices remain higher and more volatile than they have been in decades. In fast-growing emerging economies, citizens and states also grapple with these issues. Generally, in the rapidly developing nations, energy efficiency is even lower than in wealthier states, and the massive human and economic costs of severe air pollution are rising exponentially. The problems in people’s everyday lives need solving now.
Let’s close with four brief and interconnected examples: carbon taxes, strict air pollution controls, energy efficiency, and networks to improve urban governance. Even relatively low carbon taxes could help fund schools, universities, and health care, because they reduce deficits and encourage energy efficiency, carbon reductions, and renewable energy. Carbon and energy taxes and fees can be recycled into local and national economies with energy efficiency programs that put people to work, save money, and reduce pollution. Those worried about the global climate might get busy helping their friends and neighbors fund schools and improve energy efficiency in cities and provinces. Rather than complain that Chinese and Indian officials won’t agree to limit their emissions growth, those of us from wealthier countries should scale up our efforts to help them address horrible air pollution and make huge energy efficiency gains, tasks in which we have far more experience than in setting and meeting GHG reduction goals. Finally, this is an urban century, and working together, across borders and within hundreds of professions, to improve urban governance and urban life is necessary for many reasons beyond climate change. Megacities may well have more to learn from each other than from their national governments.
Global conferences, summits, and agreements are needed, but people, states, local governments, and firms make and implement decisions with more authority than most international organizations. There are hundreds of ways to work with our neighbors, fellow citizens, and far-flung associates everywhere in our many personal and professional networks, to help address the problems in people’s lives right now. Let’s get busy doing that and curb emissions along the way.
Stabilizing climate change has long been the chief objective for the countries that come to the UNFCCC table to negotiate. It is clear that climate change–related events represent a vital threat to the survival of many. However, when countries speak climate they think economy and that is, in fact, the real driver behind the negotiations. This is good news because the climate negotiations can succeed only if they credibly reflect the interests of the parties.
After Durban, the UN climate change negotiation process deserves some credit, but we don’t know whether it will perform up to expectations. It hasn’t in the past, and therefore it would not be wise to place a great deal of trust in the UNFCCC without exploring alternatives. The goal of policymaking is not the preservation of structures but achieving the goals these structures were set up to reach.
Whatever the final agreement, it is vital to all negotiating parties that it does not hamper their economic development prospects. The players want to be sure that while assuming pledges, they will not take on more burden than others. In my view, the problems of the least-developed countries and small-island states should be treated separately and with due attention to make sure that these parties, whose survival is at stake, do not become hostages of the negotiation game between the big economic players.
The core question therefore seems to be how to achieve a balance between the interests of the major players, most notably the United States and other advanced economies, China with the other major emerging powers, and the EU. Ruth Greenspan Bell and Barry Blechman suggest that much inspiration to answer this question may be found in the nonproliferation negotiations.
Nonproliferation negotiations have to address the problem of how to maintain inequality in status—nuclear haves versus have-nots—without creating inequality in security. The nuclear powers can prevent others from going nuclear only if they can ensure that both groups enjoy the same level of security. This implies that all nuclear actors are responsible players and do not pursue purely egoistic interests at the expense of other countries. An informal guarantor of this system is the United States. Opposition to Iran going nuclear stems from fear that the country’s behavior would destroy the entire system. The awareness of the system’s volatility led to the initiative of nuclear zero—the idea of a world free of nuclear weapons.
A key element of any international negotiation is the question of trust. It played a significant role in the disarmament and arms control process in the Cold War period. The answer to the problem was confidence-building measures. Their credibility was based on transparency and verification mechanisms. Trust is also a core problem of the climate process.
How can trust be built in the climate negotiations? Measurement and verification mechanisms are expected to play an important part in global climate agreements. However, they can work only if all parties perceive the agreement as fair. The idealistic approach is, however, generally a handicap rather than an advantage. Fear of a climate catastrophe remains the basic moral motive of the climate process. If it succeeds, it will be a clear moral victory. The question of how to achieve this goal is a political, not a moral one. The language of moral rigor can easily be used to camouflage real interest. The language of interests is much more appropriate for finding a compromise.
A change in language may be helpful, but is far from being a grand design for successful negotiations. The climate process has lost the initial sense of urgency. The fear of a future catastrophe must compete against the fears of present economic and financial trouble. And the incentives of the low-carbon economy are not perceived as strong enough to cause countries to give up the advantages of the traditional economic model.
The nonproliferation process is no blueprint for the climate change negotiations. Its role is inspirational instead. There is another international process that offers interesting but mostly negative experiences: the Doha Round of trade negotiations. These lessons should be studied carefully. International trade is not less complex than climate protection. Both processes are closely interlinked. And the next conference of the UNFCCC will take place in Doha. If we draw conclusions from the Doha Round, we may help prevent failure at COP 18 in Doha.
Looking at the essence of global anthropogenic climate change, I am led to believe that, as individuals consider the issue, they will necessarily conclude that it is in their best interest to limit their emissions of GHGs and promote such limitation on the part of other individuals as well. If some individuals insist on not limiting their emissions, this would be sufficient cause for them to be sued for damages by the ones who do so, based on their objective responsibility. It is possible to establish a causal link between emissions over a given period of time and the fraction of climate change caused by them, and therefore the fraction of the losses sustained.
If what I stated above is correct, I can then conclude that the majority of individuals on the planet have not considered the issue of climate change sufficiently. Otherwise, we would already have a comprehensive implementation of the UNFCCC. It is reasonable to assume that this will happen over time, as the magnitude of climate change increases and knowledge about it spreads further.
The experience of the past 20 years with the UNFCCC indicates that this process will not occur simultaneously all over the world, because concern about climate change must compete for attention with other priorities of societies. It is in this sense that I agree with the authors, for the UNFCCC process with its requirement for consensus will tend to follow the pace of the laggard countries.
On the other hand, a global treaty with emission limits for all countries should continue to be the goal of all negotiations. Any partial agreement should be designed so that it can be incorporated without difficulty into the global treaty. Any partial agreement should also contain incentives for outsiders to join it and/or disincentives for outsiders to stay out of it. It could even contain penalties for those outside it, such as trade restrictions.
Some of the progress made from implementation of the UNFCCC thus far can be attributed to the use of other groupings of countries, rather than only those that are customary in the negotiations themselves. The results achieved in Copenhagen were preceded by agreement in the Major Economies Forum earlier in the year. The Berlin Mandate, which originated the Kyoto Protocol, was made possible with the establishment of a Green Group. Unfortunately, the group worked only during the first COP to the UNFCCC.
Another avenue that could be exploited further is the inclusion of GHG emissions control in the agenda and in the regulatory framework of regional economic and trade agreements. The success of the EU in dealing with the control of GHG emissions is an extreme example, of course. There are, however, many other regional agreements that could and should be used to deal with the issue, even if at a more superficial level.