John F. Kennedy’s Space Legacy and Its Lessons for Today

Fifty years ago, on May 25, 1961, President John F. Kennedy, only four months in office, proposed before a joint session of Congress that “this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the Moon and returning him safely to Earth.” Kennedy was blunt: he said that agreeing to his proposal would involve a burden that “will last for many years and carry very heavy costs,” and that “it would be better not to go at all” if the United States was not “prepared to do the work and bear the burdens to make it successful.”

In the 30 months remaining in his tragically shortened presidency, Kennedy proved willing to follow through on his proposal, approving an immediate 89% increase in the National Aeronautics and Space Administration (NASA) budget and then, in the next year, another 101%. These increases started the lunar landing program, Project Apollo, on its way to becoming the most expensive peacetime mobilization of U.S. financial and human resources ever undertaken in pursuit of a specific goal. In 2010 dollars, Apollo cost $151 billion; by comparison, the Manhattan Project cost $28 billion and the Panama Canal, $8.1 billion.

In my new book John F. Kennedy and the Race to the Moon, I trace the factors that convinced Kennedy that the United States had to undertake what he termed a “great new American enterprise” and the steps he took to turn his decision to go to the Moon into the effort that led to Neil Armstrong’s first step onto the lunar surface in July 1969. I also reflect on what lessons the Apollo experience may have for today’s situation, in space and elsewhere.

Before Kennedy decided that the United States should send people to the Moon, the U.S. reaction to a series of Soviet Union space successes, beginning with the launch of Sputnik 1 in October 1957, had been relatively muted. President Dwight Eisenhower did not believe it wise to try to compete with the Soviets in space achievements undertaken primarily for prestige purposes and thus was unwilling to approve a fast-paced U.S. effort in response to Soviet successes. In reality, there was in 1957 no “Sputnik moment” that led to accelerated government support of innovative space technology. That acceleration came only after Kennedy, seeing the global and domestic reaction to the first orbital flight of Soviet cosmonaut Yuri Gagarin on April 12, 1961, decided that the United States could not, by default, cede control over outer space to the Soviets and thus had to enter a space race with the intent of winning it. It was a “Gagarin moment” rather than a “Sputnik moment” that precipitated massive government support for the technological innovations needed for success in space.

In retrospect, the impression is that Apollo moved forward free of political problems; that impression is not correct. In 1961 and 1962, there was widespread political and public support for Kennedy’s lunar initiative, propelled in part by the enthusiasm generated by the initial flights of Project Mercury, including Alan Shepard’s suborbital mission on May 5, 1961, and John Glenn’s three-orbit flight on February 20, 1962. But by 1963, criticism of Apollo was rising on several fronts. Eisenhower called the race to the Moon “nuts.” Many Republicans suggested that Kennedy should be spending more money on military space efforts nearer the Earth rather than on a lunar adventure. Leading scientists and liberals joined forces to argue that Project Apollo was a distortion of national priorities and that there were many more worthy uses for the funds being spent on going to the Moon. Congress cut the NASA budget by 10% in 1963, slowing its rapid increase.

Kennedy was quite sensitive to these criticisms, and in April, August, and October 1963 mandated major reviews of the Apollo commitment. The last of these reviews examined the options of slowing down Apollo, giving up on the Moon goal but continuing to develop the heavy-lift Saturn V Moon rocket, or canceling Apollo altogether. It concluded that none of these options were preferable to staying the course.

This review was not completed until November 29, 1963; by then, Kennedy had been dead a week. It is probable that Kennedy would have agreed with its conclusion; he was speaking of the space program in very positive terms in the days before his assassination. But Kennedy was also in the fall of 1963 pursuing another option: turning Apollo into a cooperative project with the Soviet Union. This is another aspect of the lunar landing program that has disappeared from memory.

Indeed, the 1961 decision to race the Soviet Union to the Moon was a reversal of Kennedy’s preference as he entered the White House. In his inaugural address he suggested “let us explore the stars together,” and in the first months of the Kennedy administration a White House task force worked on identifying areas of U.S.-Soviet space cooperation. Gagarin’s flight demonstrated to Kennedy that the United States had to focus on developing its own leading space capabilities, but the hope for cooperation never completely vanished from Kennedy’s thinking. As he met Nikita Khrushchev face to face in Vienna on June 3–4, 1961, Kennedy suggested that the United States and the Soviet Union join forces in sending people to the Moon. Khrushchev in 1961 was not open to such a prospect.

By 1963, the context for U.S.-Soviet space competition had changed. The United States had demonstrated to the world its technological and military power; the Soviet Union in 1961 backed off from a confrontation over access to Berlin and then in October 1962 yielded to U.S. pressure to remove its missiles from Cuba. Sobered by how close the two superpowers had come to nuclear war, Kennedy in 1963 proposed a new “strategy of peace” to reduce U.S.-Soviet tensions; an early success of this strategy was the signing of the Limited Test Ban Treaty in August 1963.

Kennedy, returning to his original point of view, thought that space cooperation might be a good next step in his strategy. He was also bothered by the increasing costs of Apollo and the chorus of criticisms of the lunar landing program. In a September 20, 1963, address to the General Assembly of the United Nations (UN), he made an unexpected, bold proposal. “Why,” he asked, “should man’s first flight to the Moon be a matter of national competition?” and suggested that the United States and the Soviet Union explore the possibility of “a joint expedition to the Moon.” Kennedy was quite serious about this proposal. When NASA seemed to be dragging its feet in coming up with approaches to U.S.-Soviet cooperation, Kennedy on November 12, 1963, directed NASA Administrator James Webb to take charge of government-wide planning for “cooperation in lunar landing programs.” With Kennedy’s death 10 days later, Apollo became a memorial to the fallen young president, and any possibility of changing it into a cooperative U.S.-Soviet effort disappeared. The country remained committed to the goal set for it by Kennedy.

Post-Apollo decline

One conclusion of John F. Kennedy and the Race to the Moon is that the impact of Apollo on the evolution of the U.S. space program has on balance been negative. Apollo turned out to be a dead-end undertaking in terms of human travel beyond the immediate vicinity of this planet; no human has left Earth orbit since the last Apollo mission in December 1972. Most of the Apollo hardware and associated capabilities, particularly the magnificent but very expensive Saturn V launcher, quickly became museum exhibits to remind us, soon after the fact, of what once had been done.

By being first to the Moon, the United States met the deadline that had provided the momentum that powered Apollo; after Apollo 11, that momentum rapidly dissipated, and there was no other compelling rationale to continue voyages of human exploration. In 1969 and 1970, even as the initial lunar landing missions were taking place, the White House canceled the final three planned trips to the Moon. President Richard Nixon had no stomach for what NASA proposed: a major post-Apollo program aimed at building a large space station in preparation for eventual (in the 1980s!) human missions to Mars. Instead, Nixon decreed, “we must think of them [space activities] as part of a continuing process … and not as a series of separate leaps, each requiring a massive concentration of energy. Space expenditures must take their proper place within a rigorous system of national priorities … What we do in space from here on in must become a normal and regular part of our national life and must therefore be planned in conjunction with all of the other undertakings which are important to us.” Nixon’s policy view quickly reduced the post-Apollo space budget to less than $3.5 billion per year, a federal budget share one-quarter of what it had been at the peak of Apollo. With the 1972 decision to begin the shuttle program, followed in 1984 by the related decision to develop a space station, the United States basically started over in human spaceflight, limiting itself to orbital activities in the near vicinity of Earth.

The policy and technical decisions not to build on the hardware developed for Apollo for follow-on space activities were inextricably linked to the character of Kennedy’s deadline for getting to the Moon “before this decade is out.” By setting a firm deadline, Kennedy put NASA in the position of finding a technical approach to Apollo that gave the best chance of meeting that deadline. This in turn led to the development of the Saturn V launcher, the choice of the lunar orbit rendezvous approach for getting to the Moon, and the design of the lunar module spacecraft optimized for landing on the Moon. None of these capabilities were relevant to any politically feasible post-Apollo space effort.

The Apollo program also created in NASA an organization oriented in the public and political eye toward human spaceflight and toward developing large-scale systems to achieve challenging goals. It created from Texas to Florida the large institutional and facility base for such undertakings. Reflecting that base, which remains in place today, is a coalition of NASA and contractor employees, local and regional politicians, and aerospace industry interests that has provided the political support that has sustained the space program in the absence of a Kennedy-like presidential commitment to space achievement. With the Nixon White House rejection of ambitious post-Apollo space goals, NASA entered a four-decade identity crisis from which it has yet to emerge. Repetitive operation of the space shuttle and the extended process of developing an Earth-orbiting space station have not been satisfying substitutes for another Apollo-like undertaking. NASA has never totally adjusted to a lower priority in the overall scheme of national affairs; rather, as the Columbia Accident Investigation Board observed in its 2003 report, NASA became “an organization straining to do too much with too little.” All of this is an unfortunate heritage of Kennedy’s race to the Moon.

Lessons from Apollo?

Project Apollo also became the 20th-century archetype of a successful, large-scale, government-led program. The success of Apollo has led to the cliché “if we can put a man on the Moon, why can’t we …?” This is not a useful question. What was unique about going to the Moon is that it required no major technological innovations and no changes in human behavior, just very expensive mastery over nature using the scientific and technological knowledge available in 1961. There are very few, if any, other potential objectives for government action that have these characteristics.

The reality is that attempts to implement other large-scale nondefense programs during the past 40 years have never been successful, in the space sector or in the broader national arena. Both President George H. W. Bush in 1989 and President George W. Bush in 2004 set out ambitious visions for the future of space exploration, but neither of those visions became reality; the political and budgetary commitments needed for success were notably missing. In 2010, President Obama proposed a dramatic move away from the Apollo approach to space exploration, stressing the development of new enabling technologies and widespread international collaboration. He also declared that the Moon would not be the first destination as humans traveled beyond Earth orbit. This proposal has been met with skepticism and substantial political controversy. Even in its modified form as reflected in the 2010 NASA Authorization Act, its future is at best uncertain. The strength of the political coalition created by Apollo is very resistant to change.

In the nonspace sector, there have been few opportunities for large-scale government programs that do not require for their success a combination of technological innovation and significant changes in human behavior. The attempt to wage a “War on Cancer,” for example, required not only research breakthroughs but also changing the smoking habits of millions of Americans. Attempts to move toward U.S. “energy independence” run afoul of limited R&D spending and the complex ties between non-U.S. energy suppliers and the U.S. financial and government sectors. Providing adequate health care for all Americans turns out to be primarily a political, not a technical, challenge. Managing global environmental change involves high technical uncertainties and must overcome deep social inertia. And so on.

This record of nonachievement suggests that the lunar landing decision and the efforts that turned it into reality were unique occurrences, a once-in-a-generation or much longer phenomenon in which a heterogeneous mixture of factors almost coincidentally converged to create a national commitment and enough momentum to support that commitment through to its fulfillment. If this is indeed the case, then there is little to learn from the effort to go to the Moon that is relevant to 21st-century choices. This would make the lament “if we can put a man on the Moon, why can’t we …?” almost devoid of useful meaning except to suggest the possibility that governments can succeed in major undertakings, given the right set of circumstances. Other approaches to carrying out large-scale government programs will have to be developed; the Apollo experience has little to teach us beyond its status as a lasting symbol of a great American achievement.

What future for space?

No one aware of today’s government deficits and the overall economic situation can suggest that the United States in 2011 commit the type of financial support to future space efforts that Kennedy made available to carry out Apollo. Kennedy made and sustained his commitment to developing the capabilities needed to reach the Moon before the Soviet Union because doing so was clearly linked to enhancing U.S. global power and national pride in the Cold War setting of the 1960s. Today, there most certainly is no pressing national security question to which the answer is “go to an asteroid,” or indeed anywhere else beyond Earth orbit. Space exploration is now a discretionary activity, not a national imperative. This country’s leaders need to decide, under very difficult circumstances, whether their image of the U.S. future includes continued leadership in space exploration, and then make the even harder choice to provide, on a continuing basis, resources adequate to achieving that leading position.

What faces the country today with respect to the future in space is in many ways a more challenging decision than that which faced Kennedy a half-century ago. In his final months in the White House, Kennedy was prescient enough to discern one path toward a sustainable space future: making space exploration a cooperative global undertaking. In the September 1963 UN speech, Kennedy observed that “Surely we should explore whether the scientists and astronauts … of all the world cannot work together in the conquest of space, sending some day … to the Moon not representatives of a single nation, but representatives of all our countries.” That admonition remains relevant today.

Medical Devices: Lost in Regulation

The implanted medical device industry was founded in the United States and has been a major economic success and the source of numerous life-saving and life-improving technologies. In the 1950s and 1960s, technological innovations such as the cardiac pacemaker and prosthetic heart valve meant that thousands of suffering Americans had access to treatment options where none had existed before. And because so many breakthrough devices were developed in the United States, the nation’s citizens usually had timely access to the latest technological advances. In addition, U.S. physicians were at the forefront of new and improved treatments because they were working alongside industry in the highly dynamic innovation process. In fact, they rose to worldwide preeminence because of their pioneering work on a progression of breakthrough medical therapies.

But that was then. Although the United States is still home to numerous medical device companies, these companies no longer bring cutting-edge innovations to U.S. patients first. And U.S. clinical researchers now often find themselves merely validating the pioneering work that is increasingly being done in Europe and elsewhere in the world. Worse still, seriously ill patients in the United States are now among the last in the world to receive medical innovations that have secured regulatory approval and clinical acceptance elsewhere in the developed world.

What’s behind this erosion of leadership and late access to innovations? Simply stated, an overreaching, overly burdensome, and sometimes irrelevant Food and Drug Administration (FDA) regulatory process for the most sophisticated new medical devices. To be fair, occasional device recalls have caused great political pressure to be placed on the FDA for somehow “allowing” defective products to harm patients. The agency’s response to that pressure has been to impose additional requirements and to ratchet up its tough-cop posture in order to assuage concerns that it is not fulfilling its responsibility to the public. It is presumed, incorrectly, that a lax approval process is responsible. In most instances, however, the actual cause of a recall is outside the scope of the approval process. The most frequent causes of recalls are isolated lot-related subcomponent failures; manufacturing issues such as operator error, processing error, or in-process contamination; latent hardware or software issues; and packaging or labeling issues. In addition, company communications addressing incorrect and potentially dangerous procedures used by some medical personnel are also counted as recalls, even though the device itself is not faulty. Face-saving implementation of new and more burdensome clinical trial requirements, often called added rigor by the FDA, is an ineffective and wrong answer to such problems.

Excessive approval burdens have caused a once-vibrant medical innovation engine to become sluggish. The FDA’s own statistics show that applications for breakthrough approvals are near an all-time low. It is not that companies have run out of good ideas, but that regulatory risks have made it impractical to invest in the truly big ideas. A slow but inexorable process of adding regulatory requirements on top of existing requirements has driven up complexity and cost and has extended the time required to obtain device approval to levels that often make such investments unattractive. It must be noted that the market for many medical devices is relatively small. If the cost in time and resources of navigating the regulatory process is high relative to the anticipated economic return, the project is likely to be shelved. The result is that companies instead shift resources toward making improvements in existing products, which can receive relatively rapid supplemental approval and start generating revenue much sooner. Some patients will benefit from these updated devices, but the benefits are likely to be much less impressive than those that would result from a major innovation.

Perhaps the best measure of the FDA’s stultifying effect on medical device innovation is the delay, often of several years, between device approval in Europe (designated by the granting of the CE mark) and approval in the United States. The Europeans require that so-called Class III medical devices (products such as implanted defibrillators, heart valves, and brain stimulators) undergo clinical trials to prove safety and functionality as well as compliance with other directives that relate to product safety, design, and manufacturing standards. In addition, the European approach relies on decentralized “notified bodies,” which are independent commercial organizations vetted by the member states of the European Union for their competence to assess and control medical device conformance to approval requirements. The primary difference in the U.S. system is a requirement for more and larger clinical trials, which can be extremely time-consuming and difficult to assemble. Ultimately, the European approach places more responsibility on physicians and their clinical judgment rather than on government officials who may have little appreciation of or experience with the exigencies of the clinical circumstance.

These Class III devices are complex and can pose a risk of significant harm to patients if they are unsafe or ineffective. It is for this reason that the FDA’s pre-market approval (PMA) pathway for these products is arduous and rigorous. It should be. Rigor, however, must be tempered with expert judgment that compares the demonstrable benefits with the possible risks to patients. And in setting requirements for evidence, regulators must distinguish between data that are essential for determining device safety and effectiveness and data that are nice to have.

Not to be lost in the FDA’s quest to avoid possible patient harm, however, is the reality that PMA devices offer the greatest potential for patient benefit. Delays in the approval of effective devices do result in harm to patients who need them. If we examine the date of approval for the identical device in Europe and the United States, we see that most devices are approved much later in the United States. Three examples illustrate this point. Deep brain stimulation for ineffectively managed symptoms of tremors and Parkinson’s disease was approved for use in the United States 44 months after European approval. A novel left ventricular assist device that permitted patients with severe heart failure to receive critical circulatory support outside the hospital was approved 29 months later. A pacemaker-like device that resynchronized the contraction sequence of heart muscle for patients suffering from moderate to severe heart failure was approved 30 months after it became available for patients in Europe.

These examples are drawn from experiences over the past 20 years. Each has matured into a treatment of choice. Table 1, which is based on data from the first 10 months of 2010, shows that delays continue to be long. Of the 11 new devices approved in this reporting period, 9 received the CE mark between 29 and 137 months earlier. It is not known whether the sponsor of the other two devices applied for a CE mark. In the case of an intraocular lens listed in the table, the FDA noted that more than 100,000 patients had already received the implant overseas. This level of utilization is significant by medical device standards and suggests strongly that its attributes have made it part of routine clinical practice. Yet U.S. patients had to wait more than five years for it to be available.

A legitimate question is whether the hastier approval of Class III devices in Europe harms overseas patients. A study conducted by Ralph Jugo and published in the Journal of Medical Device Regulation in November 2008 examined 42 PMA applications that underwent review between late 2002 and 2007. Of the 42, 7 resulted in FDA disapproval, of which 5 had received prior CE mark approval. The reasons for disapproval were study design, failure to precisely meet primary study endpoints, and, in the FDA’s opinion, the quality of the collected data. In other words, the problem was that these devices failed to satisfy some part of the FDA protocol, not that the FDA found evidence that they were not safe. The majority (34 of 42) of applications garnered both European approval and a subsequent, but considerably later, PMA approval.

Examples of Class III devices that received the CE mark and were subsequently pulled from the market are few. In recent testimony before the health subcommittee of the Energy and Commerce Committee, the director of the FDA’s device branch cited two examples. One involved certain breast implants. The other was a surgical sealant. These events indicate that the European approval process is imperfect, but hardly one that has subjected its citizens to a large number of unsafe devices. It is simply unrealistic to expect an event-free performance history, given the complexities and dynamic nature of the device/patient interface and the incomplete knowledge that is available.

But what about the harm caused by delaying approval? Delay may not be of much consequence if the device in question serves a cosmetic purpose or if there are suitable treatment alternatives. Delay is of major significance if the device treats an otherwise progressive, debilitating, or life-threatening disease for which medical alternatives don’t exist or have only limited effects. Such afflicted patients can’t wait for what has become an inefficient process to run its course. The paradox is that the FDA’s current regulatory approach may be causing unnecessary patient suffering and death by virtue of the regulatory delay imposed by its requirements.

It is particularly frustrating that devices invented and developed domestically are unavailable here for significant periods of time whereas patients elsewhere receive tangible benefit. It is not unusual for second and third generations of some products to be available internationally before the now outdated device finally secures U.S. approval.

The example of a minimally invasive transcatheter heart valve for the treatment of inoperable aortic stenosis illustrates the implications of excessive delay for the well-being of ill patients. Patients suffering from severe aortic stenosis have an estimated 50% mortality within 2 years after symptom onset if they do not undergo open-heart surgery for valve repair or replacement. Quality of life is adversely affected because of shortness of breath, limited exercise capacity, chest pain, and fainting episodes. A definable subset of affected patients includes those who are too frail to undergo the rigors of open-heart corrective valve surgery. The transcatheter approach, whereby a replacement valve is inserted via the vasculature, much the way coronary balloon angioplasty is done, offers a much less invasive and less traumatic therapeutic option for the frail patient. Even though the technology and procedure are still evolving, clinical results have been impressive, and thousands of patients have received it. In a recently published clinical study, one-year mortality was reduced by 20 percentage points compared with that of patients receiving standard medical care. Quality-of-life measures also improved substantially. The transcatheter heart valve was approved in Europe in late 2007; it is still awaiting FDA approval. A transcatheter valve of different design was approved in Europe in March 2007 and has produced impressive results in high-risk patients. Over 12,000 patients in Europe and in 40 other countries where approval has been granted have received this valve. It too is still not approved in the United States. In the case of a disease with a poor prognosis, years of delay do not serve the best interests of affected U.S. patients, especially if there is credible clinical evidence that a new intervention performs well.

A more subtle effect of over-regulation is the loss of a leadership position by U.S. physicians and clinical researchers. Whereas pioneering clinical trials used to be the province of U.S. physicians at major academic medical centers, today non-U.S. physicians and medical centers are conducting a substantial and growing number of safety and effectiveness trials. As a result, overall clinical expertise and identification of ways to further improve a new technology have shifted overseas. International physicians increasingly supplant U.S. clinical researchers as medical pioneers. The United States can no longer be assured that its physicians are the preeminent experts at the cutting edge or that U.S. patients are receiving world-class treatments.

The peer-reviewed medical literature serves as a good indicator of where innovation in clinical practice and technology is taking place. The role of journals is to publish findings that are new, true, and important. Reported findings inform the future course of medical practice. A review of the current medical literature concerning transcatheter heart valves, as an example, shows that non-U.S. investigators and centers dominate the field. Published reports not only document the initial clinical experience but also identify advances in technique, refine indications for use, and propose next-generational improvements. High-caliber clinical studies are, without question, being performed in the United States as part of the data package for the FDA, and they are producing valuable information. The point is that although they are adding layers of relevant confirmatory data, they are not driving the cutting edge of medical practice.

A rigorous approval process for medical devices is absolutely necessary. However, the process must be relevant for the safety and effectiveness questions that pertain to the product under review. The process must be efficient, streamlined, administratively consistent, predictable, and conducted with a sense of urgency. It must limit its scope of requirements to those data that are central to demonstrating safety and effectiveness. There are always more questions that could be asked of a new product. A patient-centered regulatory process prioritizes and limits questions to those that are essential to the demonstration of safety and effectiveness in the context of the disease. The FDA has a very legitimate role to play in ensuring that new technologies are sufficiently safe and effective for patient use. This is a relative, not absolute, standard. Benefits must be balanced against risk. As practiced today, the regulatory process is unbalanced at the expense of innovations that could help patients.

Current FDA processes for the approval of medical device innovations need to be reengineered to balance the quest for avoidance of possible harms with the potential for helping today’s seriously ill patients. The agency must also limit the scope of studies to address necessary questions rather than to aspire to scientific elegance and excessive statistical certainty. As Voltaire said, “The perfect is the enemy of the good.” The European experience demonstrates that it is possible to make safe and effective new medical devices available to patients much more quickly. Actual clinical experience demonstrates that an excessively cautious and slow regulatory process conflicts with the interests of patients suffering from serious and progressive diseases. They simply don’t have the luxury of time.

New Voices, New Approaches: Drowning in Data

I was at the most undignified moment of moving into my new office—barefoot and on tiptoes on my desk, arranging books on a high shelf—when one of my fellow professors at the University of Washington–Bothell walked in to introduce himself. Pulling my shirt firmly over my waistband, I clambered down to shake his hand and exchange the vital information that begins academic acquaintanceships: Where had I come from? What kind of research did I do?

I felt my shoulders tense, bracing for the question I knew was probably coming next. I explained that I studied communities living next to oil refineries, especially how residents and refinery experts make claims about the effects of chemical emissions on people’s health. My colleague replied with what I’d been hoping he wouldn’t: “But is it really the emissions from the refineries that are making those people sick?”

An important question, to be sure—essential, even, to policymakers deciding how refineries and petrochemical plants ought to be sited and regulated. So it’s hardly a surprise that in the decade since I started my research, I’ve been asked The Question scores of times, in settings that range from conference presentations to New Orleans dive bars. Yet it’s a vexed question, and I have always been frustrated and often struck dumb by my inability to answer it. “There’s a lot of controversy over that,” I explained to my colleague in my best anthropologist-of-science manner. “The truth is that we don’t really know enough to say for sure.”

But as I returned to the solitary work of shelving books, I sought refuge in a place that had recently become my favorite environmental fantasy: A brown, windswept hill at the edge of a refinery in the San Francisco Bay area, topped by a small white trailer the size of a backyard tool shed. In my imagination, the trailer glows in the California sun as the state-of-the-art monitoring instruments inside it hum and flash, measuring minute by minute what’s in the air. In my imagination, a cadre of scientists peers at computer screens to turn these data into a more satisfying answer to The Question, an answer that matches real-time chemical concentrations with the health concerns of people living nearby.

My fantasy is set in a real place, though I’ve never seen it. The hill of my imagination overlooks the town of Benicia, a bedroom community of 30,000, where people who drive tight-lipped to San Francisco jobs all week stroll past the antique shops to First Street for scones and lattes on Saturday morning. It’s a charming place, yet Benicia’s industrial past persists; a slim smokestack pokes up like a flagpole beyond the trailer, its white plume meandering off toward the Carquinez Strait. Benicia is home to one of the 150 or so oil refineries that feed the nation’s appetite for energy. Less than a mile from downtown, an Oz of tanks and towers on 800 acres churns away day and night, turning up to 170,000 barrels of oil per day into gasoline, asphalt, jet fuel, and other petroleum products. The Valero facility is the town’s biggest employer and the major denizen of Benicia’s industrial park. The trailer sits on its southern edge.

Most of the communities I have studied are clustered in the South and are smaller, poorer, and more economically dependent on their refineries than is Benicia. For them, the trailer and the data it offers are even more urgent than they are for Benicia residents. These “fenceline communities” are places where people cough. Where they carry asthma inhalers. Where every resident has a handful of neighbors who have died of cancer. Where refinery and government officials insist that chemicals in the air don’t harm them, and residents are sure that they know better. These communities are places where conflict lingers in the air along with the smell of sulfur.

Data that can show how chemical exposures are related to health symptoms could help these communities. Such data could suggest the kinds of protection they need, show the real extent of the emissions reductions the refineries must make, and point the way to improved environmental policies. In my mind, Benicia’s trailer gleams with the possibility of new knowledge that helps everyone.

But a few weeks after my colleague’s visit, my hopes for the trailer dimmed. As I was putting the finishing touches on a syllabus in my office, by now already messy, the phone rang. It was Don Gamiles, an engineer whose company installed Benicia’s trailer. He had been excited about the project in Benicia from the time he first mentioned it to me earlier in the summer.

Gamiles has been involved in air monitoring since the aftermath of the Persian Gulf War, when he ran equipment to detect potential poison gas releases during United Nations inspections of Iraqi facilities. He’s invented two instruments that can measure concentrations of toxic gases in real time, both of which are part of the suite of monitors that he pulled together for the trailer in Benicia. But these days, Gamiles’s business really centers on mediating conflicts between facilities that release those gases and neighboring communities concerned about them. Affable and unassuming in his characteristic polo shirt and khakis, Gamiles works with both sides to design and install suites of monitors, like the one in Benicia, that incorporate his instruments and produce solid data about what’s in the air so that neither side can exaggerate. “Everyone’s a little bit right,” he says. “The refinery guys tend to over-trivialize what’s coming out. But communities want to make them the villain.”

Though he’s been involved in other projects (one major refiner is even talking about making Gamiles’s monitors a standard part of their environmental best practices), the Benicia project is what Gamiles raves about: “The sampling station’s the best in the world,” he said, reminding me that it can monitor hydrogen sulfide, black carbon, and particulates in addition to hazardous air pollutants such as benzene, xylene, and toluene, all for a very reasonable price tag. And the best part: “Everybody’s happy!” He chuckled and I imagined his self-effacing grin. “This is a model of how to do things right.”

“There’s just this one sticking point,” he added. He’d called to ask for my help. The refinery and the community group that pushed for the monitors were having trouble figuring out how to present the data. If the monitors detected chemicals, how could they best explain what that meant to someone looking at that data on a public Web site?

The refinery, it seemed, wanted to avoid alarmism and irate hordes at its gates; on the other hand, it was in no one’s interest if it swept real risks under the rug. “Everybody has a valid point,” Gamiles said. “What would be helpful to have is a listing of standards for all of this stuff”—all of the chemicals that the monitoring station could be detecting, starting with benzene, toluene, xylene, and sulfur dioxide. Could I work with a student to put together a list?

My heart sank. Here was The Question again, in a more nuanced form. Gamiles was asking, “At what exposure levels do emissions from refineries make people sick?” Worse, this wasn’t the first time I’d been asked to take stock of the available information, and what I’d found the last time had driven me to my fantasies of fancy new monitors in the first place.

Buckets of data

In the summer of 2001, I was halfway through my 20s and a Ph.D. program when I walked into the Oakland, California, offices of a nonprofit organization called Communities for a Better Environment (CBE). After years with my nose in a book, I was dying to do something “real” and antsy about finding a focus for my thesis project. I hoped that interning for CBE, whose lawyers, scientists, and organizers worked with Northern California communities to advocate for environmental justice, might address both problems at once.

No one was at the reception desk, so I hung by the door, fingering pamphlets and newsletters announcing the organization’s latest successes, including its work helping refinery-adjacent communities establish “bucket brigades” to monitor air quality with do-it-yourself air samplers made from hardware store supplies. Eventually someone bustled past and directed me to the Science Department at the end of one of the office’s warren-like hallways.

My first assignment seemed simple enough: Communities were getting data with their bucket samples, but they were having a hard time saying what the numbers meant. My job was to compile a list of the state and federal air standards for different chemicals that might show up in a bucket sample. The list would be like a yardstick that citizens could use to put air quality readings in perspective, showing how the numbers measured up to the thick black line that separated “safe” from “dangerous.”

As a starting place, my supervisor handed me a second-generation photocopy of a fax containing a table of numbers. The fax was from Wilma Subra, a MacArthur “genius grant”–winning chemist and legend among refinery-adjacent communities in Louisiana. Subra’s document listed “levels of concern”; specifically, the regulatory standards set by Louisiana and nonenforceable “screening level” recommendations from the neighboring state of Texas. I was to expand the table, adding comparable standards from other agencies, to give bucket users a straightforward way to know when the concentrations they measured were cause for alarm.

Squinting at a computer screen from the corner of a borrowed desk, navigating through one agency Web page after another in search of air quality standards, I had no problem adding columns to Subra’s chart. Agencies such as the Louisiana Department of Environmental Quality (LDEQ), its counterparts in Texas and North Carolina, and the federal Agency for Toxic Substances and Disease Registry set standards or made recommendations for acceptable ambient air levels of individual chemicals. But each included only a subset of the chemicals I was looking for. The federal Clean Air Act, for example, set limits on total volatile organic compounds, a category that includes these chemicals, but not on the individual air toxics under that umbrella, such as benzene, toluene, and xylene: monoaromatic hydrocarbons known or suspected to cause cancer.

As the table grew, I was surprised to find that there was no consensus on what constituted a safe or permissible level for any of the chemicals. Even after I’d converted the disparate standards into a common unit of measurement, reading across any one row (for benzene, say, or hydrogen sulfide) turned up numbers in the single digits, numbers in the double digits, and decimal fractions. The lack of consensus was apparent even in the table’s header row: One agency set limits on 8-hour average levels, the next on annual averages, the next on 24-hour averages. There didn’t even seem to be agreement on what averaging period was most appropriate for any given chemical. I didn’t have a single yardstick; I had several of them, each for a different kind of measurement, each with multiple black lines. How would this help anyone figure out what chemical concentrations they should worry about?
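The unit conversions themselves, at least, were the mechanical part. A minimal sketch of that step, with purely illustrative chemicals and an example reading, assuming gases at about 25 °C and 1 atmosphere (where a mole of gas occupies roughly 24.45 liters), looks something like this:

```python
# Purely illustrative: put standards quoted in different units on a common
# footing by converting parts per billion (ppb) to micrograms per cubic
# meter (ug/m3). Assumes ~25 C and 1 atm, where a mole of gas occupies
# about 24.45 liters; molecular weights are in g/mol.

MOLAR_VOLUME_L = 24.45

MOLECULAR_WEIGHTS = {
    "benzene": 78.11,
    "toluene": 92.14,
    "hydrogen sulfide": 34.08,
}

def ppb_to_ug_per_m3(chemical: str, ppb: float) -> float:
    """Convert a gas mixing ratio in ppb to a mass concentration in ug/m3."""
    return ppb * MOLECULAR_WEIGHTS[chemical] / MOLAR_VOLUME_L

# Example: a hypothetical 3-ppb benzene reading
print(round(ppb_to_ug_per_m3("benzene", 3.0), 1))  # -> 9.6
```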

At my boss’s urging, I made some phone calls to find out how the agencies could arrive at such different standards. A scientist at the LDEQ explained that his agency used occupational health studies—studies of how workers were affected by the chemicals—and multiplied the results by a scaling factor. I remembered that factor from my graduate class in risk analysis: it adjusted limits derived from 8-hour-a-day, 5-day-a-week worker exposures down to levels appropriate for populations, such as people living near refineries, that could be exposed to the same chemicals for as much as 24 hours a day, 7 days a week.
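The arithmetic behind such an adjustment is simple enough to sketch. The numbers below, including the hypothetical occupational limit and the uncertainty factor, are illustrative assumptions on my part, not the LDEQ's actual values or method:

```python
# Illustrative sketch, not any agency's actual method: scale an exposure
# limit based on 8-hour/day, 5-day/week worker studies down to a level
# for residents who may breathe the same air 24 hours a day, 7 days a
# week, with an extra uncertainty factor for sensitive populations.

def residential_level(occupational_limit_ug_m3: float,
                      uncertainty_factor: float = 10.0) -> float:
    """Adjust an occupational limit to continuous residential exposure."""
    time_adjustment = (8 / 24) * (5 / 7)  # fraction of the week covered by worker exposure
    return occupational_limit_ug_m3 * time_adjustment / uncertainty_factor

# Example: a hypothetical workplace limit of 1,000 ug/m3
print(round(residential_level(1000.0), 1))  # -> 23.8
```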

A Texas regulator, in contrast, told me that her agency based its recommendations mostly on laboratory studies. I knew about this process from my class, too. Groups of mice or rats or other small animals would be exposed to varying levels of a chemical to determine the highest dose at which the animals didn’t appear to suffer any adverse health effects. The agency scientist would have looked at a number of different studies, some of them with incompatible results, made a judgment about which numbers to use, then applied a safety factor in case human populations were more sensitive to the chemical than other mammals. But what neither she nor her counterpart in Louisiana had to work with were studies of what these chemicals did to people who breathed many of them at once, in low doses, every day.

In the end, digging into the standards and learning how incomplete and uncertain they were convinced me that we don’t really have a good answer about exactly what the chemical levels mean for health. Anyone who professes to know with certainty is operating as much on belief as on data. So by the time Don Gamiles asked me, nine years later, if I could assemble the standards for the chemicals that his shiny new monitoring station was detecting, I wanted to tell him that all he was going to get was a whole bunch of yardsticks. What he needed was an additional stream of data, health data that could put chemical concentrations in the context of real people’s experiences and, over time, help put those standards on a firmer footing.

But Gamiles is an engineer, not an epidemiologist. I knew that his contract would not have funding for what I was proposing. And explicitly mentioning the health concerns wasn’t likely to help Gamiles maintain the collegiality between the Valero refinery and its neighbors in Benicia.

I took a deep breath and agreed to look for a student who would investigate the standards. Maybe, I told myself, if we could show Gamiles and the engineers at Valero the uncertainties in the standards, we could start a richer conversation about what the data coming from the new monitoring station meant, and how to figure it out.

Having that conversation, or at least trying to, seemed especially important since more and more refineries, especially in environmentally conscious parts of the country such as the San Francisco Bay area, have been seeking Gamiles’s services, installing their own monitors before an increasingly vigilant Environmental Protection Agency (EPA) can require them to. And yet part of me knew that imagining I could get refiners and communities to talk about the issue was overly optimistic, if not downright naïve. I already knew that petrochemical companies weren’t troubled by the limitations of the standards. In fact, years earlier in Louisiana, I’d seen how they use those very uncertainties and omissions to their advantage.

The lowdown in Louisiana

Many of the air monitors in the trailer in Benicia hadn’t yet been developed when Margie Richard decided to take on the Shell Chemical plant across the street from her home in Norco, Louisiana, in the late 1980s. But what was in the air, and what it could do to a person’s health, were very much on her mind.

Richard’s front windows looked out on an industrial panorama: tall metal cylinders and giant gleaming spheres connected by mazes of pipes, all part of the processes that turn crude oil into gasoline, ethylene, propylene, and industrial solvents. Half a mile away, at the other edge of the 3,700-person town, an oil refinery loomed. On good days, a faint smell of motor oil mixed with rotten eggs hung in the air; on bad days, chemical odors took Richard’s breath away.

Throughout Richard’s eight-square-block neighborhood of Diamond, the historic home of Norco’s African-American population, people were getting sick. Richard’s young grandson had asthma attacks that landed him in the emergency room on more than one occasion. Two streets over, Iris Carter’s sister died in her forties of a disease that doctors told the family they only ever saw in people living near industrial facilities.

Barely five feet tall and bursting with energy even in her early sixties, Richard led her neighborhood in confronting Shell about its plant’s ill effects. Every Tuesday afternoon, she and a few other women with picket signs walked up and down the far side of her street, in front of the chain link fence that separated Shell from the community, demanding that representatives from the company meet with residents to discuss a neighborhood relocation. Concerned about their health and safety, she and other residents wanted out.

In 1998, Richard and her neighbors finally started to get some quantitative data to support their claims that Shell’s emissions were making them sick. Denny Larson, then an organizer with CBE in Oakland, arrived with buckets. With the low-tech air sampler—little more than a five-gallon plastic paint bucket with a sealed lid and a special bag inside—Richard documented an incident in which Shell Chemical released potentially dangerous concentrations of an industrial solvent called methyl ethyl ketone (MEK). She also gathered evidence that residents of her community were exposed to toxic chemicals when odors were inexplicably bad, and even personally presented a high-ranking Shell official with a bag of air from her community at a shareholders’ meeting in the Netherlands.

In 2002, Richard and her group triumphed. Shell agreed to buy out any Diamond residents who wanted to leave. But Richard had succeeded in more than winning relocation. She had also put air monitoring on Shell’s agenda, where it had not previously been. That fall, even as families in Diamond were loading moving vans and watching bare ground emerge where their neighborhood had been, Shell Chemical and its Norco counterpart, Motiva Refining, launched their Air Monitoring…Norco program.

Good neighbors

One muggy September afternoon, I picked up a visitor’s badge at the guardhouse at Shell Chemical’s East Site and made my way to the company’s main office building. The rambling, two-story beige-and-brown box could have been in any office park in suburban America, except that in place of manicured gardens and artificial lakes, it was surrounded by distillation towers and cracking units.

David Brignac, manager of Shell’s Good Neighbor Initiative, which was overseeing the Air Monitoring…Norco program, greeted me with a boyish grin and a slight Louisiana drawl and led me upstairs to his roomy office. We sat at a small round table with Randy Armstrong, the good-natured but no-nonsense Midwesterner in charge of health, safety, and environment for Shell Norco.

Brignac walked me through a printed-out PowerPoint presentation: Surveys showed that Norco residents thought that there were dangerous chemicals in the air and that they had an impact on people’s health. Air Monitoring…Norco sought hard data about what really was in the air.

Scribbling frantically on a legal pad, I noted what he left out as well as what he said. There was no mention of the bucket samples; no suggestion that Shell’s decision to relocate Diamond residents may have fueled the perception that the air was somehow tainted; no hint at the regulatory enforcement action, taken in the wake of the MEK release, that required a “beneficial environmental project” of Shell; in short, there was no acknowledgement that the monitoring never would have happened if not for the Diamond community’s activism.

Using their pencils to move me through their talking points, the two engineers described how the data produced by the program would be “objective, meaningful, and believable.” Brignac described a planning process that had included not only Shell and Motiva engineers, but also state regulators, university scientists, and community members. Armstrong outlined a sampling procedure that replicated the one used by the LDEQ in their ambient air monitoring program: Each sample would be taken over a 24-hour period, on rotating days of the week (Monday this week, Sunday next), and their results averaged together, all to ensure that the data gave a “representative” picture of Norco’s air quality and not anomalous fluctuations.

Like all good scientists, Brignac and Armstrong acknowledged that they didn’t know what their study would find. They monitored emissions leaving the plant, Armstrong explained, and used computer models to predict how they would disperse into surrounding areas. Those models gave them every reason to believe that the air quality was fine. And the company had done studies of its workers’ health, which also gave them confidence that their emissions weren’t making anyone sick. But we all knew that models aren’t measurements, and the health of adult plant workers may or may not say anything about the health of residential populations that include the very young and very old. So with a slightly nervous laugh (or was that my imagination?), Armstrong assured me that Shell would be releasing the results even if they showed that air quality was worse than they had thought.

Nearly six months later, I followed Margie Richard, now a resident of the nearby town of Destrehan, into Norco’s echoey, warehouse-like American Legion Hall. Brignac and Armstrong milled with their colleagues near the table of crackers, cheese, and that unfathomable Louisiana delicacy, the shrimp mold. They greeted us warmly as the facilitator began to usher people to their seats for the presentation of Air Monitoring…Norco’s first set of results.

A nervous young African-American man from Brignac’s Good Neighbor Initiative team began by explaining the rationale and process of the program, using more or less the same slides that I had seen in September. Then a white 30-something from the independent firm that had carried out the monitoring, less polished than his Shell counterparts and looking uncomfortable in his tie, gave us the results. The headline: “Norco’s air meets state standards.” They had compared the concentrations measured in Norco, he explained, to limits on chemical concentrations set by the LDEQ, and the measured levels were below the regulatory limits.

Neither the contractor nor the assembled Shell representatives said so explicitly, but the conclusion they wished us to draw was clear: Air quality in Norco met the state’s standards, so it was perfectly healthy to breathe. I wanted to object. How could they say that when there were no standards for some of the chemicals that they measured? When Louisiana’s standards represented just one version of where scientists drew the line between “healthy” and “dangerous”? I sat on my hands and held my tongue; rabble-rousing at public meetings is not an anthropologist’s mandate, especially when she hopes to continue interviewing all sides.

But I wasn’t the only one inclined to question the implication that “meets standards” was the same as “safe.” In the question-and-answer period, a middle-aged African-American woman, her graying cornrow braids piled neatly in a bun, stood up and asked just how good those standards were. How could we know that they were strict enough? One of the university scientists involved in the project, a public health researcher from Tulane, reassured her that the standards were based on the best available scientific studies and updated as new information became available. Shell’s engineers nodded their approval. For them, it seemed, Air Monitoring…Norco had settled the matter: There was no reason to think that emissions from Shell were making anyone sick.

Elsewhere in the audience, Margie Richard pursed her lips. I couldn’t tell what she was thinking, but the fact that she was there at all, even after having moved away from Norco, suggested that the Air Monitoring…Norco program had been an important aspect of her group’s victory. For years, her group had been calling for hard data about the chemicals they were exposed to, and they had gotten it. But in the drafty warehouse, the victory seemed hollow. Shell had interpreted their data in the context of questionable standards in order to prove what they had believed all along. I wondered if Richard was disappointed. I was.

The story didn’t have to end there, of course. Residents of Diamond and other fenceline communities had challenged the industry’s science before with their bucket samples. They could likewise have launched an attack on the idea that “meeting standards” was the same as “safe” and insisted on health monitoring to go along with the air monitoring. But their relocation victory meant that Diamond’s activists were already scattered to new neighborhoods. Battles over the adequacy of standards were not likely to be fought in Norco.

Yet the question remains for other communities: As more and more facilities set up air monitoring programs to satisfy the demands of concerned neighbors, will community activists continue to push to see that monitoring data are used to get better answers about how chemicals affect their health? Or will they accept comparisons to existing standards that rubber-stamp the status quo? Whether the trailer in Benicia turns out to be the breakthrough I’ve been imagining it to be rests on what residents do with its data.

California dreaming

When I talked to Don Gamiles in the fall, I had my own favor to ask of him: Would he talk to my colleague, Oakland-based writer Rachel Zurer, and introduce her to the people he had been working with in Benicia? We were working together on a story about monitoring and wanted to know more about the exemplary collaboration that he was involved in. Valero, it turns out, wasn’t ready to talk about the project; perhaps they didn’t want anyone wondering why the public didn’t have access to the data yet. But Marilyn Bardet, the founder of the citizens’ group in Benicia that helped pressure the company to install the air monitoring trailer, was more than happy to meet with Zurer.

On a blustery morning in October 2010, Bardet welcomed Zurer into her manicured bungalow on Benicia’s east side, then retreated to her office to finish an e-mail. Zurer was left to nose around in the dining room, where Bardet’s dual identities were on display.

Bardet, 62, is a professional artist who seems to spend as much time on community activism as she does on painting and writing poems. The walls, shelves, end tables, and cupboards of the dining room were decorated with paintings, sculptures, and shells. But the wood of the dining table hid beneath stacks of papers and files relating to Bardet’s newest project: a bid to help her town qualify for federal funding to clean up an old munitions site, money she said that city employees hadn’t known to request.

Bardet returned in a few minutes, talking quickly. That afternoon she had a meeting scheduled with some Valero officials to keep working out the details of the air monitor’s Web site—trying to work through the problem that Gamiles had brought up on the phone, of how to present the data publicly—and she’d been sending them a last-minute memo reiterating her goals for the project. As she gathered her car keys and led Zurer out the door for a tour, she caught her guest up on the details.

Some Benicia residents don’t think about the refinery, Bardet explained as she drove under the freeway, past an elementary school, and turned left and uphill just before reaching the Valero property’s southern border. It doesn’t fit the image of their quaint, comfortable town, and as luck would have it, the prevailing winds tend to sweep refinery odors away from the people, out to sea. The refinery has a good safety record and no history of major conflicts with its neighbors. From many places in town, it’s invisible.

Yet Bardet and her fellow members of the Good Neighbors Steering Committee (GNSC) keep a sharp eye on Valero. Keenly conscious of the toxic problems other fenceline communities such as Norco have faced, they are wary of the industrial giant in their midst. The air monitoring station is a product of their vigilance. In 2008, the company made changes to some construction plans without going through the full environmental review that those changes required. Dexterous in navigating the intricacies of bureaucratic requirements, Bardet and the GNSC used Valero’s mistake to require the refinery to pay for environmental benefits in Benicia. A single letter Bardet wrote detailing Valero’s missteps, plus many hours of work by the GNSC, netted the community $14 million. The monitoring trailer was part of the package.

Bardet parked the car at the end of a residential cul-de-sac and escorted Zurer to a spot under an ash tree in the vacant lot between number 248 (white picket fence, a baby-blue Volkswagen Bug in the driveway) and number 217 (single-story ranch with gray siding, two boats, and a satellite dish). She pointed toward the minor white bump on the horizon, curtained by tall stalks of thistles atop a small brown hill a hundred yards across an empty field. It was the monitoring station that I’d been conjuring in my imagination since Gamiles first mentioned it.

“You wouldn’t know that this is a big deal,” Bardet said. And it was true. In person, the trailer looked like nothing special. But back in the car again, through lunch at a restaurant in town, all the way until Bardet zoomed off to her meeting with Valero, Bardet shared with Zurer her vision of what the monitors might mean for her community, and for her future as an activist.

“It’s not just the refinery,” she explained. She pointed out that, for example, while Benicia’s elementary school is less than a mile from Valero, it’s also near a corporation yard, a gas station, a highway cloverleaf, and the major road through town. The air monitors and weather station could expose exactly which pollutants are infiltrating the school, from where, and under what conditions.

“With that information, you can give a school district an idea of how to improve their site, so you can mitigate it,” she said. Teachers could avoid opening windows during rush hour. Or community activists like Bardet would have the data they’d need to evaluate the effect of a new development that would add more traffic to the road. “Policy needs to be evidence-based,” Bardet explained to Zurer. “That’s what we’re after.”
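
In data terms, what Bardet envisions is a joint query over the pollutant monitors and the weather station: isolate the hours when the wind carries air from a suspected source toward the school, then see which pollutants rise. The sketch below illustrates the idea with made-up hourly readings and an arbitrary wind-direction cutoff; it does not reflect the actual format or content of the Benicia station’s data.

```python
# Hypothetical hourly records: (hour, pollutant, concentration in ppb, wind direction in degrees).
records = [
    (7,  "benzene", 1.2, 200),
    (8,  "benzene", 3.9, 20),
    (8,  "pm2.5",   18.0, 25),
    (17, "benzene", 4.4, 30),
    (17, "pm2.5",   9.0, 190),
]

# Assume, purely for illustration, that winds from 0-60 degrees blow from the
# suspected source toward the school.
def toward_school(wind_deg):
    return 0 <= wind_deg <= 60

# Average each pollutant over the hours when the wind points at the school.
totals, counts = {}, {}
for hour, pollutant, conc, wind in records:
    if toward_school(wind):
        totals[pollutant] = totals.get(pollutant, 0.0) + conc
        counts[pollutant] = counts.get(pollutant, 0) + 1

for pollutant, total in totals.items():
    print(f"{pollutant}: {total / counts[pollutant]:.1f} ppb on average when wind blows toward the school")
```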

Scientific realities

Zurer called with her report on her meeting with Bardet as I was answering a flurry of e-mails from students worried about their final papers. Hearing Bardet’s vision for the monitoring station, my hopes sank further. It wasn’t that they weren’t going to use the data; indeed, it seemed that the information that the monitoring station produces will be something that Bardet can leverage in her myriad projects to improve her community. But in her pursuit of evidence-based policy, Bardet takes for granted the same thing that the engineers at Shell did and that Gamiles does. She assumes that she has a yardstick that shows where “safe” levels of toxins and particulates in the air become dangerous ones, and that there are reliable benchmarks that would tell teachers when they should close their windows and city officials when more traffic would be too much.

Maybe my pessimism is ill-founded. Maybe the ongoing struggle between Valero and residents over how to present the data will ultimately open the Pandora’s box of questions surrounding air quality standards—how they’re set, how good they are, how they could be improved—and convince Bardet that she needs a better yardstick. Maybe an enterprising epidemiologist will be seduced by the vast quantities of exposure data that this monitoring station, and others around the Bay Area, are producing and persuade Bardet and her group to institute complementary health monitoring in order to create a better yardstick. Maybe the Centers for Disease Control’s National Conversation on Public Health and Chemical Exposures, which acknowledges the importance of environmental health monitoring, will help convince government agencies to sponsor such a study.

Maybe, in the end, it was just the stack of grading on my desk that had sucked my hope away. But despite the piles of new information that Benicia’s monitoring station will produce—is, indeed, already producing—I couldn’t convince myself that any new knowledge would be made, at least not in the absence of more fundamental changes. I wandered off to the faculty holiday party conjuring a new daydream: The National Institute of Environmental Health Sciences would call for proposals for studies correlating air monitoring with environmental health monitoring; the EPA, making ambient air toxics standards a new priority, would demand that data from fenceline communities be a cornerstone of the process; and Marilyn Bardet would seize on the new opportunities and make her community part of creating a better answer to The Question.

Is Climate Change a National Security Issue?

Around the planet there is growing momentum to define climate change as a security issue and hence as an agenda-topping problem that deserves significant attention and resources. In December 2010, for example, while poised to start a two-year term on the United Nations Security Council, Germany announced its intention to push to have climate change considered as a security issue in the broadest sense of the term. Germany’s objective captures a sentiment that has been expressed in many venues, including several recent high-level U.S. national security documents. The May 2010 version of the National Security Strategy repeatedly groups together violent extremism, nuclear weapons, climate change, pandemic disease, and economic instability as security threats that require strength at home and international cooperation to address adequately. The February 2010 Quadrennial Defense Review links climate change to future conflict and identifies it as one of four issues in which reform is “imperative” to ensure national security. This sentiment has met resistance, however, and today there is a serious debate about whether linking climate change to security, and especially to national security, makes sense.

The case in support of this linkage integrates three strands of argument. The first builds on efforts to expand a very narrow definition of the term “national security” that was dominant during the 20th century. The narrow meaning was shaped by a specific set of events. After World Wars I and II, a third major war involving nuclear weapons was widely regarded as the single greatest threat to the survival of the United States and indeed to much of the world. In response to this perception, the National Security Act of 1947 sought “to provide for the establishment of integrated policies and procedures for the departments, agencies, and functions of the Government relating to the national security.” Its focus was on strengthening the country’s military and intelligence capabilities, and the government was supported in this effort through the rapid buildup of independent think tanks and security studies programs at colleges and universities throughout the country. National security was seen by most experts as a condition that depended on many factors, and hence the broadest goals of the national security community were to build and maintain good allies, a strong economy, social cohesion and trust in government, democratic processes, civil preparedness, a skilled diplomatic corps, and powerful, forward-looking military and intelligence agencies. For more than four decades after World War II, however, efforts to improve national security were assessed against estimates of the threats of nuclear war and communist expansion, and invariably emphasized the paramount importance of military and intelligence assets. National security was largely about the military and intelligence capabilities necessary for preventing or winning a major war.

In the 1990s, this powerful architecture was challenged in several ways. First, with the rapid and largely unexpected collapse of the Soviet Union came the question: Since there were no other countries likely to launch a full-scale nuclear attack against us, could we now reduce our large military and intelligence expenditures and invest in other areas? Second, as the 20th century drew to a close, it became evident that the nature of violent conflict had changed from short, brutal, and decisive interstate wars to long, somewhat less brutal, and frequently inconclusive civil wars. Under the quagmire conditions of this new generation of warfare, superior military capability did not translate inexorably into victory.

Finally, having spent so much time focused on the particular threat of military-to-military conflict, analysts asked if we should now be looking at threats more broadly and even considering alternative ways of thinking about security. By mid-decade, human security and some variant of global security had gained support as alternative or complementary ways of thinking about security. Further, in the United States and abroad, conceptions of security threats expanded to include issues such as terrorism, disease, and global economic crisis.

As the era of great wars receded, some observers concluded that violence was now mainly structural, a fact hidden or ignored during the Cold War, when the threat of large-scale violence was linked to an ideologically based power struggle. From the structuralist perspective, victory and defeat were unproductive ways of thinking about security. Instead, improvements in security depended on extensive reform of the global economy, the international system of states, the divide between nature and civilization, and entrenched patterns of gender and ethnic inequality. Many others agreed that our new era of security underscored the limits of military force, which had been the centerpiece of much 20th-century security policy. Hence, at the very least, we needed to carefully rethink security and reconsider what was needed to provide it, a reflection that would certainly lead to important, if not necessarily structural, change.

One of the issues invigorating all of these challenges to Cold War security thinking (challenges that, incidentally, were not entirely new and had been voiced at various times throughout the 20th century) was a growing concern about environmental degradation and stress. Indeed, just as the Cold War ended, the Rio Summit on Environment and Development catalyzed global attention around climate change, biodiversity loss, and deforestation; underscored the need for national, regional, and global conservation strategies; and introduced a transformative vision that involved shifting the entire planet onto the path of sustainable development. In this context, a handful of observers argued that, in light of the trends observed by scientists from multiple disciplines, the Cold War peace dividend should be redirected toward environmental rescue, and that failing to do this could push the world toward higher and higher levels of insecurity.

The second strand woven into the case for integration picks up on this latter intuition. A central question of this strand of analysis is: What could happen if we fail to act to promote sustainable development and allow alarming environmental trends to continue more or less unchecked? Building on arguments that extend at least as far back as 18th-century demographer Thomas Malthus, who worried that population growth would outstrip increases in food production, leading to a period of intense famine, war, and disease, a contemporary generation of scholars used case studies and other methodologies to explore linkages between environmental stress and two national security challenges: violent conflict and state failure. Although simple causal relationships have proved elusive—a generic problem in the study of war and peace—patterns have been identified that many have found compelling. To simplify what is becoming a rich field of inquiry, certain natural resources, especially when they suddenly become scarce (water or arable land) or acquire high value (diamonds or timber), can become a significant factor affecting government behavior, development prospects, population flows, and forms of competition. Under certain conditions, such challenges trigger innovation and adaptation, but under other conditions they contribute to violent conflict and other types of insecurity. Because some resources are becoming increasingly scarce and others increasingly valuable, the prospects for environmental factors gaining weight in the security arena appear robust.

TABLE 1

Climate change and national security

Climate change impacts (rows) are mapped against three national security concerns (columns): weakening of elements of national power, state failure, and disruption and violent conflict.

Changes in water distribution
  Weakening of national power: Job loss in rural areas
  State failure: Reduce agricultural outputs, basic needs unmet
  Disruption and violent conflict: Increased competition for water

Severe weather events
  Weakening of national power: Undermine economic strength
  State failure: Funds diverted to disaster relief, away from infrastructure, etc.
  Disruption and violent conflict: Displace people into areas where they are not welcome

Heat waves
  Weakening of national power: Pandemics
  State failure: Greater demands to meet basic needs
  Disruption and violent conflict: Riots in urban areas

Drought
  Weakening of national power: Undermine economic development
  State failure: Deepen social inequality as some groups control food and water
  Disruption and violent conflict: Displace people into areas where they are not welcome

Sea-level rise
  Weakening of national power: Destroy coastal military bases
  State failure: Increase inequality and promote extremism as some people lose land
  Disruption and violent conflict: Put the survival of states such as the Maldives and Bangladesh at risk

Flooding
  Weakening of national power: Reduce military effectiveness in the field
  State failure: Destroy critical infrastructure
  Disruption and violent conflict: Increase urban strife

The examples in Table 1 are not meant to be definitive but rather to indicate how the effects of climate change could bear on national security. Clearly, many of these examples could appear in more than one cell of the table.

Scholars such as Thomas Homer-Dixon, for example, focus on the adverse social effects of scarcity of water, cropland, and pasture. Scarcity, he argues, results from a decrease in the supply of a resource, an increase in the demand for a resource, or a socially engineered change in access to a resource. Under conditions of resource scarcity, Homer-Dixon contends that developing countries may experience resource capture (one group seizes control of the resource) or ecological marginalization (people are forced to move into resource-poor lands), either of which may contribute to violent conflict. Continuing this trajectory of thought, Colin Kahl argues that resource scarcity may generate state failure (a collapse of functional capacity and social cohesion) or state exploitation (in which a collapsing state acts to preserve itself by giving greater access to natural resources to groups it believes can prop it up). Although some researchers are not persuaded by arguments linking environmental stress to state failure and violent conflict, many others regard them as compelling, and many policymakers and practitioners have absorbed these arguments into their world views.

The third strand of analysis involved in integrating climate change and national security builds on the environment and security literature by focusing on the real and potential societal effects of climate change. Climate change scientists are observing changes in the distribution of water, increases in the intensity of severe weather events, longer heat waves, longer droughts, and sea-level rise and flooding. Some worry that continued global warming will move the planet across critical thresholds, causing “black swan” events such as massive gas releases, rapid glaciation, or microbial explosions. There are several ways in which such changes could generate threats to national security.

Summarizing the discussion above, challenges to national security can be organized into three groupings: anything that weakens the elements of national power; contributes to state failure; or leads to, supports, or amplifies the causes of violent conflict. Climate change has the potential to have a negative impact in each of these domains (see Table 1).

National power. National power depends on many variables, including environmental factors such as geography and resource endowment, military capacity, intelligence capacity, and a range of social factors, including population size and cohesiveness, regime type, and the size and performance of the national economy. Climate change has the potential to affect all of these elements of national power. For example, militaries may be less effective at projecting and exercising power if they have to operate in flooded terrain or during a heat wave. Warming that affects land cover could reduce a country’s renewable resource base. Intelligence is difficult to gather and analyze in a domain marked by uncertainty about social effects.

Perhaps the area of greatest concern, however, is that climate change might undermine economic development, especially in poor and fragile states. The economist Paul Collier has argued that the bottom billion people on the planet currently live in states that are failing to develop or are falling apart. He contends that these states are often enmeshed in interactive conditions and processes that inhibit development: chronic violent conflict, valuable natural resources such as oil or diamonds that groups vie to control, unstable neighboring countries creating chronic transboundary stress, and government corruption and inefficiency. An increase in costly and hard-to-manage events such as floods, droughts, heat waves, fires, pandemics, and crop failures would probably be an enormous additional burden on these countries, introducing a daunting new layer of development challenges and hence weakening a central element of national power.

State failure. The authors of the 2009 report of the International Federation of the Red Cross and Red Crescent Societies wrote that, “The threat of disaster resulting from climate change is twofold. First, individual extreme events will devastate vulnerable communities in their path. If population growth is factored in, many more people may be at significant risk. Together, these events add up to potentially the most significant threat to human progress that the world has seen. Second, climate change will compound the already complex problems of poor countries, and could contribute to a downward development spiral for millions of people, even greater than has already been experienced.” The 2010 report notes that the cost of climate-related disasters tripled from 2009 to 2010 to nearly $110 billion. Disasters are costly, and the costs appear to be mounting dramatically. From the perspective of state failure, disasters are deeply alarming because they shift scarce funds away from critical activities such as building infrastructure, investing in skills development, and implementing employment and poverty-reduction programs, and into emergency relief. Such a shift can have a direct and very negative impact on a government’s functional capacity.

The same argument can be advanced for the diffuse longer-term effects of climate change that might affect food security, public health, urban development, rural livelihoods, and so on. Under conditions of either abrupt or incremental change, people may be displaced into marginal lands or unwelcoming communities, enticed by extremist ideology, compelled to resort to crime in order to survive, or driven to take up arms, all of which risk overtaxing the government, deepening social divisions, and breeding distrust and anger in the civilian population.

The gravest climate change threat, however, is that states will fail because they can no longer function as their territories disappear under rising seas, an imminent threat to the Maldives and some 40 other island states. Glacial-outburst floods might cause similar devastation in countries such as Nepal, and a change in the ocean conveyor that warms the northeast Atlantic Ocean could cause countries such as the United Kingdom to disappear under several feet of ice within a few years. These starkly existential threats have become the single most important national security issue for many vulnerable countries. Last year, the president of the Maldives held a cabinet meeting underwater to bring attention to this type of threat.

Violent conflict. Building on the insights of Homer-Dixon, Kahl, and many others, it is reasonable to suggest that climate-induced resource scarcities could become key drivers of violent conflict in the not too distant future. On this front, another area of particular concern has to do with so-called climate refugees. In 2006, Sir Nicholas Stern predicted that 200 million people could be permanently displaced by mid-century because of rising sea levels, massive flooding, and long, devastating droughts. Large flows of poor people from rural to urban environments and across ethnic, economic, and political boundaries would cause epic humanitarian crises and be extremely difficult to manage. One can easily imagine such stress becoming implicated in violent conflict and other forms of social disruption.

Stern’s prediction is of the back-of-the-envelope variety and has faced criticism from researchers such as Henrik Urdal, who argues that the “potential for and challenges related to migration spurred by climate change should be acknowledged, but not overemphasized. Some forms of environmental change associated with climate change like extreme weather and flooding may cause substantial and acute, but mostly temporary, displacement of people. However, the most dramatic form of change expected to affect human settlements, sea-level rise, is likely to happen gradually, as are processes of soil and freshwater degradation.” The bottom line, however, is that nobody knows for sure what the scale and social effects of climate-increased population flows will be.

The basic concerns suggested above are well captured in the many publications that followed the publication of the 2007 Intergovernmental Panel on Climate Change (IPCC) reports. For example, the CNA Corporation report National Security and the Threat of Climate Change concluded that “climate change acts as a threat multiplier for instability in some of the most volatile regions of the world.” Further, it predicted that “projected climate change will add to tensions even in stable regions of the world.” Similarly, the German Advisory Council on Global Change’s report World in Transition: Climate Change as a Security Risk said that “Climate change will overstretch many societies’ adaptive capacities within the coming decades.” The tenor of much recent writing is that climate change will weaken states that are already fragile, and it will contribute to violent conflict, intensify population displacement, increase vulnerability to disasters, and disrupt poverty alleviation programs, especially in South Asia, the Middle East, and sub-Saharan Africa, where large numbers of people, widespread poverty, fragile governments, and agricultural economies conspire to create heightened vulnerability.

The counterargument

The case against linking climate change to national security raises concerns about each of the strands of argument outlined above and is rather intuitive. Insofar as the language of national security itself is concerned, three important criticisms have been advanced. In a series of editorials in Foreign Policy magazine, Stephen Walt contends that a careful reading of the arguments about climate change made in the CNA report and in similar documents makes it clear that this is simply not a national security issue, at least not for the United States. In the foreseeable future, climate change may cause serious problems in places such as Bangladesh that spill over into places such as India, but these problems and the responses they will trigger are better described as humanitarian issues. For Walt and other realist thinkers, national security is about the survival of the state, and apart from black swan events we can imagine but not predict or prepare for, threats of this magnitude have been and continue to be threats of military aggression by other states. Walt asks us to consider what we gain in terms of analysis, strategy, and policy formulation by expanding the domain of national security into areas where immediate or near-term threats to the survival or even well-being of the United States are vague or unknown, even though the rhetoric used to describe them is often urgent and dramatic.

A very different concern comes from scholars such as Daniel Deudney, Barry Buzan, and Ole Waever, who worry about militarizing or securitizing climate change and the environment. Like Walt, they are not suggesting that climate change is a trivial matter; rather, they worry about whether framing it as a national security issue and thus linking it to military and intelligence tools is wise. This linkage, they suggest, might inhibit certain forms of global cooperation by drawing climate change into the zero-sum mentality of national security. It might encourage Congress to authorize significant funds, a good thing in principle, but insofar as these funds are expended through the defense community, this may prove a costly and inefficient way of promoting adaptation, mitigation, and disaster response. It might encourage the government to conclude that extraordinary measures are acceptable to fight climate change—actions that could make many scientists, development specialists, social entrepreneurs, business leaders, and environmentalists uncomfortable.

Finally, a third concern has been expressed within the United Nations (UN), largely in response to efforts by the secretary general and by countries such as Germany to frame climate change as an issue that should be considered by the UN Security Council. On the one hand, this could give the five countries of the world that are permanent members of the Security Council—China, France, Russia, the United Kingdom, and the United States—enormous leverage over this issue, and not all of the other member countries are convinced that this would lead to good, fair, and effective outcomes. On the other hand, some countries in the UN, especially the G77 countries, think that it may prove to be in their long-term interest to have climate change framed as primarily a development issue rather than as a national or even global security issue. Such a frame could serve as the basis for lucrative compensation payments, development assistance, and special funds for adaptation and mitigation. In short, then, linking climate change and national security may compromise responses to the former, muddy the rationale of the latter, reinforce global inequities, and reduce development assistance as resources are transferred to humanitarian and military activities.

The second strand of argument has to do with the relationship between environmental stress and major outcomes such as violent conflict and state failure. Critics of this literature, such as Nils Petter Gleditsch and Marc Levy, point to its methodological and analytical weaknesses. To date, studies have been inconclusive. There appears to be a correlation between certain forms of environmental change, such as sudden changes in water availability, and violent conflict or state failure, but the findings are tentative and must compete with other variables that correlate nicely with disastrous social outcomes. Case studies are often quite persuasive, but they are in some sense easier to shape and their authors may be selecting for relationships that in fact are atypical.

Insofar as the case for integrating climate change and national security draws on arguments that environmental stress contributes to violent conflict and state failure, these skeptics emphasize that this literature is young and flawed by speculation. A frequent concern is that after the initial outburst of largely theoretical claims advanced in the 1990s, there has not been much progress in weeding through these claims and bolstering and clarifying those that are most promising from the perspective of empirical data. Moreover, very little has been done to estimate the extent to which environmental stress has generated effective positive responses such as innovation, adaptation, and cooperation. If for every Haiti there are a dozen Costa Ricas, then the alarm bells may be ringing too loudly.

Finally, the third strand of the case for integrating climate change and national security is rooted largely in the IPCC reports, and especially AR4, released in 2007. But although increases in the amount of carbon in the atmosphere, the severity of storms, the average global temperature, and so on are well documented, the social effects of these trends are far more speculative. Will climate change really tend to intensify the (possibly weak) relationships between environmental stress and national security? Even if it does, is securitizing these relationships wise, or should they be cast more explicitly in terms of humanitarian crises, global inequities, development challenges, population displacements, and poverty alleviation?

The Danish economist Bjorn Lomborg has been vocal in this arena, arguing that the environmental/climate security community underestimates the vast stocks of human ingenuity that are available to ease adaptation. Lomborg argues further that it is not at all clear that investments in climate change response are the best investments to make in terms of the safety and welfare of the human species. Here the idea of the fungibility of different forms of capital is relevant. If over the next 50 years we can make great gains per dollar invested in technologies that can be used for multiple purposes, and much smaller gains in terms of shifting the alarming global trend in carbon emissions, is the latter really a wise course of action? A large stock of technological capital, enabled by shrewd investments today, might be far more beneficial to the current poor and to all future generations than steps that marginally reduce greenhouse gas emissions or add small amounts of forest cover, or than steps that do much more along these lines but only by radically reducing investments elsewhere.

Action or lethargy?

The case for linking climate change and national security is robust but imperfect. This is partly because there remains considerable uncertainty about how climate change will play out in different social contexts and partly because the term national security is loaded with expectations and preferences that some analysts find worrisome.

If one finds the linkage persuasive, then there is much the United States can and should be doing on this front. For the past decade, innovation and response have taken place mainly at the state and city levels. Although this activity has in many ways been remarkable, it has not been uniform across the United States, and it connects poorly into larger global initiatives. In this latter regard, the United States has been particularly lethargic, a lethargy nourished by massive but not clearly successful investments in the war on terrorism and the financial bailout.

A few more years of lethargy could be detrimental to the United States in several ways. It could strengthen China, which has an enormous amount of capital to invest and is directing some of this into alternative energy and green technology—far more than the United States is. With or without climate change, the world’s need for new sources of cheap and reliable energy is growing, and China is positioning itself for an emerging market that could be huge. Delaying might force the United States to contend with a considerably more robust multilateral framework for addressing climate change, a framework that it has not helped to design or synchronize with other multilateral institutions that it does support. Delaying could impose huge long-term costs on the U.S. economy, as it finds itself compelled to deal with water shortages, dust bowls, and hurricanes in an emergency mode. Katrina disabused everyone, except perhaps politicians and other government officials, of the notion that the nation is adequately prepared for the severe events that climate science predicts. Even if the United States does not increase its own vulnerability to megadisasters, inaction may not be cheap, as the country finds itself embroiled in costly humanitarian efforts abroad. And finally, in the worst-case scenario, lethargy might enable the sort of global catastrophe that climate scientists have described as possible. It is hard to imagine what competing investments of the nation’s resources would warrant ignoring this issue.

So is climate change a national security issue? Climate change is the most protean of science-based discourses, with an admixture of confidence and uncertainty that allows it to be integrated into any political agenda—from calls for sweeping reforms of the international system to those for more research and debate. Climate change does not mobilize agreement or clarify choices so much as engender reflection on the values we hold, the levels of risk we are comfortable assuming, the strengths and weaknesses of the institutions and practices that exist to meet our personal needs and allocate our shared resources, and the sort of world we want to bequeath to our children and grandchildren.

Archives – Winter 2011

DAVID MAISEL, The Lake Project 3, Chromogenic print, 29 × 29 inches, 2001.

Image courtesy of the artist/Haines Gallery.

The Lake Project 3

Photographer David Maisel’s early work, such as the image reproduced here, focuses primarily on environmentally impacted sites. His images show the physical impact on the land from industrial efforts such as mining, logging, water reclamation, and military testing. Because these sites are often remote and inaccessible, Maisel frequently works from an aerial perspective, thereby permitting images and photographic evidence that would be otherwise unattainable. The absence of easily recognizable points of reference within the image eliminates any sense of scale, causing the viewer’s mind to oscillate between the damaged landscape and the beautiful abstract composition.

Time for Climate Plan B

Policymakers in the United States and elsewhere have assumed for 15 years that putting a price on carbon would be an effective strategy for addressing climate change. Nations would price carbon emissions, cap carbon production levels, ratchet down the cap over time, allow carbon emitters to pay for their continued use of carbon but at an ever-increasing cost, and use this market mechanism to price carbon emissions into a steep decline. This market-forcing would spur the introduction of innovative new technologies into energy markets, displacing fossil fuel technologies.

The approach derived from the successful use in the United States of cap-and-trade against acid rain under the Clean Air Act Amendments of 1990. At the 1997 Kyoto negotiations under the UN Framework Convention on Climate Change, the United States persuaded participating nations that the approach would work worldwide against carbon emissions. The subsequent Kyoto Protocol eventually obtained enough ratifications to, in theory, go into effect, and Europe has enacted a cap-and-trade program. However, the biggest carbon emitters—the United States and China—have been AWOL. Concerned policymakers have continued to assume that the United States would adopt carbon cap-and-trade over time and then lure in China and other emerging economies. During the most recent session of Congress, they came close. The House passed cap-and-trade legislation in 2009, but the Senate in 2010 came up short of the votes needed to break a filibuster. In one of those periodic tidal shifts in congressional politics, cap-and-trade worried coal and manufacturing states suffering job losses from the recession and became anathema to many conservatives, indefinitely postponing the legislation.

Remarkably, U.S. policymakers never assembled a backup Plan B for carbon pricing. Although late, now is the time to develop such a plan, because the nation cannot afford to suspend climate efforts for years more in hopes of a congressional consensus.

A new Plan B will need to be more politically viable than the approach recently rejected. Aside from its political vulnerability, cap-and-trade as a stand-alone policy had a structural problem. The acid rain cap-and-trade program in the 1990s was possible because technological solutions were readily available. But few of the technological solutions for climate change are near maturity or ready for deployment on a large scale. The cap-and-trade solution for climate, if prices are set at sound levels, is strong on creating potential markets for innovations (demand pull) but weak on developing needed innovations (technology push) to feed the new markets. The United States will need a program that encompasses technology push as well as new means for demand pull, supporting all the technology stages in between, from research through deployment.

Problems in innovation

To understand why needed technologies have been neglected, it is useful to consider the pattern of innovation in the United States more broadly. Take computers. There was nothing quite like computing before the advent of the computer, which opened up an entirely new economic sector. This is the U.S. way: A comparative advantage in innovation, painstakingly built since World War II, has operated largely at the frontier of innovation, fostering new sectors and new markets. The United States has not been good at incorporating innovation into complex, established sectors. The nation does biotech rather than health care services; it does not go back and innovate in entrenched sectors.

Energy is the poster child for this problem. It is a complex, established sector, reinforced by technological, economic, and political barriers that limit new entry. The first generation of new energy technologies will largely be secondary technologies: components in existing energy systems or platforms. For example, advanced batteries will be built for cars; enhanced geothermal energy will be part of utility systems. This means they will have to be cost-competitive at the moment they attempt to launch into established energy technology markets—a hard task for new technologies. In addition, they must scale up rapidly if they are to make a difference in an energy sector that constitutes more than 10% of the world economy. This profound price/scale problem is why “the moment of market launch” is as important as the traditional “valley of death” problem for new technologies entering established sectors such as energy.

The nation therefore must design a different kind of innovation system for new technologies entering the established energy sector. The system built for information technology assumed that new entrant technologies would be able to command a premium price because they offered a new functionality, then would drive down the production cost curve over time. But because this will not work for largely component-based energy technologies, the nation’s innovation system, which historically has been organized to focus on the “front end” of R&D, will also have to focus on the “back end” of demonstrations, test beds, and the creation of initial commercial markets for new energy technologies. Each will reinforce the other: A stronger back end will inform and improve front-end R&D and vice versa.

In moving ahead, building public and political support will be the key. Building such support may require casting some familiar ideas in new ways. For example, many people do not grasp the idea or importance of the changing climate; a Pew survey finds that only 35% of the public in 2009 viewed it as a “very serious problem,” and 49% believed that either there is no global warming or it is created by natural causes. But the public does understand the global politics of oil and the economic and political damage it inflicts. The nation’s 2008 oil trade deficit approached $400 billion. People know that the relationship between oil supply and the two wars in the Middle East is not a coincidence. Energy security carries political resonance. It offers a potential political driver on the petroleum side of the energy economy, but what about the rest of the energy economy?

As nations such as China build major economic sectors around new energy technologies, the U.S. public is increasingly aware that there are competitive stakes attached to leadership of these new markets. After Japan missed leadership of the information technology innovation wave in the 1990s, its economy never quite recovered. Because the U.S. economy is organized around a comparative advantage at the innovation frontier, if it misses an innovation wave the results will be problematic. Expert observers anticipate that energy will be the center of the next major wave, yet the United States is starting to fall behind. For example, the United States developed solar photovoltaic technology but has less than 10% of the world market. It also developed key advances behind the lithium ion battery but is only now struggling to enter this market. China attracted $34 billion in clean energy private investment in 2009, almost twice the U.S. level. The public relies on U.S. economic leadership; it will understand the consequences of ceding it.

Energy security and economic competitiveness are viable political drivers for progress on energy if they can be articulated to the public. What are the program elements they can drive?

Strengthening the front end

Although many studies have called for major new investments in energy R&D, funding is still not at robust levels. In 2007, federal investment was only about half its 1980 peak level. Private-sector R&D has similarly declined, and studies indicate that the energy industry invests less than 1% of annual revenues in R&D for new technology. This is far below the U.S. industry average and far below the 15 to 20% levels in the most innovative sectors such as semiconductors and pharmaceuticals. Although venture capital funding has grown significantly in energy technology (reaching $3 billion before the recession), venture funding supports commercialization and is not a substitute for R&D. Low private-sector R&D dictates a larger initial public-sector R&D role. Seeing this, the Obama administration made a major investment in energy technologies through the 2010 federal budget and the stimulus legislation. The Department of Energy (DOE) funded some $5 billion in energy R&D and $34 billion in technology implementation above baseline appropriation levels. However, that was a one-time spurt, and those levels have not continued in the 2011 budget. The additional funding was welcome, but it risks building up the energy technology enterprise and then pushing it off a funding cliff.

Although the R&D funding issues persist, there has been significant progress in the institutional organization of the federal energy R&D enterprise. The DOE has a history of developing and maintaining institutions in separated stovepipes of laboratories, basic research, and applied agencies. The technology handoffs between basic and applied agencies are few, and technologies transitioned into markets are fewer.

The DOE has been working to improve the connections among the stovepipes. Within the Office of Science’s $5 billion basic research program, 46 Energy Frontier Research Centers (EFRCs) have been created, providing $3 million to $5 million a year to competitively selected university and laboratory teams working on basic research problems, tied to advances in energy technologies. The DOE also is creating energy innovation hubs in areas such as solar, advanced nuclear, batteries, and buildings. Whereas EFRCs are searching for new opportunities in the basic research space, the hubs will work in areas of promise to push related advances at a larger scale and move them toward commercialization. Reflecting their scaling role, the three hubs budgeted so far will receive around $20 million in annual funding. Both the EFRCs and the hubs will engage and build up new talent at universities and also connect lab and university teams to enhance lab productivity.

The third new DOE front-end innovation entity, the Advanced Research Projects Agency–Energy (ARPA-E), may prove the most interesting. The agency, which received $400 million in stimulus funds, is modeled on the Defense Advanced Research Projects Agency (DARPA). It has adopted DARPA’s “right-left” model of seeking particular technology advances on the right side of the innovation pipeline and then looking on the left side for revolutionary breakthroughs to get there. Its projects will aim at accelerating innovation, cutting technology costs and speeding ideas to proof of concept or prototype in three to five years. ARPA-E also is instituting DARPA’s hybrid model of building groups of smaller companies and university researchers to ease technology transition and working to connect its emerging technologies with private-sector development customers. It is working in what it calls “the white space” of technology opportunities: higher-risk projects that could be transformational where little work previously has been undertaken. In addition, ARPA-E is working as a technology connector within the DOE, drawing on basic ideas from the Office of Science that could be accelerated and building ties with the Office of Energy Efficiency and Renewable Energy and other applied agencies to hand off the prototype and demonstration stages. Energy Secretary Steven Chu has argued that if just a few of ARPA-E’s high-risk, high-reward projects are commercialized, the energy technology landscape could be transformed.

Although a recent President’s Council of Advisors on Science and Technology (PCAST) report appropriately proposes further DOE stovepipe streamlining, efforts to date represent considerable progress in filling gaps on the federal side of the energy innovation system. After the 2010 stimulus funding runs out, however, DOE energy technology R&D will remain inadequately funded; current levels are about $5 billion a year. When the federal government embarks on a major technology thrust [the Manhattan and Apollo projects, the Carter/Reagan defense buildup, and the recent Department of Defense (DOD) ballistic missile defense program are examples], it spends far more. These examples were more single-focus technology projects undertaken on a government contractor model. They were organizationally simpler than spurring the development of a range of new technologies in one of the economy’s largest and most technically complex sectors, but their investment levels inform us of the scale needed for energy.

The nation is not going to be able to achieve an energy technology revolution on the cheap. President Obama’s 2010 budget request for a $15 billion-per-year energy technology program, premised on cap-and-trade auction revenues, which was dropped from his 2011 budget, appears to be on the right scale. But it would still be less than half of what the National Institutes of Health spends on R&D each year. Although the deficit-control focus of the new Congress may rule out a ramping up of funding in the immediate future, front-end energy R&D funding levels need to be revisited. Bundling smaller-scale steady revenue streams aligned to meet particular R&D sectoral needs (the PCAST report notes that a small 0.1 cent per kilowatt hour charge on utility bills yields $4 billion a year) could be a new way to look at this problem.
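
The PCAST figure is easy to sanity-check. Assuming roughly 4 trillion kilowatt-hours of annual U.S. retail electricity sales (an approximation supplied here, not a number from the report), a 0.1 cent-per-kilowatt-hour charge works out to about $4 billion a year:

```python
# Back-of-the-envelope check of the utility-charge figure cited above.
annual_retail_sales_kwh = 4.0e12   # ~4 trillion kWh of U.S. retail electricity sales (assumed)
charge_dollars_per_kwh = 0.001     # 0.1 cent per kilowatt hour

annual_revenue = annual_retail_sales_kwh * charge_dollars_per_kwh
print(f"~${annual_revenue / 1e9:.0f} billion per year")   # prints ~$4 billion
```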

But advances in energy technology will not be achieved simply by throwing more money at the problem; investments must be allied to a sounder innovation system. The new government entities designed to fill innovation gaps should be strengthened and made major recipients of this funding. ARPA-E was authorized to be a $1 billion–per–year agency, but currently is far from that. Even if it reached that number, it would still be only a third the size of DARPA, and the nation’s energy technology challenges are at least as daunting as those involving national security. The EFRCs and the research hubs should be at multiples of current funding levels, and other key DOE programs should be similarly increased.

The United States will also need to make a variety of adjustments in its R&D portfolio. For example, R&D agencies should seek and support technologies that may offer new functionality at the outset and so initially command a premium price. The goals of energy innovation have been societal, not individual: An electric car is an improvement for energy security, not for the driver. If energy R&D included a focus on new functionality where possible, that might help turn the corner on consumer acceptance of energy technologies. Similarly, agencies should fast-forward their research agendas to develop technologies to a stage that can be cost-competitive at the outset of their market launch. For example, the DOE is now aiming its solar agenda at an installed cost of $1 per watt. If carbon pricing will not be available to induce new innovations into energy technology markets, R&D strategies will have to be adjusted accordingly to include these features.

In addition, the nation must move toward a true energy technology strategy. The DOE periodically issues such a document, but it is usually a list of projects the agency already is doing. A real strategy would be future-oriented and involve the key government players engaged in energy procurement and regulation, including the Environmental Protection Agency (EPA), DOD, Department of Agriculture (USDA), and Department of Housing and Urban Development (HUD). This effort also will require private-sector leadership from established and new energy firms, because new technologies must be adopted by the private sector. The strategy should lead over time to a public/private technology roadmap for energy R&D and implementation, updated frequently because the technology opportunities will shift as advances occur and markets evolve.

Beefing up the back end

With improvements to the front end of the innovation system, new ways are needed to encourage the market entry of new energy technologies. To reemphasize, strengthening and ensuring better connections between front and back ends will bolster the performance of both. Areas for improving the back end of the system include:

Financing, incentives, and subsidies. The DOE has had a significant loan guarantee program since 2005 but did not issue loans until 2009. It has a mandate to “facilitate the introduction of new or significantly improved energy technologies with a high probability of commercial success in the marketplace.” Although the program is aimed at helping move technologies past the initial commercialization barrier, the mandate’s language builds in potential contradictions. It is limited to deployment-ready projects, so it excludes demonstrations, and the “high probability of commercial success” clause, perhaps due to the legacy of failed 1980s synfuels projects, significantly limits the risks that the program can take with innovative technologies. In addition, there is the complex definitional problem of determining what constitutes a “significantly improved” technology. By fall 2010, the loan program had committed to funding 19 projects worth some $25 billion, from nuclear power to electric vehicles to renewable energy. However, it has disbursed only about $9 billion across four of these commitments, and demand remains far higher than lending.

Problems have plagued the program in the past. The DOE’s credit committee and the Office of Management and Budget have set stringent requirements that restrict the criteria for project approval to projects that are commercially creditworthy. The program’s new leadership inherited a sizable task in reforming burdensome and time-consuming application forms, slow turnaround times, a shortage of personnel with expertise in project financing, and a lack of demonstrable, portfolio-wide lending goals and loan performance benchmarks. Loan guarantees with their requisite thresholds and cumbersome processes can be too costly for smaller firms and projects; additional, nimbler financing vehicles should also be available. Recognizing the problems, the energy committees in the House and Senate have proposed legislation for a Clean Energy Deployment Administration—a “green bank”—that would have authority to overcome some of these hurdles. Consideration should be given to making this institution a self-financing, separate government corporation with a range of tools, a wider risk portfolio, authority to support commercial-scale demonstration projects, and protection from congressional vagaries.

Arrays of tax incentives are available to consumers, businesses, builders, and manufacturers for efficient homes, appliances, vehicles, and power systems. Fossil fuels continue to receive the lion’s share of energy tax subsidies, and corn ethanol receives tax benefits and exemption from fuel taxes. The efficiency of existing tax incentives should be examined and steps taken to rationalize them according to a hierarchy ranging from high-carbon to low-carbon technologies. The tax savings from the former could help support the latter. Layered incentives that offer additional benefits for the next stages of efficiency gains could speed new technology deployment.

Government procurement. The DOD has long played a central role in implementing new technology, funding the research, development, demonstrations, test beds, and initial markets for many of the most important innovation waves, including aviation, electronics, nuclear power, space, computing, and the Internet. DOD historically has supported technology development only if it fits its direct mission, but there are signals that energy fits within this restriction. Energy needs the DOD’s historic innovation function.

The DOD increasingly sees its energy dependence as a strategic vulnerability; if a major oil-supplying nation goes down, the military can suffer operational problems. The department also knows that the nation’s oil dependence forces it to maintain highly vulnerable lines of communication to the Mideast; military leaders have estimated that from one-third to one-half of the DOD’s expenditures are aimed at ensuring the nation’s global oil supply. Aside from its strategic energy concerns, the DOD faces significant tactical problems as well. The widely seen images in September 2010 of burning Army oil trucks in Pakistan illustrate this vulnerability. The Army’s dependence on oil forces it to maintain systems of convoys and fixed strong points for constant oil resupply, which expose its troops and counteract the tactical flexibility and mobility it needs to succeed in asymmetric conflicts. Moreover, the DOD has significant facilities cost problems. It controls the largest block of buildings in the nation, in every kind of geographic and climatic area. In a period of cost cutting, improving the military’s energy efficiency at its 507 installations, in its 300,000 buildings, and for its 160,000 nontactical vehicles translates to cost savings. The DOD therefore set a very aggressive energy savings requirement—a 34% greenhouse gas reduction for its facilities by 2020—under Presidential Executive Order 13514.

Every year, the DOD receives approximately $20 billion in military construction appropriations for facilities construction and renovation. If the DOD would apply sound efficiency standards for buildings and equipment to this funding stream, it could leverage a major initial market for new technologies. Buildings represent some 40% of U.S. carbon emissions, yet the construction sector is highly diversified and fragmented, dominated by small firms that undertake little R&D and are slow to implement innovations until their reliability and cost are fully demonstrated. If the DOD used its military construction funding stream to boost innovation in the building sector, it would be an early beneficiary, and this could provide a significant test bed and initial market for new energy technologies in general. Although the DOD has begun an interesting test bed for new building energy technologies, building demonstrations remain a big innovation gap, and this program should be ramped up.

The DOD is not the only federal agency that could leverage existing funding streams to support an energy shift on the back end. USDA, which funds biofuels R&D, could expand its support system to assist farmers in raising new cellulosic biofuel crops, and its rural development role could aid the entry of energy-efficient technologies. The General Services Administration, which operates federal, nonmilitary buildings and facilities, and the Departments of HUD, Interior, and Transportation are other agencies with funding streams that could play a role in energy efficiency and introducing new energy technologies.

Regulatory authority and standards. Regulatory authorities already have powers to drive significant energy savings and carbon reductions. They represent traditional command-and-control regulation, less economically efficient than a carbon pricing system, but their effect could be significant.

As a result of the U.S. Supreme Court decision in Massachusetts v. EPA, the EPA found that greenhouse gas emissions endanger public health and welfare under the Clean Air Act and has implemented new regulations. The first round focused on vehicles. The EPA and the National Highway Traffic Safety Administration in May 2010 issued new standards for fuel economy and greenhouse gas emissions for passenger cars and light-duty trucks requiring 35.5 miles per gallon for model years 2012–2016. The agencies estimate that this regulation will save 960 million metric tons (mmt) of greenhouse gas emissions and 1.8 billion barrels of oil over the life of the vehicles covered by the program. In September 2010, the agencies began rulemaking for light-duty vehicles for model years after 2017, and in the following month they began the first program to reduce greenhouse gas emissions and improve fuel economy for medium- and heavy-duty trucks, supported by the major auto and truck manufacturers.

Under the Clean Air Act, the EPA is now considering new regulations to impose “best available control technology” on “stationary sources,” including coal-fired utility plants and large industries. Members of Congress from coal states are poised to battle this effort if it imposes carbon capture and sequestration requirements, although the Obama administration has vowed to stave off such efforts.

Apart from the EPA, the DOE has efficiency standard–setting authority over a vast array of appliances. Proponents argue that these standards are important cornerstones of U.S. energy efficiency; new standards being introduced between 2009 and 2013 could yield carbon reductions equivalent to the emissions of 63 large coal-fired plants. Standards exist for most residential appliances, and updated standards are pending for nearly all of them. The 2007 energy act in effect barred incandescent lighting as of 2012, and standards are pending for a wide range of lighting products.

Local governments control building standards, but federal model codes could be tested and developed for localities to consider implementing; if Congress agreed, over time these could be turned into minimum energy-efficiency standards. Utility regulation is a state responsibility, but the federal government could promote standards to enable the smart grid, which may encourage utilities to adopt an energy services model rather than an energy consumption model. Energy efficiency is the cheapest route to progress and offers direct savings to consumers. The federal government needs to promote a utility energy services model that rewards energy efficiency, not power sales, coupled with new financing tools to help consumers achieve these savings.

Regional “nation-size” markets. The federal government is not the only actor. There are regional economies the size of major nations that could be sources of advances. California’s Global Warming Solutions Act requires significant carbon reductions by 2020 from a cap-and-trade program that covers large industrial sources initially, limiting their output to 147 mmt. The legislation eventually will cover transport, setting a low-carbon fuel standard of 15 mmt from ethanol and biodiesel efforts and a 33% renewables portfolio standard (21 mmt). In addition to its climate initiatives, California has led in state clean energy regulatory efforts, switching its utilities to an energy services model. California’s initiatives could be integrated with the efforts of the Western Climate Initiative, which was started by states and provinces along the western rim of North America to combat climate change independent of their national governments.

In the northeastern United States, 10 states have joined in the Regional Greenhouse Gas Initiative. Its cap-and-trade program covers solely the power sector and provides it generous allowances, so it is less stringent than the climate bill passed by the U.S. House. But it represents another source of regional climate innovation, and the Northeast is spawning its own competitive clusters of new energy companies. Some 20 states have been contemplating their own climate initiatives. In general, encouraging state regulators to adopt an energy services model for their utilities would enable efficiencies.

The Electrification Coalition has proposed another federalism approach in supporting electric vehicles to achieve energy security goals. Because current electric vehicle ranges are best for small nations, such as Denmark and Israel, the coalition began by contemplating whether there were Denmark-sized markets in the United States that could support electric vehicles. It has supported bipartisan legislation in the House and the Senate under which metropolitan areas would compete for a multibillion-dollar pot of federal funds by designing and backing the optimal charging infrastructure, promotion, and support systems for electric vehicles. This competitive federalism, relying on and promoting regional leadership, could be another way to press energy progress. In July 2010, the Senate Energy Committee approved a bill on a 19 to 4 vote. The legislation’s ultimate passage remains uncertain, but the effort suggests that progress on a competitive federalist model is possible.

Public-goods investments. Governments have long invested in public goods, particularly where there is a market failure. For example, the federal government spends some $60 billion annually on public infrastructure, such as road, transit, and water projects; far more on defense systems; and billions on public health and vaccines. There are significant arguments that a public-goods rationale should apply for large-scale energy infrastructure; an improved loan guarantee program discussed above could be a facilitator.

The energy infrastructure list would include carbon capture and sequestration systems for coal-fired utilities. The United States has approximately 1,500 such coal plants at some 600 sites; they produce one-third of U.S. carbon dioxide emissions. Demonstrations at operating scales of integrated capture and sequestration for retrofitting and rebuilding existing plants are urgently needed, given the long lead times for revamping this infrastructure. Similarly, the government has a long history of supporting the development of nuclear power, although no U.S. plants have been built for three decades. A new generation of more advanced nuclear power units is pending, offering zero-emissions baseload power, although waste and proliferation issues remain. Congress has already recognized that this technology needs financing assistance. Improvements to the electrical grid, including smart grid features, to enable the scaling of renewable sources and electric or plug-in hybrid vehicles, received federal stimulus funds, and continuing support is needed. The public-goods rationale is already recognized and being applied, but it will need expansion.

Registry for private-sector efficiencies. Most Fortune 500 firms are implementing clean energy initiatives. For example, Walmart, the nation’s largest retailer, is making progress on far-reaching efficiency goals, which include operating on 100% renewable energy, creating zero waste, and encouraging its suppliers to provide sustainable products.

The federal government may revisit some form of carbon pricing in the future, so it would be valuable to develop a registry where private firms file (and extol) their efficiencies in a transparent system, based on sound metrics and monitoring, for potential credit later. The United States is going to need better data about progress, so voluntary collection might as well begin now, collaboratively between government and interested firms. Although some consider this approach window-dressing, it could provide practical encouragement for firms that want to make progress as well as practical knowledge about best practices to reduce emissions.

Can a Plan B work?

Cap-and-trade is strong on demand pull but short on technology push. Both are needed. The amalgam of policies that could constitute a Plan B is a good start. The plan includes technology push mechanisms that are stronger than those considered in the past, with a focus on both front and back ends. But the plan relies for needed demand pull on current regulatory tools and incentives, which are less economically efficient than cap-and-trade. Clearly, more Plan B elements will need to evolve. Although it is more palatable politically, and comes in a series of more manageable policy bricks (unlike the far-reaching construct of cap-and-trade), parts of the plan are still a stretch, and they will have to be built up sequentially.

To understand where the nation stands and where it needs to go, it will be necessary to quantify the energy savings for the elements included in a Plan B. Such knowledge about the energy savings from each element will indicate what needs to be altered or expanded. It cannot be stressed enough that as part of this effort, the United States needs to develop a detailed and far-reaching technology strategy. The strategy will be needed to see the technology options more clearly, to understand where progress can be made, and then to move the technologies toward commercialization at scale. Transforming the nation’s complex, established energy sector is the most difficult technology implementation problem the nation has faced. But the cap-and-trade setback offers the opportunity for a fresh look.

Contesting climate

According to its dust jacket, The Climate War is “an epic tale of an American civil war.” Eric Pooley is said to “do for global warming what Bob Woodward did for presidents.” Although such claims are overstated, the book offers a thoroughly researched and engaging account of a major political conflict: the struggle in Congress to limit carbon dioxide emissions. Pooley takes his readers into the conversations, negotiations, and public relations campaigns conducted by both environmentalists and industrialists, along with their respective allies on Capitol Hill and in the media. In so doing, he illuminates the workings of government and the fraught relations among members of Congress, advocacy groups, and constituents. The author has done his homework, and it shows.

If Pooley’s account of climate negotiations does not match Woodward’s thrilling dissection of Watergate, the failings lie more in the story than in the reporting. Drama is limited by the fizzling of the so-called war. Battles were waged, coalitions fractured and reformed, and tactics and strategy were reformulated as conditions changed, but in the end, the struggle ended quietly and with little notice. The last major attempt to rein in carbon emissions, the Waxman-Markey Bill (the American Clean Energy and Security Act of 2009), squeaked by the House and never came to a vote in the Senate. On February 26, 2010, Sen. Lindsey Graham (R-SC) declared cap-and-trade policy—the heart of the bill—dead. During the past year, global warming has receded from the national political debate, pushed aside by economic worries and marginalized by the resurgence of a Republican Party that is increasingly dominated by global warming deniers.

But if the effort to cap carbon emissions in Congress died in early 2010, the demise is not necessarily permanent. The next Congress may shun the issue, but ongoing climatic deterioration may eventually force another political realignment. If unemployment declines, the global scene stabilizes, and extreme weather events galvanize public attention, global warming could quickly return to the agenda.

The Climate War in this sense recounts merely the first phase of a protracted struggle. It is still important to recognize that this phase was lost, and lost decisively. As Pooley shows, meaningful cap-and-trade legislation had a real chance of clearing Congress during the second term of George W. Bush, and would probably have been enacted into law in 2009 if President Obama had made the issue a priority. As recently as 2007, influential Republican politicians, including Sen. John McCain (R-AZ), Mitt Romney, and Newt Gingrich, supported cap-and-trade legislation. According to Pooley, 60% of Republicans favored immediate climate action in that year. But the opportunity was lost, because of the machinations of the opposition, legislative miscues by the reformers, and arrogance on the part of some environmentalists.

Meaningful climate legislation, Pooley implicitly contends, requires a broad coalition of support. Environmentalists alone do not command the requisite constituency; a significant segment of the energy industry must be brought on board as well. As deputy editor of Bloomberg BusinessWeek, Pooley takes what might be termed a radically centrist position. On the one hand, he is passionately concerned with safeguarding the planet and its atmosphere. On the other hand, he is a hard-headed political realist who not merely embraces compromise but contends that the power industry must remain profitable in order to generate the revenues needed to underwrite the expensive transition to a post-carbon energy system.

Such a position is not exactly in the middle of the current U.S. political spectrum. A large segment of the electorate regards the idea of human-induced climate change as a subversive plot to destroy the country and impoverish its population. But among what might be called the rationalist community, composed of the majority of Americans who accept scientific findings even when they challenge articles of faith or threaten established comforts, such a pro-environment, pro-business stance is thoroughly centrist, and is potentially capable of rallying the broad support that will be required for far-reaching climate legislation.

Heroes and villains

Such legislation came close to passing, Pooley contends, precisely through the creation of such a moderate coalition. The key organization pushing the alliance was the Environmental Defense Fund (EDF), led by Fred Krupp. The organization had long championed market-based solutions to environmental problems, successfully advocating a cap-and-trade approach to acid rain in the 1980s. Initially, the notion of a market in pollution permits was anathema to most environmental advocates, who viewed it as legitimating destructive behavior and hobbling governmental enforcement. As a result, EDF was widely regarded in the broader eco-community as a barely respectable organization inclined to fraternize with the enemy. The success of the acid rain legislation, however, gradually convinced others of the wisdom of EDF’s environmental economics approach, although the more strident green organizations remained wary of both EDF and its market-oriented policies.

Whereas Krupp is the author’s main environmentalist hero, his principal protagonist on the industry side is James Rogers, the chief executive officer of Duke Energy. Pooley consistently lauds Rogers not just for recognizing human-induced climate change, but for embracing the need to first cap and then reduce carbon emissions. Under his leadership, Duke Energy has been at the forefront of cleaner energy technologies. Such actions have come at a price. Rogers and his company have been reviled by environmentalists as major polluters that continued to build coal-burning power plants and castigated by other energy firms for their readiness to work with EDF and compromise with the “enviros” (as Pooley styles them). Just as EDF has been snubbed in the green community for sleeping with the enemy, Duke Energy has been vilified in the energy sector.

A dramatic story needs villains as well as heroes, and bad characters are abundant in The Climate War. The main scoundrels are the global warming denialists seeking to derail any efforts to staunch carbon emissions, but relatively little of the book details their actions. The author focuses as much attention on the left-wing members of Congress who obstructed legislation by refusing to compromise. During the waning days of the Bush administration, many on the left preferred to let the issue die, anticipating the election of more responsive colleagues. Others saw climate legislation as an opportunity to win other battles. Sen. Barbara Boxer’s ideal version of the climate bill, for example, would have generated $6.7 trillion for the government by selling permits to emit; such funds would then have been distributed to a variety of favored causes, not all of them climate- and energy-related. As Duke Energy’s Rogers quipped, “This is just a money grab. Only the mafia could create an organization that would skim money off the top the way this regulation would.”

Under the new Obama administration, self-styled congressional progressives again pushed for a bill that would have imposed ruinous costs on the energy industry. In particular, they wanted all carbon emission permits to be sold, a procedure that would have generated massive revenues for the government but would have forced electrical companies to vastly increase their rates—and generated public furor. President Obama initially concurred; his first budget called for all permits to be auctioned, a procedure that would have raised $650 billion between 2012 and 2019. Sen. Harry Reid (D-NV), the Senate Majority Leader, figured he could use this money to help pay for health care reform. As the Waxman-Markey Bill worked its way through Congress, compromises were made, allowing it to squeak through the House. The Obama administration, however, offered little support, in part because White House chief of staff Rahm Emanuel was convinced that the climate issue was “a drag on Obama’s popularity and political power.” Meanwhile, the Senate passed a resolution proclaiming that any climate bill would have to achieve its results without raising gasoline or energy prices, virtually ruling out meaningful action. Throughout the negotiations, Obama retained a hands-off approach. Pooley implies that he bears some of the responsibility for the loss of this opportunity to confront global warming.

Although The Climate War focuses on a few main protagonists, its total cast of characters is long indeed. Keeping the various actors straight is a bit of a struggle, a task not helped by the author’s penchant for introducing people through personal descriptions. Pooley’s characterizations are also colored by his political take on the individual under scrutiny. Those with whom he disagrees tend to be cast in an unfavorable light.

To his credit, Pooley usually focuses on the ideas and tactics of his opponents, rather than on their personalities or appearances. Yet on this score as well, his assessments can be harsh, dismissing important actors with sharp quips. Such treatment is meted out to enemies on both the left and the right. Curiously, the eco-radicals whose direct-action antics are periodically recounted are spared from such flip appraisals. Presumably the business-friendly eco-centrist Pooley would find much to object to here, but when it comes to activists engaged in civil disobedience, the tone moderates to one of guarded respect, leavened with a hint of detached amusement.

Not quite fair

The deniers of global warming are granted no such respect. Myron Ebell, the Competitive Enterprise Institute’s director of global warming policy, comes in for particular scorn as the “superstar of the Denialsphere” and “chief mouthpiece of the movement.” According to Pooley, Ebell has been the principal strategist of a highly successful but deeply disingenuous campaign of misinformation.

Unfortunately, Pooley tends to slot all opponents of strict climate legislation into the same category as Ebell, a strategy that compromises his own centrist mode of analysis. The prime case here is Bjørn Lomborg, the Danish statistician who styles himself a “skeptical environmentalist” and who advises everyone worried about climate change to simply “cool it.” As Pooley acknowledges, Lomborg does not deny global warming and is certainly not opposed to scientific inquiry or reasoned debate. Yet Pooley dispenses with Lomborg in a couple of disparaging paragraphs, mocking him as the “darling of the deny-and-delay crowd,” a “glib and mediagenic” pseudo-environmentalist serving as an “agent behind enemy lines.”

Lomborg deserves more respect—and more considered criticism. Perhaps more than any other person, it is Lomborg whose intellectual efforts have sidetracked the effort to control global warming. Climate change is real, he argues, but there is little that we can do about it without undermining the economy, the growth of which is necessary both to mitigate the effects of warming and to eventually devise benign forms of energy generation. Lomborg’s prescriptions may make a certain amount of sense, but only if the lower-range forecasts of temperature increase turn out to be accurate. If the higher-range climate change scenarios come to pass, following the Lomborgian path of R&D without carbon limitations could prove disastrous. He is asking us, in other words, to take a huge gamble on the future of the planet, one that few if any genuine environmentalists are willing to countenance.

Left unsaid by Pooley is the manner in which the unwritten rules of environmentalist discourse play into the hands of Lomborg and other eco-skeptics. In order to galvanize an unresponsive public, the usual approach is to employ scare tactics, trumpet worst-case scenarios, and avoid acknowledging any possible benefits of a warming world. Such a strategy of condescension is highly vulnerable to the critical analysis that Lomborg so abundantly supplies. Intensifying heat waves may kill thousands, but diminishing cold snaps could easily save many more. Overall, the costs of a glacier-free planet would clearly outweigh the potential advantages, but that does not mean all of the accounting is strictly negative.

Most environmental writers do not want to acknowledge such complexities, both because global warming benefits seem trivial compared to the detriments, and because they are unwilling to concede any terrain to the enemy. Even instances of intellectual malfeasance, such as those revealed in the so-called Climategate scandal of 2009, in which intemperate email messages from climate scientists were stolen and released to the public, are usually dismissed by environmental writers as insignificant. Yet Climategate was disastrous to the effort to address global warming, as it seemingly confirmed the allegations of groundless scaremongering leveled by anti-environmentalists. Many fence-sitters opted against climate legislation, undermining the near-term possibility of reform. Pooley firmly grasps this dynamic, arguing that “the real lesson of Climategate was that climate scientists needed to behave with absolute transparency…” and that “anything short of complete disclosure fueled the skeptics and fed public suspicion…” The scientific method, I would add, demands as much.

Despite my misgivings about the author’s tone and characterizations, The Climate War is an important and engaging book on a vitally significant topic. Eric Pooley’s analysis of the politics of global warming is insightful, and his political prescriptions are wise. If we are to address this most pressing issue, he shows how it can be done; indeed, how it must be done.

The Myth of Objective Scientists

In Science, Policy and the Value-Free Ideal, Heather Douglas of the University of Tennessee–Knoxville seeks to challenge the belief that science should be “value-free,” meaning that it is guided only by those norms internal to science. This is a worthwhile task that would make a welcome contribution to debates about the role of science in policy and politics. Unfortunately, the book fails to fully deliver on its promise. Instead, it ultimately takes the conventional position of endorsing the idea that value-free science is actually possible in certain circumstances.

Douglas discusses three types of values in science: ethical/social, epistemic, and cognitive. Ethical values have to do with what is good or right, and social values cover public interest concerns such as justice, privacy, and freedom. She argues that ethical and social values are often but not always compatible with one another. Cognitive values include simplicity, predictive power, scope, consistency, and fruitfulness in science. Douglas is somewhat less clear about what she means by epistemic values, which she associates with the notion of truth, describing them as “criteria that all theories must succeed in meeting” in order to develop reliable knowledge of the world. Presumably, epistemic values are those internal to the conduct of science.

Douglas argues that values external to science, such as those associated with societal problems, should play a direct role in science only in the process of deciding what research to pursue. If the public is particularly concerned by the harm caused by sexually transmitted diseases, then that priority should influence how much government decides to spend on research in this field. But once the research begins, it should be guided only by the rules that govern the conduct of research and evaluated objectively by disinterested experts. She warns that if values external to the practice of science were to play a direct role in the process of conducting research, “science would be merely reflecting our wishes, our blinders, and our desires.” Our scientific knowledge would then reflect the world as we’d like it to be, rather than how it actually is.

Douglas chooses not to pursue this line of thinking very far, even though she cites various cases that indicate that scientific understanding is often precisely about our wishes, blinders, and desires. She cites the example of diethylstilbestrol, a synthetic version of the female hormone estrogen that was recommended for women in the late 1940s for “female problems” such as menopause, miscarriages, and mental health problems. After evidence surfaced that the drug was ineffective and dangerous, it remained approved because of regulators’ preconceptions about what it meant to be a “woman.” Douglas explains that the “social values of the day” about gender roles served to reinforce the view that “female hormones are good for what ails women” and made it easier to discount evidence to the contrary, especially when the societal values appeared to reinforce cognitive values such as theoretical simplicity and explanatory power. The lesson Douglas draws is of the “dangers of using any values, cognitive or social, in a direct role for the acceptance or rejection of a hypothesis.” A different lesson that she might have drawn is that societal values are all mixed up in how we practice research and consequently in the conclusions that we reach through science, whether we like it or not.

In her analysis, Douglas routinely conflates what are conventionally called scientific judgments with political judgments and even asserts that the latter should influence the former. For instance, she writes that it is not morally acceptable for scientists “to deliberately deceive decision makers or the public in an attempt to steer decisions in a particular direction for self-interested reasons.” Yet, she asserts that scientists have an obligation to consider societal outcomes in making scientific judgments that entail some uncertainty. She describes the example of a hypothetical air pollutant that is correlated with respiratory deaths but where no causal link has been identified. If the costs of pollution control are low, Douglas argues, it would be “fully morally responsible” for the scientist to recognize the uncertainties but to “suggest that the evidence available sufficiently supports the claim that the pollutant contributes to respiratory failure.”

This logic turns the notion of precaution fully on its head. It suggests that if an action is worth taking according to the scientist’s judgment, then the evidentiary base can be interpreted in such a way as to lend support for that action. Political considerations would thus dictate how we interpret knowledge.

Douglas considers the option that in a situation of high uncertainty, the scientist should simply answer the questions posed by policymakers rather than try to promote a particular course of action, but she ultimately rejects the distinction between a scientist who arbitrates questions that can be resolved empirically and one who renders policy advice. She argues that because science carries such authority in policymaking, scientists have a responsibility to consider the societal consequences when the scientific evidence is uncertain and ultimately to “shape and guide decisions of major importance.” Such guidance presupposes either that scientists will speak with one voice about such decisions or that, if they did, their policy proposals would be consistent with public values.

It is difficult to see such advice as anything but an invitation to an even greater politicization of scientific advice. In Douglas’s hypothetical air pollution case, the policy remedy is cheap to implement, so that policymakers would probably take this step even if the evidence is uncertain. But what if the politics are hard? Specifically, what if the costs of pollution control are not cheap or involve various winning and losing stakeholders? What should the advisor do in such a case? And what happens when the situation is characterized by experts offering legitimate but competing views on certainties and uncertainties, areas of fundamental ignorance, and a diversity of interests who pick and choose among the experts?

Douglas hints at a remedy to this quandary when she argues for greater public participation in scientific advisory processes as a way to build a consensus around knowledge claims among competing interests. Although this strategy may have merit in some contexts, to suggest that knowledge claims should be negotiated by nonexperts would be interpreted by many experts as an even further politicization of the process of scientific advice. Douglas admits that in many cases, such public engagement and deliberation would not be practical.

Surely, in many cases we want to preserve the distinction between scientific advice on questions that can be addressed empirically and advocacy for particular outcomes. Policy questions that cannot be resolved empirically inevitably involve a wide range of expertise and interests, and are, as Douglas notes, rightfully the territory of democratic decisionmaking. But Douglas does not follow this principle consistently, as can be seen in her hypothetical air pollutant case. To be consistent, she would have to advise the scientist to simply explain honestly what the science does and does not prove, including the uncertainties, and then to allow the political process to determine how to proceed.

Douglas winds up on the wrong track by asking the wrong questions. In the hypothetical air pollution case, she characterizes the question as whether the pollutant is a public health threat. But science cannot determine what is an acceptable level of risk for a society. Instead, a policymaker could ask whether the pollutant reaches certain atmospheric concentrations and what is known about the health effects at those concentrations, or could propose alternative regulatory actions and ask a scientist what the result would be in each case, with the degree of uncertainty clearly stated. Framing the questions posed to scientists in ways that separate scientific judgments from policy advice is one way to avoid falling prey to the value-free ideal. Recognizing that there are different roles that expert advisors can play in decisionmaking can help improve the quality of advice and sustain the integrity of advisory processes.

Douglas has written a useful book with a clear thesis. However, in the end it offers far more support for the myth of the value-free ideal than its overall thesis would suggest. Douglas is right in saying that moving beyond the value-free ideal makes good sense for the practice of both science and policy. But her analysis would be more consistent and more helpful if she pointed out that the goal should not be the unattainable value-free science but the more realistic value-transparent science. Scientists certainly should feel empowered to advocate for a course of action where the scientific evidence is weak or inconclusive, but they ought to be explicit in explaining that this is what they are doing. The alternative is not value-free science, but a science that hides values from view.

New Voices, New Approaches

About six months ago in Tempe, Arizona, about two dozen young scientists, policy wonks, and communicators gathered for a “pitch slam.” In a hotel meeting room near the Arizona State University campus, teams that each included a writer (or blogger or radio producer) and an academic expert lined up to give brief descriptions of articles they planned to write. The judges at the front of the room included a literary agent, a Simon and Schuster editor, the editor of Nature, a Smithsonian editor, and me. Doing our best American Idol imitations, we reacted to each of the pitches. Later in the day we met individually with each of the pitch teams to discuss how to best translate their ideas into effective articles.

The slam was part of an innovative program designed by Lee Gutkind, a writer in residence at ASU’s Consortium for Science, Policy, and Outcomes and the editor of Creative Nonfiction magazine. With financial support from the National Science Foundation, Gutkind and CSPO co-director David Guston assembled an outstanding group of early-career academics and professional communicators (writers, bloggers, radio producers, filmmakers, museum curators) to conduct an experiment in introducing the next generation of policy experts to a new way of explaining policy. The goal is to develop a way to make science policy more accessible and engaging to a large audience. The method is to incorporate the policy analysis in a narrative structure because, though this is hard to believe, some people would rather read a compelling story than a meticulously organized piece of rigorous academic argument.

The Arizona workshop, “To Think, To Write, To Publish: Forging a Working Bond Between Next Generation Science Communicators and the Next Generation of Science and Technology Policy Leaders,” enabled the young communicators and experts to spend a weekend sharing ideas and experiences, testing their proposals with their peers, and working collectively to advance a new approach to stimulating interest in science and technology policy. After the workshop they all stayed in Tempe to participate in The Rightful Place of Science conference at which experts from around the world participated in a wide-ranging discussion of the full spectrum of policy concerns, no doubt planting the seeds for future articles.

Issues is dedicated to broadening participation in policy discussions and has tried to make its articles more appealing by eliminating the footnotes, jargon, and excessive formality that characterize scholarly writing. We believe that we have had some success, but we have also noted that most Americans are still more likely to pick up the New Yorker. So we will be attempting to go even further by allowing authors to use stories and flesh-and-blood characters that highlight the human dimensions of policy debates.

A few gifted writers have already demonstrated that narrative can enrich a story about a scientific or technical subject so that it becomes more understandable and more palatable to a large audience. Books such as John McPhee’s The Curve of Binding Energy, Tracy Kidder’s The Soul of a New Machine, and Lewis Thomas’s The Lives of a Cell are serious and important works that have reached a wide audience. A new generation of writers such as Malcolm Gladwell, Atul Gawande, Michael Specter, and Jonah Lehrer regularly produce entertaining and influential articles on scientific and technological issues for the New Yorker. We science policy wonks are forever bemoaning the dearth of broad public discussion of the meaning, value, and use of science and technology. One way to stimulate that discussion is for more people to write creatively and engagingly about these concerns.

An object lesson. In Spring 2008, we published “Learning to Deliver Better Health Care” by Elliott S. Fisher of Dartmouth Medical School. The piece provided eye-opening evidence of the wide disparity in health care costs across the country, with no indication that higher cost was linked to higher quality. The article included detailed cost information about UCLA, Johns Hopkins, the Cleveland Clinic, and the Mayo Clinic. The analysis was clear, the evidence solid, and the response several orders of magnitude short of overwhelming. In June 2009 the New Yorker published “The Cost Conundrum” by Atul Gawande, an outstanding writer who also happens to be an associate professor of surgery at Harvard Medical School. Gawande tells the fascinating story of how the relatively poor town of McAllen, Texas, became the most expensive place in the country to obtain health care. He cites the work that Fisher and his Dartmouth colleagues have done, but he integrates it with interviews and observations from his visit to McAllen.

According to a report in Kaiser Health News a few weeks after the article appeared, the response was a tad more impressive: “The resulting article is now being called one of the most influential health care stories in recent memory. The New York Times reported that President Obama made it required reading for his staff and cited it at a meeting with Democratic senators last week. His budget chief, Peter Orszag, has written two blog posts about the article. Health and Human Services Secretary Kathleen Sebelius referred to it in a speech at the John F. Kennedy School of Government last week. Lawmakers on the Hill also are discussing it. Congressman Jim Cooper (D-TN), for instance, says the article has ‘shifted perceptions on the health care industry.’”

Even Gawande and the New Yorker do not regularly make that big a splash, and it didn’t hurt that the health care reform debate was raging at the time, but there is no doubt that good stories attract readers. So over the next few editions Issues will give narrative, and a few representatives of the next generation of science and technology policy experts and communicators, a chance. We will be publishing articles written by some of the teams that participated in the pitch slam. Appropriately, the first installment, by Meera Lee Sethi and Adam Briggle, explores the importance of how one frames the story of synthetic biology in the policy debate. You will see that the analysis is as perceptive as the story is engaging. We’ve been reading early drafts of some of the articles that follow, and we are confident that they will maintain this high standard of insight and readability. Issues is very pleased and proud to be able to introduce you to a host of new policy experts and lively voices.

Fighting Innovation Mercantilism

Despite the global economic downturn, indicators of global innovative activity have remained strong during the past two years. The total global output of scientific journal papers in all research fields for the United States, the European Union (EU), and Asia-Pacific nations reached historic highs in 2009. The World Intellectual Property Organization’s 2010 World Intellectual Property Indicators index found that in 2008, the most recent year for which information is available, both the total number of global patent applications received and patent awards granted topped any previous year. And global trading volume in 2010 is on pace to rebound 9.5% over 2009 levels. Yet notwithstanding these indicators of strength, all is far from well with innovation and trade in the global economy.

To be sure, during the past decade, countries worldwide have increasingly come to the realization that innovation drives long-term economic growth and improvements in quality of life and have therefore made innovation a central component of their economic development strategies. In fact, no fewer than three dozen countries have now created national innovation agencies and strategies designed specifically to link science, technology, and innovation with economic growth. These nations’ innovation strategies comprehensively address a wide range of individual policy areas, including education, immigration, trade, intellectual property (IP), government procurement, standards, taxes, and investments in R&D and information and communications technology (ICT). Consequently, competition for innovation leadership among nations has become fierce, as they seek to compete and win in the highest-value-added sectors of economic activity, to attract R&D and capital expenditure investment from multinational corporations, to field their own globally competitive and innovative firms and industries, and to generate the greatest number of high-value-added, high-paying employment opportunities possible for their citizens.

However, countries’ focus on innovation as the route to economic growth creates both global opportunities and threats, because countries can implement their innovation strategies in either constructive or destructive ways. Specifically, as Figure 1 shows, countries can implement their innovation policies in ways that are either: “good,” benefitting the country and the world simultaneously; “ugly,” benefitting the country at the expense of other nations; “bad,” appearing to be good for the country, but actually failing to benefit either the country or the world; or “self-destructive,” failing to benefit the country but benefitting the rest of the world.

Ideally, countries would implement their innovation policies in a positive-sum fashion that is consistent with the established rules of the international trading system and that simultaneously spurs domestic innovation, creates spillover effects that benefit all countries, and encourages others to implement similar win-win policies. But another path countries are unfortunately all too often taking seeks to realize innovation-based growth through a zero- or negative-sum, beggar-thy-neighbor, export-led approach. At the heart of this strategy lies a misguided economic philosophy that many nations have mistakenly bought into: a mercantilism that sees exports in general, and high-value-added technology exports in particular, as the Holy Grail to success. This approach is designed around the view that achieving growth through exports is preferable to generating growth by raising domestic productivity levels through genuine innovation. Indeed, there is disturbing evidence that the global economic system has become increasingly distorted as a growing number of countries have embraced what might be called innovation or technology mercantilism, adopting beggar-thy-neighbor innovation policies in an effort to attract or grow high-wage industries and jobs, making the global economy less prosperous in the process.

These nations, of which China is only the most prominent, are not so much focused on innovation as on technology mercantilism, specifically the manipulation of currency, markets, standards, IP rights, and so forth to gain an unfair advantage favoring their technology exports in international trade. Such policies aren’t designed to increase the global supply of jobs and innovative activity, but rather to induce their shift from one nation to another, usually from consuming to producing nations. Specifically, countries practicing innovation mercantilism use their trade and innovation policies to help their manufacturing and ICT sectors move up the value chain toward higher-value-added production by applying a number of trade-distorting measures. Governments implement these policies to keep out foreign high-technology products while benefiting their own in an effort to boost local production (ideally to fill demand in both domestic and foreign markets) in order to satisfy domestic labor demand.

Practicing innovation mercantilism

A number of ugly and bad innovation practices pervade countries’ currency, trade, tax, intellectual property, government procurement, and standards policies. Perhaps the most pervasive and damaging mercantilist practice is the rampant and widespread currency manipulation that many governments engage in today. Countries manipulate their currencies, either by pegging them to the dollar at artificially low levels or by propping them up through government interventions in an attempt to shift the balance of trade in their favor. Mercantilist countries’ artificially low currencies are a vital component of their export-led growth strategies, making their exported products cheaper and thus more competitive on international markets while making foreign imports more expensive. The overall intent is to induce a shift of production from more productive and innovative locations to less productive and innovative ones.

For example, the Peterson Institute for International Economics argues that China undervalues the renminbi by about 25% on a trade-weighted basis and by about 40% against the U.S. dollar. China’s government strictly controls the flow of capital into and out of the country, as each day it buys approximately $1 billion in the currency markets, holding down the price of the renminbi and thus maintaining China’s artificially strong competitive position. China has actually doubled the scale of its currency intervention since 2005, now spending $30 billion to $40 billion a month to prevent the renminbi from rising. This subsidizes all Chinese exports by 25 to 40%, while placing the effective equivalent of a 25 to 40% tariff on imports, discouraging domestic purchases of other countries’ products. Such currency manipulation is a blatant form of protectionism. Nor is China alone in intervening in markets to manipulate the value of its currency. Hong Kong, Malaysia, Singapore, Taiwan, South Korea, and even Switzerland—in part in an effort to remain competitive with China—also intervene in currency markets and substantially undervalue their currencies against the dollar and other currencies. For instance, on September 16, 2010, Japan intervened in world currency markets to drive down the exchange rate of the yen by selling an estimated 2 trillion yen ($23 billion)—the largest such intervention ever—in an effort to devalue the yen against the dollar and make Japanese exporters more competitive.
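To make the arithmetic behind these figures concrete, the following minimal sketch (not part of the original article) shows how an undervalued peg acts as an export discount and how the reported daily purchases scale to a monthly total. The 700-renminbi good and the 7-renminbi-per-dollar “fair” exchange rate are hypothetical; only the roughly 40% undervaluation and the approximately $1 billion-a-day intervention come from the text above.

```python
# Illustrative arithmetic only; all inputs except the 40% undervaluation and
# the ~$1 billion/day intervention cited in the text are hypothetical.

def export_price_usd(price_rmb: float, fair_rmb_per_usd: float, undervaluation: float) -> float:
    """Dollar price of an export when the currency is pegged `undervaluation`
    (e.g., 0.40 = 40%) below its notional market value."""
    pegged_rmb_per_usd = fair_rmb_per_usd / (1 - undervaluation)  # weaker currency: more RMB per dollar
    return price_rmb / pegged_rmb_per_usd

fair = export_price_usd(700, 7, 0.0)     # $100 at the notional market rate
pegged = export_price_usd(700, 7, 0.4)   # about $60 under a 40% undervalued peg
subsidy = 100 * (1 - pegged / fair)      # effective discount on the export, ~40%
print(f"${fair:.0f} at a market rate vs. ${pegged:.0f} under the peg "
      f"(~{subsidy:.0f}% effective export subsidy)")

# The same peg raises the local-currency price of imports, which is why the
# text likens the policy to a tariff of roughly comparable size.

daily_intervention_usd = 1e9             # ~$1 billion per day, as cited
print(f"~${daily_intervention_usd * 30 / 1e9:.0f} billion per month in currency purchases")
```

Under these assumed numbers, the sketch reproduces the scale described above: an export that should sell for about $100 sells for roughly $60, and daily purchases of about $1 billion compound to roughly $30 billion a month.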

Although a major focus of the international trading system in recent decades has been to remove tariff barriers, countries have gone to great lengths to evade tariff-reduction commitments, and high tariffs persist on a number of high-tech products and services. For example, despite being a signatory to the World Trade Organization’s (WTO’s) Information Technology Agreement (ITA), the EU has attempted to rewrite descriptions of certain ICT goods in an effort to circumvent their coverage under the ITA. In 2005, the EU applied duties of 14% on liquid crystal display televisions larger than 19 inches, and in 2007 the EU moved to allow duties on set-top boxes with a communications function as well as on digital still-image and video cameras. Although the United States won this trade dispute with the EU on August 16, 2010, when a WTO panel ruled that the EU’s imposition of duties on flat-panel displays, multifunction printers, and television set-top boxes violated the ITA, the case was emblematic of countries’ attempts to circumvent trade agreements to favor domestic production from ICT firms and industries.

Meanwhile, a number of countries, even many that are signatories to the ITA, including Indonesia, India, Malaysia, the Philippines, and Turkey, continue to impose high tariffs on ICT goods. India imposes tariffs of 10% on solid-state, non-volatile storage devices; semiconductor media used in recording; and television cameras, digital cameras, and video camera recorders. Malaysia imposes duties of 25% on cathode-ray tube monitors. The Philippines imposes tariffs of up to 15% on telephone equipment and computer monitors. Countries that have not acceded to the ITA place even higher tariffs on ICT products. China, despite its massive trade surplus with the rest of the world and its signing of an ITA-accession protocol, places 35% tariffs on television cameras, digital cameras, and video recorders; 30% tariffs on cathode-ray tube monitors and all monitors not incorporating television reception apparatus; and 20% on printers and copiers.

A favored practice of innovation mercantilists is to force foreign companies to accept IP, technology transfer, or domestic sourcing of production requirements as a condition of market access. Although the WTO prohibits countries from requiring companies to comply with specific provisions as a condition for market access, such tactics are nevertheless popular because they help countries obtain valuable technological know-how, which they can then use to support domestic technology development in direct competition with the foreign firms originally supplying it.

China is a master of the joint venture and R&D technology transfer deal. In the 1990s, when the country began aggressively promoting domestic technological innovation, it developed investment and industrial policies that included explicit provisions for technology transfer, particularly for collaboration in production, research, and training. The country uses several approaches. One is to get companies to donate equipment. Others include requiring companies to establish a research institution, center, or laboratory for joint R&D in order to get approval for joint ventures. Because the WTO prohibits these types of deals and China has since become a WTO member, it now hides them in the informal agreements that Chinese government officials force on foreign companies when they apply for joint ventures. They also sometimes require other WTO-violating provisions, such as export performance and local content requirements, to approve an investment or a loan from a Chinese bank. China thus continues to violate the WTO, only more covertly, acquiring U.S. and other countries’ technology and paying nothing in return. Foreign companies continue to capitulate because they have no choice; they either give up their technology or they lose out to other competitors in the fast-growing Chinese market.

One of the most insidious forms of innovation mercantilism concerns outright IP theft. Yet many countries continue to pilfer others’ IP because it’s easier than making expensive investments themselves, and because, according to research by Gene Grossman and Elhanan Helpman, at least in the short term, IP theft works. However, IP theft stifles incentives to embark on home-grown technology development, thus hurting countries and making IP theft a bad strategy in the long run. For example, in 1999 Brazil passed its Generics Law, which allowed companies to produce generic drugs that are perfect copies of patented drugs, a clear violation of the WTO’s Trade-Related Aspects of Intellectual Property Rights (TRIPS) Agreement. Although Brazil’s policy has nominally helped its consumers by forcing foreign pharmaceutical companies such as Abbott Laboratories to sell drugs significantly below market prices, not only do the countries where the drugs are produced suffer, but the rest of the world suffers because there is less pharmaceutical R&D conducted, and therefore fewer drug discoveries made. Moreover, Brazil’s insistence on tampering with IP rights has damaged the development of its pharmaceuticals industry and discouraged foreign direct investment in the country, redirecting it to other countries such as Mexico and South Korea, which makes this an example of a “self-destructive” innovation policy. Likewise, evidence shows that the lack of effective protection for IP rights has limited the introduction of advanced technology and innovation investments by foreign companies in China.

Innovation mercantilists also manipulate government procurement practices to their advantage. For example, China has introduced indigenous innovation policies explicitly designed to discriminate against foreign-owned companies in government procurement. In November 2009, China announced an indigenous innovation product-accreditation scheme—a list of products invented and produced in China that would receive preferences in Chinese government procurement. To be eligible for preferences under the originally conceived program, products would have had to contain Chinese proprietary IP rights, and the original registration location of the product trademark would have had to be within the territory of China, practices well outside international norms. China’s indigenous innovation policies have already begun to hurt foreign firms’ ability to sell into China’s massive government procurement market, clearly contributing, for example, to the collapse of foreign producers’ share of China’s wind turbine market from 75% in 2004 to 15% in 2009. Worse, not only does Microsoft estimate that 95% of the copies of its Office software in China are pirated, but at least 80% of China’s government computers run versions of the Microsoft Windows operating system that were illegally copied or otherwise not purchased. It is no wonder the United States runs an outlandishly large trade deficit with China when U.S. consumers, businesses, and government agencies pay for Chinese products and services, but even the Chinese government fails to pay for American ones.

Why countries use mercantilist practices

Countries employ mercantilist policies because they hold one or more of four beliefs: that goods, particularly tradable goods, constitute the only real part of their economy; that moving up the value chain is the primary path to economic growth; that they should become autarchic, self-sufficient economies; or that mercantilist policies actually do work.

Many nations believe that goods constitute the only real part of their economy through which they can drive a growth multiplier, and they therefore view exports and large trade surpluses as good, and imports as bad, in and of themselves, making them targets of economic policy. Indeed, building their economies around high-productivity, high-value-added, export-based sectors, such as high-tech or capital-intensive manufacturing, appears to be the path that nations such as China, Germany, Indonesia, Malaysia, Russia, and others are following, in the footsteps of Japan and the Asian tigers Hong Kong, South Korea, Singapore, and Taiwan before them.

Furthermore, these nations believe that the primary path to economic growth lies in moving up the value chain from low-wage, low-value-added industries to high-wage, high-value-added production. Many of these countries are willing to engage in what is essentially predatory pricing on international markets, sacrificing short-term losses in order to grow long-term production. By doing so, they hope to erode the production base of advanced industrial nations, with the goal of ultimately knocking industry after industry out of competition in order to reap long-term job gains.

Some countries pursue mercantilist, export-led growth strategies out of a desire to realize national economic self-sufficiency. China’s current economic strategy could perhaps best be described as autarky: a desire to become fully economically self-sufficient and free from the need to import goods or services, especially high-technology ones. Chinese policy appears to identify every single flow of money exiting the country and shut off the spigot, an ambition evident in China’s efforts to establish a domestic base of commercial jet aircraft production and in its desire to establish indigenous standards across a range of technologies so it need not make royalty payments on IP embedded in foreign technology standards.

But the primary reason countries pursue mercantilist strategies is the belief that they work. Although U.S. policy has long held that mercantilists only hurt themselves, the reality is that, although some mercantilist policies do backfire and end up hurting the countries that adopt them (the bad and self-destructive ones), many mercantilist practices actually do work (the ugly ones), helping these countries gain competitive advantage, at least temporarily, especially if other countries fail to contest such practices. China’s ugly practices, such as currency manipulation and forced IP transfer, have clearly boosted the country’s exports, moved productive activity to its shores, and hurt foreign producers. The “success” of China’s mercantilist practices is reflected in the country’s share of world exports jumping from 7 to 10% between 2006 and 2010, in its current account (trade) surpluses of $400 billion in 2007 and $426 billion in 2008, and in its accumulation of $2.65 trillion in foreign currency reserves. This creates a vicious cycle: seeing the success of such mercantilist practices, other countries are enticed, even compelled, to introduce similar practices.

Unsustainable in the long run

Yet despite their apparent attractiveness, mercantilist practices represent a flawed strategy for several reasons. First, they are fundamentally unnecessary and counterproductive; countries have much more effective means at their disposal to drive economic and employment growth. Second, they place the wrong focus on economic growth, neglecting the far greater and more sustainable opportunity to drive economic growth by raising productivity across the board, particularly in the non-traded domestic sectors of an economy, especially through the application of ICT. Third, export-led mercantilism is fundamentally unsustainable for both the country and the world. Fourth, many mercantilist practices, especially those distorting ICT and capital goods sectors, are of the bad variety, failing outright. Finally, mercantilist strategies contravene the established rules of the international trading system and undermine confidence in trade’s ability to produce globally shared prosperity, thus reducing global consumer welfare.

Mercantilist countries, and the apologists who defend them, argue that the only way they can grow is through high-value-added exports that run up massive trade surpluses, as if they were accumulating gold bullion. But the notion that the only way a country can achieve a full-employment economy is by manipulating the trading system to run ever-growing trade surpluses is flat wrong. It contradicts basic macroeconomics, which observes that a change in gross domestic product equals the sum of the changes in consumer spending, government spending, corporate investment, and net exports (exports minus imports). In other words, mercantilist countries could grow just as rapidly, and probably even more so, by pursuing a robust, expansionary domestic economy that drives growth through increased domestic consumption and greater business or government investment.
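
In symbols, this is the standard expenditure identity of national accounting (the notation here is mine, not the article’s), and the change form follows directly:

```latex
Y = C + I + G + (X - M)
\qquad\Longrightarrow\qquad
\Delta Y = \Delta C + \Delta I + \Delta G + \Delta (X - M)
```

A rise in consumption, investment, or government spending adds to growth just as surely as a rise in net exports.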

In effect, mercantilist countries mistakenly believe that promoting exports, rather than increasing productivity across the board, is the superior path to economic growth. Yet it is productivity growth—the increase in the amount of output workers produce for a given unit of effort—that is the most important measure and determinant of a nation’s economic performance. Indeed, the lion’s share of productivity growth in most nations, especially large and medium-sized ones, comes not from altering the sectoral mix toward higher-productivity industries, but from all firms and organizations, even low-productivity ones, boosting their productivity. Put succinctly, the productivity of a nation’s sectors matters more than a nation’s mix of sectors, which means mercantilist countries could grow more reliably and sustainably if they focused on raising the productivity of all sectors of their economy, not just propping up their export sectors.
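
One conventional way to formalize this point is the shift-share decomposition of aggregate productivity; the notation is mine rather than the author’s, with P as aggregate productivity, p_i as sector i’s productivity, and s_i as its share of employment:

```latex
P = \sum_i s_i\, p_i,
\qquad
\Delta P \;\approx\;
\underbrace{\sum_i s_i\, \Delta p_i}_{\text{within-sector gains}}
\;+\;
\underbrace{\sum_i p_i\, \Delta s_i}_{\text{sectoral shift}}
```

(ignoring the small interaction term). The claim above is that the first, within-sector term dominates in most large and medium-sized economies.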

Perhaps the greatest weakness of countries’ export-led growth strategies is that they are unsustainable. Neither the U.S. market nor the European market (minus Germany), nor even the two combined, is large enough to absorb the resulting exports if nations such as Brazil, China, Germany, Japan, and Russia all continue to promote exports while limiting imports as their primary path to prosperity. Moreover, a predominantly export-led focus is unsustainable for the countries themselves. For example, Japan boasts many world-leading exporters of manufactured products, but because it has never really focused on the non-traded sectors of its economy, only about one-quarter of its economy is growth-oriented. It cannot boast of any world-class services firms, and it trails badly in the use of ICT. Countries such as Argentina, China, Japan, India, and South Korea have been far more concerned with producing ICTs than with consuming them. But what these countries have missed is that the vast majority of the economic benefits from these technologies, as much as 80%, come from their widespread use across a range of industries, whereas only about 20% of the benefits comes from their production. As Erik Brynjolfsson of the Massachusetts Institute of Technology has noted, it is how firms such as Wal-Mart use ICT products to revolutionize their industries, not where those products are manufactured, that truly drives countries’ productivity and economic growth. Yet most mercantilist countries have been more concerned with manufacturing and selling ICT (and other high-value-added) products than with applying them to make their own domestic service industries more productive and competitive.

For these reasons, mercantilist policies that place high tariffs or other import restrictions on general-purpose technologies such as ICT products are purely bad and fail outright. Such policies raise the domestic cost of ICT and make the local industries that need to leverage these technologies less competitive. For example, as part of its import substitution industrialization strategy, India for many years placed high tariffs on ICTs in an effort to keep out foreign ICT products and spur the creation of a domestic computer manufacturing industry. But research finds that, for every $1 of tariffs India imposed on imported ICT products, it suffered an economic loss of $1.30. Such policies served only to raise the price of ICTs for domestic players, inhibiting the diffusion of ICT throughout domestic service sectors such as financial services, retail, and transportation, and causing productivity growth in these sectors to languish.

Getting innovation policy right

There is nothing wrong with countries engaging in fierce innovation, economic, and trade competition, so long as they are competing according to the rules of international trade. In fact, when a country competes intensely to win within the rules of the system, doing so benefits both itself and the world. This is because fair competition forces countries to put in place the right policies for the support of science and technology transfer, the right R&D tax credit policies, the right corporate tax policies with lower tax rates, the right education policies, and so on. So when the United States introduces an R&D tax credit, or France trumps the United States by offering a credit six times more generous, or Denmark creates innovation vouchers for small businesses, or the Netherlands and Switzerland tax profits generated from newly patented products at just 5%, this is all tough, fair competition that forces other countries to raise their game by enacting their own good innovation policies. The problem comes when countries start to cheat and contravene the global economy’s established rules. Ugly practices can indeed help countries win. But such policies not only harm other countries; they also encourage other countries to cheat, undermining the structure of the international trading system. The system then devolves into a competition in which every country has incentives to cheat and to engage in beggar-thy-neighbor policies. Thus the overall system decays, the competition becomes worse, and the global economy suffers as all countries fight for a slice of a smaller pie.

Countries committed to good innovation policies must abandon the notion that countries employing mercantilist policies will somehow start playing by the rules if we just play nice with them. They must agree to cooperate to fight innovation mercantilism, not just to talk about it.

It is time for a new approach to globalization. The United States and like-minded countries should strive to develop a global consensus around good innovation policy, with a large focus on domestic-led service innovation through the application of IT. This renewed vision for globalization should be grounded in the perspectives that markets drive global trade; that in addition to joining and adhering to the WTO, countries should become signatories to major trade agreements, including the Government Procurement Agreement, the ITA, and TRIPS; that genuine, value-added innovation, especially that which spurs productivity, drives economic growth; that foreign aid policies should not support countries’ mercantilist strategies; and that fair competition forces countries to ratchet up their game by putting in place constructive innovation policies that leave all countries better off. This holds a number of implications for policymakers.

First, policymakers, both in the United States and abroad, must recognize that innovation mercantilism constitutes a threat to the global innovation system. In the United States, there has been mostly denial or indifference about the impact of countries’ mercantilist policies, encapsulated in the mistaken attitude that because mercantilists only hurt themselves, their policies are not a concern. As a U.S. congressman argued at a recent Washington conference, “We don’t really need to worry if the Chinese subsidize their clean energy industry, because American consumers benefit.” But this perspective fails to understand that although consumers may benefit from cheaper imports, U.S. workers suffer as production increasingly moves offshore; and as U.S. unemployment rises, in part as a result of foreign mercantilism, confidence erodes in globalization’s ability to deliver shared and sustained long-term prosperity. That is why it is critically important for policymakers to be able to distinguish the good innovation policies from the ugly and bad ones, so that they can promote those that benefit countries and the world simultaneously, while pushing back against those that benefit countries at the expense of other nations. Additionally, policymakers worldwide must recognize that the only sustainable path to raising living standards for the vast majority of citizens in developed and developing countries alike is to leverage innovation to raise economies’ productivity across the board, in all firms and all sectors.

Second, developed countries need to work alongside international development organizations to reformulate foreign aid and development assistance policies to use them as a carrot and stick to push countries toward the right kinds of innovation policies. Foreign aid should be geared to enhancing the productivity of developing countries’ domestic, non-traded sectors, not to helping their export sectors become more competitive on global markets. Blatantly mercantilist countries engaging in IP theft, manipulating currencies, imposing significant trade barriers, etc. should have their foreign aid privileges withdrawn. The message to these countries should be that if they want to engage the global community for development assistance, ugly and bad policies cannot constitute the dominant logic of their innovation and economic growth strategies.

For their part, international organizations must work more proactively to combat nations’ mercantilist strategies. The World Bank, International Monetary Fund, WTO, Organization for Economic Cooperation and Development, and others need to not only stop promoting export-led growth as a key development tool, but also tie their assistance to steps taken by developing nations to move away from negative-sum mercantilist policies. In particular, the World Bank should make a firm commitment that it will stop encouraging policies designed to support countries’ export-led growth strategies. The G-20 countries, as the primary sponsors of the World Bank, must tackle this issue head-on and truly begin focusing on win-win global growth through innovation, in part by placing a major focus on how to restructure international institutions to make this happen. For instance, G-20 countries should demand from the World Bank a new strategic plan for how it can completely revamp its approach to reward nations that are playing by the rules and to at least try to minimize the bad and ugly policies of the nations that aren’t.

The WTO must realize that what has been transpiring in the global trading system is not a series of occasional and random infractions of certain trade provisions to be handled on a case-by-case basis, but rather that some countries systematically violate the core tenets of the WTO because their dominant logic toward trade is predicated on export-led growth through mercantilist practices. The WTO must commit to stopping such practices. It should also annually publish a list of all new trade barriers, including non-tariff barriers, whether they are allowed by its rules or not.

Finally, U.S. trade policy must become much more seriously geared to combating nations’ systematic innovation mercantilism. Currently, however, U.S. as well as European trade policy is organized in a legalistic framework designed to combat unfair trade practices on a case-by-case basis, an enormously expensive and time-consuming process. As a consequence, it becomes difficult for them to put in place a comprehensive trade strategy designed to stimulate competitiveness and innovation.

In addition to a more systematic approach to trade negotiations, the United States also needs to get much more aggressive with trade enforcement and bringing forward WTO cases. Part of the problem is a resource issue. One reason the U.S. Trade Representative (USTR) has not done more to enforce existing trade agreements is because doing so is quite costly and labor intensive. Moreover, much of USTR’s budget goes toward negotiating new trade agreements. Congress should increase USTR’s budget with the new resources devoted to enforcement and the fight against unfair foreign trade practices. The United States should also create within USTR an ambassador-level U.S. trade enforcement chief and a Trade Enforcement Working Group. Creating these new entities would send a clear signal that a key part of USTR’s job is to aggressively bring actions against other nations that are engaged in technology mercantilism. The United States should also institute a 25% tax credit for company expenditures related to bringing a WTO case, because government alone cannot fully investigate all potential WTO cases and because companies that do help bring cases are acting on behalf of the U.S. government.

If these measures prove insufficient, it may be time to think about establishing a new trade regime outside of the WTO that would exclude nations that persist in the systematic pursuit of mercantilist policies that violate free trade principles. The Trans-Pacific Partnership could provide a model for how to organize such a new trade zone. The Trans-Pacific Partnership represents a vehicle for economic integration and collaboration across the Asia-Pacific region among like-minded countries—including Australia, Brunei, Chile, New Zealand, Peru, Singapore, Vietnam, and the United States—that have come together voluntarily to craft a platform for a comprehensive, high-standard trade agreement. The door would be open for all countries to participate, but those that would like to do so would have to abandon wholesale mercantilist practices and demonstrate genuine commitment to free trade principles.

Enough is enough. The stakes are too important. Trade, globalization, and innovation remain poised to generate lasting global prosperity, but only if all countries share a commitment to playing by the rules and fostering shared, sustainable growth. It’s time to end innovation mercantilism and replace it with good innovation policy.

Forum – Winter 2011

Geoengineering research

In “The Need for Climate Engineering Research” (Issues, Fall 2010), Ken Caldeira and David W. Keith issued a strong call for geoengineering research, echoing earlier such calls. I completely agree with them that mitigation (reducing emissions of greenhouse gases that cause global warming) should be society’s first reaction to the threat of anthropogenic climate change. I also agree that even if mitigation efforts are ramped up soon, they may be insufficient, and society may be tempted to try to actually directly control the climate by producing a stratospheric aerosol cloud or brightening low marine clouds.

It will be a risk-risk decision that society may consider in the future: Is the risk of forgoing deliberate climate control greater than the risk of attempting to cool the planet? To be able to make an informed decision, we need much more information about those risks, and thus we need a research program. However, such a research program brings up serious ethical and governance issues.

Caldeira and Keith call for a solar radiation management research program in three phases: 1) computer and laboratory research, 2) “small-scale” field experiments, and 3) tests of deployable systems.

Computer research, using climate models, the type in which I have been engaged for the past four years, does not threaten the environment directly, but does it take away resources, in researchers’ time and computer time, that could otherwise be used more productively for the planet? A dedicated geoengineering research program with new, separate funds would remove this ethical problem. But does it create a slippery slope toward deployment? Laboratory studies bring up the same issue but also make some wonder whether nozzles and ship and airplane platforms may be developed for hostile purposes.

Field experiments and systems tests bring up fundamentally different issues. Is it ethical to actually pollute the atmosphere on purpose, even for a “good” purpose? And would injecting salt into marine clouds or sulfur into the stratosphere threaten to produce dangerous regional or global climate change? How large an emission would be acceptable for scientific purposes? Would a regional cloud-brightening experiment be OK if emissions were less than, say, the emissions of a typical large ship that sails across the Pacific Ocean? Or does intention matter? The cargo ship was not trying to change climate. Or do two wrongs not make a right? Even if intentional pollution is small, is it acceptable? Detailed climate modeling will be needed to search for potential risks and design field tests, but how much can you depend on them? We will need to define how large an area, for how long, and how much material can be emitted. And how can this be enforced over the open ocean or the stratosphere, with no laws or enforcement mechanisms? If there is a potential impact on global climate, how do we obtain informed consent from the entire planet?

The UK Solar Radiation Management Governance Initiative is just beginning to address these issues, but it is not obvious that it will succeed. In any case, fundamentally new international rules, observing systems, and enforcement mechanisms will be needed before we start spraying.

ALAN ROBOCK

Department of Environmental Sciences

Rutgers University

New Brunswick, New Jersey


Ken Caldeira and David W. Keith make a cogent argument for a rapid increase in research spending on aspects of geoengineering. They argue that doing so is simply a matter of prudent safeguarding against the worst effects of catastrophic global warming and against our inability or future unwillingness to significantly and comprehensively reduce greenhouse gas emissions. Their article also serves as a sort of primer on the various strategy options, feasibilities, and costs associated with both carbon dioxide recapture (CDR) and solar radiation management (SRM) technologies.

However, three important caveats are either omitted or not sufficiently emphasized in an otherwise excellent article. First, although the authors are convincing in saying that the technological genie can be let out of the bottle carefully—and then controlled—there is little evidence for this in the real world. Technologies, even the most obviously abhorred ones, once discovered always end up being used. We have never displayed much self-restraint, and we nearly always underestimate the consequences (both positive and negative). As such, the authors’ cost estimates for some technologies, such as introducing sulfate particles into the stratosphere ($10 billion per year), are, by their own admission, far too low if risk factors and unintended consequences are included. Add the cost of the Amazon or the Ganges drying up, and the costs of tinkering with the stratosphere soar. By providing a number that excludes the cost of risks, the authors create a real danger that the public will gravitate toward a nonoptimal solution.

Second, their call to establish a government-funded research plan, although laudable and necessary, is probably off by an order of magnitude. $5 million to start up (once university overheads are subtracted it will be half that) is a smaller sum than the price of certain collectable automobiles available today. We need at least $50 million to start, and we need it now.

Third and perhaps most importantly, the authors, although keenly aware of the enormous political and social hurdles facing planetary engineering, make no mention of funding for socioeconomic research or for tracking and influencing public opinion. Failing to understand and shape the public’s views of climate change and of the geoengineering solutions proposed to deal with its most immediate problems will undermine transparency and equitable decisionmaking. The danger, of course, is that without such parallel efforts in the social dimension, solutions designed to help us all will end up mostly helping those who need it least.

M. SANJAYAN

Lead Scientist, The Nature Conservancy

Arlington, Virginia


Before the U.S. government can launch an R&D program to study geoengineering, it must consider, and develop policies and programs to address, the intellectual-property aspects of this emerging field. We traditionally think about intellectual property, and specifically patents, in rather narrow terms. Patents stimulate innovation by rewarding inventors with a limited property right, in return for disclosure about their inventions. However, scholars have increasingly discovered that patents can have much broader effects: They can shape whether and how technologies are developed and deployed. This poses serious problems for geoengineering, given the enormous stakes for humanity and the ecosystem.

Because the current patent system places considerable power in the hands of individual inventors, private companies, and universities, rather than in the hands of states or international governing bodies, it could be detrimental to geoengineering R&D in a few ways. [Although government funding agencies retain many rights over a patent if the inventor is a government employee (for example, in a national laboratory), they have very few rights if they fund extramural research.] First, if inventors refuse to license their patents, they could stifle the innovation process. This will be particularly problematic in the case of broadly worded geoengineering patents, which the U.S. Patent and Trademark Office has already begun to issue. Second, if inventors disagree with policymakers about the definition of a “climate emergency,” they could make independent decisions about whether and when to experiment with and deploy geoengineering interventions. On the one hand, they may refuse to deploy the technologies, arguing that we have not reached crisis levels. On the other hand (and more likely), inventors might decide independently and without authorization from a state or other relevant governing body to deploy a technology because Earth is in an “emergency” situation. Inventors would probably deploy the technology in an area with lax regulatory oversight, but the move would have global repercussions.

Luckily, these aren’t unprecedented problems, and we have tools to solve them. One option would be for the U.S. government to take advantage of its power to force inventors to license their patents. However, because it almost never uses this power, it must develop a detailed policy that outlines the circumstances under which it would require compulsory licenses for geoengineering patents. Another option would be to develop a special system to deal with these patents, similar to the one devised for atomic energy and weapons in the mid-20th century. The 1946 Atomic Energy Act divided inventions into three categories: unpatentable (those that were solely useful in “special nuclear material” or “atomic energy in an atomic weapon”), patentable by government (technology developed through federal research), and regular patents (all other technologies). One could imagine a similar system in the case of geoengineering, with criteria based on the potential dangers of the invention and the ability to reverse course. Regardless, in order to ensure the benefits of a U.S. government–funded R&D program, we must seriously consider whether the current patent system is suitable. To do otherwise would be tragically shortsighted.

SHOBITA PARTHASARATHY

Assistant Professor

Co-Director, Science, Technology, and Public Policy Program

Ford School of Public Policy

University of Michigan

Ann Arbor, Michigan


International and scientific agreement that fossil fuel combustion and deforestation are the major causes of global climate change, and the increasingly disruptive environmental and societal effects that are occurring, have led to widespread calls for mitigation and adaptation. Greatly improved efficiency and switching to non–carbon-emitting technologies, and doing so soon, can eventually halt climate disruption. A largely untapped opportunity for earlier slowing is for the world community to also move aggressively to limit emissions of short-lived species, including methane, black carbon, and the precursors to tropospheric ozone. Even with the most aggressive foreseeable emissions reductions, however, climate change will continue for many decades if additional steps are not taken, and so adaptation is, and will be, essential. And without additional actions, the decay of the Greenland and Antarctic ice sheets, significantly accelerated sea-level rise, and other disruptive surprises will grow beyond manageable levels, even to dangerous ones.

Carbon dioxide removal technologies (see especially “Pursuing Geoengineering for Atmospheric Restoration,” by Robert B. Jackson and James Salzman, Issues, Summer 2010), including restoring and expanding forest cover and eventually scrubbing carbon from the atmosphere, offer an additional approach to slowing climate change and ocean acidification. Even if fully implemented, however, there would be virtually no effect on the rising global average temperature until global emissions are dramatically reduced. Current fossil fuel emissions are roughly equivalent to the net carbon uptake of the Northern Hemisphere biosphere as it greens from March to September, and are more than five times the emissions from current deforestation. Ending deforestation will be challenge enough; sufficiently enhancing ocean and/or land uptake of carbon dioxide and scrubbing out enough to stabilize atmospheric composition without sharply cutting emissions are a distant dream, if possible at all.

Reducing incoming solar radiation is a second climate engineering approach. Jackson and Salzman apparently consider such approaches too dangerous because of potential unintended side effects, whereas Ken Caldeira and David W. Keith argue that it would be irresponsible not to research and develop potential approaches to solar radiation management to have in reserve in case of a climate emergency. Although there is agreement that reducing the warming influence of incoming solar radiation is the only technique that could rapidly bring down global average temperatures, waiting until the emergency is evident might well be too late to reverse its major effects, especially losses of biodiversity and ice sheets.

An alternative approach would be to start slowly, offsetting first one year’s warming influence and then the next and next, seeking to stabilize at current or slightly cooler conditions rather than allowing significant warming that then has to be suddenly reversed. Although geoengineering might be viewed as dangerous on its own, the world’s options are climate change with or without geoengineering (which Jackson and Salzman mention but omit from their analysis). Starting geoengineering on a regional basis (for example, in the Arctic, by resteering storm tracks) seems likely to me to have a lower risk of surprises, nonlinearities, and severe disruption than letting climate change continue unabated, even with mitigation and adaptation. But perhaps not; that is what research is for, and it is desperately needed.

MICHAEL MACCRACKEN

Chief Scientist for Climate Change Programs

Climate Institute

Washington, DC


Health care innovation

In “Where Are the Health Care Entrepreneurs?” (Issues, Fall 2010), David M. Cutler makes a cogent case for health care innovation. He correctly cites the fee-for-service business model as a challenging one, in which care is fragmented, and incentives drive caregivers to maximize treatments. The fee-for-service model often fails to keep the patient at the center of care.

As Cutler points out, Kaiser Permanente uses an integrated, technology-enabled care model. The centerpiece of our innovation is Kaiser Permanente HealthConnect, the world’s largest private electronic health record (EHR). KP HealthConnect connects 8.6 million people to their care teams and health information and enables the transformation of care delivery. Because Kaiser Permanente serves as the insurer and health care provider, important member information is available at every point of care, resulting in top-quality care and service delivery.

Cutler also highlights the need for preventive care, especially for chronic conditions. Our population care tools connect medical teams with organized information, resulting in health outcomes such as the 88% reduction in the risk of cardiac-related mortality within 90 days of a heart attack for our largest patient population.

Kaiser Permanente shares information within the organization to get these results, but it is far from “walled in.” Rather, we have established a model that we are now extending beyond our walls, effectively doing as Cutler suggests: bringing providers together to enhance quality and lower costs. In a medical data exchange pilot program, we helped clinicians from the U.S. Department of Veterans Affairs and Kaiser Permanente obtain a more comprehensive view of patients’ health using EHR information. Over many years, we have developed our Convergent Medical Terminology (CMT): a lexicon of clinician- and patient-friendly terminology, linked to U.S. and international interoperability standards. We recently opened access to make it available for use by a wide range of health information technology developers and users to speed the implementation of EHR systems and to foster an environment of collaboration.

Kaiser Permanente promotes technology and process innovation as well. Our Sidney R. Garfield Health Care Innovation Center is the only setting of its kind that brings together technology, facility design, nurses, doctors, and patients to brainstorm and test tools and programs for patient-centered care in a mock hospital, clinic, office, or home environment. As a result, we’ve introduced the Digital Operating Room of the Future and medication error reduction programs, and continue to test and implement disruptive technologies such as telemedicine, time-saving robots, remote monitoring, and handheld computer tablets—all designed to improve health and drive efficiency.

We’ve institutionalized innovation in many ways, including the Innovation Fund for Technology, which supports physicians and employees bringing important innovations from concept to operation. Many of our members are benefiting today from innovations, such as short message service text appointment reminders, that improve their experiences and reduce waste in the system.

An integrated model powered by the innovative application of information technology centered on patient needs is the key to improving health care. As more providers adopt EHRs and work toward interoperability, we will begin to see the results experienced at Kaiser Permanente increase exponentially and across the entire country.

PHIL FASANO

Chief Information Officer

Kaiser Permanente

Oakland, California


Regulating nuclear power worldwide

“Strengthening Global Nuclear Governance,” by Justin Alger and Trevor Findlay (Issues, Fall 2010) raises a number of important issues that merit serious discussion. Although I have no argument with the issues raised or their importance, I found the article to be written from a rather conventional perspective. It reflects the current focus on building only large plants, the assumption being that such facilities are also appropriate for all other countries to meet their electricity demands. I am not convinced.

Increasing demand and the implications of climate change are not restricted to developed countries, and rapidly escalating costs and robust infrastructure requirements are not restricted to developing countries. The real questions are: What are the electricity requirements, and how can these best be met? The projected demand in any given region may well be significant, especially if described in percentage terms, but the manner in which such demand can be addressed may be very different depending on a host of factors, several of which are raised in the article.

Countries or regions that would be very hard-pressed to commit to building large nuclear power plants (for cost and/or infrastructure reasons) may be better served to look at small nuclear facilities. In fact, such facilities might make sense in many instances in countries that already have large facilities.

Although small nuclear facilities would be more affordable, more rapidly installed, and would not overload the distribution system, there are obviously a host of issues to be addressed. The technologies of small nuclear plants are not nearly as well developed as those of the Gen III plants, there is no operating experience yet to demonstrate their claimed inherent safety, and regulators have only recently started to devote serious assessment time to such designs.

It goes without saying that the safety, security, nonproliferation, infrastructure, and human resource and governance issues that the article raises with respect to the conventional systems would also need to be addressed with small nuclear facilities. The size and relative simplicity of these designs may ease finding acceptable technical solutions, which in turn may help with the governance issues.

Developing countries may have somewhat of an advantage in being able to select technologies that best suit their needs rather than only buying into the concept of bigger is better. Adopting small nuclear facilities would involve a distributed delivery system with a number of production nodes spaced where they are needed. However, building broad public understanding and acceptance of the idea of numerous small facilities being installed close to the user communities could be at least as great a challenge as developing the regulatory expertise or the necessary safety culture. If such a route is chosen, developing countries may be in the forefront, because substantive discussions in developed countries on this distributed concept have yet to be seriously initiated. Are we missing out?

PAUL HOUGH

Ottawa, Ontario, Canada


Justin Alger and Trevor Findlay argue that “interest in nuclear energy by developing countries without nuclear experience could pose major challenges to the global rules now in place to ensure the safe, secure, and peaceful use of nuclear power.” Caution is clearly warranted. After all, a nuclear disaster anywhere in the world would be seen as an indictment of the technology, not a problem with the user. After the Chernobyl accident in 1986, it was nuclear energy that was blamed and continues to be blamed, not the nature of the Soviet communist system. Likewise, if there were a serious accident in Iran, it would be nuclear energy that would be blamed, not the nature of the Iranian totalitarian theocratic system.

Alger and Findlay go into great detail on the barriers confronting developing countries to complying with the global nuclear governance system (nuclear safety, nuclear security, and nuclear nonproliferation). Unfortunately, they have also overstated the problem. They begin by setting up two straw men as nuclear-aspiring developing countries: Nigeria and the United Arab Emirates (UAE). However, Nigeria is nowhere close to building a nuclear reactor, and classifying the UAE as a developing country is a real stretch. The UAE’s per capita gross domestic product is $58,000—higher than that in Canada and the United States. The United Nations’ Human Development Report also considers the UAE to have “very high human development,” ranking it 32nd in the world. Moreover, the vast majority of new builds will occur in existing nuclear markets (primarily India and China), not in new entrants from the developing world.

If the concern is the introduction of nuclear energy in developing countries, a better approach would compare the current situation with the historical record in Argentina, Brazil, India, China, and South Korea. All built nuclear reactors in the 1950s, 1960s, and 1970s when they were all developing countries. The global nuclear governance system was also in an embryonic state. There was no World Association of Nuclear Operators or Nuclear Suppliers Group. In addition, there were no international treaties on nuclear safety, nuclear waste, the physical security of nuclear materials, and nuclear terrorism. Although there were clear problems in issues of weapons proliferation, reactor performance, and other matters, the fact is that there were no serious accidents in these countries in years past. To better make their case, Alger and Findlay should have explained why global nuclear governance is more important today than in previous decades.

This short reply is not aimed at diminishing the role that new and more powerful international treaties and organizations have played in ensuring a more safe, secure, and peaceful nuclear sector. Nor am I suggesting that there is no need to strengthen the regime (I have made similar recommendations elsewhere). Rather, I am arguing that Alger and Findlay need to be less ahistorical in their analysis.

DUANE BRATT

Associate Professor

Department of Policy Studies

Mount Royal University

Calgary, Alberta, Canada


A smarter look at the smart grid

I commend Marc Levinson (“Is the Smart Grid Really a Smart Idea?”, Issues, Fall 2010) for putting front and center something that is often lost in technical and energy policy discussions: that the smart grid should be subject to a cost/benefit test. The “if it’s good, we’ve gotta do it” mindset pervades the smart grid discussion. Levinson provides a useful reminder that one needs to compare benefits to costs to see if the smart grid is worth it.

This brings up a second point I’d guess he’d agree with: If the smart grid is a good idea, why won’t the market take care of it? Being worthwhile doesn’t by itself justify policy intervention, because marketplace success is our usual test for whether benefits exceed costs.

However, four market failures make smart grid policy worth considering. First, the absence of a way to charge for electricity based on when it is used creates enormous generation capacity costs. Levinson understates the severity of this problem. In many jurisdictions, 15% of electricity capacity is used for fewer than 60 hours out of the year, which is less than 1% of the time. To cover the cost of meeting that demand, electricity in those critical peak hours can cost upward of 50 times the normal price. Much of that cost could be avoided with smart grid–enabled real-time pricing.
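
The arithmetic behind the less-than-1% figure is simple, assuming the standard 8,760-hour year:

```latex
\frac{60\ \text{hours}}{8{,}760\ \text{hours/year}} \approx 0.68\% \;<\; 1\%
```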

A second problem is the production of greenhouse gases. Many cleaner electricity sources, particularly wind and solar, are subject to weather vagaries such as wind speed or cloud cover. By allowing utilities to better match electricity use to the availability of these sources, a smart grid may be an important tool for mitigating climate change.

Third, utilities surprisingly lack information on outages, depending on users for reports and then having to detect problems on site. Building communications and intelligence into the distribution network could help reduce the duration and severity of blackouts.

Fourth, many energy-sector observers believe that consumers fail to recognize that savings from reduced energy use will more than compensate for spending up front on high-efficiency equipment and facilities. A smart grid will allow utilities or entrepreneurs to combine electricity sales with energy efficiency in order to offer low-cost energy services that can pass along to consumers savings they didn’t know they could make.
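
A minimal sketch of the payback logic behind this market failure, with illustrative numbers of my own rather than figures from the article: an up-front efficiency investment is worthwhile when the discounted stream of energy savings exceeds its cost.

```latex
\text{invest if}\quad \sum_{t=1}^{T} \frac{S_t}{(1+r)^t} \;>\; C_0,
\qquad\text{e.g., } C_0 = \$100,\; S_t = \$40/\text{yr}
\;\Rightarrow\; \text{simple payback} = C_0/S_t = 2.5\ \text{years}
```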

Finally, Levinson’s concern regarding the little guy may be overstated. As Ahmad Faruqui of the Brattle Group has pointed out, it’s the wealthy whose air conditioners and pools are effectively subsidized by having peak electricity costs spread over the public at large. Moreover, a smart grid can enable utilities to pay users to reduce demand in those critical peak hours; the little guy can share in the savings.

In addition, residential use constitutes only around a third of electricity demand. Improving commercial and industrial efficiency, along with avoiding peak capacity, reducing carbon emissions, and improving reliability may justify the costs of the smart grid. But, as Levinson reminds us, that cannot be treated as a foregone conclusion.

TIM BRENNAN

Professor of Public Policy and Economics

University of Maryland, Baltimore County

Baltimore, Maryland

Senior Fellow, Resources for the Future

Washington, DC


Storing used nuclear fuel

“Nuclear Waste Disposal: Showdown at Yucca Mountain” by Luther J. Carter, Lake H. Barrett, and Kenneth C. Rogers (Issues, Fall 2010) incorrectly suggests that the political difficulties associated with a specific facility—Yucca Mountain—mean that a larger used-fuel management program cannot succeed. Although the nuclear industry supports the Yucca Mountain licensing process, it must be recognized that the program is bigger than Yucca Mountain itself and that there are advantages to ongoing efforts such as the development of a centralized interim storage facility.

Clearly, U.S. policy on the back end of the nuclear fuel cycle is not ideal, and the United States needs a path forward for the long-term disposition of high-level radioactive waste from civilian and defense programs. But the political football that has been, and to some extent remains, Yucca Mountain, is not the linchpin concerning growth in the nuclear energy sector. Rather, electricity market fundamentals will determine whether new nuclear plants are or are not built, as evidenced by site preparation activity under way in Georgia and South Carolina.

Several states have moratoria on nuclear plant construction by virtue of the government’s not having a repository for used nuclear fuel disposal, but there is widespread reconsideration of that ban in a number of those states. Alaska earlier this year overturned its moratorium. Industry’s safe and secure management of commercial reactor fuel is playing a role in this reconsideration by state legislatures.

Even as the administration’s Blue Ribbon Commission on America’s Nuclear Future examines a range of policy options, one certainty is that we will be securely storing used nuclear fuel for an extended period of time.

The nuclear energy industry supports a three-pronged, integrated used-fuel management strategy: 1) managed long-term storage of used fuel at centralized, volunteer locations; 2) research, development, and demonstration of advanced technology to recycle nuclear fuel; and 3) development of a permanent disposal facility.

Long-term storage is a proven strategic element that allows time to redesign the nuclear fuel cycle in a way that makes sense for decades to come. Meanwhile, the Nuclear Regulatory Commission’s approval of the final rulemaking on waste confidence represents an explicit acknowledgment by the industry’s regulator of the ongoing, safe, secure, and environmentally sound management of used fuel at plant sites and/or central facilities. Although used fuel is completely safe and secure at plant sites, indefinite onsite storage is unacceptable.

The Blue Ribbon Commission must take the next step and recommend forward-looking policy priorities for used-fuel management. To date, the commission has demonstrated an awareness of the importance and magnitude of its task. The challenge is to recommend a used-fuel management policy that can stand the test of time and enable the nation to take full advantage of the largest source of low-carbon electricity.

EVERETT REDMOND II

Director, Nonproliferation and Fuel Cycle Policy

Nuclear Energy Institute

Washington, DC


Early childhood education

In “Transforming Education in the Primary Years” (Issues, Fall 2010), Lisa Guernsey and Sara Mead argue that we must invest in building a high-quality education system that starts at age three and extends through the third grade. They envision this as integral to a “new social contract” that “sets forth the kind of institutional arrangements that prompt society to share risks and responsibilities of our common civic and economic life and provide opportunity and security for our citizens.”

They discuss the current fragmentation of the early childhood education sector, its uneven, mediocre quality, and the systemic underperformance of primary education in the United States, and they argue that fixing and extending the primary years of children’s education is a critical first step. I believe that this argument takes an important first step for granted. We need to define what quality early education programs look like, how to build capacity to provide it at scale, what outcomes we should expect—or demand—in exchange for greater investment, and what measurement will ensure that we receive value for investment.

Having worked in this field for 10 years, I see scant evidence of consensus about meaningful early childhood quality indicators among policymakers, providers, schools of education that prepare early educators, or agencies that administer and oversee this fragmented sector.

Many efforts to improve the early childhood education sector have been incremental and transactional rather than transformational. Transactional leadership values stability and exchanges, whereas transformational leadership emphasizes values and volatility.

We know that evidence-based early interventions can build critical language, vocabulary, pre-literacy, and numeracy skills, and that when these skills are left unattended, children typically experience reading and math difficulties in the primary grades. We also know that important social/emotional skills, such as attending to instruction, following teacher directions, learning to persist, and solving problems with words, can be taught to 3- and 4-year-old children and that having these skills dramatically improves their potential for success in the early years.

Yet, in advocacy for greater access to preschool, many still argue a false dichotomy that early education is either about academics or developing children’s social/emotional skills, when research is clear that it must be both. In other situations, the primary purpose of early childhood centers is to provide care for children while their parents work and to provide employment for adults who have much in common with the families whose children are enrolled.

If we truly want to have a higher-performing educational continuum, we need to encourage more disruptive technologies in both early education and primary education to drive greater quality, while innovating to ensure that a higher-performing but diverse delivery system delivers quality outcomes throughout children’s educational experiences.

If, in the context of a “new social contract,” we could agree on an audacious, meaningful, and measurable goal, such as bringing all children to the normative range in language, vocabulary, pre-literacy, and numeracy skills before they enter kindergarten, we could make a stronger case for greater investment in early education and then truly begin to transform education in the primary years.

JACK MCCARTHY

Managing Director

AppleTree Institute for Education Innovation

Washington, DC


Lisa Guernsey and Sara Mead argue for universal preschool as the cornerstone of educational reform in the 21st century, emphasizing that a new approach to early education must also cross the threshold of early elementary classrooms. This will require a fundamental rethinking of the culture of teaching, including changes in teacher preparation and socialization. Guernsey and Mead correctly argue for greater emphasis on the lessons of cognitive science in teacher education curricula, clinical practice and mentoring, and cross-grade communication.

If early education is to deliver on its promise, this major change must include a reassessment of the skills involved in and value of jobs teaching young children before they reach the kindergarten door. Every day across the country, approximately 2 million adults care for and educate nearly 12 million children between birth and age five. When educators are well-prepared, supported, and rewarded, they can form strong relationships that positively affect children’s success. Unfortunately, too many lack access to education, are poorly compensated, and work in environments that discourage effective teaching. Disturbingly, far too many practitioners live in persistent poverty and suffer untreated depression and other health problems that undermine their interactions with children. Job turnover in early education ranks among the highest of all occupations and is a driving force behind the mediocre quality of most early learning environments. Recent college graduates shy away from careers with young children, or seek positions in K-3 classrooms, knowing that the younger the student, the lower the status and pay of their teachers.

Long-held, deep-seated attitudes about the unskilled nature of early childhood teaching itself are evident in proposals to promote in-service training to the exclusion of higher education for early childhood teachers. Such positions reflect a culture of low expectations for these teachers, resulting in part from decades of predominately least-cost policy approaches to retain or expand early education. Human capital strategies, whether based on more education and/or training for teachers, dominate investments in quality improvement, eclipsing other equally significant factors, such as better compensation and supportive work environments that facilitate appropriate teaching and recognize the contribution of adult well-being to children’s development and learning.

Those concerned about the environment reframed our thinking about industry, creating new jobs to protect and replenish our natural resources. Today, green jobs constitute one of the few growing employment sectors in the economy. A new vision for the early learning workforce is equally important to our country’s future. Revalued early childhood jobs could attract young people who are educationally successful and excited to invest in the next generation, confident that they will earn a professional salary; such jobs could transform the lives of the current early childhood workforce, predominately low-income women of color, eager to improve their practice and livelihood, pursue their education, and advance on a rewarding career ladder. Such a change requires return-on-investment analyses that demonstrate the costs to children, their parents, teachers, and our nation if we continue to treat jobs with young children as unskilled and compensate them accordingly, even as we proclaim preschool education’s potential to address long-standing social inequities and secure global economic competitiveness.

MARCY WHITEBOOK

Director/Senior Researcher

Center for the Study of Child Care Employment

Institute for Research on Labor and Employment

University of California at Berkeley

Berkeley, California


The finding that only 7% of students in the early grades have consistently stimulating classroom experiences must move us to make dramatic changes in the education of both our teachers and our students. It is essential that we scrutinize the educational climate that has emerged from what often feels like a desperate grab for simple answers to complex questions and make transparent the situation in which teachers and students find themselves on a daily basis.

Schools, particularly those serving our most vulnerable children, have resorted to overusing prescriptive curricula and isolated assessments to solve their problems. This is a slippery slope with far-reaching consequences. Prescriptive, scripted curricula force a pace that is unresponsive to the actual learning and understanding of the students. Teachers know they are not meeting the needs of their students but regularly report that they fear administrator reprisal if they are not on a certain page at a certain time. Prescriptive curricula were developed to support teachers who were new or struggling, not to replace the professional in the classroom.

Despite what we know about the importance of vocabulary development, oral language expression, and opportunities for children to represent their learning in spoken language, in writing, and in graphic and pictorial forms, prescriptive curricula predominantly value closed-ended questions and “right” answers. These curricula do not take into account the culture, language, or prior experiences of the children, all of which we know are important for engaging children and ensuring that they acquire information and knowledge. Instead of viewing the opportunity to ask children about what they have heard, learned, and understood as the strongest and most powerful form of assessment, teachers spend too much of their time pulling children aside for the assessment of isolated skills. This isolated skills assessment leads directly back to isolated skills instruction. It is a cycle that leads us far astray from the “developmentally appropriate formative assessments and benchmarks to monitor children’s progress … to inform instruction and identify gaps in children’s knowledge” espoused in this article.

In order to realize a new social contract for the primary years, let’s start by taking an active stance in favor of making the child, rather than the curricula, central to our concern and focus.

SHARON RITCHIE

Senior Scientist

FPG Child Development Institute

University of North Carolina, Chapel Hill


Lisa Guernsey and Sara Mead have once again written a very thoughtful and well-researched article on early education. They posit that we cannot expect school reform to be effective if we don’t start with the early years. They call for beginning with intensive, high-quality preschool but make the case that young children’s foundational skills are not fully developed until they complete third grade, and thus we need to think more systemically about these critical early years. Although we have extensive research showing that high-quality preschool on its own can result in lifelong benefits for participants, it seems obvious that we could capitalize on these advantages both by ensuring that children’s infant and toddler years are nurturing and enriching and by improving practice in kindergarten and the primary grades. The difficulty lies in determining what that system should look like. What are the critical ingredients of an effective P-3 or birth–to–third-grade system?

This is not a new idea. Guernsey and Mead do not include in their review the results of multiple attempts to capitalize on preschool’s success. The first of these, the National Follow Through Project, was initiated in the 1970s with mixed results. Over the next 30 years, four more attempts were made to show that the effects of preschool could be enhanced by focusing on kindergarten and the primary grades. Across all of them, results were mixed or limited, or showed no impact on children’s school success.

To ensure that this P-3 initiative is more successful than those of the past, we should carefully consider why past initiatives fell short. We also will need to set out specific program guidelines and criteria for effective early education. These guidelines are needed to make our vision operational so that we can say: When this system is implemented in an effective way, this is what we will see in practice.

Guernsey and Mead begin to lay out some of these specifics, but as a field we are a long way from having established what the P-3 system should look like and, once it is implemented as intended, what we expect to see as the actual effects on student learning. We must conduct evaluations to assess whether our best-evidence practices actually result in increased student success. With a review of our past successes and (mostly) failures with P-3, and with a more detailed definition of what the system should look like and accomplish, the P-3 movement will be much more likely to fulfill its promise.

ELLEN FREDE

Co-Director

National Institute for Early Education Research

New Brunswick, New Jersey


Rethink technology assessment

Richard E. Sclove is one of the most creative voices promoting participatory technology assessment and is responsible for bringing the Danish consensus conference model to the United States. In “Reinventing Technology Assessment” (Issues, Fall 2010), Sclove articulates the many virtues of citizen engagement with technology assessment and calls for broad support of the Expert & Citizen Assessment of Science & Technology network (ECAST).

Although Sclove’s contribution to conceptualizing participatory technology assessment cannot be overstated, there are two general areas in his article that, I would suggest, demand careful consideration before we move headlong in the direction he proposes. First, we need to give serious thought to ways in which we might promote different thinking about technology among our leaders and lay citizens. Second, more attention to the challenges raised by previous efforts at participatory technology assessment is required before we institutionalize relatively unaltered versions of these initiatives.

Our culture is clouded by a pervasive scientism. This is the idea that when we consider developments in science and technology, matters of value (what is good or bad) and of social effects (who will be advantaged/who disadvantaged) can be separated from technical considerations. Instead, in real life, the social and the technical are inextricably linked. Until technical experts, policymakers, and citizens understand this, we will engage in futile efforts to create responsible science and technology policy. Advocates of participatory technology assessment should promote education programs that challenge reductive ideas about the science/technology–values/society relationship and seminars for policymakers that would lead them to reflect on their assumptions about the sharp distinction between knowledge and values.

With regard to the practical organization of participatory initiatives, we must establish rigorous methods for evaluating what works. Existing mechanisms of recruitment for consensus conferences, for example, are premised on a “blank slate” approach to participation. Organizers seek participants who have no deep commitments around the issues at stake. In the United States, where civic participation is anemic, this makes recruitment challenging. However, research I have done with collaborators shows that non–blank slate citizens are capable of being thoughtful and fair-minded. We should not exclude them.

To take another example, citing the case of the National Citizens Technology Forum (NCTF), Sclove advocates the virtues of using the Internet to expand the number and geographic range of participants in consensus conferences. However, as my collaborators and I have shown, participants in the NCTF (which we were involved in organizing) often experienced the online portion of the forum as incoherent and chaotic; it was not truly deliberative. The Internet may yet prove a useful tool, but drawing on its promise for participatory technology assessment will require carefully selected software and attention to integrating in-person and online deliberation.

I support the objectives behind ECAST. I do hope, however, that part and parcel of the initiative will be a national effort to change thinking about the nature of science and technology and that ECAST will engage in rigorous evaluation of the different participatory assessment approaches it uses.

DANIEL LEE KLEINMAN

University of Wisconsin, Madison


The future of biofuels

“The Dismal State of Biofuels Policy” (Issues, Fall 2010, by C. Ford Runge and Robbin S. Johnson) is essentially history, because it is all about corn ethanol, which has now peaked near the 15-billion-gallon mandate established by Congress. Corn ethanol will not grow further, so the real issue is not past policy but future policy with respect to other biofuel feedstocks. The hope for the future is biofuels produced from cellulosic feedstocks such as corn stover, switchgrass, miscanthus, and woody crops. These feedstocks can be converted directly into biofuels via a thermochemical process leading to green diesel, biogasoline, or even jet fuel. Congress has mandated 16 billion gallons of cellulosic biofuels by 2022. The problem is the huge uncertainty facing potential investors in these facilities. The major areas of uncertainty are:

  • Market conditions—what will be the future price of oil?
  • Feedstock availability and cost
  • Conversion technology and cost
  • Environmental effects of a large-scale industry
  • Government policy

The cellulosic biofuel technologies become market-competitive at around $120 per barrel of crude oil. We are far from that today, so there is no market except the one created by government policy. Feedstock cost is another big issue. Early U.S. Department of Energy (DOE) estimates put the feedstock cost at around $30/ton. Today’s estimates are closer to $90/ton, triple the early DOE figures. There are no commercial cellulosic biofuel plants, so there is huge uncertainty about how well they will work and what the conversion cost will be. Although most assessments of environmental effects show environmental benefits of cellulosic biofuels, there remain unanswered questions regarding the environmental effects of a large industry such as that mandated by Congress. Finally, government policy is highly uncertain. There is a cellulosic biofuel subsidy on the books today, but it expires in 2012, before any significant amount of cellulosic biofuel will be produced. The Renewable Fuel Standard (RFS) established by Congress has off-ramps, meaning it does not provide the iron-clad guarantee of a market needed by investors in today’s environment. Senator Richard Lugar has proposed a reverse auction process to support the industry. Potential investors would bid the price at which they would be willing to provide cellulosic biofuels over time, and the lowest-price bidders would receive contracts. Such a policy would reduce market and government policy uncertainty, leaving companies to deal with feedstock and conversion technology issues. It is this kind of forward-looking policy that we need to consider at this point.
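
To make the mechanics of a reverse auction concrete, here is a minimal sketch in Python. It assumes a simple pay-as-bid design in which the government sets a target volume and awards contracts to the cheapest bids first; the bidder names, prices, and volumes are hypothetical, and Senator Lugar's actual proposal may differ in its details.

    # Minimal sketch of a reverse auction for cellulosic biofuel supply contracts.
    # Illustrative only: pay-as-bid, with hypothetical bids and a hypothetical target volume.

    def award_contracts(bids, gallons_needed):
        """Award contracts to the lowest-price bidders until the target volume is met.

        bids: list of (bidder, price_per_gallon, gallons_offered) tuples
        """
        awarded = []
        remaining = gallons_needed
        for bidder, price, gallons in sorted(bids, key=lambda b: b[1]):  # cheapest first
            if remaining <= 0:
                break
            take = min(gallons, remaining)
            awarded.append((bidder, price, take))
            remaining -= take
        return awarded

    bids = [("A", 3.10, 40e6), ("B", 2.75, 60e6), ("C", 2.95, 50e6)]
    print(award_contracts(bids, gallons_needed=100e6))
    # B supplies 60 million gallons at $2.75; C supplies the remaining 40 million at $2.95.

The design details matter in practice (pay-as-bid versus a uniform clearing price, contract length, penalties for nondelivery), but the core idea is simply competitive selection on price, which shifts market and policy uncertainty away from producers.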

The three policy changes proposed by Runge and Johnson are really old news. Changes in the RFS are not being considered. There are already proposals on the table, supported by the corn ethanol industry and many in Congress and the administration, to phase out the corn ethanol subsidy. There is also a proposal to end the import tariff on ethanol. Where we need to focus our attention is on cellulosic biofuels, which do not pose the challenges that Runge and Johnson describe for corn ethanol. That is not to say they would be cheap, but if Congress wants increased energy security and reduced greenhouse gas emissions, cellulosic biofuels are where the focus should be, and, indeed, today that is where it is.

WALLACE E. TYNER

James and Lois Ackerman Professor

Department of Agricultural Economics

Purdue University

West Lafayette, Indiana

Renewing Economically Distressed American Communities

Not all communities fare equally well after recessions and other economic shocks. Some bounce back fairly quickly. Others suffer more and take longer to recover—sometimes decades longer. A sluggish return to growth is not inevitable, however. There is evidence that well-targeted policies may be able to speed the pace of recovery.

Buffalo, New York, is one example of a community that has suffered for far too long after an economic shock. In 1950, Buffalo was the nation’s 15th largest city, boasting nearly 600,000 residents. It was a nexus of manufacturing and automobile and aircraft assembly and home to the world’s largest steel mill. Buffalo’s boomtown prosperity radiated out across Great Lakes shipping lanes and railway hubs, and attracted migrants from around the country. In 1970, the president of Bethlehem Steel, the operator of the steel plant, said of the city, “You can’t help but believe that a tremendous decade lies ahead.”

But three harsh recessions between 1969 and 1982 pushed Buffalo and many other manufacturing-based cities off the path to prosperity. During each recession, manufacturing employment in the United States plummeted by between 9 and 15%. These were not temporary layoffs; jobs disappeared, shifts shrank, and plants closed. Buffalo’s steel mill, which had employed 20,000 workers in 1965, was shuttered completely in 1982. That year, unemployment in the Buffalo area, which had been well below the national average for at least a decade, topped 12%. Local income, which was more than 6% above the national average in 1970, is today 9% below the average. When jobs disappeared, so did workers—in droves. By 2000, Buffalo’s population had fallen by half. Property values dropped, and neighborhoods crumbled into disrepair, pocked with abandoned homes. More than a quarter of the city’s residents lived in poverty.

Today, Buffalo remains distressed, and poverty in the central city is still very high, but the situation is improving. The Buffalo metropolitan area’s unemployment rate of 7.6% is below the national average. Employment rates have increased, and income, although still below average, is no longer falling even further. New businesses have moved in. Developers, drawn to low property prices, have started to enter the local real estate market. Families have followed. In 2010, Forbes Magazine called Buffalo one of “America’s Best Places to Raise a Family,” based on factors such as the cost of living, prevalence of homeownership, median household income, commuting time, crime, and high-school graduation rates.

No city should have to suffer the persistent distress that Buffalo and other cities have endured. It should not take 40 years for a city to recover. But the slow pace of recovery in the wake of the recent Great Recession, compounded by ongoing restructuring in the U.S. economy, raises the troubling prospect of newly distressed communities that will languish for a long time.

Here we draw on economic research to argue that a national economic strategy to aid distressed communities is both appropriate and necessary. There are many opportunities to develop and implement policies that can deliver more success stories and quicker recoveries, even in the wake of a rapidly changing economy. We recognize, however, that every community is different and that there is no one-size-fits-all solution for the challenges facing economically distressed communities. We therefore propose a basket of options that could begin the process of restoring good jobs to local workers. These options follow three approaches: attracting new businesses, aiding displaced workers, and matching workers to new jobs.

The problem of distressed communities

Workers and their families living in especially hard-hit communities face a number of challenges. Unemployment in persistently distressed areas often arises from plant closings or mass layoffs associated with declines in specific industries and businesses. Unlike other types of joblessness, these losses can result in a permanent reduction of job opportunities as well as the erosion of local workers’ marketable skills. In addition, evidence suggests that local economic shocks have long-lasting effects on local labor markets.

Losing a long-held job does not just result in temporary unemployment. It often leads to permanent income loss because workers earn lower wages upon reemployment. Figure 1 summarizes a study completed by Till von Wachter, Jae Song, and Joyce Manchester that compares the earnings trajectories of workers who lost their jobs in a sudden mass layoff in the early-1980s recessions and workers who maintained their jobs throughout those recessions. Before the recessions, both groups’ earnings followed a similar pattern. After the recessions, however, displaced workers faced devastating long-run earnings losses. Even in 2000, almost 20 years after the 1980s recessions, a sizable earnings gap remained. According to the study, a displaced worker with six years of job tenure faced a net loss of approximately $164,000—more than 20% of his or her average lifetime earnings. Long-run earnings losses in fact dwarf income losses resulting from a period of unemployment.

Job loss also has calamitous effects on workers’ health and families. In the year after they lose their jobs, men with high levels of seniority experience mortality rates that are 50 to 100% higher than expected. Elevated mortality rates are still evident 20 years after job losses. Children of jobless workers also suffer income loss. They not only have a tough time finding jobs when the unemployment rate is high in their local labor market, but also earn considerably less than their peers elsewhere once they have entered the market. Earnings gaps persist even 10 years after these young people have left school.

A sharp economic shock permanently affects communities just as it affects workers. For communities experiencing the largest economic contractions during recessions, the impact on employment and income can be extremely persistent. The data show that unemployment rate differences between distressed areas and the rest of the country dissipate within a decade, but this is largely because of workers leaving distressed areas rather than a resurgence of job opportunities.

Figure 2 shows income for the 20% of counties that experienced the largest drops in inflation-adjusted income per capita during the early-1980s recessions. About 10% of U.S. residents live in these counties. Before the recessions, average incomes in these counties (indicated by the purple line) moved in lockstep with incomes in the rest of the country (indicated by the green line). During the recessions, however, incomes in these counties plunged by 14% more than did average per capita incomes elsewhere.

For most of the country, it took less than two years after the end of the 1982 recession for average incomes to return to their pre-recession levels. But for the hardest-hit communities, it took more than six years. Figure 2 shows that, after the recessions, incomes in these counties began to grow again but at a slower rate than in the rest of the country. Instead of catching up, these communities lagged farther behind. Today, almost 30 years later, there is a gap of almost $10,000 in average per-person income.

A different but still disconcerting pattern holds true for employment. Figure 3 illustrates the path of employment (defined as the share of local residents with a job) relative to where communities started in 1979, just before the recessions. Employment in the hardest-hit areas plunged: Roughly 4% of their respective populations lost jobs. Although employment growth eventually returned and roughly followed the trend in the wider economy, the gap has still not closed. There are simply fewer working adults in the most distressed areas even today.

During the past 30 years, average earnings in the hardest-hit communities grew by only 12%, or about one-quarter the national rate. Employment as a share of the population increased by much less there than elsewhere, and populations grew more slowly. Because workers left these communities and took their families with them, demographics changed too; there were fewer young people and more retirees. As a result, demand for housing was weaker and home prices increased more slowly than elsewhere. Lower home prices and rents may help workers stretch their budgets, but they are unlikely to offset the decline in workers’ income.

An optimistic view is that these changes—falling wages and land prices—will ultimately spark renewal by attracting new businesses and providing new residents with better homes at lower cost. Indeed, in cities such as Buffalo, economic factors such as these are attracting businesses and families. But stabilization takes many years. That a recession could temporarily have dire effects is not surprising. But for its toll to be even greater by some measures a quarter-century later is sobering.

Concerns about distressed communities are particularly salient today. The Great Recession and ongoing restructuring in manufacturing, construction, and other industries have affected some communities much more than others. There is a serious risk that new communities will face long-lasting economic hardship even as existing distressed communities continue to struggle.

The impact of the Great Recession (see the hardest-hit counties in Figure 4) reflects the geography of declining manufacturing activity in the Midwest and Southeast and the burst housing bubble in states with the greatest run-up in home prices. Unemployment is concentrated in the industrial Midwest—Michigan, northern Ohio, Indiana, and western Pennsylvania—as well as in states that have significant manufacturing operations, such as Alabama. It is also high in states where home building had been an important source of economic growth, such as California, Nevada, Arizona, and Florida.

It is particularly troubling that the geographic pattern of unemployment tends to reflect the pattern of employment in specific industries. Unless these industries return to full capacity or new industries move in, distressed communities could face long-lasting economic hardship.

The Great Recession’s geographic impact is very different from that of the 1980s recessions. Relatively few counties appear in the bottom 20% in both periods. In the 1980s, oil- and gas-producing states such as Texas, Oklahoma, Louisiana, and Wyoming were hit hardest. The fact that these two patterns of distress differ so much is important because it tells us that the shocks that communities face vary from recession to recession and the risks that materialize are idiosyncratic and relatively unpredictable.

Why national policy is needed

Because industries are not spread evenly across the country, problems in one industry can translate into a local disaster. This is especially true in manufacturing, because individual plants frequently employ hundreds or even thousands of workers.

The need to encourage national benefits while also mitigating idiosyncratic and large localized costs suggests one rationale for federal involvement: providing insurance against unforeseen risks. A number of state-based programs, including unemployment insurance, disability insurance, and Medicaid, insure individuals against unforeseen risks. We believe that there are also reasons why the federal government should consider policies specifically directed at distressed communities. At their core, these rationales recognize that communities are greater than the sum of their individual parts.

Perhaps the strongest argument for federal involvement is research showing that economic adjustment takes longer and is harsher than previously recognized. In many distressed communities, the post-recession rate of economic growth remains below that of the rest of the nation for decades. This suggests that there are substantial barriers to recovery and that overcoming them requires substantial help. There are four rationales for the federal government, in particular, to play a strong role in aiding these communities. These include its ability to:

Promote agglomeration economies. Studies show that people and companies are more productive when they cluster, especially when they work in the same industry. Improvements in manufacturing processes and other efficiencies tend to diffuse to neighbors: When one company does better, in other words, others also improve. The private market does not capture these spillover benefits, however, and so businesses do not take into account these potential gains when deciding where to locate. They need encouragement to gather together—encouragement that government can provide.

In the case of distressed communities, an economic shock that directly affects certain businesses may result in unforeseen costs to nearby businesses. Targeted programs to attract new businesses could help offset these costs. The rationale for intervention in this case is not to help a specific company but to generate spillovers that benefit many local businesses. Policies should thus distinguish between cases where location subsidies generate broader growth and renewal effects and cases where subsidies benefit the recipient only.

Avoid tipping points. Research suggests that persistently elevated unemployment can have a devastating impact on crime, teenage pregnancy, mental health, and other social problems. In The Truly Disadvantaged (1987) and When Work Disappears (1996), William Julius Wilson argued that many social problems are fundamentally the result of jobs disappearing. He and others argue that concentrated areas of economic distress and joblessness result in a breakdown of other social structures.

In one version of this theory, when unemployment reaches a certain level or tipping point, negative consequences become much more severe. For example, an increase in the unemployment rate from 14 to 15% might be much more detrimental for a community than an unemployment increase from 4 to 5%. This differential suggests that there may be gains from reducing unemployment in particular areas, even at the expense of employment elsewhere. To be sure, empirical evidence is mixed on the existence and significance of tipping points.

Facilitate skill acquisition. There is promising evidence that education and training pay off in higher future earnings. But unemployed workers, younger workers, and workers in distressed neighborhoods may not be able to afford the upfront cost of such an investment. The private market is less willing to make loans for training and education than for cars or homes, in part because workers cannot use future earnings as collateral. This provides a major rationale for the federal student loan program.

In distressed communities, workers displaced from long-held jobs often have skills that are best suited to industries or occupations in decline. This is why they tend to earn less even if they manage to find new jobs. Evidence suggests that some of these workers would benefit from retraining, but that many tend to underinvest in additional schooling. Besides facing barriers to educational loans, they often lack good information about the returns from undertaking training programs. Government investments in the right kind of training for certain displaced workers could yield benefits greater than the costs of that training.

Minimize adjustment costs. Adjusting to economic distress often involves incurring costs. History suggests that the movement of families to new places is a primary way for communities to adjust to economic shocks. But moving is costly and potentially wasteful. The costs of moving go beyond the costs of selling a home and shipping furniture. Families often develop strong bonds in their communities. When a family moves, children have to be uprooted from their schools, and friends, social routines, memories, and local knowledge are left behind. It takes time and effort, moreover, to learn about a new community and become integrated members of it. There are few ways to avoid or mitigate these costs and no insurance policy against them. In other words, people can’t protect themselves against the risk that a local employer will fail or that a vibrant industry may become obsolete.

A similar argument can be made about infrastructure. It is impossible to ship roads and bridges to follow movements in population. When a city or community declines, it leaves behind a base of infrastructure meant to serve a larger population. The reverse is also true: In-migration and population growth may lead to congestion and require new infrastructure investment.

Even when there are significant benefits to moving, there may be barriers that prevent people from relocating to a community with better job prospects. In this case, moving is an investment in future earnings, just like an investment in education. And just like unsecured educational loans, loans to facilitate moving are difficult or impossible to get, leaving workers unemployed or underemployed when they could do better elsewhere.

Approaches to helping distressed communities

Addressing the economic and social costs associated with persistent localized economic distress requires a different set of policy tools from the ones the country has been using. Most existing policy and most social insurance spending are directed to people, not to places. This includes policies such as unemployment insurance, health insurance for children of unemployed or underemployed adults, food stamps, and other forms of assistance. These programs are, moreover, intended to be temporary solutions for short-term problems. Unemployment insurance in normal times lasts only 26 weeks, and other programs include time limits. In addition, most are conditioned largely on unemployment rather than on underemployment. They protect against poverty caused by job loss, but not against lower wages. In short, these policies do not directly address the causes and costs of long-term economic distress for workers, their families, and their communities.

There are alternative approaches that could promote economic recovery and shorten the depth and duration of economic distress by directly targeting residents, workers, businesses, and infrastructure in distressed communities. In the current fiscal and economic environment, it is even more important than usual that the benefits of these programs exceed costs. Furthermore, these programs should be targeted at communities that meet objective criteria for persistent distress, such as high rates of unemployment or low rates of income growth over several years.

We recommend a three-pronged approach to aiding distressed areas that is motivated by the fundamental mismatch between the skills of local workers and the demand for their work from local businesses and industries. This approach involves:

Attracting new businesses that can create jobs, raise wages, and provide local services. Distressed communities usually present a poor environment for business investment. Plant closings and mass layoffs result in increased poverty as well as lower consumption. Detroit provides a particularly poignant example. There is no longer a single national grocery chain with an outlet in that city. In addition, shrinking tax bases can lead to cuts in key services—smaller police forces, lower-quality schools, and poorly maintained physical infrastructure—and even basic services such as waste disposal and snow removal. Having fewer municipal services effectively raises the costs of doing business. Finally, of course, residents may not have the skills new companies are seeking.

Communities have tried many approaches to attract businesses, with mixed success. A typical approach has been to provide subsidies or tax breaks for new businesses. Policies based on this strategy have been tried for decades, but evidence of their effectiveness is weak. Tax cuts reduce overall business costs, but they may not compensate for the cost of establishing a new business. Businesses may also be wary of investing their own resources in programs such as job training that may not benefit them exclusively. They certainly cannot be expected to improve public infrastructure.

Attracting businesses to revitalize distressed communities requires a holistic approach that targets all of the major problems these communities face. Tax cuts may be especially effective when combined with expansions in public services and infrastructure investment. On-the-job training can help make labor costs in distressed communities more competitive. Other options include supporting programs that provide direct consulting assistance to employers.

Timothy J. Bartik, a senior economist at the W. E. Upjohn Institute for Employment Research, has proposed one version of this approach. In his paper “Bringing Jobs to People,” written for The Hamilton Project, Bartik argued for a return to the original Empowerment Zones created in the 1990s, which combined tax cuts for businesses with grants to state and local governments for public services. Additional grants would help businesses invest in training that is tailored to meet their specific needs. Bartik also argued for expanding the Manufacturing Extension Program, which offers subsidized consulting services to small- and medium-sized manufacturers and helps them to improve their productivity and profitability. Recognizing that the body of evidence on the efficacy of place-based policies is mixed, Bartik’s proposal included methods to evaluate programs as they are scaled up, so that policymakers can determine which ones are the most successful.

Aiding displaced workers. As we have said, for people whose jobs disappear, a period of unemployment tends to be less costly than potentially permanent earnings losses on reemployment. These long-term losses can exceed $100,000 over a lifetime and are not addressed by any programs.

One option to consider is wage insurance, which would pay an unemployment insurance–like benefit to workers even after they find new jobs if their new wages are much lower (say, 30% lower) than their previous wages. Wage insurance might fill 25% of the earnings gap.
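
A minimal sketch of how such a benefit might be computed, assuming the 30% figure is an eligibility threshold and the 25% figure is the share of the earnings gap replaced (both are the illustrative parameters mentioned above, not a settled program design):

    # Hypothetical wage insurance benefit rule; parameters are illustrative.

    def wage_insurance_benefit(old_wage, new_wage,
                               loss_threshold=0.30, replacement_rate=0.25):
        """Annual benefit for a reemployed worker, or 0 if the wage loss is too small."""
        if new_wage >= old_wage * (1 - loss_threshold):
            return 0.0                       # wage cut smaller than the threshold
        gap = old_wage - new_wage
        return replacement_rate * gap        # fill a fraction of the earnings gap

    # A worker who earned $50,000 and is rehired at $30,000 (a 40% cut)
    # would receive 25% of the $20,000 gap, or $5,000 per year.
    print(wage_insurance_benefit(50_000, 30_000))  # 5000.0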

Another alternative is to help displaced workers improve their job skills. Evidence suggests that job training through community colleges can boost displaced workers’ earnings and help restore their incomes. A study in Washington State showed that the equivalent of a year of community college increased displaced workers’ earnings by 9% for men and 13% for women—a sizable return. Even taking just a few courses increased earnings substantially.

The benefits of retraining, however, vary widely. They depend, in particular, on the types of students who retrain and the kind of courses they take. Quantitative subjects, science classes, and health care courses boosted earnings by 14% for males and 29% for females, gains that come close to offsetting the losses from displacement. Success was greatest for younger workers and those with a good academic record.

There are potential gains from retraining programs that include several high-return courses and from supporting institutions that provide these courses. It is especially important to support retraining during economic downturns, when cuts in government budgets often mean cuts in education.

In “Retraining Displaced Workers,” a working paper written for The Hamilton Project, Robert LaLonde of the University of Chicago and Daniel Sullivan of the Federal Reserve Bank of Chicago proposed increasing federal funding for retraining by extending Pell Grant eligibility to training-ready displaced workers even after they are reemployed. They also argued that there should be a federal mechanism for distributing aid for education and retraining during recessions in order to counteract the tendency of state and local governments to cut education budgets. To encourage training in fields with higher returns, LaLonde and Sullivan suggested that extra support should be provided for courses in technical fields and health care, which are often more costly for community colleges to offer. Both investments in community colleges and subsidies for retraining should be accompanied by financial aid policies that encourage students to complete their training. New policies should also evaluate the returns from different programs, establish standardized curricula, and disseminate information to help students make informed choices.

Matching workers to new jobs. The country needs to improve how it matches workers with jobs they are suited for. Losing a job is a harrowing experience for workers and their families. Some are able to adjust without government aid. These workers are generally well-educated and have substantial savings. Other displaced workers lack these advantages. Faster and better job matching would have national economic benefits, reducing the waste of resources from prolonged unemployment and underemployment.

One approach would be to augment one-stop career centers. In a 2009 Hamilton Project paper, “Strengthening One-Stop Career Centers: Helping More Unemployed Workers Find Jobs and Build Skills,” Louis S. Jacobson, a senior fellow at the Hudson Institute and visiting professor at Georgetown University, noted that improving the job search assistance and counseling services that one-stops offer—in particular, steering workers toward high-return training—could help workers improve their skills and match up with better jobs.

Another approach is to help workers relocate to communities with greater job opportunities. Moving can be a good way to find work but involves costs that are sometimes difficult to bear, especially during hard times. The slowdown in mobility that typically occurs during recessions has been even more pronounced during the Great Recession: residential mobility rates in the United States have fallen to their lowest levels since World War II.

To give job seekers the resources to move for work, Jens Ludwig of the University of Chicago and Steven Raphael of the University of California at Berkeley called for creating a loan program to finance employment-related moves. As discussed in their Hamilton Project paper, “The Mobility Bank,” monthly loan repayments would depend on reemployment earnings. The mobility bank would be accompanied by increased use of national job banks that help people search more broadly for jobs. If workers have better job opportunities elsewhere, and a mobility bank to loan them the money to move, they would be more likely to leave distressed communities. This could improve their long-term earnings while also speeding recovery in distressed areas by reducing the glut of jobless individuals.
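
To show what tying repayments to reemployment earnings could mean in practice, here is a minimal income-contingent repayment sketch. The 10% payment share, the $20,000 income floor, and the 5% interest rate are placeholders; the Ludwig and Raphael paper does not specify a formula.

    # Hypothetical income-contingent repayment rule for a mobility-bank loan.

    def monthly_payment(loan_balance, annual_earnings,
                        income_floor=20_000, payment_share=0.10, annual_rate=0.05):
        """Payment owed this month: a share of earnings above a floor,
        capped so the borrower never pays more than the balance plus interest."""
        interest = loan_balance * annual_rate / 12
        affordable = max(0.0, (annual_earnings - income_floor) * payment_share / 12)
        return min(loan_balance + interest, affordable)

    # A worker with a $5,000 moving loan who earns $32,000 after relocating
    # would owe (32,000 - 20,000) * 0.10 / 12 = $100 per month.
    print(monthly_payment(5_000, 32_000))  # 100.0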

Improving policy by learning what works

Local development strategies in the past have included many kinds of programs, but policymakers lack good evidence for which programs work. Sometimes outcomes are not tracked at all. In other cases, there is no rigorous attempt to separate program effects from other economic and policy trends. Programs often are not designed with evaluation in mind, even when slight modifications would make them easier to study. The lack of evidence about efficacy undermines support for even those programs that may be working and creates a perception that local development projects are not cost-effective investments.

Every new policy to speed up recovery in hard-hit communities should be accompanied by constant and rigorous evaluation so that the most promising approaches can be scaled up. This means a financial commitment and the political will to distinguish between good programs and bad ones using the most credible empirical methods feasible. With knowledge of what works, the nation will be able to help future distressed communities avoid or shorten the decades-long period of adjustment that previously distressed communities have endured.

From the Hill – Winter 2011

R&D funding faces budget cuts

In the wake of Republican gains in the 2010 midterm elections, funding cuts to rein in soaring federal budget deficits have jumped to the top of the congressional agenda. All domestic discretionary spending faces cuts, including R&D. Meanwhile, as of mid-December 2010, Congress still had not approved any appropriations bills for fiscal year (FY) 2011, which began October 1.

Although President Obama has directed non–national security agencies to reduce their budget requests for FY 2012, he has indicated that he might exempt R&D spending from cutbacks. “I don’t think we should be cutting back on research and development,” he said at a November 3 news conference.

However, the 2010 Republican agenda, called A Pledge to America, calls for severe cuts in government spending, including R&D. Republicans want to cut the level of discretionary spending to FY 2008 levels, which would result in multibillion-dollar cuts in the federal R&D investment. (The incoming Republican leadership recently clarified that the FY 2008 target would be adjusted for inflation.)

The hardest hit agencies would be the National Science Foundation (NSF), the Department of Energy’s Office of Science, and the National Institute of Standards and Technology. All were authorized at higher spending levels under the 2007 America COMPETES Act and have since received major funding increases. The NSF cuts would mean more than 1,400 fewer new awards than in FY 2010.

Other agencies that the Republicans are targeting for sharp cuts include the National Oceanic and Atmospheric Administration and the Departments of Education, Transportation, and Interior.

In addition, budget rollbacks would make it difficult for agencies to sustain their current level of commitment to multiagency initiatives such as the U.S. Global Change Research Program. Republicans have also proposed a hard cap on future growth of the discretionary budget, which would make it more difficult for Congress to implement R&D growth initiatives such as the President’s Plan for Science and Innovation and the America COMPETES Reauthorization Act, currently under consideration in Congress.

Although A Pledge to America does not specifically address R&D investment, Rep. Ralph Hall (R-TX), the next chairman of the House Science and Technology Committee, said in a November 3 statement that his priorities would be checking runaway spending, conducting strong oversight, and using science policy to drive innovation.

The Obama administration has continued to push R&D initiatives. In a November speech, the president announced his proposal for “a more generous, permanent extension of the tax credit that goes to companies for all the research and innovation they do.” The proposal would raise the rate of the credit from 14% to 17% for companies choosing to calculate their credit using the “simpler” formula.
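
As a rough illustration of what the rate change means, consider a hypothetical company with $10 million of qualified research spending above its base amount under the simplified formula (the base calculation itself is not spelled out in the article):

    \[
    \text{credit} = r \times (\text{qualified research spending} - \text{base}), \qquad r: 14\% \rightarrow 17\%
    \]
    \[
    0.14 \times \$10\ \text{million} = \$1.4\ \text{million}
    \quad\longrightarrow\quad
    0.17 \times \$10\ \text{million} = \$1.7\ \text{million}
    \]

that is, roughly a 21% larger credit for the same research spending.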

Obama’s proposal comes in the wake of the release of the 2010 edition of the EU Industrial R&D Investment Scoreboard, which reported that top U.S. companies cut R&D spending by 5.1% in 2009. Top European Union companies cut spending by 2.6%. Worldwide, the drop in R&D spending was only 1.9%, because of flat or growing investment by Asian countries.

Stem cell research funding in jeopardy

Strong support from the Obama administration appeared to clear the way for largely unfettered federal funding of human embryonic stem cell research. But in a surprising decision on August 23, a federal judge issued a preliminary injunction barring the National Institutes of Health (NIH) from funding the research, ruling that it was illegal under federal law.

The U.S. Justice Department quickly moved to appeal the injunction, and on September 27, a three-judge panel of the U.S. Appeals Court for the D.C. Circuit ruled that federal funding of human embryonic stem cell research could continue while the appeals process moved forward.

In his August decision, U.S. District Court Judge Royce C. Lamberth said that embryonic stem cell research was illegal under the Dickey-Wicker Amendment. The amendment, an annual feature in NIH’s appropriation bill, prohibits the use of federal funds on research that destroys an embryo. For more than a decade, the government has allowed the use of public funds for research on human embryonic stem cell lines as long as the derivation of the cells, which results in the destruction of an embryo, was carried out with private funds. The judge disagreed, ruling that it is not possible to disentangle embryonic stem cell research from the derivation of the stem cells.

The case before Lamberth had been brought by two scientists, James L. Sherley of the Boston Biomedical Research Institute and Theresa Deisher of AVM Biotechnology in Seattle, who argued that the funding of embryonic stem cell research would cause them “irreparable injury” by increasing competition and therefore potentially taking funds away from adult stem cell research, their area of work.

For FY 2011, NIH has estimated that $358 million of its budget would go to human nonembryonic stem cell research and $126 million to human embryonic stem cell research.

After the August 23 decision was announced, NIH was forced to shut down its intramural human embryonic stem cell experiments and halt any grants or renewals that had not yet been paid out. Research that was already in progress was allowed to continue until at least the next renewal period.

Oil spill investigations continue

Although Congress has not enacted legislation to reform offshore oil drilling, investigations into the April oil well explosion and spill in the Gulf of Mexico and its effects on the Gulf Coast have continued. The House Energy and Commerce Subcommittee on Energy and Environment held a hearing on seafood safety, and the National Commission on the BP Deepwater Horizon Oil Spill and Offshore Drilling held several meetings. Meanwhile, plans for research and Gulf restoration have begun.

At an August 19 House hearing, which was held while the House was out of session, Subcommittee Chairman Edward Markey (D-MA) was the only representative to attend. Markey cited reports from the Food and Drug Administration that found there is little chance of bioaccumulation in seafood of the chemicals used to disperse the oil, but noted that this is the case for short-term exposure only. Markey stated that it is not known what long-term exposure will do to seafood, nor is it known whether dispersants will increase other toxic compounds in seafood.

Ian MacDonald of Florida State University testified about potential future declines in seafood species. He predicted that the Gulf leak will reduce ecosystem productivity by 10 to 15%, although he acknowledged that this will be difficult to quantify. He stated that the decline will push some species past the tipping point, with tuna, shrimp, fiddler crab, and a type of clam being the most affected. He called for a multiyear Gulf monitoring program to determine the oil’s effects.

Seafood safety was also debated at the September 27 Commission meeting. Bill Walker, director of the Mississippi Department of Marine Resources, said that no seafood has been deemed contaminated, but Steven Murawski, director of scientific programs and chief science advisor for the National Oceanic and Atmospheric Administration Fisheries Service, defended the temporary closing of fishing areas to safeguard public health.

Another late-September meeting focused on research needs. Nancy Kinner of the University of New Hampshire stated that before the well explosion, oil spill R&D was underfunded. More money, she said, should be targeted to Arctic and deep-water environments and further monitoring of the Gulf ecosystem. Environmental Protection Agency (EPA) Administrator Lisa Jackson told the commission that more research was needed on dispersants.

In addition to threatening more than 1,000 miles of coastline and 38 species protected under the Endangered Species Act, the oil spill has shed light on the need to repair the overall Gulf ecosystem, which has lost a land area about the size of Delaware during the past 50 years. The dams and levees along the Mississippi River that hold back the sediment that renews Gulf marshes are the main culprits of the erosion.

A Gulf Coast restoration plan, written by Secretary of the Navy Ray Mabus, recommends that Congress dedicate a significant portion of any penalties for the Deepwater Horizon oil spill to a fund to address long-term Gulf recovery and restoration efforts. The report recommends the establishment of a Gulf Coast Recovery Council to manage the funds, coordinate recovery projects, and engage local governments and citizen stakeholders. While awaiting congressional action on the plan, President Obama issued an executive order creating an intergovernmental advisory body, the Gulf Coast Ecosystem Restoration Task Force, chaired by EPA Administrator Jackson, as a bridge to the Recovery Council to begin restoration efforts.

Some research plans are also moving ahead. BP announced plans for the implementation of a $500 million Gulf of Mexico Research Initiative (GRI) to study the environmental and public health effects of the spill. The fund will be managed by a board of scientists from academic institutions, which will be appointed by BP and the Gulf of Mexico Alliance, a partnership of the Gulf states. Research will be focused in five areas: physical distribution and ultimate fate of contaminants; chemical and biological degradation of the contaminants; environmental effects and ecosystem recovery; technology developments for oil spill detection, mitigation, and remediation; and human health.

Funds will be directed primarily to academic institutions in the region, but partnerships with institutions based outside the Gulf will be encouraged. A press release noted, “All GRI-funded research will be independent of BP, and the results will be published in peer-reviewed scientific journals with no requirement for BP approval.” BP has already provided $30 million for research at universities in the Gulf region and $10 million for research on human health at NIH.

Export control reforms announced

In late August, the Obama administration provided its recommended framework for a proposed reform of the nation’s export control system. Earlier this year, the president had commissioned an interagency review of export controls based on an executive order issued at the end of the Bush administration.

The administration’s proposed reforms would attempt to harmonize and simplify the two major export control systems. The State Department controls the export of weapons and weapons components, and the Commerce Department controls commercial exports that might have a military application.

Although these dual systems both attempt to control the inadvertent release of sensitive technologies overseas, they are implemented in a dissimilar fashion. This point was emphasized by a National Research Council committee in the 2009 report Beyond Fortress America, which called the export control regime an antiquated artifact of the Cold War.

The Commerce Department’s control list, for example, is written in very narrow terms and specifies precise technical parameters to determine whether an item is controlled or not. The State Department’s list follows very broad categories, and whether an item is controlled depends on whether it falls into one or more categories. These rules make it difficult, for example, for universities conducting research in a range of engineering fields to determine whether basic research projects that involve foreign students or collaboration with international universities violate export laws.

After the interagency review, the administration announced the results of an intensive scrub of the categories on the State Department’s Munitions List. It found, for example, that of the 12,000 items in one category, about 32% could be decontrolled entirely and 26% could be moved to the less-stringent Commerce Control List.

The administration’s statement foreshadows a major simplification of the export control process. The White House has requested that the remainder of the Munitions List be reviewed, and the State and Commerce Departments will develop new criteria for determining what should remain on the control lists. The administration asked that the agencies create a “tiered” system to determine items that should receive “stricter or more permissive levels of control” based on their ultimate destination, end use, and end users. Furthermore, it recommends that a “bright line” be created between the two existing control lists in order “to clarify jurisdictional determinations and reduce government and industry uncertainty” on whether an item falls under the control of the State or Commerce Department.

The ultimate goal is to work toward developing a single export control list, but reactions to the proposed changes have been mixed. Robert Berdahl, president of the Association of American Universities, lauded the announcement as “an important first step toward achieving meaningful and sensible export control reform,” saying that the reforms would “protect national security without disrupting university research” and that they were “intended to ensure that the world’s best talent can participate openly in that research.” Gary Milhollin, director of the Wisconsin Project on Nuclear Arms Control, on the other hand, criticized the proposed reforms, saying that “we have already reduced controls to the bone.”

Science and technology in brief

  • President Obama on October 11 signed into law a bill authorizing funding and activities for the National Aeronautics and Space Administration (NASA). The law authorizes NASA funding for three years ($19 billion in FY 2011 to $19.96 billion in FY 2013), extends the life of the International Space Station by five years to 2020, provides support for private firms to ferry cargo and people into near-Earth orbit, funds an additional Shuttle mission, and invests in a heavy-lift vehicle program that will make use of expertise from the now-abandoned Bush administration Constellation program as well as the Shuttle.
  • Concerned about the supply of rare-earth elements and minerals essential to producing a wide variety of high-technology goods, the House on September 29 passed a bill that would create the Rare Earth Materials Program within the Department of Energy to quantify U.S. stocks of rare-earth metals and find new ways to collect, use, reduce, reuse, and recycle these metals. A similar bill is pending in the Senate. Once the leading producer of rare-earth metals—used in sectors ranging from automotive to clean energy to national security—the United States now imports all of its supplies and does not have any active mines. Even if the United States could harvest its rare-earth metals, it does not have the manufacturing expertise to process them; the metals would need to be shipped to China for processing. China currently supplies about 97% of the world’s rare-earth metals, and there have been concerns that it might restrict exports to benefit its own industries.
  • On October 5, President Obama signed into law the Improving Access to Clinical Trials Act of 2009, aimed at facilitating participation in clinical trials for treating rare diseases. The law excludes up to $2,000 a year in compensation for participating in such trials from the income used to determine eligibility for Supplemental Security Income and Medicaid.
  • In August, the government announced a $1.9 billion initiative to reform the system for identifying and manufacturing drugs and vaccines needed for public health emergencies. The proposed strategy is the result of a Department of Health and Human Services report on medical countermeasures. Meanwhile, the President’s Council of Advisors on Science and Technology released its plan for improving the nation’s vaccine response.

“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science (www.aaas.org/spp) in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

Accelerating the Pace of Energy Change

Scientists and engineers invariably see technology innovation as the primary, if not sole, driver of energy transformation, but changing the energy system involves much more. Economic, political, and business aspects determine whether any new technology will have a meaningful impact and, indeed, ultimately govern our ability to meet the substantial challenges that energy poses.

The U.S. energy challenges are separable into two largely independent issues. The first is energy security, which is mostly about imported oil and transportation. The United States imports roughly 60% of its oil, sending about $1 billion per day offshore. Those imports also make us dependent on the actions and fates of a small number of countries. Recognizing this situation, the Obama administration has set a goal of reducing crude oil use by 3.5 million barrels a day within a decade, which corresponds to about 25% of what we use every day for transportation.
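As a rough consistency check on these figures (the total-consumption and oil-price numbers below are illustrative assumptions, not figures from this article):

\[
\frac{3.5\ \text{Mbbl/day}}{0.25} = 14\ \text{Mbbl/day of implied transportation use};
\]
\[
19\ \text{Mbbl/day (assumed total use)} \times 0.60 \approx 11\ \text{Mbbl/day imported};
\qquad
11\ \text{Mbbl/day} \times \$90/\text{bbl (assumed price)} \approx \$1\ \text{billion/day}.
\]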

The second energy challenge is to reduce greenhouse gas emissions, chiefly carbon dioxide, most of which emanates from stationary sources (heat and power). There is an urgency to demonstrate U.S. leadership in dealing with what is intrinsically a global problem, requiring action from the developing world as much as from the developed world. Further, because energy infrastructure lasts for decades, it is important not to lock in future emissions. The conventional coal plants we build today will still be emitting greenhouse gases in the middle of the century. If we can develop the low-emission technologies that the world is going to want in the next several decades, the United States will benefit economically. For all of these reasons, the administration has set a goal of reducing U.S. energy-related greenhouse gas emissions by 17% in the next decade and 83% by 2050 (relative to a 2005 baseline).

Meeting these national energy goals will require significant changes in the ways we produce, deliver, and use energy. The challenge is to identify and implement the most material, timely, and cost-effective solutions. Neither time nor resources are abundant to effect these changes.

Why has energy supply changed so slowly?

History gives some guidance as to how and how rapidly our energy system could change. Figure 1 shows U.S. primary energy supply during the past 160 years [units are quadrillion BTUs (Quads)]. The rise of coal consumption to power the industrial revolution and of oil consumption to fuel the mobility revolution is evident, as is the more recent growth of natural gas and nuclear energy in the latter half of the past century. One lesson from this chart is that consumption of virtually every energy source has increased as the country has developed. Another is that new energy sources have largely supplemented existing sources, not displaced them.

Figure 1 also shows that the most rapid change in any source was the drop in oil consumption in the late 1970s and early 1980s in response to the oil embargo; the slope is about one Quad per annum. The country then used a total of about 80 Quads per year (now slightly less than 100), so the historical drop in oil consumption was an ~1% annual shift in the energy supply. This continued for about five years before reversing in subsequent decades. Because we are aspiring to reduction goals that amount to 17 to 20% over the next decade, the oil embargo analog gives a sense of the scale of change necessary, which must also be accomplished in a way that enhances our prosperity and quality of life.
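Put as a simple worked comparison, using only the figures in this paragraph:

\[
\frac{1\ \text{Quad/yr}}{80\ \text{Quads}} \approx 1.3\%\ \text{per year (fastest historical shift)};
\qquad
\frac{17\text{--}20\%}{10\ \text{yr}} \approx 1.7\text{--}2\%\ \text{per year} \approx 1.7\text{--}2\ \text{Quads/yr on today's} \sim\!100\ \text{Quads}.
\]

In other words, the goals require sustaining for a full decade a rate of change somewhat faster than the most rapid shift the U.S. energy system has ever produced, which itself lasted only about five years.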

The proportions of primary energy sources (see Figure 2) show that nonhydro renewable energy technologies, including wind, biomass, and solar power, are currently a small fraction of the total. The good news is that the energy mix changes because of technology innovation, economics, and politics. The sobering news is that change has taken place historically only on decadal time scales. We must learn how to accelerate the pace of change if we are to meaningfully and promptly address energy security and greenhouse gas emissions.

The rapid evolution of information technologies during the past several decades is often taken as a demonstration of just how rapidly energy could change. For example, the Department of Energy’s (DOE’s) world-leading computing systems have become 10,000 times more powerful during the past 15 years. And personal audio and video equipment went from tapes and CDs a decade ago to flash memory and MP3 players, during which time the mix of deployed energy technologies remained virtually unchanged. Effectively addressing energy challenges must start with an understanding of why the energy supply has changed so slowly and, of course, what we can do to accelerate that change.

The energy demand and supply sectors are really quite different in how they change. For example, until very recently, we were purchasing roughly 12 million light-duty road vehicles each year, each expected to last 10 to 15 years. That scale and turnover provide ample opportunity to practice engineering, to optimize the fuel/vehicle/infrastructure system, and to come down learning curves. Moreover, most people do not buy those automobiles to make money, nor do they buy them with multidecade horizons. The supply-side contrast is stark: In 2008, the United States built only five new coal plants, 94 new gas plants, and about 100 new wind farms, and those supply technologies were purchased with the expectation that they would make money and last for decades. As a consequence, evolution on the supply side is much more difficult and slower than it is on the demand side.

There are four barriers to transforming our energy supply. The first is the scale of things. Large power plants, refineries, and transmission lines are each multibillion-dollar capital investments that must work with the existing infrastructure.

The second is the ubiquity of energy for heat, light, and mobility. That ubiquity means that many people are interested in it, and those interests don’t always align.

The third factor is incumbency. Energy is a commodity; electrons on the grid are anonymous, and fuel molecules are pretty much all the same. Any new technology is then going to have to compete on cost, and the margins are going to be thin.

Finally there is the longevity factor. Large energy assets last for decades (for example, nuclear power plants are being licensed to operate for 60 years or more in the United States). Businesses must bet a large amount of capital on what will happen 30 or 40 years out, with the expectation of commodity-level returns. All four of these factors combine to make the energy supply change very slowly under current circumstances.

The role of the private sector

To accelerate energy change, we must recognize the central role of the private sector and its interaction with the government. The U.S. energy system is almost entirely in the hands of the private sector, which therefore must be the executive agent for any change. Although the government has played a prominent role in technology development through R&D, pilot facilities, and commercial-scale demonstrations, scaling to commercial deployment and operations has traditionally been the responsibility of industry. Further, government alone cannot finance large-scale energy transformation; the total annual investment in the U.S. energy system is some $200 billion, whereas the DOE’s entire annual budget, including basic research, energy research, waste cleanup, and nuclear security, is $25 billion, which is no more than the capital budget for a single large energy company.

Energy transformation requires deploying innovative technologies, but it is not industry’s goal to deploy the most innovative or greenest technologies, although sometimes innovative or green approaches can be useful tactics toward profit. The multidecade time horizons for capital investments in the energy supply business highlight the centrality of predictable return on investment.

Combining the central role of the private sector in energy transformation with the priorities of industry leads to the conclusion that the energy supply will change only when business finds it profitable or it is mandated. President Obama clearly stated as much in his 2010 State of the Union address when he spoke of the need to “finally make clean energy the profitable kind of energy in America.”

Yet despite private-sector ownership, regulatory authority over the energy sector is held by a host of federal, state, and local governments, a patchwork that often inhibits changes in energy supply. For example, the roughly 3,000 public utility commissions and governing bodies that oversee the electrical grid all have different motivations, most of which do not include a mandate for energy change or issues germane to transformation. Examples include a focus on low rates rather than on total consumer cost, and a disregard for system efficiency and carbon externalities.

To illustrate the impact of regulation, tax policy, and profit on energy transformation, one need look no further than the wind and ethanol industries during the past decade. Since 1999, the link between the production tax credits that make these industries profitable and their pace of growth has been unmistakable (see Figure 3).

Although it is obvious that regulation and economics influence the actions of the private sector, if we are to rise to the president’s challenge of making clean energy profitable, it is important to understand how these forces drive business to act. At the heart of business is the allocation of capital, either the capital that you have or the capital that you might borrow. The goal of a business is to invest that capital for a legal and predictable return by balancing risk against reward. Riskier investments must offer higher rewards than more conservative ones. Historically, electricity provides a modest but stable return on investment of around 5%, with annual returns varying by only about 1%.

The magnitude and stability of return are central to business goals, and many elements make up the assessment of any single big energy project. One element is technology risk: If a new technology is deployed, is it certain to work? Another is construction risk: Can the project be built on time and on budget? Another is supply risk: If the project converts switchgrass into ethanol, what is the confidence in switchgrass prices two decades into the future? Operations risk: Can the facility be run efficiently and reliably enough to hit profitability goals? Policy risk: Will shifts in policy over a few decades affect the economic viability of the business model? Market risk: How will the prices of the product or services sold fluctuate in the future?

Technology is therefore only one element in judging the viability of a project, and it is often the least influential in moving a project along. Operations and marketing issues are often much bigger factors in the decision to proceed. So when new technologies are introduced in large projects, conservatism is the norm. Before a company will spend billions of dollars on a technology, its performance must be certain and its financial advantage significant; otherwise, the technology will not be deployed.

Although our near-term goals require rapid deployment of new technologies, the time and expense associated with doing so in energy supply are great. A technology must be taken from a laboratory, where one might invest millions of dollars producing a fuel at a hundredth of a barrel a day (about half a gallon), and moved through pilot facilities, demonstration, and ultimately to full-scale production and deployment, where billions of dollars are spent for daily production capacity on the order of 100,000 barrels. This is a decade-long process because of the confidence required to take each step. For example, the SOx and NOx scrubbers used on coal plants today required 40 years and four generations of technology to reach their current state of maturity.
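For scale, taking the standard 42-gallon oil barrel as given, the parenthetical and the implied scale-up factor work out roughly as:

\[
0.01\ \text{bbl/day} \times 42\ \text{gal/bbl} \approx 0.4\ \text{gal/day (about half a gallon)};
\qquad
\frac{100{,}000\ \text{bbl/day}}{0.01\ \text{bbl/day}} = 10^{7}.
\]

The lab-to-deployment gap is thus roughly seven orders of magnitude in capacity, which is why each intermediate step exists to build confidence.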

Many in the venture capital world are optimistic about their ability to transform the energy system. That could well be true on the demand side, where unit sales volumes can allow more efficient technologies to be more easily refined and scaled. But the average venture capital fund is about $150 million, with individual investments of $3 million to $5 million (and, in unusual cases, $10 million to $20 million). These sums are 100 to 1,000 times smaller than the cost of a single power plant. Additionally, in energy supply one invests with a multidecadal perspective, whereas the average venture firm exit time is five to seven years. Also discordant with the venture capital business model are the high illiquidity of big infrastructure assets and the small and conservative returns noted above. All of this need not and will not discourage venture capital activity in energy supply, but it is important to understand the character and nuances of the big energy business. It is often said that big energy companies don’t understand risk, but the truth is precisely the opposite: The job of a large energy firm is to manage risk, but on a 30-year time scale and with much larger sums of money than other businesses usually handle.

How can government accelerate change?

The discussion above shows that to effect significant changes in our energy supply, we in government must fully engage with the private sector, understanding that mindset and thinking like a business. Thus, the first and most important consideration is predictability. Given the long time horizons, policies that are uncertain or fluctuate every few years will not get us anywhere and in fact are often counterproductive. And of course the policies need to be well considered; we should not be starting out on paths that are suboptimal in terms of technology, economics, or environmental impact.

A second consideration is to play to business’s risk/reward calculus and do what we can to mitigate risks for the early movers. The DOE’s National Laboratories (large, multidisciplinary, mission-oriented research organizations) could play a much greater role here. The provision of user facilities as full-scale test beds for new technologies could also be very important for mitigating technology risk. Much as the DOE’s scientific user facilities have greatly advanced understanding of the physical world, technology user facilities could provide significant insight into the operation of innovative technologies and components at scale. For example, a microgrid test bed where various demand-side technologies can be tried in real environments, or a combustion facility where gas treatment, CO2 absorption, and other components could be tested at commercial scale under operational conditions, would reduce the uncertainties and risks surrounding the deployment of new technologies.

Market risk can be mitigated by establishing renewable or low-carbon power standards. Although various states are starting to do that, we lack national standards that would create a market for new technologies. Renewable fuel standards do exist and function in much the same way, although it is important to set these standards carefully. Longer-term power purchase agreements similarly damp out market risk for the deployment of innovative technologies. These are all good things that the government can and should do to mitigate market risk.

To address capital risk, the DOE has been executing loan guarantees. Earlier in 2010, the department issued guarantees for the first new U.S. nuclear plants in 30 years. Other programs are stimulating technological energy innovation in this country, as well as the demonstration and deployment of new energy technologies at commercial scales.

The DOE is also working to accelerate the transition of energy technologies out of the laboratory. The Secretary of Energy and the leadership team have made connections across organizational units to bring the department’s resources to bear more productively on the energy problem. We have set up a number of new mechanisms, including Energy Frontier Research Centers, which focus on the basic research side of energy technologies and the scientific barriers to technological progress, and Energy Innovation Hubs, which span basic research and technology demonstration and concentrate resources where opportunity exists for rapid commercialization of discovery. The Advanced Research Projects Agency–Energy (ARPA-E), inaugurated in the administration’s first year, funds high-risk projects in the search for new technical platforms, with the expectation of soon hitting a few home runs. Finally, the DOE is executing demonstration projects with industry to accelerate technologies out of the development cycle and into commercial viability. In doing so, we must do a better job of capturing the knowledge from each demonstration and making it widely available to the community trying to solve these problems.

Simulation is an important tool the research community can use to mitigate risk. The United States currently leads the world in applying high-performance computing to realistic physical models. That capability is largely resident in the DOE’s science and weapons programs and principally stems from a deliberate effort started in 1995 to understand the nuclear stockpile in the absence of underground nuclear testing. That program accelerated computing capabilities by a factor of 10,000 in little more than a decade. More importantly, it taught us how to combine data from integral systems with the results of laboratory experiments into the codes that are truly predictive for complex systems. Simultaneously during the past decade, the DOE’s Office of Science has been making that same class of machines available to the open scientific community for problems ranging from climate modeling to protein folding to materials science.

It is now time to focus simulation capabilities on energy systems. Combustion devices (internal combustion engines, boilers, and gasifiers), fission reactors, carbon capture and storage facilities, and the electrical grid are all applications where validated high-performance simulation would help to optimize designs, compress design cycles, and facilitate the transition of technologies to scale. Right now, this is a unique U.S. capability and so is a competitive advantage that must be seized. Recent simulation successes of Cummins in diesel engines and Goodyear in tires give some sense of what is possible. But real impact will require more deliberate efforts to bring today’s simulation capabilities into broader commercial practice and to improve them for future applications. Foremost among the latter would be accelerating computing capability by another factor of 1,000 within the next decade.

We hope to have made a convincing case that, first, the energy supply business is not simple (there are nuances and aspects that may not be readily apparent to people not directly involved with the industry) and, second, that the government’s key role in catalyzing a transformation of the energy system is to mitigate risk for the private sector. Setting a predictable and well-considered playing field of policies and economics is the most important thing government can do. Beyond that, the DOE should facilitate large-scale demonstration projects and support precompetitive research, as well as the technology transfer necessary to move new technologies into the private sector.

Reducing Barriers to Online Access for People with Disabilities

As ever more education, employment, communication, entertainment, civic participation, and government functions move primarily or exclusively online, the high levels of inaccessibility on the Web and in Internet-enabled mobile technologies threaten to make people with disabilities into the second-class citizens of the information society. Unless the policy approach toward Internet accessibility for people with disabilities is reconceptualized for the current social and technological realities, people with disabilities will face exclusion from every core element of society.

In the United States, people with disabilities are the largest minority group. Some 54.4 million people, or 18.7% of the population, have a disability. This number will increase rapidly as the baby boom generation ages, because 53% of persons over 75 have a disability.

People with disabilities already face significant challenges in employment and education. Persons with disabilities face an unemployment rate more than three times that of the rest of the population and suffer similar gaps in educational attainment. Yet 75% of people with disabilities who are not employed want to work. Only 30% of high-school graduates with disabilities enroll in college, as compared with 40% of the general population. One year after high-school graduation, only 10% of students with disabilities are enrolled in two-year colleges, and a paltry 5% are enrolled in four-year colleges.

Despite the fact that the United States has the world’s most comprehensive policy for Internet accessibility and that clear guidance for creating accessible technologies already exists, designers and developers of Web software and hardware technologies in industry, academia, and government often exploit holes in existing policy to ignore the needs of people with disabilities. As a result, most Internet-related technologies are born inaccessible, cutting out some or all users with disabilities.

People with disabilities use the Internet and related technologies at levels well below those of the rest of the population. The main reason for this is not a lack of interest or education, but that the Internet is inherently unfriendly to people with many different kinds of disabilities. These barriers to access and usage vary by type and extent of disability. Since the advent of the World Wide Web, study after study has demonstrated the inaccessibility of Web sites and other elements of the Internet. Recent studies of the accessibility of U.S. government Web sites, for example, have found that at least 90% of the sites have major access barriers, even though they are supposed to have been accessible for nearly a decade under the law. The levels of accessibility in commerce and educational settings are even worse. The failure of the current policy approach can be seen in the results of these studies.

Challenging interfaces

People of differing abilities obviously face different challenges in accessing the Internet. Persons with visual impairments can be blocked when Web content is not compatible with screen readers, software applications that provide computer-synthesized speech output of what appears on the screen and of the equivalent text provided in the back-end code. Screen-reader users typically have problems when designers fail to put appropriate text tags on graphics, links, forms, or tables. For persons with motor impairments, such as limited or no use of fingers or hands, the barriers are created by cluttered layout, buttons and links that are too small, and other important navigability considerations (such as requiring the use of a pointing device) that can render entire sites and functions unusable. For persons with hearing impairments, the lack of textual equivalents of audio content can cut off large portions of the content of a site, and interactive Web chats and other conferencing features may be impossible. People with speech and communication impairments can also be excluded from interactive Web chats and other conferencing features. For persons with cognitive impairments, such as autism, dementia, or traumatic brain injury, issues of design, layout, and navigability are the difference between being able to use a site and not being able to use it. People with specific learning disabilities, depending on their nature, may face the same barriers as people with visual impairments or people with cognitive impairments. For people with seizure disorders, rates of flickering and flash can jeopardize their health.

Experiences with the Internet often vary by type of disability. The same Web site often offers opportunities for one group and excludes another. Consider Web-based distance education. A student who uses a wheelchair may find that being able to take courses online makes education much easier. But if the course Web site is not designed to be accessible for students with limited mobility in their hands, participation in the course may be limited or impossible. Similarly, a Web-enabled mobile device with a touch screen may seem like a miracle to a user with a hearing impairment and a nightmare to a user with a visual impairment, if it is not designed to provide alternative methods for interactions. Therefore, the Internet and related technologies present a complex set of problems for persons with disabilities, both as a larger population and as separate populations according to type of disability.

Although the range of potential barriers to persons with disabilities in the online environment is extensive, there are ways to develop and implement technologies so that persons with disabilities are included. There are known and achievable means to address the access barriers listed above. However, many developers of Web sites and related new technologies simply do not consider persons with disabilities when they create or update products. Yet the inaccessible Web sites and technologies that result from this disregard of accessibility run afoul of federal civil rights laws for persons with disabilities. Many of the issues of inclusion and exclusion online for persons with disabilities have been considered in law and policy, but the conceptions of disability under the law, exemptions from compliance, limited enforcement, and the inability of the law to keep pace with technological development all hinder the impact that the laws have had thus far.

Despite all of these barriers, the Internet has been justifiably viewed as having enormous potential for promoting social inclusion for persons with disabilities. In 2000, people with disabilities who were able to access and use the Internet were already reporting notably larger benefits from the Internet in some areas than was the general population. Adults with disabilities in 2000 were more likely to believe that the Internet improved the quality of their lives (48% to 27%), made them better informed about the world (52% to 39%), helped them meet people with similar interests and experiences (42% to 30%), and gave them more connections to the world (44% to 38%) than the general population. Currently, some Internet technologies are a significant benefit to people with specific types of disabilities, whereas others offer potential opportunities to all persons with disabilities.

Smartphones, although excluding many other persons with disabilities, have been a boon for those with hearing, speech, or other types of communication impairments, who can now use the phones to communicate face-to-face much more efficiently than they previously could. Similarly, with video chat, these same individuals can now carry on conversations over the phone in new ways. For the broader populations of people with disabilities, the Internet has a great deal of potential to create new means of communication and interaction through online communities devoted to particular types of disabilities. People who might never encounter someone with a similar disability in their physical environment can now interact directly with people with similar conditions worldwide. For people whose disabilities limit their ability to leave their homes, the Internet has the potential to provide a far greater world of interaction. People with disabilities even have the option to choose to live their online lives as people without disabilities, if they so wish.

Beyond the clear potential socialization and communication benefits, the Internet offers an enormous array of new ways to pursue education and employment. For people who might find it very difficult or even impossible to travel to a building for work or school, the Internet provides the ability to work or take classes from home. These potential benefits might be the greatest benefits in the long term for promoting social inclusion of persons with disabilities, given that the current levels of employment and education for persons with disabilities are catastrophically low as compared with the rest of the population.

Given the importance of all of these types of engagement with the technology, the lack of equal access to the Internet will become an even more serious problem in the future. As more activities in the areas of communication, employment, education, and civic participation move primarily and then exclusively online, the effects of unequal access on persons with disabilities will multiply and mushroom. As more functions are available exclusively online (for example, if taxes can be filed only online and the tax Web site is inaccessible), individuals with disabilities are placed in an untenable situation. Inaccessible online education alone could seriously erode the ability of people with disabilities to have a place in society. Yet the virtual world is currently extending the comprehensive physical exclusions of the past.

The extreme irony of the situation is that an accessible Internet holds enormous potential to heighten the inclusion of people with disabilities, facilitating telework, online education, participation in e-government, and the formation of relationships that overcome barriers and challenges in the physical world. We must create a new approach to public policy that will better eliminate the virtual barriers that have been built, ensuring that people with disabilities are not marginalized by society.

The reasons for online inaccessibility

What does it mean to have an accessible interface? In the technology world, it means that your computer interface will work for people with disabilities, many of whom use an assistive technology to access software, operating systems, and Web sites. Commonly used assistive technologies include a screen reader, which provides computer-synthesized speech output of what appears on the screen; speech recognition, which allows for hands-free input; and various alternative keyboards and pointing devices.

Guidelines from nongovernmental organizations provide concrete technical specifications explaining how to build accessible interfaces. Most Web accessibility regulations around the world, including those in the United States, are based on the Web Content Accessibility Guidelines, a set of standards from the World Wide Web Consortium.

Despite the existence of assistive devices and accessibility guidelines, if a Web site is not designed to be flexible enough to work with various assistive devices, there is nothing the user can do to use the site successfully. It’s not a matter of a user with a disability upgrading to a new version of software or purchasing a new hardware device. If a Web site isn’t designed for accessibility, no action on the user’s side will make interaction successful.

Yet the technical solutions are easy. They don’t involve any type of advanced coding. They generally involve adding appropriate markup, such as using good descriptive text to describe graphics, table columns, forms, and links. These solutions are the responsibility of Web site developers, designers, and Webmasters. No additional technical expertise is needed, just an awareness of the need to provide appropriate labels.
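As a minimal illustration of how mechanical these fixes and checks can be, the short Python sketch below (our own illustration, not a tool described here; the sample markup and file names are hypothetical) uses the standard library’s html.parser to flag two of the problems just described: images with no alt attribute and form fields with no associated label.

    # A minimal sketch of an automated accessibility check.
    # It flags <img> tags with no alt attribute and form fields with no <label for="...">.
    from html.parser import HTMLParser

    class AccessibilityChecker(HTMLParser):
        def __init__(self):
            super().__init__()
            self.images_missing_alt = []   # src values of images lacking alt text
            self.field_ids = []            # ids of form fields encountered
            self.labeled_ids = set()       # ids referenced by <label for="...">

        def handle_starttag(self, tag, attrs):
            attrs = dict(attrs)
            if tag == "img" and "alt" not in attrs:
                self.images_missing_alt.append(attrs.get("src", "<unknown>"))
            elif tag in ("input", "select", "textarea"):
                self.field_ids.append(attrs.get("id", "<no id>"))
            elif tag == "label" and attrs.get("for"):
                self.labeled_ids.add(attrs["for"])

        def report(self):
            unlabeled = [f for f in self.field_ids if f not in self.labeled_ids]
            return {"images_missing_alt": self.images_missing_alt,
                    "unlabeled_form_fields": unlabeled}

    # Hypothetical page fragment: one unlabeled image, one labeled field, one unlabeled field.
    sample = ('<img src="storm-track.png">'
              '<label for="zip">ZIP code</label><input id="zip">'
              '<input id="email">')
    checker = AccessibilityChecker()
    checker.feed(sample)
    print(checker.report())
    # {'images_missing_alt': ['storm-track.png'], 'unlabeled_form_fields': ['email']}

Real evaluation tools implement many more of the Web Content Accessibility Guidelines checks, but the underlying fixes are of the same character: adding descriptive markup that assistive technologies can read.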

At first glance, an accessible Web site won’t look any different from an inaccessible one. An accessible Web page is simply a well-coded Web page, or as one federal web manager told us, “the same coding techniques that make a Web page accessible also help with search engine optimization, because all of that markup helps search engines find and properly classify your Web page.”

When a Web site is designed to be accessible from the beginning, there are no additional costs involved. If a Web site has already been designed, the amount of time and money required to retrofit it for accessibility depends on the size and technical nature of the site. Obviously, adding more textual labels will take a greater amount of time, depending on the number of static Web pages that must be edited. If a Web site uses a content management system, often the page templates can be edited very quickly, so that the page layout itself is accessible. Then, it’s only up to the content developers to make sure that they have labeled their pictures and provided closed captioning or a transcript for multimedia. If a Web site is designed using inherently accessible technology such as HTML, the time and costs to make the site accessible should be limited. If a site is designed using an inherently inaccessible technology, such as a site built entirely in Flash, more time and expense will be required to make it accessible.

Although all people with disabilities may be affected by inaccessible Web sites, those who are blind or have low vision are often the most affected. Computer interfaces are still primarily visual, and when the nonvisual equivalents are not coded properly, blind or low-vision individuals may have access to none of the content. Individuals with hearing impairments can access most content, except for the audio, when developers don’t provide transcripts or captioning. Individuals with motor impairments, who may be unable to use standard keyboards or mice, may have trouble interacting with Web sites that provide content that is reachable only via pointing devices. Many of the design features that help blind users also help people with motor impairments, because making a Web site user-friendly for the blind means making sure that all content can be accessed via a keyboard, which is also what is needed by people with motor impairments. There is still relatively little research on Web accessibility for people with cognitive impairments, with the small body of literature indicating differing types of effects based on different cognitive impairments. Reflecting this lower level of attention, U.S. regulations have not included guidelines that meaningfully address cognitive impairments.

Government obstacles

Today, people with disabilities cannot access much of the information on federal Web sites that is available to those without disabilities. For example, in October 2010, some content on the Web site at ready.gov, which provides emergency readiness information, was inaccessible, meaning that blind people could not access the information about hurricane preparedness and were not even aware that the information was there. Web sites that offer information about government loans and jobs are also inaccessible. Many federal Web sites state that users with disabilities should contact them if they have any problems accessing content, but then the online contact forms are themselves inaccessible.

These accessibility problems exist despite the fact that the federal government has pursued a robust legal program to promote equal online access through Section 508 of the Rehabilitation Act, the Americans with Disabilities Act (ADA), the E-government Act, the Telecommunications Act of 1996, and other related laws. These laws create the most comprehensive legislative approach to accessibility in the world. U.S. law focuses on the civil rights aspects of disability, which emphasize the ways in which society can better allow individuals with disabilities to function. Following the lead of the federal government, many states have also passed accessibility laws, such as Maryland’s Information Technology Nonvisual Access law and California’s Information Technology Accessibility Policy.

However, compliance with and enforcement of these laws have not been very effective. A recent study found that more than 90% of federal home pages were not in compliance with Section 508. Although the Justice Department has responsibility for collecting data from federal agencies on compliance every two years, it has not collected any data since 2003. The section508.gov Web site, which is managed by the General Services Administration, was redesigned in the summer of 2010, but the new version is not in compliance with Section 508. For instance, the feedback form has form fields that are not labeled properly, so that although the form looks normal to a user who can see, a user who is blind cannot determine what each form field is supposed to represent.

Each federal agency has someone in charge of compliance with Section 508, and the names are available on the section508.gov Web site. But that apparently has had no impact on actual compliance. Federal Web sites are not required to have an accessibility policy statement, and when they do, the statements often provide no more information than “we are compliant with Section 508” or even offer misleading information. Many states have regulations similar to Section 508 that address state government Web sites, but compliance and enforcement are often nonexistent at the state level.

In addition to the fact that no government agency is in charge of accessibility, there are several other barriers to compliance with and enforcement of accessibility laws. People with disabilities have the responsibility to monitor accessibility and bring complaints and claims against agencies and companies that violate accessibility laws. This approach puts the burden on people with disabilities to enforce their own rights in a way that no other minority or traditionally disadvantaged group is asked to do. Even when people with disabilities do bring accessibility claims, they usually do not succeed. Under all of the disability laws, public and private entities can claim that the requested accommodation is not financially or practically reasonable and therefore is an “undue burden” under the law, meaning that the entity does not need to provide the accommodation because it represents too much effort in terms of time or cost. A final major problem is that the laws focus on the technologies, not the users of the technologies or the reasons why people use the technologies. Without a clear focus on the information and communication needs of the users with disabilities, the laws will permanently be far behind the current technologies.

The legal situation for private Web sites is even less clear. The courts in their interpretations of accessibility laws have sometimes created additional barriers to accessibility enforcement, often because of a limited understanding of the Internet and of accessibility. This problem is amply demonstrated by a 2006 federal district court opinion relating to the ADA, National Federation of the Blind v. Target, which found that the Target Web site, because it was closely integrated with physical stores, could be legally required to be accessible on the basis of that nexus. However, the same opinion explicitly limited the holding to companies with an online presence that is closely integrated with a physical presence. As such, the current case law says that Target must have an accessible Web site, but Amazon.com, Priceline.com, and Overstock.com may not need to worry about accessibility. It also implies that a company can have both physical and online presences, with the online presence being inaccessible, so long as the Web site is not tightly integrated with the physical presence. Although technology companies have started to include accessibility features more consistently in mainline operating systems and devices, such as Microsoft Windows 7 and the Apple iPad, those are designed to be used by millions of users, and they have the benefit of the large number of accessibility and usability experts at Microsoft and Apple. For instance, text-to-speech and screen magnification come preinstalled so that there is no need to purchase any additional assistive technology. Web sites, on the other hand, tend to be developed by millions of different companies and organizations, often without accessibility experts involved and, surprisingly, without even basic knowledge of accessibility.

Promising developments

In short, although the United States has a robust slate of laws related to online accessibility, the laws have not had the effect of making the Internet widely accessible to persons with disabilities in the United States. A large part of the explanation is that the existence of laws and regulations is not sufficient. There must also be established mechanisms to develop guidelines, monitor compliance, promote innovation, and provide meaningful enforcement powers to ensure compliance. In the United States, no agency charged with these functions exists. In fact, issues related to online accessibility are spread across agencies, often with no group responsible for monitoring or enforcing the laws and regulations, which include undue-burden loopholes that allow covered entities to avoid compliance.

However, there has been a recent surge in federal government focus on accessibility:

In March 2010, the Access Board released a draft for public comment of the first major revision of Section 508 and the accessibility provisions of the Telecommunications Act. The intent is that new guidelines, which are slated to be adopted in late 2010 or early 2011, will cover telephones, cell phones, mobile devices, PDAs, computer software and hardware, Web sites, electronic documents, and media players. If the new guidelines are implemented as suggested, the principles of accessibility will be strengthened considerably, although they continue to focus primarily on sensory and motor impairments. As mentioned earlier, this focus on sensory and motor impairments is primarily due to the concrete nature of the accommodations needed, along with the 30-year track record of existing research on how to successfully design computer interfaces for people with sensory and motor impairment, as compared to a shorter history with fewer concrete guidelines on how to design for people with cognitive impairments.

In June 2010, the Departments of Education and Justice took the unusual step of issuing a joint statement to educational institutions to say that the use of inaccessible e-book readers and similar devices by elementary, secondary, and postsecondary institutions was a violation of both the ADA and Section 508. Because many e-book texts and readers are not inherently accessible to readers with visual impairments, the movement by some universities to require the use of e-books was neglecting the needs of students and faculty with visual impairments. This means that educational institutions must consider the accessibility of not just the Internet and computers, but of newer mobile, Internet-enabled technological devices as well. There is no prohibition against using accessible e-book readers or other mobile devices, just the obligation for educational institutions to ensure that any of these that they adopt are not going to exclude students and faculty with disabilities.

In July 2010, a memo from the Office of Management and Budget and the federal chief information officer announced that although the Justice Department has not collected data on compliance since 2003, it would, in conjunction with the General Services Administration, begin to collect data on compliance again as soon as fall 2010.

In July 2010, the Department of Justice also began pursuing a series of revisions to the ADA to account for changes in technology and society since the passage of the law. These updates include accessibility of movie theaters, furniture design, self-service machines used for retail transactions, access to 911, and Web site accessibility. The latter is the most significant proposal, because it would clearly extend the coverage of the ADA to the Web sites of all entities covered by the ADA: local and state governments and places of public accommodation. In such a case, the requirements of the ADA would apply widely to entertainment and commerce online, resolving the disagreements in the courts about the applicability of the ADA to e-commerce. All of these strengthened regulations, however, will be of value only if they are actually complied with, monitored, and enforced.

Finally, in October 2010, President Obama signed the Twenty-First Century Communications and Video Accessibility Act of 2010 into law, which includes provisions to expand the use of closed captioning and video description for online content; facilitate accessible advanced communications equipment and services such as text messaging and e-mail; promote access to Internet services that are built into mobile telephone devices such as smartphones; and require devices of any size to be capable of displaying closed captioning, delivering available video description, and making emergency information accessible. As with previous technology guidelines, however, these new standards include the ability to opt out if an undue burden exists.

Promoting greater accessibility

Despite the laws and enforcement activities by federal and state governments, the goals and intended outcomes of accessibility deserve greater consideration than they receive. Clearly, the most important goal is increased access to the information, communication, and services that are increasingly central to education, employment, civic participation, and government. Additionally, accessibility laws and regulations have the potential to provide incentives for the creation of new technologies, to make existing technologies usable by a wide range of users beyond people with disabilities, to involve people with disabilities in the development of regulations and technologies, to foster the creation of better-quality tools for developers, to make evaluation easier, and to educate the general populace about the importance of equal access for people with disabilities. For instance, eBay has recently been working on making both its buying and selling experience accessible, opening up the door for users with disabilities as consumers, sellers, and entrepreneurs.

During 2010, the U.S. government moved to strengthen regulations and policies related to Web accessibility; however, this is not enough. Evaluating compliance, improving enforcement, and increasing the availability of information about compliance are all necessary to promote and improve Web accessibility. There are a number of potential actions that can be taken to promote accessibility within industry and government.

The key concept to keep in mind is that the technical solutions for Web accessibility already exist. Coding standards for accessibility already exist, as do evaluation methods and testing tools. Because the technical knowledge already exists, the key challenges are knowledge dissemination, compliance, and enforcement. The first four actions below can be readily implemented, whereas the last two would require a sizeable reconceptualization of the approaches to accessibility monitoring and enforcement:

Creation of a chief accessibility officer within the federal government, dealing specifically with information and communications technology accessibility. Microsoft has such an officer, which has led to improvement in the accessibility of its interfaces. Although the White House currently has a special advisor on disability policy, this person deals with every issue related to disability policy, not specifically with computer interfaces.

Compilation of best practices related to processes for monitoring and enforcement of Section 508 within agencies. Although the www.section508.gov Web site currently has a link for good practices, it does not provide information except for technical specs, and many of the links are broken. Agencies need to have guidance on how to monitor and enforce compliance within their organizations. For instance, the monitoring processes used by recovery.gov, soon to be published in the textbook Interaction Design, are the types of best practices that need to be documented from other agencies.

Increased openness and transparency requirements explaining how agencies can ensure that their Web sites are compliant with Section 508. For instance, although many federal Web sites have an accessibility statement simply noting that their site is Section 508-compliant, there is limited information about what features make the site compliant, how the site was evaluated for compliance, and how the site maintains compliance. There currently are no requirements for federal Web sites to provide any information on site accessibility. Providing this roadmap to users with disabilities would be helpful.

Frequent, publicly posted evaluations of site accessibility across the government would be helpful in bringing the problem to light. For instance, the progress dashboard on the open government page at the White House (http://www.whitehouse.gov/open/around) describes how agencies are making progress toward the goals required by the Open Government Initiative. But it would be helpful to have similar data posted about agency progress toward accessible Web sites.

Altering laws to reduce the ability of covered entities to avoid compliance through undue-burden clauses. As noted above, these clauses have been widely used by corporations and government agencies to opt out of compliance with accessibility guidelines. Undue burden was originally conceived as a tool to be used in limited circumstances in which significant expense or effort would lead to the additional inclusion of only a small number of users or in which the expense or effort were simply beyond the resources of the organization. In practice, however, it is regularly used by companies and government agencies as a way to avoid many accessibility considerations, regardless of level of effort or expense. So long as these clauses exist, many accessibility guidelines will lack any meaningful force.

Finally, creation of a government enforcement agency devoted to accessibility monitoring and enforcement, which could be headed by the new chief accessibility officer. Rather than continuing a decentralized approach, such an agency could create regulations, monitor and enforce compliance, support research, and better include persons with disabilities in the development of accessibility regulations. A dedicated agency could also educate the public and government employees on the importance of accessibility as an issue of inclusion and civil rights.

Without changes such as these, people with disabilities will not be able to fully participate in online opportunities in education, employment, communication, and government. Simply put, people with disabilities need accessibility to be included as equal members of the information society. Public policy has promoted the rights of persons with disabilities in the United States for four decades, and as technology evolves, so must legal guarantees of rights for persons with disabilities.

Making Stories Visible: The Task for Bioethics Commissions

A little before lunchtime on December 6, 1957, when the United States made its first attempt to match the triumph of Russia’s Sputnik 1 by launching its own Vanguard TV3 satellite into orbit around Earth, David Rejeski was one of millions of wide-eyed American grade-schoolers, raised on a steady diet of science fiction stories, whose day was being thrillingly interrupted by the future. Chins propped against crossed arms, ears held closely to an array of radio speakers perched atop desktops—the glory of being allowed to bring their own radios to school, of all places, and actually take them out in the middle of class!—David and his classmates listened breathlessly to the broadcast. The disembodied voice of the announcer from Cape Canaveral trembled through the static; the strange, stormy rumble that marked the sound of liftoff spread across the airwaves. Unhappily, the Vanguard was an ill-fated rocket that never made it into space; it lost thrust just two seconds after blasting into the air, sinking back onto the launch pad like a bad firecracker and disappearing into the flames that exploded from its still brimming fuel tanks. The new year would have to come and go before the Explorer 1 finally became the first successfully launched U.S. satellite and the nation’s next dramatic move in the Cold War space race.

But for at least one little boy—already spending every spare moment building rockets and operating ham radios, already enshrined as the proud president of a basement chemistry club whose activities scared his mother half to death, already deep into a self-administered program of scientific study spanning the fields of aerodynamics, electronics, and physical matter—there could have been no greater rush than the one that came from huddling around the radio in school that day. It wasn’t, you see, just the news he was listening to. Eavesdropping on the fate of that slim 72-foot rocket plucked David straight out of the four bland walls of his classroom and plunged him into a sensationally exciting issue of one of his beloved Captain Marvel comic books. For when science was a tool caught between good and evil (as it seemed to be that morning, and as it seemed the day a few years later when David figured out how to put together a homemade Geiger counter so he’d know if it was safe to walk out of his house after the Bomb fell), the stories, books, and movies that filled the young boy’s imagination became a powerful way to understand how science functioned in the real world.

More than five decades later, David Rejeski has grown basketball-player tall and cultivated a rumpled shock of salt-and-pepper hair that brushes over his ears; together with a matching mustache, it gives him a little of the look of a leggy Einstein. He has big, graceful hands that he still doesn’t shy away from getting dirty. The first degree Rejeski earned was a B.F.A., and in one of his lives he dreams up and sculpts beautiful pieces of handcrafted furniture, like smooth hardwood tables whose tapering legs are inspired by the shape of chopsticks or whose surfaces bear the intricate texture of thousands of individually chiseled facets. In his work life, though, the one in which he finds himself wearing the uniform of suits and ties and uses those wood-calloused hands to gesture with broadly as he speaks before government officials, Rejeski grew up to be a scholar of science, policy, and technology. Among many other responsibilities, his current job as the director of the Woodrow Wilson Center’s Program for Science and Technology Innovation involves assessing how the public understands both the promise and the peril of emerging scientific endeavors. He studies, in other words, fields such as nanotechnology and synthetic biology that mark today’s bold new frontiers in science the way space travel did 50 years ago. And though it might seem surprising, he’s still got Captain Marvel on the brain.

These days, though, when Rejeski encounters stories about science, he’s a little less wide-eyed and much more savvy about where they come from and what they mean. He was paying close attention, for instance, on the bright May day this year when biologist J. Craig Venter announced his institute’s historic accomplishment: creating the first viable bacterial cell whose genetic material had been written in digital code and then synthesized in a lab. Venter was standing at a press conference podium wearing a sober blue jacket and shirt, not sitting cross-legged around a campfire with shadows at his back. His voice was nonchalant, even matter-of-fact, not theatrical. But Rejeski could see that Venter, who has a reputation for being a renegade researcher with little regard for the “rules” society attempts to place on science, was telling a powerful story designed to downplay the potential risks of synthetic biology. Previously, Venter had compared the process of working with genome base pairs to solving a jigsaw puzzle or connecting the spools and sticks of Tinker Toy pieces—the kind of analogy that has always irritated Rejeski mightily. “When you use those metaphors,” he grumbles, “when you talk about building blocks and Legos, it infantilizes the science. It becomes something that children can play with, and therefore it can’t be dangerous.” At Venter’s press conference, that storyline grew up a little; but it was still calculated to simplify what is an incredibly complex biological process.

“Storytelling and narrative are absolutely critical to science. The public uses stories to understand science, and so do scientists, whether they’re doing it on purpose or not.”

“This is the first self-replicating species we’ve had on the planet whose parent is a computer,” Venter said to a roomful of journalists. He spoke of long months spent “debugging” errors in the synthetic DNA and of “booting up” the cell into which it had been transplanted. Finally, he explained that the scientists who had created the cell, known as Mycoplasma mycoides JCVI-syn1.0, had “encoded” a series of messages into its genetic material, including the names of authors and key contributors, the URL of a Web site, and three literary quotations about the nature of discovery and creation.

By framing his work through the narrative of computer engineering, Venter was crafting a story about synthetic biology that presented it as a safe, repeatable, and controllable technology. Life, ran the story beneath his words, is essentially information. Organisms are information-processing machines. Creating life is like making a machine; if its design contains errors, we will find and fix them. And like a machine, the nature of a synthetic organism is so malleable to engineering that its DNA can be stamped with its creators’ intentions. “What Venter’s doing,” says Rejeski, “is making use of an engineering narrative that sends a message to the policy people and the public that all this has a high degree of controllability. People tend to think, well, engineers do a fairly good job. Most of the time, bridges don’t fall down. But a cell is essentially a stochastic system, and we don’t have that kind of control over it. Venter’s got enough of a microbiology background to know better. He’s using a reassuring story that makes everything seem much simpler and less risky than it really is.”

Science and storytelling appear antithetical. Science deals in a non-narrative form of rationality, offering facts where stories offer interpretations. But Rejeski pushes back on that easy dichotomy. “Storytelling and narrative are absolutely critical to science,” he will tell you. “The public uses stories to understand science, and so do scientists, whether they’re doing it on purpose or not.” One place where the two realms intermingle is the space Rejeski happens to inhabit every day: evaluating the human significance of new scientific discoveries. What is life? What would it mean to live in a world where humans synthesize life?

Lacking a single “objective” answer to these questions, our responses to them depend on framing and perspective—aspects of storytelling. The philosopher Fern Wickson made this clear when she closely examined nine common cultural and scientific narratives about nanotechnology, each “a story that begins with particular presuppositions and ends in support for particular areas of nanotechnology development.” In some, nanotechnology is shown as controlling nature; in others, transgressing its boundaries or treating its ills. Yet though these stories are clearly distinct, Wickson writes, each is presented “as a simple description of the way things are … this often masks the beliefs that underlie each of the different narratives and the research directions in which they tend to lead.” In other words, many narratives about science are invisible. Not recognized as stories built on particular assumptions and expressing particular points of view, they can seem to be simple accounts of reality. This is particularly true of stories that accumulate around emerging disciplines such as nanotechnology and synthetic biology, whose applications, implications, and limitations are not yet well understood by the public or scientists themselves.

Rejeski is among the few working in science who have made this issue their business, a fact that he laughingly admits can make him feel a little like Pandora: constantly opening the lid of a box most researchers and policymakers would rather keep shut tight. Yet with questions of law and policy—Should we press on with this technology? With what limitations?—the answers depend on the story one chooses. It is important to make those choices with our eyes open to the ways different stories, including those told by scientists and engineers, frame and interpret reality. This is the point Rejeski emphasized this past July during his invited testimony before the newly formed Presidential Commission for the Study of Bioethical Issues (PCSBI). Chaired by University of Pennsylvania president Amy Gutmann, the PCSBI is the government body that President Obama charged with assessing the risks and benefits of synthetic biology as soon as Venter’s feat went public.

At about 9 a.m. on July 9, Rejeski, dignified in a dark gray suit that hung just a hair too large on his shoulders and a striped tie that he reached up to smooth several times as he began to speak, took the place assigned to him in the cool carpeted conference room of the Ritz-Carlton hotel in downtown Washington, DC, where the PCSBI had chosen to hold its first round of meetings. To his front and sides were the 13 members of the commission and two fellow panelists; together, these central attendees were seated at tables that formed a closed square. Behind them, and out of Rejeski’s sight, about a half dozen rows of chairs were slowly filling up with members of the public. He didn’t need a good view to know that these probably weren’t teachers or electricians or firemen who just happened to have a personal fascination with genetics; instead, the audience was made up of a small and very specific set of people with a vested interest (money, mostly) in synthetic biology. Industry insiders. In fact, the meeting, which began with Gutmann introducing a designated federal officer to “make it legal,” resembled nothing so much as the formal gathering of a board. Which is perhaps why it was so much fun for Rejeski to know that besides graphs and other data, in a few minutes he was about to show these people slides of comic books, movie posters, video games, and cartoons.

He was the first speaker of the day, and he began simply enough. “Let me start,” he opened, “by saying that we have devoted about six years of our time … trying to bring the voice, or voices, of the public into the conversation about science policy on emerging technologies.” If you weren’t paying attention, you might have missed his next sentence, delivered almost as a throwaway as he searched on the table for the clicker he would need to control the rest of his presentation. It didn’t draw a laugh from the crowd but was obviously charged with a deeply dry humor that emerged from Rejeski’s sense of how little attention is paid to this kind of work. “In terms of how we do this?” he said, “It’s pretty easy: We talk to them.”

In the past few years, he’d traveled from Spokane to Dallas to Baltimore, Rejeski said, simply asking people what they knew and how they felt about synthetic biology. And what he’d found was that because most people don’t understand the science behind it, the combination of these two words tends to set off a fast-moving train of loose associations in people’s minds, fueled by half-remembered news stories. “The train,” he explained, “goes something like this: Synthetic biology, is that like artificial life? Is that cloning?” Rejeski’s pace, normally measured and thoughtful, became brisker as he counted out the links, which he said took most people about 15 seconds to get through. “Is that stem cells? Is that GMOs?” He stopped. Raised a pair of bushy eyebrows. When asked about the possibility of someone creating synthetic life, Rejeski explained, there was a clear trend among the people he met: “‘I’m worried about this.’ Over half. ‘I’m excited about it.’ Less than half.” But if they didn’t know much about this field of science, why exactly would public perception skew toward fear? Though most people in the room wouldn’t realize it, Rejeski’s answer would take him back to the little boy he’d once been.

For most of the past 10 minutes, the images appearing on Rejeski’s slides had been perfectly conventional. True, a Gary Larson cartoon had sneaked in that gave him great pleasure to include. (In the first panel of the original, a man admonishes his dog to stay out of the garbage; in the second, a word-balloon shows the only thing getting through to the dog: its name. Rejeski had searched the Internet for hours to find the cartoon, one of his favorites, and carefully modified it to show how the public understands scientific communications about synthetic biology. In his version, the balloon in the top panel read “synthetic bacterial cell genome…artificial DNA base pairs…sustain life replicating”; the one on the bottom, “blah blah SYNTHETIC blah blah blah LIFE blah.” When it went up, a smile came into view under the speaker’s mustache that he couldn’t quite hold back.) But otherwise, Rejeski’s testimony had largely been illustrated with a series of neat color-coded bar graphs depicting the vast amounts of data he’d collected. He’d “stuck to the script,” as he would later put it.

In the last minute of his testimony, though, his tone shifted. He returned, for just a moment, to the way in which he’d first begun to relate to science: through the lens of story. “Human beings,” said Rejeski, quoting the late novelist David Foster Wallace, “are narrative animals. That is how we understand science.” Even as he spoke, he clicked over to one of his last and most surprising slides—one that hadn’t even made it into the first version of his testimony, but that in the end he couldn’t resist using. It was full of stories. “This,” Rejeski began cheerfully, pointing to a colorful vintage comic book cover complete with a costumed superhero jetting across the sky, “is Captain Marvel and the Wonderful World of Mister Atom.” He gestured to the right, where he’d placed images from Spider-Man 2—the looming villain Doc Ock standing with his back to us, four long metallic tentacles twisting out of his back like snakes—and a frightening screenshot from an Xbox 360 game called Bioshock, set in a post-apocalyptic world populated by insanely violent, genetically mutated humans. Farther down, if you’d been in the Ritz-Carlton that day, you’d have seen the cover of Michael Crichton’s bestselling novel Prey: a black, buzzing cloud of tiny escaped nanobots darkening the sky like a Biblical plague; and an image from the new genetic engineering horror flick Splice: a bald, hoofed, three-fingered humanoid with huge blank eyes and a perky tail, crawling on top of a lab table.

“The thing that the scientists have to understand is that people will fall back on these narratives long before they will ever pick up a biology book. And they are incredibly pervasive, ubiquitous, and powerful.”

“These are deep, deep narratives,” Rejeski said. He described large circles in front of him with his hands as he talked, as if pushing the stories towards the commission members, willing them to understand their importance. Rejeski himself felt that importance keenly. These stories, he knew, were the primary source of the unease he’d sensed about synthetic biology from the people he’d talked to; these stories, functioning mostly on an unconscious level, were the fire fueling fears about escaped organisms, new terrorist threats, and the hubris of designing life. “The thing that the scientists have to understand,” Rejeski concluded, in a voice that could not have been more urgent and sincere, is that “people will fall back on these narratives long before they will ever pick up a biology book. And they are incredibly pervasive, ubiquitous, and powerful.”

When he thinks about it later, Rejeski still isn’t sure how his testimony was received. He was warmly thanked by several commission members, he says, but can’t tell whether those were simply formalities. Frankly, he says, he’s just glad nobody attacked him in the corridor or called him crazy for spending so much time talking about comic books and movies. “I guess that means I’m still kind of tolerated,” he chuckles. One thing Rejeski does admit is that the stories he chose to deconstruct in his talk aren’t the only narratives about synthetic biology that have an impact on the discourse; not by any means. As his own frustration with Venter’s conveniently adopted metaphors indicates, scientists themselves are not immune from the storytelling impulse. In fact, the day before Rejeski spoke, PCSBI heard from an array of scientists and engineers whose narratives, unlike those of Captain Marvel and Michael Crichton, remained largely unexamined—invisible within the substance of the debate.

One of these scientist-storytellers was Drew Endy, assistant professor of bioengineering at Stanford University and the director of BIOFAB, a facility that makes standardized DNA parts freely available to academic labs, biotechnology companies, and individual researchers. If Rejeski is approachable and avuncular, Endy, whom a recent Stanford Magazine profile called synthetic biology’s “most fervent evangelist” and described as emitting “a sense of barely contained energy,” has the charmingly intense air of a round-spectacled John Lennon after a recent haircut.

Endy, like Rejeski, is well aware of how much more powerful scientific narratives become when they are interwoven with popular culture. In 2005 Nature published a comic book written by Endy titled Adventures in Synthetic Biology. In its 12 brightly colored pages, Sally the Professor instructs “Dude,” a plucky young science student, about the basics of synthetic biology. Dude’s mastery of the subject comes from experimenting with a bacterium with the friendly name of “Buddy.” Through his efforts, Dude learns that the genome is the “master program” and that organisms can be “reprogrammed” to perform unprecedented functions. The story is suffused with a sense of adventure and, yes, scientific playfulness. Life is portrayed not only as infinitely malleable, but also as essentially interchangeable with human artifacts. After all, life is the “stuff” Dude is “building,” and he does so with inverter devices that incorporate bits of DNA. The story does contain one accident in which Buddy explodes, but this happens early on and Dude learns from his mistake. The wildly optimistic, even hubristic, message of the comic is that with sufficient knowledge humans can master life and reprogram it to suit their desires. Its last lines, which could have come straight out of a Dick and Jane picture book, read “Look at us! We’re building stuff!”

Not surprisingly, where Rejeski drew the commission’s attention to the dystopian stories of pop culture, Endy focused on the utopian potentials of synthetic biology. He did so through a subtle storyline that drew on a well-established analogy between the genetic code and the structure of human language. And in so doing, Rejeski later reflected, he was making a “brilliant” narrative move that tied synthetic biology to an old, unthreatening, and much-loved technology.

Dating back to James D. Watson and Francis Crick’s own descriptions of the structure of DNA, the linguistic metaphor for understanding the genome refers to the chemical bases that make up each molecule of DNA as “letters.” As such, each three-letter codon, or unit of genetic code, becomes a “word,” and the genome itself is the ultimate publication: “the book of life.” And if, as Endy suggested that day in his testimony, organisms are information that can be sequenced, stored in a database, and edited, then it’s easy to see the tools of synthetic biology as tools for reading, writing, and publishing. To bring this story home, Endy made use of the narrative of literature and the printing press. Today’s genetic engineering projects, he pointed out, are limited to using fewer than 20,000 base pairs of DNA. “20,000 characters,” Endy mused. “That gets you things like the Gettysburg Address, which is around 1,500 characters. It gets you an editorial in the New York Times.” He nodded as he spoke, as if cementing the comparison. But advancements in the tools belonging to the field of synthetic biology promised a future in which genetic engineering could involve a 400-fold increase in the number of characters (the number of DNA base pairs) that scientists could put together. “What,” Endy asked carefully, in a rhetorical move worthy of Socrates, “would be the sort of stuff you could write with 8 million characters? You certainly get one-act plays, like No Exit. You get The Color Purple, which is not even a million characters. You even get War and Peace.”

As with his comic book, Endy’s testimony framed synthetic biology as a creative activity with limitless possibilities. But in order for such inspired human creativity to truly flourish, Endy emphasized that it was imperative for government policies to be instituted that would facilitate what he called “freedom of the DNA press.” For instance, more public funds should be channeled into synthetic biology research, and individuals should be as free as possible to use this technology. “The ability to synthesize DNA in genomes is like a printing press,” Endy explained, “but it’s for the material that encodes much of life. If one publisher controlled all the presses, that would give a publisher tremendous leverage over what’s said.”

There could hardly be a more seductive narrative about synthetic biology. Writing an organism’s DNA, ran Endy’s hidden story, is fundamentally a creative endeavor. It is a process by which we might reach the greatest heights of artistry and express the most profound truths, as long as our efforts are not stifled or censored. Like freedom of expression, the freedom to create new forms of life should be a fundamental right. Who knows where the next Shakespeare or Melville will come from? To restrict access to the tools of synthetic biology would be a form of censorship.

Are these valid assumptions and an appropriate framing of synthetic biology? Perhaps. But perhaps not. Unlike human languages, for example, the “alphabet” of DNA does not lie inert on a printed page but takes physical form when the proteins it encodes are synthesized. And although it is easy to accept that the exercise of artistic creativity demands little or no government oversight, it is less clear that the same is true of all scientific explorations.

So why, despite its flaws, did Endy choose to frame his discussion of synthetic biology inside this particular narrative? Rejeski has an idea. What Endy was telling the commission, Rejeski points out, “is a story about scientific evolution, not revolution. It says that this is just an extension of existing science, and there’s nothing disruptive or novel about it. Remember people doing work on recombinant DNA in the 1970s? They said the same thing. Oh, we’re just doing what nature’s been doing for a long while. It was a convenient story. And in that sense, Endy was brilliant to pull it back to the Gutenberg printing press. I mean, who’s afraid of the printing press?”

Tellingly, however, whereas the cultural narratives raised by Rejeski the following day were immediately dismissed as overblown, no one in attendance on this occasion—not commission members, other speakers, or anyone in the audience—approached the narrative framework that lay beneath Endy’s words with a critical eye. No one wondered whether this storyline might not be, in its own way, just as mythic as the Trojan horse or Pandora’s box. Instead, the narrative remained implicit, and therefore unexamined. As if to prove the effectiveness of the scientist as storyteller, commission member Barbara F. Atkinson, currently the executive vice chancellor of the University of Kansas Medical Center, raised Endy’s evocation of the freedom of the DNA press during the question session that followed his panel. She had been “caught by” this comparison, Atkinson said. Could the panel members suggest specific policy recommendations PCSBI might make to support the workings of the genetic free press?

The absence of criticism directed at Endy’s narrative stems from the assumption that scientists and engineers are what biologist/philosopher Donna Haraway calls “modest witnesses.” They are ventriloquists for the objective world, adding nothing of their own voices. Their “narratives have magical power, they lose all trace of their histories as stories … as contestable representations, or as constructed documents in their potent capacity to define the facts. The narratives become clear mirrors, fully magical mirrors, without once appealing to the transcendental or the magical.”

In opening the commission’s first meeting, Gutmann noted that “it is key for this commission to be an inclusive and deliberative body, encouraging the exchange of well-reasoned perspectives with the goal of making recommendations that will serve the public well and will serve the public good.” In support of that mission, the commission would go on to hear hours of testimony from engineers, biologists, theologians, philosophers, social scientists, bioethicists, lawyers, and others. It would be told by some that Venter’s work is nothing but an incremental step in a long history of genetic manipulation, and assured by others that the achievement represents a complete scientific game-changer. Commission members would be urged by some to advise a near-moratorium on synthetic biology in order to prevent an unjust bioeconomy, and encouraged by others to step hard on the accelerator to bring new products to market. To frame these diverse and often conflicting views, each speaker would bring a story or stories to the table.

Crucially, however, the testimony PCSBI has heard in the months since it was first formed has not treated all narratives with equal scrutiny. Thanks in part to Rejeski’s efforts, the commission has made progress in rendering visible the most pervasive cultural narratives about artificial life, seeing these stories as imperfect and unscientific constructions by artists, the media, and other myth-makers. But those who have testified before the commission have been far less likely to turn a critical eye on scientific and engineering narratives, instead allowing these stories to remain implicit and therefore invisible. To produce a truly thoughtful and deliberative report on both the practical and ethical implications of synthetic biology, PCSBI must ensure that no story, no matter its provenance, goes unexamined. It must render each of the stories it is being told about this science more visible, exposing their interpretive frames and subjecting their assumptions to critical scrutiny. Because policymaking has to happen on the basis of one story or another, it is best to inform those decisions with an explicit account of the available options. Advisory bodies must not allow a single narrative to become the invisible lens through which the issue at hand is viewed. To do so constitutes a technocratic overreaching of expert advice, because it makes one story seem to be simply the facts. Policymaking would be constrained in advance to choices within a single narrative, and policy discourse would be limited to the terms and goals established by that story alone. Advisory bodies should clarify and expand, not limit, our choices.

To produce a truly thoughtful and deliberative report on both the practical and ethical implications of synthetic biology, PCSBI must ensure that no story, no matter its provenance, goes unexamined.

One welcome critical treatment of narratives was at work in the testimony of Randy Rettberg, a principal research engineer in Biological Engineering and Computer Science and Artificial Intelligence at MIT and the director of MIT’s Registry of Standard Biological Parts. Speaking at PCSBI’s September 14 meeting, Rettberg told a story from his youth. “When I was a junior in high school,” he began, “I decided that my father was an architect of buildings, and I wanted to be an architect of computers.” At the beginning of his career, he went on, that seemed an impossible goal, but thanks to a development in the field that enabled processors to be built out of a set of tiny standardized parts known as transistor-transistor logic, the dream became achievable.

The testimony that followed was striking. Having told a story with obvious connections to synthetic biology, Rettberg immediately went on to clarify the underlying assumptions behind the narrative, pointing out which ones ought to be accepted and which discarded. What was accurate about his story, according to Rettberg, is the idea that making simple interchangeable parts freely available to a large population of researchers might revolutionize the genetic engineering industry in terms of what it is able to produce and who is able to produce it. Less accurate, but much more quickly grasped by listeners, is the idea that synthetic cells are actually like little computers, with each internal component operating in a fundamentally logical manner. This, cautioned Rettberg, was “not really right.”

As Rettberg’s testimony illustrates, stories about synthetic biology are often based on faulty comparisons. Yet it is clear from the meetings that PCSBI has held so far that they are also a ubiquitous part of the debate, because they serve as a way to make sense of complex and sometimes contradictory scientific information. The commission should acknowledge these multiple narratives, confront them with the careful attention Rettberg gave his own story, and explicitly probe them for valid and invalid assumptions. In its report, PCSBI should outline more than one set of policy options, bolstering each one with clear justifications for its premises and articulating, where appropriate, when a proposal stems from a particular narrative about this new technology. If, for instance, PCSBI were to adopt Endy’s recommendation that a substantive public investment be made in the tools of synthetic biology, it should not do so without first thoroughly examining the suppositions behind the narrative of the “DNA free press.” In so doing, it will multiply and clarify options for policymakers rather than handing them just another story with its black box of assumptions.

When Rejeski was invited to speak before the commission, he thought long and hard about what he would say. Was he really going to show up for its first meeting with slides of comic books, movie posters, video games, and cartoons under his arm? How exactly would his listeners respond? “I don’t think 90% of people spend a lot of time,” Rejeski muses, “asking whether the narratives people tell about science are valid, or just being used as a means of convenience. Do they hide issues we need to be thinking deeply about? Or do they unnecessarily exacerbate our fears? I mean, generally this stuff is all taking place subconsciously. There’s no attempt to expose these stories. That would be like doing psychoanalysis on yourself, for God’s sake!” Rejeski stops short, as if momentarily surprised by the sharpness of his own analogy. But then he chuckles. “And there’s definitely no one else up there talking about comic books. I’m almost embarrassed sometimes to be bringing that stuff to the table. I always imagine that there are people who are saying ‘Oh man, I’m not going there. That’s off the wall.’”

In the end, though, Rejeski seems to enjoy the idea of antagonizing people, just a little bit. “I’ve reached the point in my life,” he reflects, “where I’m not particularly concerned whether I please the scientists or the policy folks. I think somebody’s got to talk about this stuff, because it has huge implications that go right into the regulatory system.” Think about that evolution, not revolution, storyline, he says earnestly. “That’s one you really want to pull the veil back on. Scientists know that the wrong story will have direct links into regulation that they want to suppress.” If synthetic biology is seen as truly novel, Rejeski points out—if the narrative we tell about it resembles a science fiction plot instead of harking back to an old technology—then it will trigger the Toxic Substances Control Act and various Food and Drug Administration regulations. “These aren’t,” he concludes, “just superficial stories. So if I were 30 years younger and my career was at stake, I might be more sensitive; but now? I have no problem pissing people off if I think there’s something that has to be said.” When he says this, it’s easy to imagine Rejeski as a character in his own compelling story. Not Pandora; not really. More like an 11-year-old boy holding up a homemade Geiger counter, using his own good sense to make invisible forces visible.

The Need for Climate Engineering Research

Like it or not, a climate emergency is a possibility, and geoengineering could be the only affordable and fast-acting option to avoid a global catastrophe. Climate change triggered by the accumulation of greenhouse gases emitted into the atmosphere has the potential of causing serious and lasting damage to human and natural systems. At today’s atmospheric concentrations, the risk of catastrophic damage is slight—though not zero. The risk will probably rise in coming years if atmospheric concentrations continue to increase. Although not everyone agrees with this assessment, it is supported by the bulk of the scientific evidence.

For the moment, the United States and other nations are trying to address this risk by controlling emissions of carbon dioxide (CO2) and other greenhouse gases into the atmosphere, with mixed success at best. The time may well come, however, when nations judge the risk of climate change to be sufficiently large and immediate that they must “do something” to prevent further warming. But since “doing something” will probably involve intervening in Earth’s climate system on a grand scale, the potential for doing harm is great.

The United States needs to mount a coordinated research program to study various options for mitigating climate change in order to ensure that damaging options are not deployed in haste. The United Kingdom and Germany have initiated research programs on such climate intervention technologies, and many U.S. scientists are already engaged in this topic, funded by a hodgepodge of private funds and the redirection of federal research grants. Some senior managers at federal agencies such as the National Science Foundation (NSF), Department of Energy (DOE), and National Aeronautics and Space Administration would like to initiate research funding, but they cannot act without political cover, given the understandably controversial nature of the technology. Given the rapid pace at which the debate over research and its governance is moving in the United States and abroad, delay in establishing a federal program will make it progressively harder for the U.S. government to guide these efforts in the public interest. There is, therefore, a need to establish a coordinated program with deliberate speed.

Making an objective analysis of the economics of CDR systems is one area where cross-cutting research is needed.

Of course, it remains critically important that the United States and other nations continue efforts to reduce emissions of greenhouse gases into the atmosphere. Indeed, much deeper cuts are needed. Reducing emissions will require, first and foremost, the development and deployment of low-carbon–emission energy systems. But even with improved technology, reducing emissions might not be enough to sufficiently reduce the risk of climate change.

Scientists have identified a range of engineering options, collectively called geoengineering, to address the control of greenhouse gases and reduce the risks of climate change. One class of geoengineering strategies is carbon dioxide removal (CDR), which removes greenhouse gases from the atmosphere after they have already been released. This approach may involve the use of biological agents (such as land plants or aquatic algae) or industrial chemical processes to remove CO2 from the atmosphere. Some CDR operations may span large geographic areas, whereas other operations may take place at centralized facilities operating in a relatively small area. Another class of strategies is solar radiation management (SRM), which involves a variety of methods for deflecting sunlight away from Earth or otherwise reducing the levels of solar energy in the atmosphere.

These two strategies are radically different. CDR seeks to address the underlying cause of the climate problem: elevated greenhouse gas concentrations. These approaches are not inexpensive and take time to implement at scale. The more promising of these approaches introduce no unprecedented new environmental or political risks and introduce no fundamentally new issues in governance or regulation. Some CDR approaches, such as the planting of forests, are already considered in international climate negotiations.

SRM seeks to diminish the adverse climate effects of elevated greenhouse gas concentrations without addressing the root cause of the problem. The best of these approaches are shockingly inexpensive (at least with respect to direct financial costs of deployment) and can be deployed rapidly. However, they do introduce unprecedented environmental and political risks, and they pose formidable challenges for governance and regulation. No SRM proposal has yet been seriously considered in an international climate negotiation.

Both approaches may contribute to cost-effective environmental risk reduction, yet there are no federal research programs systematically addressing these options. How should such programs be structured? Given that the two strategies are so different, it would make sense for the government to develop at least two research program areas. One should focus on CDR and other options to reduce the concentrations of greenhouse gases that have already been released to the atmosphere. Another program area should focus on SRM and other options to diminish the climate consequences of increased greenhouse gas concentrations. Each of these strategies is examined below.

CDR

Because of the longevity of atmospheric CO2, managing the long-term risk of climate change will require us to reduce the atmospheric concentration from current levels. Managing emissions is necessary but not sufficient. But CDR can make a difference only if CO2 is captured on a huge (gigaton) scale. The sheer scale of the challenge means that CDR always will be relatively slow and expensive.

Research on CDR should be divided into four different research programs. Little coordination is needed among these different research activities; they are so different that there is little to be gained by combining the research under a single umbrella. The research programs would focus on:

Biomass with carbon capture and storage. Plants remove CO2 from the atmosphere when they grow. When burned in power plants to produce energy, plants release their accumulated CO2, producing power that is roughly carbon-neutral. If the plants are burned in power plants that capture CO2 and store it underground in geologic reservoirs, then the net effect is to move carbon from the active biosphere to the deep geosphere, reversing the effect of producing and burning fossil fuels. This approach is already being investigated within DOE and the U.S. Department of Agriculture (USDA), and the interagency cooperation seems to be working well.

Chemical capture of CO2 from air. Laboratory tests have demonstrated that chemical engineering approaches can be used to remove CO2 from ambient air. This CO2 can then be compressed and stored underground in geologic reservoirs. Because the concentration of CO2 in air is much lower than the concentration in power-plant exhaust gases, capturing CO2 from air normally would be more expensive than capturing it from power plants. But there are ways around this problem. For example, facilities to remove CO2 from ambient air could be made more cost-efficient by locating them near cheap but isolated sources of energy, such as natural gas fields located in remote areas. Furthermore, we may be willing to pay high prices to remove CO2 from the atmosphere should the consequences of high atmospheric CO2 concentrations prove worse than anticipated. For example, industrial CDR might be seen as preferable to SRM. [Full disclosure: One of us (Keith) runs a start-up company developing this technology.] DOE is the logical choice to lead this research.

Increasing carbon storage in biological systems. A number of approaches have been suggested for increasing carbon storage in biological systems. These approaches include encouraging the growth of forests and promoting the use of agricultural practices, such as “no-till” agriculture, that foster the storage of carbon in soils. DOE, USDA, and NSF have supported research on some of these methods, and this approach has received some attention in international climate negotiations. However, biological systems are relatively inefficient in their ability to capture CO2. It is estimated that it would take approximately 2.5 acres of crop land to remove the CO2 emissions of just one U.S. resident—an impractical requirement (a rough check of this figure is sketched below, after the last of the four approaches). But even though these approaches are unlikely to play a leading role in climate mitigation, some techniques may prove cost-effective, especially when the land can be used for multiple purposes or when other benefits may accrue.

It also has been suggested that the biomass accumulated in plant matter could be buried, either on land or at sea, in a way that would ensure long-term storage. Advocates of such methods argue that they would confer a considerable advantage over, for example, growing a forest and leaving it in place. With biomass burial, the same land could be used repeatedly to capture CO2, whereas a forest grows only once and does not significantly increase its carbon store after it has reached maturity. Farm waste might be another source of material suitable for burial. Overall, however, current evidence suggests that it would make more environmental sense not to bury biomass but to use it in place of coal in electric power plants, which are notorious CO2 emitters.

In another biological approach, carbon storage in the ocean could perhaps be increased somewhat by fertilizing the ocean with nutrients, such as iron, nitrogen, and phosphorus, which would encourage tiny organisms to bind the carbon in their physical structures. However, most observers have concluded that ocean fertilization is unlikely to be an attractive option that can be deployed at large scale. Fertilizing the ocean with iron to promote storage has received the most attention, because in areas where iron is a limiting nutrient for biological growth, this would probably be the most cost-effective option. However, there are many questions regarding the effectiveness of these approaches in storing carbon for long periods. Furthermore, because the oceans are a global commons, ocean fertilization options, unlike nearly every other CO2 removal method, raise a range of thorny problems related to governance and regulation. NSF and DOE have funded some studies of ocean fertilization, but the research is now largely dormant. Also, some of the governance issues are being addressed under the London Convention and Protocol, an international effort to protect the marine environment from human activities, and the Convention on Biological Diversity, an international agreement to protect the planet’s wealth of living organisms.

Distributed chemical approaches. In general, these approaches involve using massive amounts of basic minerals that react with acidic CO2 to form new stable minerals. These approaches amount to an acceleration of the natural weathering cycle that in the very long run removes CO2 from the atmosphere. One such approach is based on the fact that the CO2 in seawater is eventually incorporated into solid carbonate minerals within bottom sediments. The rate of these chemical processes can be accelerated by sprinkling finely crushed limestone over certain parts of the ocean. Alternatively, calcium or magnesium oxides can be added to seawater, increasing the water’s capacity to hold CO2 in storage and preventing it from returning to the atmosphere. These approaches would also neutralize some of the acidity caused by dissolved CO2, helping to alleviate a problem known as ocean acidification.

None of these distributed chemical approaches is a magic bullet. There also are a number of environmental concerns, including the scale of mining that would be required. Nevertheless, such approaches might prove cost-effective relative to conventional carbon capture and storage from power plants.
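The roughly 2.5-acres-per-resident figure cited above for biological carbon uptake can be checked with a back-of-envelope sketch. The sketch below is illustrative only: the per-capita emissions value (about 18 tons of CO2 per person per year) and the assumed net cropland carbon uptake (about 5 tons of carbon per hectare per year) are outside assumptions, not figures given in this article.

```python
# Back-of-envelope check of the "~2.5 acres of cropland per U.S. resident" figure.
# Both input values below are illustrative assumptions, not figures from this article.

PER_CAPITA_CO2_T_PER_YR = 18.0     # assumed U.S. per-capita CO2 emissions (t CO2 per person per year)
CROPLAND_UPTAKE_TC_PER_HA = 5.0    # assumed net carbon uptake of cropland (t C per hectare per year)
CO2_PER_C = 44.0 / 12.0            # mass of CO2 per unit mass of carbon
ACRES_PER_HECTARE = 2.47

carbon_per_person = PER_CAPITA_CO2_T_PER_YR / CO2_PER_C              # ~4.9 t C per person per year
hectares_per_person = carbon_per_person / CROPLAND_UPTAKE_TC_PER_HA  # ~1 hectare per person
print(f"Cropland needed per resident: {hectares_per_person * ACRES_PER_HECTARE:.1f} acres")
```

Under these assumptions the answer comes out near 2.4 acres per person, consistent with the order of magnitude quoted above and with the conclusion that land-based uptake alone cannot keep pace with emissions.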

As envisioned, the research programs on CDR might best be housed within DOE, where they would fit neatly into the agency’s current carbon capture and storage research program. Preliminary research should focus on assessing the barriers and potential of each proposed approach, including costs and benefits.

Indeed, cost is a critical issue with many proposed CDR methods. Thus, as an essential complement to research focused on the technical hurdles facing CDR, research also must focus on economic questions. To be quantitatively important, CDR methods would need to be deployed at large scales, but none of these methods is likely to be both scalable and inexpensive. Several of these options, however, could potentially play an important role as part of a portfolio of climate response options, and there may be particular niches or market segments for which these approaches represent the most cost-effective environmentally acceptable option.

Making an objective analysis of the economics of CDR systems is one area where cross-cutting research is needed. (Such research also can help answer important questions about the environmental impact of the methods.) Yet these specific exceptions prove the rule that there is little to be gained by grouping research efforts together. Such targeted analyses are best performed by independent agencies and investigators, because the government agencies that fund technology R&D typically become advocates of their technology and thus are poorly suited to provide objective analysis of its performance.

Solar radiation management

Earth can be cooled by a variety of engineering methods, some of them more practical than others, given current technology. There are four main classes of SRM proposals, which are described below in approximately decreasing order of likely feasibility at large scale:

Stratospheric or mesospheric aerosols. Small particles high in the atmosphere can potentially scatter or reflect sunlight back to space, exerting a net cooling effect on Earth’s climate. However, because of particle aggregation and gravitational settling, it is not clear that such an aerosol layer could be sustained indefinitely. Thus, maintaining this much solar reflection high in the atmosphere could involve spreading material over a broad altitude range or deploying “designer” particles that are less susceptible to aggregation. Further, it would be desirable to be able to control the latitudinal distribution of these particles; ideally, it would be possible to turn them “on” or “off” at will to exert a high degree of geographic and temporal control. The potential for designing such particles is unknown at this time.

Whitening marine clouds. It has been proposed that low clouds in some oceanic regions could be whitened with a fine spray of seawater, and if done on a large enough scale, this could cool Earth considerably. This proposal rests on widely accepted understanding of cloud physics and how that physics is likely to affect climate. Two lines of study—on natural gases that emanate from the oceans and on ship exhausts—indicate that the proposed method should work at some level. Initial calculations suggest that the method could conceivably offset 10 to 100% of the global mean temperature increase from a doubling of atmospheric CO2 concentration.

Satellites in space. It has been proposed that vast satellites could be constructed in space to deflect sunlight away from Earth. The scale of such an undertaking is so enormous that most observers do not feel that such an effort is likely in this century. Nonetheless, placing a sunblock between Earth and the Sun is a simple and effective conceptual approach to addressing threats from global warming. Such a strategy could potentially be of interest at some point in the distant future if the global community finds the need to construct systems that would deflect sunlight for many centuries.

Whitening the surface. It has been proposed that whitening roofs, crops, or the ocean surface would reflect more sunlight to space, thereby exerting a cooling influence on planetary temperatures. With regard to crops, there is simply not enough crop area or potential for change in reflectivity for this sector to be a game changer. Similarly, there is not enough roof area for changing roof color to make a substantive difference in global climate change, although whitening roofs in some cases may confer co-benefits (such as reducing cooling costs and helping reduce the urban heat island effect). Various proposals have been made to whiten the ocean surface, stemming back to at least the early 1960s, but the ability to do so has not been demonstrated.

In their current form, the best SRM methods have several common properties: They have relatively low direct costs of deployment, they may be deployed rapidly and are fast-acting, and they are imperfect. They are intrinsically imperfect because greenhouse gases and sunlight act differently in Earth’s climate system. Nevertheless, every climate model simulation that has applied some “reasonable” reduction in absorption of sunlight has found that these approaches could potentially diminish most climate change in most places most of the time, at least for a doubling of atmospheric CO2 content.

Long-established estimates show that SRM could potentially offset global average temperature increases this century at a direct cost that is several hundred times lower than the cost of methods that achieve the same cooling by reducing greenhouse gas emissions. This is because such a tiny mass is needed: A few grams of sulfate particles in the stratosphere can offset the radiative forcing of a ton of CO2 in the atmosphere. At a cost of a few thousand dollars per ton for aerosol delivery to the stratosphere, the direct cost of offsetting the global mean temperature increase from a doubling of atmospheric CO2 is estimated to be on the order of $10 billion per year. Of course, the need to operate satellite, atmosphere, and ground-based observation systems to monitor outcomes could increase costs substantially.
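The arithmetic behind that order-of-magnitude estimate can be sketched as follows. Every input in the sketch below is an assumed, illustrative value: the sulfate-to-CO2 ratio and the delivery price echo the rough figures quoted above, while the per-ppm mass of atmospheric CO2 and the aerosol residence time are outside assumptions, so the output should be read only as a consistency check, not as a cost estimate.

```python
# Rough consistency check of the "~$10 billion per year" figure for stratospheric SRM.
# Every input below is an illustrative assumption, not a value taken from this article.

GRAMS_SULFATE_PER_T_CO2 = 3.0    # assumed: a few grams of sulfate offsets 1 t of CO2
T_CO2_PER_PPM = 7.8e9            # assumed: ~7.8 Gt of CO2 per ppm of atmospheric CO2
PPM_FOR_DOUBLING = 280.0         # assumed: doubling preindustrial CO2 (280 -> 560 ppm)
AEROSOL_LIFETIME_YR = 1.5        # assumed: stratospheric residence time of roughly 1-2 years
DELIVERY_COST_PER_T = 2000.0     # assumed: a few thousand dollars per ton delivered

co2_to_offset_t = PPM_FOR_DOUBLING * T_CO2_PER_PPM                     # ~2.2e12 t CO2
sulfate_stock_t = co2_to_offset_t * GRAMS_SULFATE_PER_T_CO2 / 1e6      # ~6.5e6 t of sulfate aloft
annual_injection_t = sulfate_stock_t / AEROSOL_LIFETIME_YR             # ~4.4e6 t per year
annual_cost = annual_injection_t * DELIVERY_COST_PER_T                 # ~9e9 dollars per year
print(f"Order-of-magnitude annual delivery cost: ${annual_cost / 1e9:.0f} billion")
```

With these assumptions the annual delivery cost comes out near $9 billion, in line with the "on the order of $10 billion per year" estimate, before adding the monitoring costs noted above.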

Although SRM efforts might be able to diminish most climate change in most places most of the time, it is also likely that these approaches will harm some people in some places some of the time. People who suffer harm may seek compensation. If militarily or politically powerful, they could seek to prevent continued deployment and thus could generate military or political conflict. Even if environmental benefits exceed environmental damages overall, indirect costs associated with the possible need to compensate those adversely affected could dominate the overall cost picture. It does not even need to be the case that the climate intervention system actually causes the climate damage; if people in a region believe that they are harmed by such a system, this could be enough to motivate conflict.

The fact that SRM approaches can cool the planet rapidly is known because nature has already performed experiments that scientists have analyzed. After the eruption of Mt. Pinatubo in the Philippines in 1991, Earth cooled nearly 1° Fahrenheit (about 0.5° Celsius) in less than a year. The aerosols stayed in the stratosphere for only a year or two, but scientists’ calculations suggest that if that amount of reflection of sunlight to space could be sustained—perhaps by injecting a continuous stream of material into the stratosphere—it could compensate for the amount of warming produced by a doubling of atmospheric CO2 content. The eruption was not without adverse consequences, however: the Ganges and Amazon Rivers had their lowest flow rates on record.

Of course, a world that is cooled by the diminished absorption of sunlight is not the same as one cooled by a reduction in greenhouse gas concentrations. For the same amount of cooling, an SRM-cooled world would have less rainfall and less evaporation. SRM could affect Earth’s great weather systems, including monsoonal rains and winds. Thus, SRM techniques are not a perfect alternative to greenhouse gas emissions reduction and can at best only partially mask the environmental effects of elevated CO2. Still, SRM may be the only fast-acting approach to slowing or reversing global warming. Therefore, it may have the potential to become a powerful tool to reduce the risks associated with unexpectedly dangerous climate consequences.

Moving to a research plan

Even given the potential benefits, the idea of deliberately manipulating Earth’s energy balance to mitigate human-driven climate change will be interpreted by many people as dangerous hubris. Indeed, a common and not entirely inappropriate first reaction to SRM is to reject it out of hand as an effort destined to fail. Society has some memory of past cases where overeager technological optimism led to disastrous harm. But it is also necessary for society to avoid overinterpretation of past experience. Responsible management of climate risks calls for emissions cuts, but also for clear-eyed exploration and evaluation of SRM capability.

Opinions about SRM are changing rapidly. Only a few years ago, the topic generally was not openly discussed. Many people, within and beyond the scientific community, now support model-based research. However, field testing remains controversial and will probably grow even more so. Because of the serious and legitimate concerns about the enormous leverage that SRM technologies may provide in regulating the global climate, it is crucial that the development of these technologies be managed in a manner that is as transparent as possible.

In designing an SRM R&D program, the federal government should follow several basic guidelines. Of key importance, the program should include organizations conducting research aimed at developing and testing systems, and different organizations conducting research aimed at predicting the environmental consequences of their deployment. Further, high-quality observing systems will be needed to support both of these functions. If the organization responsible for developing systems is also performing the environmental assessments, there will be an apparent conflict of interest. There may be an incentive to find that currently funded projects represent environmentally viable strategies. Therefore, in designing an SRM research program, it will be wise to separate systems R&D from environmental assessment efforts. Within this broad framework, research efforts should take a red team/blue team approach, wherein one team is tasked with showing how an approach can be made to work, and another team is tasked with showing why the approach cannot produce a system that can actually diminish environmental risk at an acceptable cost.

A government research program on such a consequential and controversial topic needs a formal mechanism to enable input by stakeholders outside government. A research advisory board that includes representatives from major nongovernmental organizations such as think tanks and environmental advocacy groups could serve this purpose.

The research program would have three phases:

Phase 1. This phase would center on exploratory research. Funding for this research should start at $5 million per year and then gradually ramp up to $30 million per year. It would focus on doing “no-brainer” research: high-yield research that can be conducted with computer models or within a laboratory. While this obvious low-hanging scientific and technical fruit is being picked, an effort must be initiated to design an SRM research plan for the next decade.

The planning process must include all stakeholders, and the final plan must include a proposed institutional arrangement to ensure its execution. However, it would not be wise to delay getting started with research until the detailed plan and institutional arrangements are in place. In addition to advancing the science, starting research now will help train a cohort of graduate students who can provide the creativity and leadership that will be needed in the next phase.

Phase 2. This phase, funded at a level of $30 million to $100 million per year, will involve doing sustained science with small-scale field experiments. Early tests would focus on understanding processes. Later tests potentially could be large enough to produce barely detectable climate effects and reveal unexpected problems, yet small enough to limit risks. Because experiments could expand gradually toward large-scale deployments, it is important that they proceed with effective and appropriate governance and regulation.

Phase 3. This phase, if warranted by earlier studies, would involve the development of a deployable system (or at least in some way prepare for that development). Building the capability for deployment at climate-relevant scales would require substantially larger investments than previous phases. Depending on the systems chosen, this phase of research could take on very different characteristics. Clearly, such development must be performed with as much transparency, democratic control, and international cooperation as possible.

Some of the problems of international governance are almost certain to be new and difficult. In efforts to reduce CO2 emissions, the key governance challenge is motivating many actors to take collective action toward a common goal. For SRM, in contrast, the main problem is establishing legitimate collective control over an activity that some might try to do unilaterally without prior consultation or international risk assessment.

In building international cooperation and developing standards, it may be best to start from the bottom, by developing knowledge and experience before formalizing universal agreements. A first step could be developing a transparent, loosely coordinated, international program to support research and risk assessments by several independent teams. As part of this process, efforts should be made to engage large groups of experts and interest groups, such as former government officials and leaders of involved nongovernmental organizations. Developing iterative relationships between governance and the emerging scientific and technical research would be the core of this bottom-up approach.

Reasons for action

A common question is why SRM experiments should be conducted in the field now. The answer is that if a climate emergency, such as widespread and sustained crop failures throughout the tropics or a collapse of large parts of the Greenland ice sheet into the ocean, should arise, it would be reckless to only then begin SRM field tests. If there is at least some risk of a climate emergency and some likelihood that SRM techniques could help relieve the situation, then it seems logical to test the approaches before the time when an emergency is more likely to develop.

Moreover, in the event of a crisis, even if atmospheric CO2 content were to stabilize, which would require dramatic reductions in greenhouse gas emissions, global mean temperatures would continue increasing. SRM techniques are the only options that could potentially cool Earth quickly, so having them ready at hand (or at least as ready as possible) could provide significant benefit.

Some critics also argue that pursuing SRM research at any substantial level will reduce society’s resolve to reduce emissions of greenhouse gases. But evidence from other cases suggests that this is unlikely. For example, research that resulted in the development of seatbelts and airbags in cars has still provided an immense benefit even if it, in a small way, influenced people to drive faster and more recklessly. Moreover, if SRM proves to be unworkable or to pose unacceptable environmental risks, the sooner scientists know this, the faster they can take these options off the table. Indeed, if SRM approaches are not subjected to serious research and risk assessment, SRM might incorrectly come to be regarded as a safety net. The stakes are simply too high for us to think that ignorance is a good policy.

Technophilia’s Big Tent

As the effects of climate change and other environmental stresses become more apparent, some technological prophets are alarmed, while others are more sanguine than ever. Jared Diamond has gone from Guns, Germs, and Steel (1997) to Collapse (2005). James Lovelock’s original Gaia, a light of hope in the gloomy late 1970s, has been succeeded by The Revenge of Gaia: Why the Earth Is Fighting Back—and How We Can Still Save Humanity (2007) and The Vanishing Face of Gaia: A Final Warning (2009). In 2010 Lovelock even told a newspaper interviewer that “democracy may have to be put on hold” to deal with “global heating.” An opposing Singularity movement, led by Ray Kurzweil, sees instead a millennial convergence of innovation potentially leading to a new golden age and even the conquest of death itself.

Kevin Kelly’s What Technology Wants (available October 14) is a plea for nuanced optimism. Kelly rejects submission to inexorable trends. To the contrary, while acknowledging the inevitability of change, he recognizes its risks and seeks to promote better choices. He admires advocates of social control of technology such as Langdon Winner and David Nye, and the more radically skeptical educator and philosopher Ivan Illich, finding value even in the Unabomber Manifesto. Conversely, he lauds Kurzweil’s utopian vision only as a myth, “like Superman.” A former editor of the Whole Earth Catalogue, Kelly the technology critic just wants to know the best tool for the job, whether it’s the latest electronic device or the result of 3,000 years of refinement.

Throughout the book, Kelly makes a strong case against nostalgia and for technology’s benefits to the welfare of ordinary men and women around the world. He cites studies of the high rate of violent death in early human societies—which could be deadlier than the 20th century with its horrific wars. Human longevity continues to increase. The world’s increasingly urban population is far better nourished and healthier than its peasant forebears.

These improvements flow from the spread of information. The culture we create, Kelly argues, is not so much a collection of things as an ever-multiplying realm of ideas—patents, blueprints, drawings, poems, music—that interact in unpredictable but ever more complex ways, just as the DNA and other information of living creatures has been changing for billions of years, creating an ever richer web of life, evolving into enhanced “evolvability.” The technium is the term Kelly gives to this dynamic order, influenced by evolutionary thinkers such as the Russian biogeochemist Vladimir Vernadskii and the French philosopher Pierre Teilhard de Chardin. It’s not a spiritual principle for Kelly, though; it’s an inevitable consequence of a small number of physical and chemical principles. Beside the pragmatic Kelly there is a cosmic Kelly who forthrightly sees progress as a consequence of the evolution of the universe.

Despite his rejection of vulgar determinism, Kelly urges us to embrace and help accelerate long-term change. Unlike some other enthusiasts, he denies that the new necessarily replaces and obliterates the old. The technium, like life on Earth, is a reservoir of concepts that often endure even after they go out of favor. Selecting a two-page spread of apparently obsolete farm tools from an 1894–1895 Montgomery Ward catalogue, he discovered in a few hours online that every one was still available in some form because the old designs still served their purpose—as the historian of technology David Edgerton reminded us in The Shock of the Old. But where Edgerton cites such facts to question belief in progress, Kelly redefines progress as the optimal combination of old and new.

Kelly also does not slight the technium’s messes: “Hiding behind the 10,000 shiny high-tech items in my house are remote, dangerous mines dug to obtain rare earth elements emitting toxic traces of heavy metals. Vast dams are needed to power my computer.” This grimy underside may comprise nearly half the technium. Yet the technium sincerely wants to clean up its act. It offers tools such as satellite photography for making its own environmental costs more transparent and thus influencing consumer and producer behavior. And in the long run the technium is moving to ever-lighter and even intangible objects, from a stack of 78-rpm records to music downloads with only a minute carbon footprint. Technology analysts have been studying this “dematerialization” of consumption since the late 1980s, but Kelly makes it a foundation of his world view. Technology wants to be lighter. For this reason, Kelly does not subscribe to the view of the economist Nicholas Georgescu-Roegen that entropy is an implacable constraint on the human standard of living. In fact, he believes that the long-term history of the universe reflects a trend from energy to mass to information, a process he calls exotropy, adopting a concept popularized by a radical futurist movement of the 1990s. There may still be no free lunch, but for Kelly it’s more important that recipes now can be exchanged globally, virtually without cost.

No matter how bad the situation looks now, the technium has powerful built-in corrective mechanisms. Essential to Kelly’s argument is that striving to avoid all adverse results in the future can create problems of its own, blocking solutions as well as disasters. To the Precautionary Principle now popular in Europe Kelly opposes an idea from the 1990s Extropian movement: the Proactionary Principle. Try as many innovations as possible, but monitor them closely for long-term indirect problems. The technium doesn’t want prior constraints or advance orders.

The tension between Kelly’s green Whole Earth Quarterly communitarianism and his Wired cornucopianism, between Illich’s concept of conviviality and the Extropian Max More’s transhumanism, makes What Technology Wants a dialectical wonder. Yet there are a few errors. Kelly quotes Carl Mitcham’s statement that “[m]ass production would be unthinkable to the classical mind, and not just for technical reasons.” But late-classical philosophers undoubtedly read and wrote by the light of mold-stamped clay oil lamps from industrial-scale workshops. One brand, Fortis, was coveted enough to have been counterfeited. Kelly also invokes the alleged dependence of U.S. railroad gauges (and thus of the size of space shuttle rocket engines shipped over them) on the distance between wheel ruts of ancient Roman war chariots, an oft-cited erroneous example of path dependence that has been laid to rest by the economic historian Douglas J. Puffert.

Kelly’s assertion of long-term human progress is also hard to prove—or disprove. He sees democracy and equality as among the desires of the technium. And so they often have been. But when the Plains Indian tribes acquired horses from Mexico in the 18th and 19th centuries and expanded their technological capabilities, they became more likely to wage war on each other. More recently, 19th-century slaveholders and 20th- and 21st-century tyrants have been enthusiasts and early adopters of many technological innovations. Is it not possible that the technium, like its avatar Wernher von Braun, is really interested in whatever social order, free or unfree, will advance it most steadfastly at the moment? Was the humorist Mort Sahl closer to the truth when he observed of von Braun’s work that it aims at the stars but sometimes hits London?

Apart from electronic miniaturization and social networking, the technium’s progress has slowed in the past decade. We seem to be losing ground against infectious disease. The threat of pandemic influenza still has no solution, and there is no vaccine or therapy against Alzheimer’s disease. The number of new FDA-approved drugs declined by more than 50% between the 1996–1999 and 2006–2009 periods. Only 29 were approved last year. Some epidemiologists have been predicting a “post-antibiotic era” in which we cannot develop new therapies as quickly as resistant strains are evolving. We are also making slow progress in designing radically new energy systems such as nuclear fusion and thorium reactors, announced with great optimism in the later 20th century. The improved but venerable diesel engine is competitive in fuel economy with the latest hybrids. Battery capacity is advancing slowly. Dematerialization is failing to create skilled jobs and investment opportunities comparable to more massive 1950s and 1960s innovations such as the Xerox 914 photocopier. We may yet be able to overcome these challenges, but Kelly does not confront them directly.

In particular, he puts too much trust in the number of patents as a measure of innovation. It isn’t necessarily true that each patent is “a species of idea”; it’s really a way of disclosing and legally protecting one or more aspects of an idea. A patent, by blocking competitors from pursuing certain paths of innovation, may actually reduce product diversity. The 2008 Berkeley Patent Survey of technology company executives found that patents offer “mixed to relatively weak incentives for core innovative activities, such as invention, development, and commercialization.” Increasing patent applications might indeed signify more creativity, but they might also reflect “salami slicing” of the same work into smaller units, to use a phrase applied to some scientific papers.

Kelly believes that our ability to accelerate cultural evolution is opening a new stage of human consciousness; that we are evolving a growing ability to evolve. He writes that we should “see more of God in a cell phone than in a tree frog,” but he is surely also aware that the frog is a miracle of biochemical complexity beside which the phone is primitive and anything but divine. Our real risk is not global catastrophe—I’m with Kelly against the doom-sayers—but banalization, the loss of the variety of the biological species and human languages and cultures that we have only begun to understand. It’s the pathos of the technium that no sooner does it start to reveal the wonder of the world to us (through, among other things, images and sound recordings of amphibians and other wildlife) than it threatens to remove them. What Technology Wants is a brilliant book, essential reading for all debates on the human future. But despite Kelly’s formidable learning and generous open-mindedness, the state of the world economy and environment will leave many readers with the question: What has the technium done for us lately?

Drug Kingpin

Daniel Carpenter’s magnum opus about the origins, operations, and organizational nuances of the Food and Drug Administration (FDA) represents in many respects superb historical scholarship. A former chief counsel for the regulatory agency praised it as “the best pure history of the development of FDA regulation of new drugs ever written.” Yet I found it to be less useful in offering insight into what makes regulators tick, and to harbor some surprising omissions.

Carpenter, a professor of government at Harvard University, emphasizes the importance of the “organizational reputation” of entities ranging from military and diplomatic bodies to disaster relief groups and regulatory agencies. The central concept of his perspective on regulation is the influence of “audiences,” the entities that either affect the regulatory agency (the judiciary, Congress, nongovernmental organizations, the media, and the regulators’ political masters) or are affected by it (regulated industry and the public). This formulation views the FDA as a receptacle in which patients, pharmaceutical companies, the media, and legislators deposit their trust or mistrust.

Carpenter dissects the various components of reputation: “performative (did the agency get it right?), technical (does the agency have the know-how and methods to get it right?), procedural (does the FDA follow accepted procedures suggested by law and science?), and moral (does the agency show compassion to those affected by its decisions? Is it captured or inappropriately influenced?).”

What is the benefit to a regulatory agency of enjoying a good reputation? In Carpenter’s view it “supports” or legitimatizes power, which he subdivides into three species: directive, gatekeeping, and conceptual.

Directive power is the ability to require that a company or researcher do—or refrain from doing—something. Gatekeeping power, which is enjoyed by few federal agencies, derives from the need for regulators to approve a product (a drug or medical device, for example) before it can be legally sold. (The other major gatekeepers for similar kinds of products are the Environmental Protection Agency, which licenses pesticides; and the Department of Agriculture’s Animal and Plant Health Inspection Service, which oversees animal vaccines and genetically engineered plants.) Conceptual power is the ability to mold the ambient methods, practices, and jargon associated with drug development and its regulation; in other words, many of the terms used by regulators (“new drug application,” “new molecular entity,” and “food additive,” for example) have become legal “terms of art.”

Carpenter believes that in order to enhance their reputations, bureaucrats seek others’ esteem, which is enhanced by association with organizations of good repute and vice versa. In his opinion, the FDA has had the advantage of employees dedicated to enhancing its reputation for scientific competence. This model postulates incentives for bureaucrats both to arrogate power and to do good work.

I’ll buy that there’s an inclination to arrogate power—FDA Commissioner Frank Young used to quip that “dogs bark, cows moo, and regulators regulate”—but during my decade and a half at the FDA, I discovered potent incentives for regulators’ actions that are not best for individual patients or public health but serve only bureaucratic self-interest, a self-interest best realized by policies and actions that give rise to new responsibilities, bigger budgets, and more expansive bureaucracies.

Other critical incentives derive from the asymmetry of outcomes from the two types of mistakes that regulators can make. A regulator can err by permitting something bad to happen, such as approving a harmful product; or by preventing something good from becoming available by not approving or by delaying a beneficial product. Both outcomes are bad for the public, but the consequences for the regulator are very different. The FDA’s approval process for new drugs has long struggled with this dichotomy.

The first kind of error is highly visible, making the regulators susceptible to attacks by the media and patient groups and to congressional investigations; for the individuals involved, it can be a career killer. But the second kind of error—keeping a potentially important product out of consumers’ hands—is usually a nonevent and elicits little attention, let alone outrage. As a result, regulators introduce highly risk-averse policies and make decisions defensively, avoiding approvals of potentially harmful products at all costs and tending to delay or reject new products ranging, in the FDA’s case, from fat substitutes to vaccines, painkillers, and prosthetic joints. If a regulator does not understand or is vaguely uneasy about a new product or technology, his or her instinct is to delay or interdict. In Carpenter-speak, regulators feel that they enhance their reputations and gain colleagues’ esteem by not approving a potentially harmful product.

Carpenter believes that “the reputation and the power of the Food and Drug Administration in the governance of pharmaceuticals have waned appreciably,” which he ascribes to a political “rightward shift” that dates from the passage of the Food and Drug Administration Modernization Act (FDAMA) of 1997. He goes so far as to characterize the (unsuccessful) bills that preceded the 1997 act as flirting “with the evisceration and privatization of FDA gatekeeping power—by authorizing experiments with ‘third party’ review.”

These assertions are puzzling for several reasons. First, it is difficult to comprehend how legislation as tepid as FDAMA could be considered any sort of milestone or inflection point. For the most part, it merely codified longstanding practices and procedures that were already in place; there was virtually nothing in the legislation that regulators didn’t want or couldn’t live with. Second, third-party review had actually been tried successfully several years earlier. In a two-year pilot program (1992–1994) undertaken at the urging of President George H. W. Bush’s Council on Competitiveness, the FDA contracted out reviews of new drug application (NDA) supplements and compared the results of these evaluations to in-house analyses. The contractor was the Mitre Corporation, a nonprofit technical consulting company. In all five of the supplements reviewed by Mitre, the recommendations were completely congruent with the FDA’s own evaluations. Moreover, the time required for the reviews was two to four months, and the cost ranged from $20,000 to $70,000—fast and cheap compared to federal regulators.

That experiment was hardly unprecedented. Except for the final sign-off of marketing approval, the FDA has at one time or another delegated virtually every part of its various review and evaluation functions to outside expert advisors, consultants, or other entities. Far from savaging the powers of the FDA, third-party review was merely intended to make regulation more efficient and less expensive, using an approach that is similar to pharmaceutical regulation in the European Union and that had been experimented with successfully in the United States.

Third, most FDA watchers, myself included, would argue that the agency’s power has increased, not decreased, in recent years. During the past decade, legislation has expanded the FDA’s discretion, permitting regulators to dictate the content of drug labeling (rather than arriving at it through a process of discussion and negotiation), to require post-marketing (phase 4) clinical trials as a condition of approval, and to impose various “risk evaluation and mitigation strategies” that can profoundly limit the ultimate sales of a drug. These developments represent a marked enhancement of the FDA’s directive and gatekeeping powers.

Moreover, the FDA’s leadership has unilaterally and sometimes extralegally introduced what amount to new requirements for the approval of drugs in addition to the statutory ones of safety and efficacy. These new requirements include a demonstration of superiority to existing therapies, which is often a far more difficult (and expensive) standard to meet. Also, the FDA’s budgets have seen stunning, unprecedented increases in recent years. Nevertheless, Carpenter postulates that “the rise of libertarian models and conservative politics in the United States, the accretion of power to the global pharmaceutical industry, and the globalization of economic regulation have all weakened the authority and force of the Administration’s capacities and actions.”

I do agree that the FDA’s reputation has withered, but for reasons different from those cited by Carpenter. He notes that the FDA has been widely criticized for being too lax and overly collaborative with industry (and seems to agree with this assessment), but much of the quantitative evidence suggests that the agency has become increasingly risk-averse, hyperregulatory, and problematic to industry during the past 20 years. Metrics that support this view include increases in the number of clinical trials, patients, and procedures reported in NDAs; lengthening of the time required for the average clinical trial; and, especially, skyrocketing costs to bring a drug to market.

For a work of this magnitude, there are surprising omissions. One is the phenomenon of information cascade, the way in which incorrect ideas gain acceptance by being parroted until eventually we assume they must be true even in the absence of persuasive evidence. Obviously, it is intimately related to reputation, and arguably many of the misconceptions about the FDA stem from the constant drumbeat of dubious accusations from media, congressional, and advocacy-group sources that regulators are insufficiently risk-averse and are too lenient and collaborative toward the drug industry. The information cascade concept was popularized by Carpenter’s Harvard colleague Cass Sunstein, who now heads the regulatory side of President Obama’s Office of Management and Budget.

Another surprise is the incompleteness of Carpenter’s discussion of FDA leaders’ willingness to accede to undue political influence on product-specific regulatory decisions. Justifiably, he excoriates then-Commissioner Lester Crawford for contravening both the data and agency consensus by overruling a decision to approve the day-after contraceptive pill, known as Plan B, for over-the-counter status, but he neglects the equally egregious interference by then-Commissioner David Kessler, who obeyed orders from above about which products should be expedited and which delayed by regulators. For example, the agency approved a dubious female condom after being informed by the secretary of Health and Human Services that it was a “feminist product” and that delay was not acceptable; and at Kessler’s direction, FDA officials went to extraordinary lengths to look for reasons not to approve biotechnology-derived bovine somatotropin, a veterinary drug, because Vice President Al Gore considered it to be politically incorrect. These omissions and Carpenter’s repeated caviling about the actions of Dan Troy, the brilliant FDA general counsel during the administration of George W. Bush, raise the specter of a political agenda.

I found Carpenter to be overly sympathetic in his characterization of FDA epidemiologist David Graham, who appears to harbor an idée fixe about the supposed dangers of many widely prescribed and FDA-approved medicines. FDA managers permit Graham to publicly contradict agency policy (and the consensus of the extragovernmental medical community), presumably because he is regarded by some members of Congress and their staffs as a whistleblower. Talk about compromising the agency’s reputation.

Finally, Carpenter observes that in constructing this magnum opus he has “incorporated methods and insights from many disciplines—history, pharmacology, political science, law, medicine, public health, mathematical finance and economics, sociology, mathematical statistics, and anthropology. To be frank, I have mastered none of the trades, and this book represents a highly imperfect combination of research methods.” He is correct, and in the end he will please few practitioners from any one discipline. Carpenter’s polymathic approach added more weight than insight to his 800-page tome.

Costly water

Obsession may not be the best word to describe Americans’ attitude toward bottled water. Few people are preoccupied with the product; many purchase it without a second thought. But therein lies the problem. As Peter Gleick amply demonstrates, packaged water comes at a surprisingly high price. People in the United States purchase nearly nine billion gallons of bottled water a year, spending billions of dollars on a product that is virtually identical to that which freely flows from their taps. We engage in such senseless behavior, Gleick contends, because we have bought the claims of advertisers and marketers. Large beverage corporations spend heavily to disparage public water supplies and to tout, often misleadingly, their own products.

Many of the campaigns against tap water and in favor of the bottled alternative are both amusing and outrageous. As Gleick outlines, Coca-Cola, owner of the Dasani brand, once developed a “six-step program” to help the Olive Garden Restaurant chain reduce what they call “tap water incidents”: unprofitable episodes of customers ordering free refreshment. The very names of bottled brands can appear comical when juxtaposed with the source of their water: Arctic Wolf Spring water is actually bottled in New Jersey and Arctic Spring in Florida. And although the water sold under the Alaska Premium Glacier brand does indeed come from Alaska, it flows out of Pipe 111241 of the Juneau municipal water system. More humorous still are claims made by marketers of pseudo-scientifically enhanced water. Penta Water, for example, is supposedly “restructured … through molecular redefinition.” Religiously inspired water quackery comes in for well-deserved mockery. The makers of Kabbalah Water, plugged by both Madonna and Britney Spears, allege that “because of its unique crystalline structure and fractal design, [our product] is an excellent information transmitter.”

Although only a few minor companies make claims as outrageous as those from Kabbalah, many firms assert that their products are safer and more wholesome than public tap water. Tap water is easy to malign, Gleick shows, both because it was historically dangerous and because contamination incidents do occur. But municipal waterworks are closely regulated by the Environmental Protection Agency (EPA), which sets rigorous rules for a wide variety of impurities and acts quickly when thresholds are passed. Bottled water, in contrast, is less stringently overseen by the Food and Drug Administration (FDA), which sets lower standards for key contaminants such as coliform bacteria. And although the FDA does restrict some pollutants, such as lead, more stringently than the EPA, such regulations apply only to bottled water marketed across state lines. FDA inspections of major bottling plants, moreover, have revealed some significant health violations. Although it required sleuthing, Gleick was able to uncover nearly 100 incidents in which bottles had to be recalled. It is difficult to resist the author’s conclusion that bottled water, overall, is not safer than tap water.

Even if bottled water has no health advantages over tap water, it might still taste better, as its purveyors claim. Many of the large bottlers begin with municipal water, which they subject to additional filtration and other methods of supposed purification. Such procedures, however, remove the minerals that give a desirable taste. As a result, additives are necessary. Coca-Cola, Gleick reveals, “adds a carefully prepared mix of minerals … back into the water to create a finished product with a standardized taste…. Thus, Dasani from San Leandro is virtually indistinguishable from Dasani from Detroit.” But is the resulting product actually preferred by consumers? Numerous surveys show that most people cannot tell the difference between the various brands or between bottled water and tap water from high-quality municipal systems. One blind taste test actually showed that the most expensive water turned out to be the one people liked least, Gleick writes.

At first glance, spring water is a more honest product than reprocessed tap water. Spring water is typically bottled with little treatment and is thus marketed as a “natural” product. But natural filtration through aquifers does not necessarily remove all pathogenic organisms. One study found Giardia and Cryptosporidium in 20% of U.S. springs. As a result, Gleick argues, there are good reasons to be “especially concerned about the safety of spring waters.” Equally worrisome are the effects of the industry on spring-dependent ecosystems. In the arid southwest, rare oasis environments have been diminished by gargantuan bottling plants. Even in the more humid parts of the country, aquifers have dropped, sometimes desiccating wetlands.

Environmental repercussions

The environmental damage caused by bottled water is by no means limited to groundwater depletion. The manufacturing and distribution of plastic bottles are energy-intensive, consuming the equivalent of between 100 and 160 million barrels of oil in 2007, Gleick says. Bottled water also generates a massive stream of plastic waste. The industry has responded to this criticism by reducing the plastic content of its bottles, by stressing recycling, and by experimenting with biodegradable containers. Such approaches, Gleick argues, may be helpful but are ultimately inadequate. So-called biodegradable bottles often degrade poorly and may end up contaminating the recycling stream. Bottling and trucking water, Gleick says, is simply much more expensive and environmentally degrading than transporting water in pipelines.

As the hidden and not-so-hidden costs of bottled water gradually come to light, a reaction against the industry has gathered strength. Cities such as San Francisco have banned municipal purchases, and a number of prominent restaurants no longer carry the product, serving plain and carbonated tap water instead. Citizens’ groups have sued springwater firms for depleting aquifers and in several instances have shut down existing and proposed waterworks. In 2008, the sales of bottled water in the United States declined for the first time. Although industry spokespeople attributed the drop to the recession, the public awareness campaign spearheaded by writers such as Gleick seems to be having an impact.

The bottled water industry is fighting back with intensified lobbying efforts, advertising campaigns, and lawsuit threats. A number of firms now trumpet their environmental responsibility. Such an approach, however, can amount to little more than “greenwashing,” with minimal actions undertaken to support grandiose claims. Fiji Water, which claims carbon neutrality, comes in for special scorn, because it ships most of its bottles halfway around the world. Still, Gleick is cautiously supportive of several smaller, self-proclaimed “ethical” companies. In particular, he cites Ethos Water, which has pledged to give 50% of its profits to organizations supporting water and sanitation projects in developing countries.

The lure of convenience

Consumers buy bottled water, Gleick writes, for four main reasons: safety, taste, style, and convenience. He debunks the first three of these rationales with ample evidence and wit. But he devotes much less attention to the issue of convenience, which is not so easily dismissed. On this score, the campaign against bottled water may face more intractable obstacles than the author realizes.

In a society as affluent as the United States, an individual bottle of water is trivially cheap for many consumers, regardless of the overall costs to the environment. And if the water in the bottle is no better than what comes out of the tap, at least it will be cold when purchased. In setting out on a trip, whether driving across the countryside or strolling through a city, few people think to pack their own sink water, and fewer still take the trouble to add ice and use an insulated container. In the United States, with its mobile lifestyle and penchant for cold beverages, bottled water is often much more convenient than tap water.

Such thoughtless convenience can be partially addressed through education about hidden costs, which is exactly what Gleick provides in Bottled and Sold. Environmentally conscious consumers, a growing cohort, will often forgo ease in the interest of sustainability. But many others will opt for expediency over responsibility every time.

Educational efforts against groundless consumer behavior also confront intrinsic human irrationality. As behavioral economists have shown, people typically think more highly of goods that they have purchased than they do of identical products that they have acquired for free. Beverage corporations may take advantage of such predictably irrational behavior, but they cannot be blamed for creating it.

Gleick argues that another way to reduce our dependence on bottled water is to invest more extensively in public supplies. High-quality water flowing at low cost through municipal pipelines will dissuade some from purchasing bottles. The issue of convenience, moreover, can be partially addressed by public water fountains. But in many areas, new fountains are no longer being installed and existing ones are not being maintained. Major public facilities, including sports stadiums, have been erected with no water outlets, effectively forcing spectators to purchase bottles. If we are to overcome our dependence on bottled water, Gleick argues, we must restore the public spout.

But as the author recognizes, merely increasing the number of fountains would not be adequate. Many people demand cold water of the highest quality and distrust any dispenser on which another person’s lips may once have rested. Gleick thus advocates a technological fix, endorsing modern “hydration stations” that include filters, coolers, variable stream heights, and more. Such top-of-the-line water fountains can be costly to manufacture and install and require maintenance and power. We are a long way, in other words, from virtually free public tap water. But Gleick tends to downplay issues of cost when discussing projects that he supports. Thus, on one page he castigates the state government of Connecticut for spending $500,000 annually on bottled water, then praises Minneapolis on the next for devoting the same sum to construct a mere 10 public fountains. Environmental auditing would be beneficial here, comparing all the costs of bottled water to those of hydration stations. I suspect that the latter would come out well ahead, but I would like to see the accounting.

A path forward

In the end, Gleick calls for a “soft path for water,” one emphasizing incentives for efficient water use, appropriate regulatory approaches, and expanded public participation in decisionmaking, in addition to expanding and renovating public water delivery systems. Unmentioned is the fact that the hydrological engineering processes and facilities necessary to provide universally high-quality drinking water are not necessarily “soft” in the environmental sense of the term. In many farm regions of the United States, tap water, whether derived from wells or municipal systems, is contaminated with nitrates and other agricultural chemicals. To provide the entire country with public water as wholesome as that of New York City or San Francisco, large dams would have to be built, rivers partially diverted, and vast new pipeline networks constructed. San Francisco’s municipal water, much touted by Gleick, flows from the Hetch Hetchy Reservoir in Yosemite National Park, which is widely regarded as an environmental abomination. When Gleick urges people to adopt a “drink local philosophy,” my doubts mount. Not only would Los Angeles need to shed most of its population, but even Gleick’s home city of Berkeley would not be able to maintain itself. In Berkeley and neighboring East Bay cities, local water supplies proved inadequate as far back as 1923, leading to the damming of the Mokelumne River in the Sierra Nevada and the construction of yet another trans-state pipeline.

If Gleick underplays the costs entailed by public water systems, he also occasionally exaggerates the benefits of abandoning the bottle. Springwater extraction facilities can indeed deplete aquifers, but at the national and global scales the damage that they cause is trivial compared with that of agriculture. Irrigated farming in arid environments destroys vast ecosystems—the Aral Sea is a good example—whereas water bottlers merely threaten local habitats. Gleick dreams of a day when aquatic ecosystems around the world are restored, but for that to happen we must expand our scope well beyond that of the bottled water industry.

The objections raised above do not in any way discredit Gleick’s basic thesis. The evidence that he marshals convinces me that our current level of reliance on bottled water is economically senseless and environmentally destructive and that enhanced investments in public facilities would be beneficial. Despite his occasional hyperbole, Gleick’s overall position is tempered and reasonable. He has no desire to enact any bans, and he does see a place, if a minor one, for bottled water. If he sometimes avoids difficult discussions of inevitable tradeoffs, such omissions are understandable. A more comprehensive and balanced account would not appeal to a large audience; constantly qualifying one’s arguments is a poor strategy for selling books. But to convince skeptics, the more dispassionate approach of environmental economics, which tallies costs and benefits, hidden and overt, on both sides of any issue at hand, has much to recommend it.


Martin W. Lewis is a senior lecturer at Stanford University and the author of Green Delusions: An Environmentalist Critique of Radical Environmentalism. He blogs at Geocurrents.info.

Archives – Fall 2010

CHERYL GOLDSLEGER, Perspective, Great Hall Construction, National Academy of Sciences, Graphite on paper, 2009.

Perspective

Cheryl Goldsleger’s work stems from an interest in architectural space and what it reveals about how a society is organized. Her work reflects a fascination with the way different structures are built and the diverse needs that architecture meets. This study reflects a body of work currently in progress based on the NAS building.

Strengthening Global Nuclear Governance

Motivated in large part by climate change and the need for carbon-free energy sources, governments and companies around the world are pushing to revive nuclear energy. Developed and developing countries alike have expressed interest. For developing countries, however, building a nuclear power plant can be particularly problematic, both for the countries and the world overall. The lack of regulatory and operating experience of developing countries considering nuclear power could pose major challenges to the global rules now in place to ensure the safe, secure, and peaceful use of nuclear energy.

The challenges facing the global governance regime can be seen in the case of a promising candidate for nuclear energy, the United Arab Emirates (UAE), and a far more worrisome one, Nigeria. Although it has the world’s sixth largest proven oil reserves and fifth largest proven natural gas reserves, the UAE has been making a strong case for nuclear power based on its rapid economic growth and a desire to retain its oil and gas for export rather than domestic use. The federation has moved aggressively to court foreign reactor vendors, sign nuclear cooperation agreements with other countries, and hire foreigners, lured by extraordinary salaries, to run its regulatory authority. It recently ordered its first nuclear power plants from a South Korean consortium. The UAE has sought to be a nonproliferation model by signing an Additional Protocol to its International Atomic Energy Agency (IAEA) safeguards agreement, as well as renouncing any ambition to enrich uranium or reprocess plutonium, and concluding a so-called 123 Agreement with the United States that could provide additional legal assurances.

At the other end of the spectrum, Nigeria, which has repeatedly declared its desire to acquire nuclear power, is the epitome of a bad candidate. Although oil-rich like the UAE, it has a long history of mismanaging large projects, including its oil industry. Its national electricity grid has one of the worst transmission and distribution loss rates in the world, with only a fraction of its generating units operating at a given time. Violence often breaks out in the Niger Delta because of various economic, social, ethnic, and religious tensions, seriously disrupting the country’s predominantly foreign-owned oil industry. Among the developing countries pursuing nuclear power, Nigeria ranks second worst on the World Bank’s indicators of political violence, government effectiveness, regulatory quality, and control of corruption. The country is not a party to key nuclear governance accords. Fortunately, to date its nuclear energy plans have gone nowhere.

Global governance needs to be prepared to address the challenges of the array of developing countries seeking nuclear energy, not just those most likely to succeed. The institutions for doing so are, for the most part, already in place, so the central question is whether they are able to adapt to the needs of developing countries. They are struggling thus far and have much work to do.

Nuclear hopes and realities

The Survey of Emerging Nuclear Energy States (SENES) compiled by the Nuclear Energy Futures Project—a joint undertaking of the Centre for International Governance Innovation (CIGI) in Waterloo, Canada, and the Canadian Centre for Treaty Compliance (CCTC) at Carleton University—tracks the progress of all aspiring nuclear energy countries from an initial governmental declaration of interest to the eventual connection of a reactor to the electricity grid. The project has identified the following as having an official interest in nuclear power: Central Asia (Kazakhstan and Mongolia); Africa (Algeria, Ghana, Kenya, Morocco, Namibia, Nigeria, Senegal, and Tunisia); Europe (Albania and Belarus); the Middle East (Bahrain, Egypt, Iran, Jordan, Kuwait, Libya, Oman, Qatar, Saudi Arabia, Syria, and the UAE); South America (Venezuela); South Asia (Bangladesh); and Southeast Asia (Indonesia, Malaysia, Philippines, Thailand, and Vietnam).

To track states’ progress, SENES uses the IAEA’s Milestones in the Development of a National Infrastructure for Nuclear Power, which identifies three broad milestones that must be accomplished before a state is considered ready for nuclear power: (1) ready to make a knowledgeable commitment to a nuclear program, (2) ready to invite bids for the first nuclear power plant, and (3) ready to commission and operate the first nuclear power plant. The vast majority of the developing states identified in SENES could not now legitimately claim to have reached or gone beyond the first milestone. Only Iran is close to starting up a reactor. No others have even begun construction. The Philippines has a partially completed reactor in Bataan, on which it may resume work. Apart from the UAE, only Egypt, which has aspired to nuclear power for more than 30 years, is known to have invited bids for a plant, which puts it at milestone 2. Jordan and Vietnam are considering several potential vendors.

All states pursuing nuclear power, whether developed or developing, will face problems of cost, industrial bottlenecks, personnel constraints, and nuclear waste, but developing states face unique challenges. Because they are poorer, they often lack the finances, institutional capacity, and physical infrastructure to support a large-scale, multibillion-dollar nuclear power plant project.

For relatively poor countries, paying for a nuclear power plant is a massive hurdle, even if the costs are spread out over several years. There is no precise way to measure whether a country can afford a nuclear power plant, especially since decisions may be driven by politics, national pride, energy security, industrialization strategy, or in the unlikely worst case, nuclear weapons hedging, rather than sound financial analysis or a rational national energy strategy. Although stretching a national budget to buy a nuclear power plant may in theory be possible, this always implies opportunity costs, especially in the vital energy sector. Development banks do not lend for nuclear energy projects, and private investors are likely to be wary. The only developing countries that may be able to ignore such constraints are those with oil-based wealth, such as Nigeria, Saudi Arabia, Venezuela, and the small Gulf States. But the recent drop in the price of oil and international financial turmoil will probably make even these states wary of committing to expensive projects such as a nuclear power reactor.

A second major barrier for aspiring nuclear states in the developing world is having the physical infrastructure to support a nuclear power plant or plants. This includes an adequate electrical grid (with a capacity at least 10 times that of the planned reactor, or roughly 10,000 megawatts for a standard 1,000-megawatt unit), roads, a transportation system, and a safe and secure site. The IAEA’s milestones document includes a comprehensive list of hundreds of infrastructure targets, including physical infrastructure, for aspiring nuclear states to meet before they commission a nuclear plant. These include supporting power generators, a large water supply, and waste management facilities. Meeting all of the targets will be a major challenge for most developing states, requiring them to invest billions of dollars in infrastructure upgrades over several years.

Finally, there is the challenge of governance. A country’s ability to run a nuclear power program safely and securely depends on its capacity to successfully and sustainably plan, build (or at least oversee construction of), and manage a large and complex facility and its associated activities. For a nuclear reactor, such a commitment stretches over decades, at least 60 years from initial planning to decommissioning. For high-level, long-lasting nuclear waste, some of which can remain radioactive for millennia, the commitment is essentially forever. Although the existing nuclear energy states have learned through experience and trial and error, this is not possible or permissible in the current era. Norms, expectations, and standards have evolved. The IAEA estimates that it can take at least 10 years for a state with no nuclear experience to prepare itself for hosting its first nuclear power plant. Many aspiring nuclear energy states have struggled with managing large investment or infrastructure projects for a wide range of reasons, including political violence, mismanagement, and corruption. It is telling that all of the aspiring developing states except Oman, Qatar, and the UAE score 5 or below on the 10-point scale of Transparency International’s Corruption Perception Index.

It seems clear at this early stage of the so-called nuclear revival that for the vast majority of developing states, nuclear energy will remain as elusive as ever. They will simply be priced out of the nuclear energy market, because of the high capital costs of nuclear power plants and the required investment in infrastructure and institutional capacity. Most will need to make unprecedented progress in their economic development, infrastructure, and governance before nuclear power is a feasible option. Because of the low probability of an influx of developing countries into the nuclear business, the risk to the current global governance system is less than what it otherwise would have been. Despite this, global governance needs to be prepared for the handful of developing states that might succeed in acquiring a nuclear energy sector; those that may make the attempt, however ill-advised; and those that seriously consider the option and need assistance in doing so.

Nuclear safety

It is impossible to quantify the impact of a nuclear revival in the developing world on global nuclear safety because it is unclear how large that revival is likely to be. However, it is possible to identify some qualitative implications for safety. Some of these arise from the type of country that is acquiring a nuclear reactor for the first time. Others arise from the new reactor designs that are being purveyed by companies to the newcomers and the terms and conditions under which they are sold. The combination of relatively untested and more complex types of nuclear reactors with developing countries that lack operational and regulatory experience is worrisome for the global nuclear safety regime.

From a global governance perspective, the most obvious source of specific concern is the patchy adherence by developing states to the key safety-related international agreements. Astonishingly, considering their announced enthusiasm for nuclear energy, 4 of the 30 developing countries supposedly interested in nuclear energy—Bahrain, Kenya, Namibia, and Venezuela—are not party to any of the relevant nuclear safety conventions.

The most important of them, the 1994 Convention on Nuclear Safety (CNS) and the 1997 Joint Convention on the Safety of Spent Fuel Management and on the Safety of Radioactive Waste Management, commit parties to the highest standards of nuclear safety for civilian nuclear power plants, spent fuel, and nuclear waste. This implies compliance with an array of IAEA safety standards and guidelines. The treaties also draw parties into their increasingly effective peer review systems. Thirteen of the 30 developing aspirant states have neither signed nor ratified the CNS, and only 6 are party to the Joint Convention. These states are thus neither integrated nor socialized into the global nuclear safety regime. They are ineligible to attend and participate in the treaty review conferences that conduct peer reviews. Even some nuclear aspirants that are party to the agreements, including Bangladesh, Kuwait, and Nigeria, fail to attend the review meetings, despite a legal obligation to do so.

If such prominent developing states cannot even commit resources to attending meetings, they’re likely to experience difficulty in fulfilling the more significant legally binding obligations of the conventions. These include establishing the necessary legislative, regulatory, and administrative steps to implement their obligations, setting up a national regulatory body, conducting a comprehensive and systematic safety assessment before a nuclear power plant or waste repository is allowed to operate (and repeating this throughout the lifetime of the facilities), and ensuring that relevant levels of maintenance, inspection, and testing are conducted by facility operators.

Currently, all of the developing countries seeking nuclear energy lack the requisite national laws and regulations, agencies and practices, trained and experienced personnel, and appropriate safety culture to safely host a nuclear plant. None has the capacity to manage nuclear waste, except that currently resulting from medical or research applications. Some with relatively advanced nuclear energy plans, such as Indonesia, the UAE, and Vietnam, are beginning to put in place the necessary prerequisites. Others, such as Algeria and Egypt, have been operating research reactors and using radioactive sources for peaceful purposes for some time, so they have some institutional elements in place and some experience to draw on. Few developing states will be able to afford the UAE approach of buying everything required from abroad. Even an advanced nuclear state such as the United Kingdom is having difficulty finding enough qualified regulatory staff to prepare for its national nuclear revival.

No country is able to buy two critical components: a ready-made safety culture and a robust independent regulator. Since the accidents at Three Mile Island in 1979 and Chernobyl in 1986, both caused and exacerbated by human error, there has been a realization that the human factor is the most difficult element to control for in nuclear safety. Hence there is an increasing emphasis on developing and sustaining a nuclear safety culture, in which safety is paramount rather than incidental. With regard to an independent nuclear regulator, the sacking of the Canadian nuclear regulator by the Canadian government in 2008, justified partly on political grounds, indicates that even mature nuclear energy states have difficulty establishing a truly independent regulator. Among the aspirant developing states, Algeria, Bangladesh, Libya, Nigeria, Senegal, Syria, Venezuela, and Vietnam all rate especially poorly on the World Bank’s regulatory quality index.

Nuclear security

The global governance regime for nuclear security is much less mature than that for nuclear safety. States are more secretive, often understandably, about nuclear security than about nuclear safety. International cooperation and transparency are therefore constrained. Heightened fears of nuclear terrorism have led to improvements in the global regime since 9/11. Yet as the April 2010 Nuclear Security Summit convened by President Obama indicated, concerns remain about unsecured nuclear material and facilities worldwide, including those connected with current and future peaceful applications of nuclear energy.

The acquisition of nuclear reactors by states with a poor security record and nonexistent security culture would be a significant challenge to the nascent global nuclear security regime. Nuclear power reactors and associated facilities, even in the construction phase, may be high-value targets for secessionist movements, nonstate actors, or potentially even other states. Inexperienced countries may be more vulnerable to unauthorized access to facilities or seizure of nuclear materials. Many developing states, despite having relatively large armed forces and in some cases capable and often oppressive security apparatuses, may not have the type of sophisticated rapid response capability necessary for protecting nuclear infrastructure. They are likely to be left out of the intelligence-sharing arrangements of the Western states and not trusted with actionable information. A newcomer will take years—the IAEA estimates at least five—to establish legislative and regulatory frameworks and security infrastructure, systems, and practices. As in the case of nuclear safety, it may take such states much longer to establish an acceptable security culture.

The international conventions in this field are far from universal in adherence and application and nowhere near as effective as in the nuclear safety field. The principal treaty, the 1980 Convention on the Physical Protection of Nuclear Material (CPPNM), currently applies only to international shipments of nuclear material. The 2005 Amendment to the Convention, which would extend the regime to the domestic realm of each party, is not yet in force. The 2005 International Convention for the Suppression of Acts of Nuclear Terrorism, which entered into force in 2007, essentially focuses on criminalizing nuclear terrorism. Although legally binding with respect to their broad provisions, both agreements leave detailed implementation up to each party. Neither provides for international verification of compliance, penalties for noncompliance, or peer review.

Even so, a number of aspiring developing states are not party to the conventions. Bahrain, Iran, Venezuela, and Vietnam are party to none, and other key contenders for nuclear energy such as Egypt, Malaysia, Syria, and Thailand have not signed either the CPPNM or its Amendment. Beyond simply becoming parties, the extent of compliance with these agreements is largely unknown publicly because of the lack of transparency and treaty-mandated peer review.

An additional binding instrument is United Nations Security Council resolution 1540 of April 2004, with subsequent reiterations. This obliges all states to put in place implementation measures to prevent nonstate actors such as terrorists from acquiring any type of so-called weapons of mass destruction, including nuclear or radiological weapons, and to periodically report progress to a Security Council committee. Among the measures expected to be put in place are those to protect the civilian nuclear industry. But compliance by developing countries is mostly episodic and incomplete. None of this engenders confidence in the ability of aspiring nuclear energy states to manage the security of nuclear facilities that they may acquire.

Nuclear nonproliferation

From the outset of the nuclear age, it was feared that states would seek to acquire civilian nuclear energy as a cover for a nuclear weapons program. The result is the evolution of an international nonproliferation regime based on the 1968 Nuclear Non-Proliferation Treaty (NPT) and its safeguards system. This has indeed helped prevent the spread of nuclear weapons to scores of states but has not entirely prevented proliferation. Safeguards have been considerably strengthened since the case of Iraq, but more needs to be done. Moreover, the regime still suffers from its original central contradiction: Some states have accorded themselves the right to retain nuclear weapons apparently in perpetuity, whereas all others are under the legally binding obligation never to acquire them.

The current renewed enthusiasm for nuclear electricity generation is raising fears of nuclear hedging, in which states seek to establish the peaceful nuclear fuel cycle so they can move quickly to nuclear weapons acquisition when required, either clandestinely or by leaving the NPT. The international regime is currently being challenged in this manner by Iran, which is engaging in the type of ambiguous hedging behavior that some say an unbridled nuclear energy revival could unleash.

Yet it is easy to exaggerate the threat. The handful of developing countries that can overcome the hurdles to acquiring nuclear power will, in all probability, acquire only one or two reactors in the next two decades. Although some already have varying degrees of nuclear expertise and research capacity, all will be reactor importers and thus reliant on outside assistance. With the sole exception of Iran, none is likely to acquire an advanced nuclear program with a complete nuclear fuel cycle. Most of the states that acquire nuclear power will not be able to fabricate their own fuel, much less enrich uranium, a technologically challenging and expensive process. None is likely to have a legitimate interest in reprocessing spent fuel to separate plutonium, whether for managing nuclear waste or for fueling fast reactors.

Because all of the aspiring developing states, along with all other nonnuclear weapon states, are party to the NPT and have comprehensive safeguards agreements, they will be required to apply nuclear safeguards to all of their power reactors and associated facilities. In addition, there will probably be strong pressure on such states to conclude an Additional Protocol to their comprehensive safeguards agreement, making illicit diversion or a hidden clandestine nuclear weapons program more difficult than in the past. Most have, in fact, either signed one or already have one in force. However, key aspiring states—Egypt, Oman, Qatar, Saudi Arabia, Syria, and Venezuela—have not yet signed one, which is of some concern.

The most worrying development would be if the new entrants sought the full nuclear fuel cycle, including uranium enrichment and plutonium reprocessing, which can be used to make reactor fuel or nuclear weapons. Jordan is reportedly resisting the UAE model of forgoing such options, because it may wish to enrich its own domestic uranium resources at some stage rather than relying on others for enrichment services. Turkey has also raised this possibility. One developing country with nuclear power already, Brazil, has its own enrichment plant and is an NPT party but refuses to sign an Additional Protocol. Joint enrichment plans by Argentina and Brazil are being aired. South Korea is pressing the United States to support its plans to reprocess spent fuel using an allegedly more proliferation-resistant technology called pyroprocessing.

The quest for energy security is helping legitimize demands for the full fuel cycle. New enrichment technologies such as laser separation may attenuate the current technological and cost barriers. The resistance of key developing states to IAEA and Russian attempts to establish nuclear fuel banks that would provide assurances of supply of nuclear reactor fuel has added to concerns that the future of nuclear energy faces a major political impasse. This is partly driven by anti-Western political gamesmanship by Cuba, Iran, Pakistan, and Syria, but also by genuine developing-country fears that they are being deprived of valuable technological options.

Although the NPT guarantees its parties the “inalienable right to the peaceful uses of nuclear energy,” this is conditional on the acceptance of nuclear safeguards and does not oblige any state to share any particular technology with any other. The United States and other countries, including key members of the G8 and the Nuclear Suppliers Group, are seeking to prevent additional states from acquiring enrichment or reprocessing capabilities, sometimes to the chagrin of even their allies such as Canada. One proposal for resolving this issue over the long term is for the existing possessors of such technology to give up their national capabilities through multilateralization or internationalization of these “sensitive” aspects of the nuclear fuel cycle. Numerous proposals are on the table for pursuing this vision, but its realization would involve enormous compromises on all sides. The issue ultimately reflects the bitter division between the nuclear haves and have-nots that is embedded in the NPT, a resolution of which can come only with the achievement of nuclear disarmament.

Strengthening nuclear governance

Global governance must be strengthened to cope with the expected increase in the number of nuclear facilities operated by the existing nuclear energy states. But much more needs to be done about aspiring developing countries. And this has to be accomplished without giving the impression that the goal is to deprive such states of their rights to the peaceful uses of nuclear energy, while at the same time impressing on them that rights come with the fulfillment of obligations that are in the interests of all.

A nuclear energy revival of whatever size and shape presents risks and opportunities for the IAEA. The opportunities include the potential to shape the revival in a way that did not occur in the early days of nuclear energy or in the first round of significant nuclear energy expansion in the 1970s and 1980s. The most urgent task is for the IAEA to bring all states into all of the nuclear governance regimes for safety, security, and nonproliferation as soon as possible, to inform them of their rights and responsibilities, and to assist them with implementation and compliance. The IAEA is also well positioned to provide expanded advisory services to help new entrants plan their programs from the ground up so as to ensure that they have in place the best possible regulatory, safety, and security measures, are fully compliant with nuclear safeguards, and have the necessary infrastructure and personnel. The IAEA is able, for instance, to assist states in conducting feasibility studies, which it has done for the member states of the Gulf Cooperation Council and Jordan. IAEA documents such as Considerations to Launch a Nuclear Power Programme, Milestones in the Development of a National Infrastructure for Nuclear Power, and Evaluation of the Status of National Nuclear Infrastructure are thorough and informative in setting out the requirements for a successful program. The ideal outcome would be for the IAEA to quietly use the new interest in the peaceful uses of nuclear energy as leverage to convince states to put in place all of the prerequisites for a safe, secure, and proliferation-resistant enterprise.

There is, however, a danger that the IAEA will be swamped by such demands. Yury Sokolov, the IAEA’s Deputy Director General for Nuclear Energy, estimated in July 2009 that during the coming two years the agency would be expected to assist 38 national and 6 regional nuclear programs, a threefold increase over the previous reporting period. If the agency is to continue functioning effectively, its member states, essentially the Western countries, will need to increase its budget to meet the ever-increasing demands placed on it, as well as ensure that it is equipped with modernized facilities, up-to-date technology, and expert human resources.

But the global governance system also needs to be able to discourage states when nuclear energy appears not to be an appropriate choice. Although the IAEA’s detailed briefings and documentation may deter some from proceeding, the agency is neither mandated nor competent to provide advice on more appropriate alternative energy policies. In these cases, the International Energy Agency in Paris and the new International Renewable Energy Agency established in Bonn in 2009, along with countries with advanced national energy plans, are better placed to assist. Despite having a mandate to promote only nuclear energy, the IAEA should be able to develop partnerships with others to offer comprehensive energy policy advice. The threat of climate change and the need to urgently reduce carbon emissions may ultimately steer the international community into collaborating better on rational, comprehensive national energy plans.

Nuclear vendor states and companies are also in a position to strengthen nuclear global governance, not least to protect the long-term interests of developing countries. Responsibility for ensuring the safety and security of nuclear power plants lies not just with the customer states but with vendor states and their companies. Seller and recipient states usually sign bilateral nuclear cooperation agreements to provide a political framework for reactor sales within which their companies must operate, notably by adhering to their requirements relating to safety, security, and nonproliferation. The 2009 US-UAE 123 Agreement is a model in this respect. Some nuclear regulators in vendor countries are beginning to recognize the need to balance commercial interests with broader considerations. French regulator Andre-Claude Lacoste has reportedly suggested to President Nicolas Sarkozy that he be “a little bit more pragmatic” about signing nuclear cooperation agreements with countries now devoid of nuclear safety infrastructure. France has in fact established a unit within government to assess the institutional readiness of potential French nuclear reactor customers and advise them on how they might be assisted to prepare. It is not clear whether other vendors such as Russia and South Korea are making similar efforts.

In addition to ensuring that their customers are well prepared, vendor companies must also ensure that their product can be operated as safely and securely as possible. Most of the new entrants will probably purchase the latest nuclear technology, so-called Generation III or Generation III+, especially because it is advertised as being safer, more efficient, and more likely to achieve economies of scale. Luckily, the new designs will probably be deployed first in experienced states that have rigorous licensing procedures. Countries with companies that sell reactors need to engage in continuing efforts to harmonize safety requirements and the licensing and other regulatory requirements for new reactor types. The Multinational Design Evaluation Program, run in cooperation with the Nuclear Energy Agency and the IAEA, should be strongly pursued. In addition, vendor states and companies should assist the IAEA in revising its safety standards to take account of the new generation of reactors, because its current standards were written with existing light-water reactors in mind.

Vendor companies have compelling reasons to help strengthen global governance, because a major accident, a nuclear 9/11, or yet another state that acquires nuclear weapons under the guise of a peaceful program would probably sound the death knell of the predicted nuclear revival. The April 2010 CIGI/CCTC report The Future of Nuclear Energy to 2030 advocates the establishment of an international forum to bring together all states and companies, including vendors and utilities, involved in international nuclear reactor sales in order to harmonize criteria for such sales. Such a forum could consider an industry code of conduct, which could take into account the nonproliferation record of potential purchasers, along with their safety and security records and intentions, and the security context in the region where they are located. Industry bodies such as the World Association of Nuclear Operators should seek membership by developing-country operators even before nuclear power plants are built, so that they can begin to absorb the lessons learned from the experience of others. The new World Institute for Nuclear Security is another avenue for acclimating newcomers into the norms and requirements of nuclear security.

Developing countries will need to be convinced that strengthening nuclear global governance is not a plot by the developed world to deprive them of the benefits of nuclear energy, but rather an important way of ensuring that nuclear energy is used in a safe, secure, and peaceful manner, to the benefit of everyone. The fact that some developing countries, notably China, India, and South Korea, are entering the reactor sales business will help because such states and their industries will be eager to avoid a disaster arising from their product. But in the longer term, the sting will be taken out of nuclear energy politics only by the resolution of the perceived inequality resulting from the most advanced nuclear energy states also being the ones in possession of nuclear weapons.

Nuclear Waste Disposal: Showdown at Yucca Mountain

If the nation is to seriously confront a growing inventory of highly radioactive waste, a key step is to determine the merits of its geologic repository project at Yucca Mountain in Nevada. A board of the U.S. Nuclear Regulatory Commission (NRC) has for nearly two years been conducting an open and transparent licensing proceeding to accomplish exactly that. Moreover, in its forceful ruling of June 29, 2010, the board rejected as contrary to law a motion by Secretary of Energy Steven Chu to withdraw the licensing application and shut the proceeding down. Yet the administration’s attempt to abandon Yucca Mountain continues and in our view poses a significant risk of a major setback for public acceptance of nuclear energy.

The licensing application was filed by the Bush administration under the Nuclear Waste Policy Act (NWPA) of 1982, and the proceeding itself began in October 2008. The NRC staff has almost completed its safety evaluation of repository performance for many tens of thousands of years. With this report in hand, the licensing board (acting for the commission) could begin hearing and adjudicating scores of critical contentions by the state of Nevada and other opposing parties. If the case for licensing is convincing, the granting of a construction license could come in 2012. But the licensing board is a creature of the NRC, and if the commission should order the proceeding terminated in keeping with Secretary Chu’s motion, the board must comply.

The attempt by the current administration to withdraw the licensing application and abandon Yucca Mountain follows a commitment made by Barack Obama in early 2008 during the competitive scramble for Nevada delegates to the Democratic National Convention. Hillary Clinton, then the hands-down favorite for the nomination, had long sided with Nevada in its opposition to a repository at Yucca Mountain. Not to be outdone, Senator Obama declared his own categorical opposition to the project. Earlier this year, when President Obama, acting through Secretary Chu, moved to withdraw the licensing application, no scientific justification or showing of alternatives was offered. The project was simply dismissed as “not a workable option.”

To cover Obama’s political debt to Nevada, repository licensing would be terminated without congressional review and approval despite the fact that this vital project was sanctioned by Congress in elaborate detail and handsomely funded by a fee imposed on tens of millions of consumers of electricity produced by nuclear reactors. The licensing proceeding marks the culmination of a 25-year site investigation that has cost over $7 billion for the Nevada project itself and over $10 billion for the larger national screening of repository sites from which the Yucca Mountain site was chosen.

What’s at stake

To summarily kill the project would cap with still another failure a half-century of frustrated endeavors to site, license, and construct a geologic repository. The roughly 64,000 metric tons of spent reactor fuel that await permanent geologic disposal are now in temporary storage at 120 operating and shut-down commercial nuclear power reactors in 36 states. In addition, there are the thousands of containers of highly radioactive waste arising from the cleanup of nuclear weapons production sites in Washington, South Carolina, and Idaho.

Now pending before the U.S. Circuit Court of Appeals for the District of Columbia are lawsuits brought by Washington, South Carolina, the National Association of Regulatory Utility Commissioners, and several other plaintiffs to stop the licensing withdrawal. Most tellingly, the plaintiffs allege violations of the NWPA of 1982, with its detailed prescriptions for repository site selection, approval, and construction licensing. But also in play is the Administrative Procedure Act, under which agency decisions can be voided as “arbitrary and capricious” and an abuse of discretion.

In its refusal to accede to the Department of Energy’s (DOE’s) motion to withdraw the licensing, the licensing board questioned why the Congress, in enacting the NWPA, would have set out an elaborate sequence of steps and procedures for the selection and approval of a repository site if in the end the Secretary of Energy could undo everything by withdrawing the licensing application. “Unless Congress directs otherwise, DOE may not single-handedly derail the legislatively mandated decision-making process,” the board said.

The Court of Appeals initially called for arguments in the pending litigation to begin this September but has now decided to first await an outcome at the NRC.

Coupled with the attempted withdrawal of the licensing application is a self-evident violation of the Federal Advisory Committee Act of 1972, which is intended to keep advisory committees from being “inappropriately influenced by the appointing authority or any special interest.” According to its charter, the Blue Ribbon Commission on America’s Nuclear Future (BRC), which Secretary Chu unveiled early this year, is to conduct a “comprehensive review of policies for managing the back end of the nuclear fuel cycle, including all alternatives for the storage, processing, and disposal of civilian and defense used nuclear fuel [and] high-level waste …” Left unstated, to say the least, was the fact that the commission was created in substantial part to show that Yucca Mountain was not being abandoned without identifying a full suite of waste management options—but with no intention to have the repository project serve as a baseline for this review.

In March 2009, Secretary Chu and Nevada’s Senator Harry Reid, the Senate’s Democratic Majority Leader and a relentless foe of Yucca Mountain, struck a deal wherein Reid would drop his proposed legislation for a blue ribbon commission that Congress would appoint in favor of a commission that the Secretary of Energy would choose. In a press conference announcing the formation of the BRC on January 29, 2010, and later at their first formal meeting, commission members were told by Secretary Chu and White House aide Carol Browner that Yucca Mountain is past history and is not among the waste management options to be considered.

A blue ribbon agenda

The BRC’s eminent co-chair, Lee Hamilton, the former Indiana congressman who served as vice chairman of the 9/11 commission, has made the general point that his study group’s “recommendations will be ours and ours alone.” Indeed, whatever the motivations of those who created it, the BRC is an independent advisory body chartered to provide a comprehensive review of waste management alternatives, and it cannot reasonably and honorably exclude Yucca Mountain from that review. The intellectual gyrations at play with respect to Yucca Mountain may be especially disturbing to those commission members well versed in nuclear energy issues, such as Richard Meserve (a former chair of the NRC), Per Peterson (chair of nuclear engineering at the University of California, Berkeley), and Phil Sharp (head of Resources for the Future and formerly a congressman from Indiana).

In turning its back on Yucca Mountain, the commission would put itself at high risk of failing to produce a report of significant policy impact and of coming across as little more than a fig leaf of respectability for the president’s decision to abandon the repository. We don’t think it will do that. This body could in fact prove itself enormously useful, not least by an insistence on recognizing and protecting the integrity of the NRC as an independent regulatory agency.

The commission could also emphasize that solid public acceptance of nuclear energy, together with the continued storage of large amounts of spent fuel in temporary surface facilities, may well turn on a credible promise of a geologic repository becoming available within the next few decades. This we see as a fundamental political reality that is accorded too little weight by the utility industry, the Secretary of Energy, and the NRC itself.

The utilities that are generating nuclear energy certainly want a repository, but they do not want their lack of one to stand in the way of public support and federal subsidies for a nuclear expansion. So from this contorted position they argue the safety and acceptability of surface storage of spent fuel for decades into the future while quite properly pressing the government to honor its long-past-due obligation to take custody of most of that fuel.

But the politically critical nexus between reactors and spent fuel disposal has been evident since 1976, when Californians approved a referendum that declared that no more nuclear plants could be built in the state until a means for permanent disposal of spent reactor fuel and high-level waste was achieved.

Waste confidence

The NRC’s successive “waste confidence” rule-makings during the past 25 years have been a milder response to the same issue. A lawsuit begun by the Natural Resources Defense Council in 1977 gave rise to the first such NRC rule-making in 1984. In that ruling, “reasonable assurance” was found on three critical points: that at least one mined geologic repository would be available by the years 2007–2009; that spent fuel from any reactor could go to geologic disposal within 30 years of the expiration of the reactor’s operating license; and that during the interim, the spent fuel could be safely kept in surface storage facilities either at the reactor site or elsewhere.

These confidence findings were renewed in 1990, then again in 1999, but with the difference that the latter finding envisioned a geologic repository becoming available “within the first quarter of the twenty-first century.” In September 2009, a new confidence proceeding was initiated wherein the NRC expressed reasonable assurance of having a repository within 50 to 60 years beyond the licensed life of existing reactors, which for some reactors may extend to the year 2060.

In plain English, what this meant was that the commission would be comfortable not having a repository until sometime well beyond the year 2100, when our great-great-grandchildren may be left to worry about the disposal of nuclear waste arising from the generation of nuclear electricity from which we benefit today. The NRC, with two vacancies at the time, had but three members to consider this confidence finding, and only one was willing to adopt it without receiving public comment on policy changes affecting Yucca Mountain. That one was the commission’s new chair, Gregory B. Jaczko, formerly a senior aide and close associate of Senator Reid. President Bush appointed Jaczko to the commission in 2005 and reappointed him in 2008, and last year President Obama named him chairman.

Since then, the NRC has undergone major changes in membership, and whether there is among the five commissioners a legally qualified quorum of three to decide pending Yucca Mountain issues is being challenged. Of the two members who opposed issuance of a confidence finding last year, Commissioner Kristine L. Svinicki continues to serve but her former colleague Dale E. Klein has completed his term and departed.

Meanwhile, three new members—George E. Apostolakis, William D. Magwood IV, and William C. Ostendorff—have come aboard. At their Senate confirmation hearing in February, Senator Barbara Boxer of California asked each of the three this question on behalf of Senator Reid: “If confirmed, would you second guess the DOE decision to withdraw license application for Yucca Mountain from NRC review?” All three answered no. In the pending litigation, Washington State and South Carolina, plus a few other parties, cite this exchange as compelling grounds why, by law, the three new commissioners should recuse themselves from any decision on the Yucca Mountain licensing issue.

Apostolakis, a professor of nuclear science and engineering at the Massachusetts Institute of Technology (MIT) and a member of the National Academy of Engineering, has in fact since recused himself. But his stated reason for doing so was not his response to Senator Boxer but the fact that he chaired the Sandia National Laboratory panel that reviewed the Yucca Mountain performance assessment and found it adequate to support submittal of a license application.

Commissioners Magwood and Ostendorff, on the other hand, have now refused to disqualify themselves, contending that Boxer’s question was vaguely put and that they were at the time unaware that a White House decision to withdraw the licensing application would be coming up for NRC review. But the DOE had already filed a motion to stay the licensing board proceeding and announced that a motion to withdraw the licensing application would soon follow. Counsel for Washington et al., citing Supreme Court precedents, argue that whether a judge or regulatory official recuses himself should turn not on “the reality of bias or prejudice but its appearance” and on whether a “reasonable man, [knowing] all the circumstances, would harbor doubts about the judge’s impartiality.”

Of course, in principle there’s nothing to keep Magwood and Ostendorff from deciding not to join their chairman, Gregory Jaczko, in overriding the licensing board. This would deny Jaczko a majority on the issue and leave in force the board’s refusal to stop the licensing. But however that may be resolved by the commissioners, the matter of the new waste confidence finding is also pending. All five commissioners, including Magwood and Ostendorff, have issued position papers in which, despite differences in detail, there is broad agreement as to strategy. They have studiously avoided recognition of the elephant in the room, Yucca Mountain. The project’s fate is either ignored or treated as by no means impeding a confidence finding.

The commissioners are counting on continued surface storage for up to 120 years or even much longer, and on having either a mined geologic repository or some other means of final disposal available “when necessary.” The House report that accompanied the Nuclear Waste Policy Act almost 28 years ago noted that “an opiate of confidence” had led to a long trail of paper analyses and plans that had come to nothing. The record of frustration and failure that preceded that 1982 Act may well be extended right up to the present if the commissioners rubber-stamp the administration’s withdrawal plans for Yucca Mountain or ignore the implications for waste confidence of the project’s being abandoned at the very point of construction licensing.

Whatever happens at the NRC, the BRC must weigh in with its own judgments. A central fact to be recognized is that geologic storage or disposal of highly radioactive waste will not begin within this generation without a renewed commitment to Yucca Mountain. Apart from the continued surface storage of spent fuel, other waste management options that the commission is considering—spent fuel reprocessing, “recycling,” and transmutation of dangerously radiotoxic species to more benign forms—have little to offer for the next half century or longer.

This is true for a mix of technical and financial reasons explained at length in studies done by experts at Harvard, MIT, and elsewhere. A primary reference is the National Research Council’s Separations Technology and Transmutation Systems report of 1996. For the foreseeable future, waste management systems resting on such technologies would come at prohibitive cost and could not in any case eliminate all of the dangerously radioactive and long-lived wastes of concern. For final disposal of such waste, geologic containment is the only option, and Yucca Mountain is the one place where this might happen in the next few decades.

Redefining Yucca Mountain

The commission has an opportunity to broadly redefine the Yucca Mountain project to suggest how advantage might be taken of the repository’s early potentialities and how uncertainties about its long-term performance might be reduced. Bear in mind that operation of the repository would come in two phases. There is, first, a pre-closure phase of up to several hundred years during which spent fuel and high-level waste would be emplaced retrievably. This is followed by a post-closure phase that begins when the repository is sealed.

Built in volcanic rock high above the water table and accessed by gently inclined ramps from the ridge slopes, a Yucca Mountain repository would be ideally situated to serve for monitored geologic storage of spent fuel, which ultimately could be retrieved if, say, fuel recycling should become economically attractive. Regrettably, in 1987, when the investigation of repository sites was narrowed to Yucca Mountain, the Congress, as a concession to Nevada, declared that no “monitored retrievable storage facility” could be built in that state. Here, Congress was, without doubt, referring to the kind of monitored retrievable surface storage facility that some sponsors of the NWPA of 1982 had deemed no less essential than a geologic repository and much more easily achieved.

But DOE officials did not believe that the NRC, under its licensing policies, would permit them to seek a license allowing retrievable emplacement of spent fuel and high-level waste early in the pre-closure phase while work continued on meeting the more stringent standards for permanent emplacement. They knew, too, that to propose such a two-phased strategy would arouse Senator Reid’s wrath.

But the BRC could strongly advocate a two-phased approach to licensing, with vigorous pursuit of repository design alternatives to continue in parallel with the program of monitored retrievable geologic storage.

The National Research Council’s Board on Radioactive Waste Management has long recommended that repository design be approached in a phased, stepwise manner that allows intensive testing and analysis and a flexible, adaptive response to the setbacks and surprises sure to come. This concept was most recently articulated in the board’s 2003 report One Step at a Time: The Staged Development of Geologic Repositories for High-Level Radioactive Waste.

In sorting things out, the commission might note with emphasis that commercial spent fuel and defense high-level waste differ greatly in the degree of hazard posed. Because there is relatively little presence of plutonium and other actinides of long half-life in the defense wastes, the period of hazard for these wastes may be as short as 10,000 years, compared to up to a million years for spent fuel.

A fair deal for Nevada

As for Nevada’s grievances, the commission doubtless will note that when the Congress, in its 1987 amendment to the NWPA, narrowed the search for a repository site to Yucca Mountain, this came as an abrupt departure from the procedure originally mandated to go to a single candidate site only after an in-depth, in-situ exploration of three candidates. But the volcanic tuff site at Yucca Mountain had emerged from the first round of studies as clearly superior to the other two candidates: the site in volcanic basalt at Hanford, Washington, and the one in deep bedded salt in Deaf Smith County, Texas. A more tentative or contingent congressional choice of Yucca Mountain would almost certainly have survived an impartial technical review, so in our view the hasty adoption of what soon came to be known as the “screw Nevada bill” was as unnecessary as it was politically provocative.

We think Nevada’s cause for redress turns chiefly on regional fairness and equity, on having been fingered to take dangerously radioactive and long-lived nuclear waste that probably no other state would willingly accept. A major question for the BRC to consider is what compensation is due the state chosen for the nation’s first repository for permanent disposal of spent fuel and high-level waste. The state could, for example, be given preference in the siting of various other new government-sponsored or -encouraged enterprises, civil or military, nuclear or non-nuclear, that promise to bring Nevada more high-tech jobs and attract other business.

Even today, Nevada’s Nye County (host to Yucca Mountain) and several other rural counties see a duly licensed repository project as a distinct economic asset and quite safe. Some of Nevada’s more visible Republican politicians openly advocate the project, too, but on condition that the “nuclear dump” many Nevadans envision be made more acceptable by adding other nuclear-related industrial activities. Although Senator Reid surely has had the wind at his back in opposing the repository, the oft-repeated claim that Nevadans are overwhelmingly opposed to it is a canard that dies hard.

President Obama, at the Copenhagen climate change summit last December, announced a goal of reducing carbon emissions by 83% by the year 2050. In pondering the nation’s nuclear future, the BRC must be aware that a nuclear contribution on a scale truly relevant to that hugely ambitious goal might entail a fivefold expansion of the present suite of 104 large reactors and a fivefold increase in the annual production of spent fuel from 2,000 to 10,000 metric tons. Surely this is not the time to abandon the only currently viable option for very long-term geologic retrievable storage of spent fuel, and possibly final disposal.

But also at stake is the reputation of the NRC as an independent, trustworthy overseer of the civil nuclear enterprise. The NRC has been dealt with abusively by the Obama administration and Senator Reid in the matter of Yucca Mountain. So now will the commissioners acquiesce in the policies of the senator and the White House, or will they reassert the NRC’s dignity and independence by upholding their own Yucca Mountain licensing board? Also, will they see the speciousness of their pending waste confidence finding that would ignore the blatantly political undoing of a sophisticated technical endeavor to build the world’s first geologic repository for highly radioactive waste? How the commissioners exercise their great trust will soon be apparent.

Where Are the Health Care Entrepreneurs?

Health care in the United States is notorious for market imperfections. Costs are higher and outcomes worse than almost all analyses of the industry suggest are reasonable. Indeed, few other industries perform worse than health care in serving their consumers. In other industries characterized by inefficiency, efficient firms expand to take over the market, or new firms enter to eliminate inefficiencies. But such organizational innovation has been rare in health care. Two main barriers stand in the way: lack of good information on health care quality and the dominance of payment systems that reward volume of care rather than its value.

Recent reform legislation promotes changes in each of these areas. Whether the legislation addresses these problems sufficiently is something that only time will tell. Still, it is clear that there are a number of actions that can help to promote innovation and entrepreneurship and in the process improve the performance and lower the costs of the health care system.

Relative to the economy, health care spending has increased by a factor of four in the past half century. Of course, not all medical spending increases are problematic. A good share of rising costs is attributable to the development and diffusion of new technologies, which often bring significant value. But alongside valuable innovation in medical care is an enormous amount of waste. More than one-third of medical spending, upwards of $700 billion annually, is not associated with improved outcomes. Indeed, rising health care costs are the leading contributor to projected federal deficits during the next few decades and make expanding health insurance coverage difficult to afford.

Inefficient spending is an example of low productivity; more is spent than is needed to produce the output achieved (or equivalently, less output is produced than is possible given the inputs employed). One way to gauge the relative efficiency of health care over time is to compare its productivity growth with that of other industries.

Productivity growth is notoriously difficult to measure in health care. Accurate productivity assessment requires a good output measure. Health is difficult to measure and even harder to decompose into medical and nonmedical factors. As a result, official data are much better on productivity outside of health care than they are in health care. Still, official data do provide some insight. Overall productivity growth in the United States was low from the mid-1970s to the mid-1990s. Since the mid-1990s, however, productivity growth has increased rapidly. Productivity growth in private industry, for example, averaged 1.25% annually from 1987 to 1995 and 2.4% annually from 1995 to 2005. This resurgence of productivity growth is often attributed to greater use of information technology.

The most productive industries were durable-goods manufacturing (6.9% growth annually) and information technology (5.7% growth annually). These industries are fairly different from health care. There are some industries with high productivity growth that are more similar to health care, however. One example is the retail trade. In the past 15 years, productivity growth in retail trade averaged 4.3% annually. Productivity growth in health care during this period is estimated to have averaged –0.2% annually. This is almost surely an underestimate, for various reasons. But even so, the negative value is striking.

Sources of inefficiency

There are three key reasons for the inefficiency of medical care:

First, people receive too much care. Low patient cost-sharing combined with generous provider reimbursement means that neither patients nor providers have incentives to limit care. Thus, many people now receive more medical care than is appropriate for their condition, especially in acute settings.

Consider the treatment of localized prostate cancer. Almost all elderly men have cancer of the prostate. In many cases, however, the cancer grows slowly and the person will die of something else before the cancer becomes fatal or even clinically meaningful. Thus, “watchful waiting” is a common strategy. Yet only 42% of elderly men with prostate cancer receive watchful waiting. The rest receive alternative treatments that are far more expensive and can have adverse side effects. There is no evidence that the more expensive treatments have better outcomes. By some estimates, up to $3 billion annually could be saved by adopting less-invasive therapies.

The prostate cancer example is not unique. The PROMETHEUS payment model estimates that 14 to 70% of costs for common conditions in the elderly, such as joint replacements, heart attacks, congestive heart failure, and diabetes care, are avoidable.

Second, there is inadequate care coordination. For many medical conditions, people need to see generalist and specialist physicians, receive periodic lab tests, take medications, and modify their behaviors. This complex regimen is almost always left to the patient to plan and coordinate, although we know that many people are bad at this. Partly as a result, people receive too little chronic and preventive care, costing both lives and dollars. In the case of diabetes care, for example, only 43% of diabetics receive recommended therapy, and an even smaller share meet guidelines for risk factor control.

It is possible to do better, and a number of integrated provider systems show how. For example, HealthPartners, a health maintenance organization (HMO) in Minneapolis, began a program in the mid-1990s to improve diabetes outcomes. The organization worked with its physicians to identify diabetic patients who had not received recommended screening and provided nurse case managers to call the patients. Physicians were encouraged to start medication therapy in patients for whom diet and exercise were not sufficient. Patients, in turn, were reminded to take their medications and receive recommended screenings. Individual and group sessions developed mechanisms for people to manage their disease, and nurse case managers helped as needed. In the five years after this program was implemented, patients’ rates of high blood sugar fell by half and their diabetes was brought under much better control.

HealthPartners and other high-performing providers have three critical attributes that contribute to their success. They integrate care across different providers by having providers in the same physical or virtual organization; they pay physicians on a salary or productivity basis, not a fee-for-service basis; and they decentralize decisionmaking to encourage productivity.

The biggest problem for HealthPartners was that the economics did not work out well. The cost of the program was a few hundred dollars per diabetic patient per year. Better diabetes control translates into fewer adverse events, but that payoff comes a few years down the road. The HMO feared that many patients would transfer to a new insurer before the benefits of prevention were noticeable. By one estimate, the plan’s return on investment would be positive over a decade, but it would capture nowhere near the social value of the program.

The lack of coordination that is endemic to chronic disease care is noticed by consumers. According to one survey, 25% of Americans with chronic disease have found their medical records unavailable when needed, and 20% have had a doctor order a repeat test. Overall, 35% of Americans felt their time was wasted because of poor organization.

Comparable data on perceptions of other industries are not available. That is not an accident; consumers are rarely as poorly served in other industries as in medical care. In making retirement savings decisions, for example, companies such as Fidelity and Vanguard automate the collection of money and its allocation. Airlines store flight information electronically for easy access throughout one’s trip. And specialty stores in retail bring together different products, so consumers do not have to physically compare products from different suppliers.

To be sure, the retail model of organization has imperfections. Electronics stores encourage people to buy more gadgets than they need and sell them overpriced insurance for what they buy. The fees collected by mutual funds are far higher than those that a perfectly competitive market would suggest. But still, these market organizers have gained enormous market share because of their service quality and low price.

The low level of service quality in health care is ironic given the enormous investment in nonclinical personnel. There are nine times more clerical workers in health care than there are physicians and twice as many clerical workers as registered nurses. But this investment has not paid off in superior outcomes or better customer service.

The third key reason for inefficiency in medical care is flawed production processes. Medical care providers are far less efficient than they should be. Wasted time is rampant. For example, physicians spend an average of 142 hours annually interacting with health plans, at an estimated cost to practices of nearly $70,000 per physician. Similarly, nurses in medical/surgical units of hospitals spend 35% of their time on documentation, considerably more than they spend on patient care. Doctors also routinely redo tests because the prior test results are not available or would require too much effort to obtain.

Better care models exist, generally involving the use of information technology and changes in workplace practices. For example, Kaiser Permanente found that use of information technology combined with organizational changes led to a 35-minute reduction in nursing overlap time associated with shift changes. Similarly, a variety of studies have shown that providing dedicated surgical suites for particular operations results in lower cost per surgery. Yet most full-service hospitals do not organize their operating suites in this fashion.

In some cases, this administrative complexity actually harms patients: 4% of hospitalized patients suffer an adverse event, of which one-third, or 1% of total hospital admissions, are a result of negligence. The Institute of Medicine estimated in a 2000 report that preventable medical errors resulted in between 44,000 and 98,000 deaths annually, making errors one of the top 10 leading causes of death. Errors are also expensive, costing the system about $30 billion annually.

There are many models for reducing medical errors. Adverse drug interactions can be virtually eliminated by computerized physician order entry systems, which cost roughly $8 million each. Yet only 4% of hospitals have fully adopted such systems. Similarly, surgical complications can be reduced through organizational innovations such as surgical checklists and other process steps, but the use of checklists remains relatively low.

All told, the costs of too much acute care, poor care management, and inefficient production are staggering. Excessive care accounts for as much as 30% of total medical spending. Administrative inefficiencies amount to about 10% more. And lack of prevention is associated with countless lost lives, with mixed data on costs. An efficient medical care system would save half or more of this amount, as much as $1 trillion per year.

How change occurs

Medical care is complex, and as in any industry where human action is important, there are bound to be mistakes. The failure of medical care is not so much that mistakes are made, but rather that the system has not developed or broadly adopted mechanisms to minimize those mistakes. In manufacturing and retail, in contrast, producers invest extensively in error reduction (Toyota, for example) and supply-chain management (Wal-Mart). In financial services, an entire set of firms is devoted to helping consumers save.

Some ways to improve production processes are clear. Primary care physicians, for example, could provide a “medical home” for patients with chronic disease. Similarly, multispecialty groups of physicians might combine into accountable care organizations to make sure patients do not fall through the cracks. Alternatively, payers for medical care (insurers or the employers they contract with) could push for coordination. Even further removed, a firm from outside medical care could enter health care and organize the care experience, as Amazon.com did with book sales and Expedia did with airline tickets.

Several health care organizations have become leaders in improving process design. For example, Virginia Mason Medical Center in Seattle committed itself to lean manufacturing principles in 2002. During the next several years, it focused on patient safety, care coordination, supply management, and nursing productivity. Among the returns have been greater patient volume, reduced capital expenditure, and less use of temporary and contract nurses. Similarly, Thedacare in northeastern Wisconsin cut costs by 5% in three years and improved quality by using tools of lean manufacturing. Perhaps the biggest transformation of all took place at the Veterans Administration (VA). Between 1995 and 1999, the VA handled 24% more patients despite a budget increase of only 10% (compared with a 30% increase in the health care system budget overall). The VA was able to do this through greater use of information technology, more local financial autonomy, and empowerment of regional managers to make decisions based on local conditions.

The VA and Virginia Mason examples suggest that a large share of total hospital costs is unnecessary. If hospital costs alone could be reduced by one-quarter—an amount well in line with estimates of waste—total system savings would be about 8%.
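
The arithmetic behind that estimate is easy to reconstruct. The following is a back-of-the-envelope sketch; the roughly one-third hospital share of total health spending is our assumption, implied by the article’s 8% figure rather than stated in it.

    # Rough check of the "about 8%" system-savings claim.
    # ASSUMPTION: hospital care accounts for roughly one-third of total
    # health spending; this share is implied by the article, not stated in it.
    hospital_share_of_total_spending = 0.32   # assumed
    reduction_in_hospital_costs = 0.25        # "reduced by one-quarter"

    system_savings = hospital_share_of_total_spending * reduction_in_hospital_costs
    print(f"Total system savings: {system_savings:.0%}")   # prints about 8%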

Many of the investments needed to reduce the inefficiencies of health care and improve the quality of care must occur at the provider level. New information technology (IT) systems need to be introduced at the practice level, and workflow must be arranged for each provider. To understand the economics of provider-driven reform, consider the standard profit equation, in which profits are equal to total revenue (price of the service multiplied by quantity) minus cost. Thus, to be adopted, organizational innovation must positively affect the price or the quantity of services sold, or reduce costs.
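
Written out, the adoption condition described above is a simple identity. This is a sketch in our own notation; the symbols p, q, C, and I are not the article’s.

    % Provider profit as described in the text: revenue (price times
    % quantity) minus cost.
    \pi \;=\; p\,q \;-\; C(q)
    % An organizational innovation that requires an upfront investment I is
    % financially attractive only if the change in profit it produces, via a
    % higher price, a greater quantity, or a lower cost, at least covers I:
    \Delta\pi \;\approx\; q\,\Delta p \;+\; p\,\Delta q \;-\; \Delta C \;\ge\; I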

Almost all of these interventions to improve provider-level productivity require upfront investment, either monetary or organizational. Computer systems to check for adverse drug interactions run into the millions of dollars, and changing surgical practices involves reorganizing care throughout the institution. Thus, provider groups need some return in order to make these investments.

Price increases cannot provide that return. Hospitals are typically paid on a fixed-fee basis, independent of quality. For example, Medicare reimburses hospitals a predetermined amount per stay, depending on the patient’s diagnosis and whether surgery was performed. A less good job earns as much as a better job. Building on Medicare’s approach, private insurers generally use per-stay or per-diem payments: A single payment is made for all services provided in that stay or during that day, again independent of quality. As a result, improved quality merits no higher price.

Quantity responses to quality improvements also are limited. One might imagine that more patients would choose to be operated on in hospitals with safety systems or more regular surgical times. But information about such forms of quality is not systematically available. Until very recently, there were no validated measures of provider quality that accurately accounted for differences in patient severity of disease. And even now, measures of clinical and service quality are extremely limited. As a result, hospital choice is based on reputation or uninformed recommendation more than actual data.

Thus, cost savings represent the only area where meaningful gains might offset investment costs. Many productivity innovations will reduce costs. For example, fewer errors mean shorter hospital stays, which lower costs; they might also mean lower malpractice premiums. A full analysis of investment in more efficient production has not been undertaken, and it may be that providers should be investing in efficiency improvements on this basis alone. To date, however, the vast bulk of hospitals have concluded that the financial and organizational costs of transforming their institutions are not matched by sufficient cost savings.

An example is telling. In the 1990s, Cincinnati Children’s Hospital decided it wanted to become a leader in quality of pediatric care. The hospital’s chief executive officer and its board of directors agreed with the plan. But the finance team saw quality improvement as harming the finances of the institution, which were based on admitting more children and treating them in a high-tech way. No payer reimbursed the hospital more for higher-quality care; in fact, better care was financially penalized.

In the end, the finance team was brought along, but only after someone pointed out an error in the team’s thinking: Having fewer medical errors meant more rapid discharges, and the resulting loss of occupied bed-days could be offset by admitting more patients from the queue. Thus, there would be no revenue loss from better care. After demonstrating that revenues would not be harmed, the staff at Cincinnati Children’s Hospital went ahead with the quality improvement efforts, and the hospital is now a model for other institutions.

Efforts to limit excessive care and better coordinate care also face financial difficulties. In each case, physicians or nurses must expend effort to make the system better. In the cases of prostate cancer and diabetes, for example, patients need to be counseled about treatment options and informed about the steps involved in good disease management. A successful intervention will probably lower downstream costs. But from the physicians’ perspective, the pricing of medical care makes the switch from invasive medical procedures to advising and counseling problematic. Most physicians are paid on a fee-for-service basis. In the case of Medicare, the service units are independent activities that a physician performs when seeing a patient: a routine office visit, a procedure, or an interpretation of an image. Quality is not a part of the calculated fee. Rather, the fee is based on intensity: Procedures are valued much more highly than is counseling. Further, many of the simple services that are involved in good care management are not reimbursed at all. There is no billing code for e-mail interaction, nor is there any payment for having a nurse place a reminder in the file to call a patient, and stressed providers focus on the quantity of services they bill to the exclusion of the outreach function.

Coupled with this financial disincentive is the traditional norm that separates the practice of medicine in a medical setting from social interventions. Doctors are trained to diagnose and treat patients. They are not trained to counsel or reach out to patients. Physicians can be made to see their job differently, but the incentives to change need to be very strong. In the current system, these incentives are weak, if present at all.

Payers not stepping in

Given the poor incentives transmitted to medical care providers, the obvious question is why payers—insurers, the employers who contract with them, and third-party firms that purchase and manage the care from individual providers—do not intervene. Payers have a number of options. They could require providers to adopt and use interoperable electronic medical records, and they could move to quality-based payment systems to provide incentives for more efficient care.

Why do they not do so? Four explanations have been proposed:

Network externalities. In this explanation, a single payer finds it difficult to have compensation arrangements or accreditation rules that are substantially different from those of other payers. Medicare and Medicaid together account for about 40% of acute care payments, and private insurance accounts for another 40%. (The remaining 20% is from other payers, including workers’ compensation, the VA, public health agencies, and individual consumers’ out-of-pocket payments.) Within the private insurance market, there might be three or four large insurers, for an average market share by each plan of about 10%. It is difficult for an insurer to fundamentally change the practice of medicine when it accounts for only 10% of the market. For example, even an insurer that put 20% of its payments to a physician at risk for poor performance would affect only 2% of the typical provider’s income. Given the fixed cost associated with provider change, this incentive system is unlikely to do much good.
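
The dilution can be made concrete with the shares cited above. This is a minimal sketch using the article’s illustrative figures, not data on any actual contract.

    # How a single insurer's pay-for-performance incentive gets diluted,
    # using the shares cited in the text.
    insurer_market_share = 0.10   # one large private insurer, ~10% of the market
    payments_at_risk = 0.20       # share of that insurer's payments tied to performance

    provider_income_at_risk = insurer_market_share * payments_at_risk
    print(f"Provider income actually at risk: {provider_income_at_risk:.0%}")  # prints 2%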

Further, even if an insurer were able to induce providers to bear the fixed costs of changing their practice, the resulting savings would be realized by all payers. A primary care physician who responds to insurer incentives by hiring a nurse case manager to work with diabetic patients will have that nurse manager work with all patients, not just those of a particular insurer. Thus, the benefits of any insurer investing in better care extend well beyond that insurer.

Two solutions are generally available for solving the network problem. First, integrated firms may arise that provide both insurance and medical care and thus internalize all externalities. Kaiser Permanente is an example of such a firm, and it provides some of the highest-quality chronic disease care. As with most high-quality firms in health care, Kaiser has walled itself off from the rest of the health system. Alternatively, providers could propose new contracts to insurers. For example, providers might suggest that cost savings that result from fewer hospital-based errors be shared between the innovating firm and the various insurance companies. The major problem here is Medicare. Medicare reimbursement is fixed by law, making that part of revenue unalterable.

Lack of information. Within a market, lack of good-quality data means that consumers have a difficult time determining which providers are better and worse. And across markets, lack of good information means that firms with high quality in one geographic area will not necessarily be perceived to have high quality in other areas. The difficulty of measuring quality is a fundamental difference between health care and most other retail products. Retail stores can be virtually identical across the country, allowing firms to earn a national reputation for high (or low) quality relatively easily. In health care, national reputations are uncommon.

Quality information in health care is very much a public good. All insurers would like to have good data on physician quality, but no single insurer has an incentive to create such data, since quality information will rapidly disseminate across the market. Thus, some governmental involvement in information is needed.

Plan turnover. Suppose that an insurer decides to coordinate care on its own. It might hire nurse case managers, work directly with patients, and reconcile different physician recommendations. But investing in better care has up-front costs, whereas many of the savings occur only over time. For example, better diabetes care may lead to fewer complications, but only after 5 to 10 years. Because as many as 20% of people change plans annually, the insurer undertaking the original investment may not realize the savings. Thus, high turnover has been cited as a cause of low quality.
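A rough illustration shows why the turnover figure matters; the sketch below simply compounds the 20% annual churn rate cited above and assumes, for simplicity, that departures are independent from year to year.

# Rough illustration: with 20% annual plan turnover, how many of an insurer's
# current members are still enrolled when diabetes-care savings arrive?
annual_turnover = 0.20
for years in (5, 10):
    still_enrolled = (1 - annual_turnover) ** years
    print(f"After {years} years: {still_enrolled:.0%} of members remain")
# After 5 years: 33% of members remain
# After 10 years: 11% of members remain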

This explanation, however, is not entirely convincing. The high turnover in health insurance is partly endogenous: Customers feel little allegiance to a plan whose perceived quality is low and whose services are comparable to those of every other insurer. In plans with a reputation for good quality—Kaiser Permanente in California, for example—turnover is much lower.

The wrong customer. The issue of turnover raises a general question about who is the appropriate customer when payers consider care management. In the retail trade, the customer is the individual shopper. If Wal-Mart finds a way to save money, it can pass that along to consumers directly. In health care, in contrast, the situation is more complex, because patients do not pay much of the bill out of pocket. Rather, costs are passed from providers to insurers to employers (generally) and on to workers as a whole. If this process is efficient, the system will act as if the individual is the real customer, because that person is ultimately paying the bill. It may be, however, that the incentives get lost in the process, and efforts to innovate are not sufficiently rewarded.

What difference does selling to an employer or selling to an individual make? Even if insurers wrongly think that their customer is the employer purchasing insurance, that employer may still value improved quality. Many firms, for example, invest in wellness programs, which often involve attempts to coordinate care. If the cost savings or productivity benefits of improved health are sufficiently high, this is a natural step for employers.

Impact of recent legislation

Recent legislation has made a start at making health care more efficient by investing in better information and changing compensation practices for providers. On the information end, the key change was the HITECH Act, part of the American Recovery and Reinvestment Act of 2009. In that legislation, the federal government committed $30 billion over five years to finance a national system of electronic medical records. In addition, the Patient Protection and Affordable Care Act of 2010 mandates that Medicare data be made available to private parties, including insurers and employers, for purposes of forming quality measures. Thus, the nation may be on the verge of significantly reducing the information problems in medical care. In an otherwise contentious legislative process, changes allowing greater investment in health care IT were generally applauded.

Changing the payment system for medical care was more controversial. Broadly speaking, there are three approaches to payment reform. The first approach is to adopt a single-payer system in which physicians are salaried or paid on a fee-for-service basis within an overall budget target. Such a system is common in many countries and can be successful in reducing unnecessary care, assuming that physicians cut back on the appropriate services. The second approach is to turn health care into a market like other markets, where individuals are more in charge of their spending and service use. This would take the form of much higher deductibles in Medicare and incentives to purchase less generous policies in the under-65 market. The idea behind this model is that providers forced to compete for individuals would invest in higher quality, the same way that retail firms do. The third approach is to keep cost sharing as it is but to reform the way that Medicare payments operate, to incentivize value more than volume. The underlying theory is that changes in Medicare, integrated with changes in private reimbursement, will provide incentives for more efficient care delivery.

Following the third path, changes in Medicare reimbursement are a significant part of the recent Patient Protection and Affordable Care Act. Under that legislation, bundled payments will be made by Medicare to groups of providers who jointly agree to care for a patient with a particular condition; providers will split the overall amount and share the profits. Accountable care organizations go a step further, with groups of providers agreeing to accept a capitation payment in exchange for providing all services needed during a year. Pay for performance, or value-based purchasing, is a third payment reform, adjusting fee-for-service payments for primary care to reflect the quality of the care provided. Finally, care coordination and transition payments are introduced to provide support to nurses or primary care physicians who seek to manage care better.
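To illustrate how a bundled payment flips the incentive that fee-for-service creates, consider a purely hypothetical episode; every dollar figure below is invented for illustration and none comes from the legislation or the demonstrations discussed here.

# Hypothetical sketch of fee-for-service vs. a bundled episode payment.
# All dollar amounts are invented for illustration only.
episode_costs = {"surgery": 20_000, "imaging": 1_500, "follow_up_visits": 900}
readmission_cost = 8_000   # an avoidable complication
bundle_price = 27_000      # fixed payment for the whole episode

def bundle_margin(avoid_readmission: bool) -> int:
    # Under a bundle, the provider group keeps whatever it does not spend.
    spent = sum(episode_costs.values()) + (0 if avoid_readmission else readmission_cost)
    return bundle_price - spent

# Under fee-for-service, the readmission is billed and adds revenue;
# under the bundle, avoiding it is what improves the group's margin.
print("Bundle margin with readmission:   ", bundle_margin(False))  # -3400
print("Bundle margin without readmission:", bundle_margin(True))   #  4600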

Each of these payment systems has been the subject of experimentation, with some success in each case. Payment bundles are the best developed. Medicare’s Heart Bypass Center Demonstration Project in the 1990s bundled all care for coronary artery bypass graft surgeries; the program achieved savings of more than 15% per episode. Among care coordination efforts, the medical homes initiative of the Geisinger Health System, based in Pennsylvania, achieved a 7% total medical cost savings and a significant reduction in hospital admissions in the first year. In the program, each patient is assigned a health professional who acts as a coordinator, or “medical home,” for all of that patient’s care. Overall, care coordination efforts appear to be able to save about 15% of inpatient costs when they target populations with chronic illnesses.

How these various reforms play out will be interesting to watch. There is ample evidence that we can improve the performance and reduce the costs of today’s health care system. The challenge now is to build on these results, and on the additional improvements identified as the health care reforms are implemented, to create the fabric of tomorrow’s system.

Goddam Humans

The social sciences have long been considered the runt in the litter of the science family, if not the bastard child of wild conjecture with deluded mathematics. Broad-minded practitioners of the physical and biological sciences admit that the study of human behavior and social systems presents particularly thorny difficulties that are different in kind from those confronted by the “hard” sciences. Many physical and biological scientists take a more rigid view: No laboratory-style structured experiments, no control groups, no repeatable experiments means no science.

The problem is that scientists also harbor the admirable notion that science should be useful to society, and because they are scientists they want to see the insights of science applied in a rational way to the needs of society and to have evidence that these efforts are succeeding. Physicists who identify the prevalence of waste in the energy system would like to see government officials, industry managers, and individuals take actions to improve energy efficiency. Neuroscientists who unlock the secrets of how the human brain learns would like to see their insights reflected in the way schools are organized and classes are taught. Standing between scientific knowledge and its effective use is the unruly morass of human behavior and the maddening array of irrational social, political, cultural, and economic systems that humans have created.

Scientists discover how rocks, liquids, and gases buried underground can be used to perform a myriad of useful and difficult jobs and to transport us easily around the globe, and the goddam humans use them to foul the air, disrupt the climate, and provide one more reason to go to war. Scientists unlock the secrets of the atom, and the goddam humans immediately jump on the opportunity to develop the means to create terrifying explosions. Increased understanding of the genetic machinery underlying all life is a boon to bioterrorists and an opportunity to revive eugenics.

Of course scientists have been active in the efforts to improve air quality, respond to the threat of climate change, control nuclear proliferation, and protect society from terrorism. But these efforts have not produced stunning successes that match those emerging from laboratories. H. L. Mencken once observed that for every problem there is a solution that is simple, neat, and wrong. Scientists are problem solvers, and their rigorous rational methods have been fabulously successful at cataloging and explaining the machinery of the physical universe. They are often eager to propose rational solutions to the problems of the social universe. When these brilliant solutions are not implemented or fail to yield the desired results, the blame is often placed on incompetent civil servants or benighted social scientists who clutter up the plans with ungrounded notions about human behavior or political institutions. But the real blame belongs on the heads of the goddam humans and their social institutions, which stubbornly refuse to act rationally.

We can continue to complain about the goddam humans who will not be guided by rigorous analysis, reliable evidence, and the grace of reason, or we can wade into the murky waters of social science. Elegant solutions that do not produce the desired results might provide some intellectual delight, but clunky solutions that enable us to make real progress will make a meal that will stick to your ribs. And although the grand challenges identified by natural science researchers are a noble quest, we should also respect and pursue the grand challenges facing the social sciences. These challenges are a bit more difficult to define and will be a good deal more difficult to conquer.

The National Science Foundation Directorate for the Social, Behavioral, and Economic Sciences recently appealed to the research community to help it identify the grand challenges that should inform its decisions about what research to fund in the coming 10 to 20 years. The directorate could build on the 2009 National Science and Technology Council report Social, Behavioral, and Economic Research in the Federal Context, which identified significant challenges in understanding the structure and function of the brain, the complexity of human societies and activities, and human origins and diversity. The report argued that research in these areas could contribute to addressing practical policy questions in education, health, cooperation and conflict, societal resilience and response to threats, creativity and innovation, and energy, environment, and human dynamics.

This might seem to be an absurdly ambitious agenda, but the reality is that no matter how much progress is made in the natural sciences, humans will reap the full benefits only when we acquire the societal wisdom that will enable individuals, institutions, and systems to implement this knowledge. Some argue that common sense is sufficient to guide our actions, but there is no common sense. There is only a vast array of individual, national, cultural, economic, religious, philosophical, and ethical senses. In this soup of competing interests and value systems we need to search for some common ground and some way to accommodate all that is not held in common. This is one holy grail for the social sciences.

The confusion is made no easier by the plurality of the social sciences. Just as an understanding of the brain progresses slowly and fitfully because it must encompass the sometimes contradictory findings about physiology, chemistry, and circuitry, the understanding of human behavior and institutions must evolve in ways that are consistent with what is learned through the lenses of psychology, anthropology, economics, sociology, and political science. Expect conflict and contradiction. Although there is a popular belief that science is an edifice of fact and truth, scientists know that it is really an evolving approximation of what we’re pretty sure about for now but that we must always be willing to challenge. Because of the complexity of their questions and the lack of controlled experiments and easily isolated variables, the social sciences face an even higher level of uncertainty.

But the difficulty of the endeavor is no excuse for failing to pursue it. Money will be wasted, blind alleys will be pursued, theories will be invalidated, data will be questioned. What could be more satisfying? Besides, what choice do we have? The problems are real, and those of us who value evidence and analysis must do what we can to apply these tools. We all need to think about which deep questions could lead us to the data and insights we need to combine with the developments in the natural sciences to move toward a safer, healthier, more productive, and more just world.

To prime the pump, consider a couple of significant challenges. One that might be difficult for natural scientists is to consider when other disciplines have a more important contribution to make to solving a problem. Consider the enormous attention given to brain scans in neuroscience. As we try to unravel the mysteries of human behavior, it is tempting to be seduced by the brilliant images that appear on the screen that seem to be telling us what the brain is doing. This is physical evidence, so it has to be given the most weight. But we all know that these images are at best a very crude reflection of what is happening in a dauntingly complex organ. Listening to the subjective reports of research subjects and observing behavior have obvious limits as research tools, but in this case they might be better than the seemingly reliable evidence of a scan. We need to consider many areas in which our preference for what is measurable and quantifiable might lead us to rely on data of questionable applicability and to discount insights gained through less “scientific” means.

For those who care about public policy, there is no way to avoid the study of governance. What have we learned from more than 60 years of experience in regulating nuclear weapons? That no nation has used a nuclear weapon in war is clearly a triumph, but are the mechanisms that produced this result likely to be effective in a world of terrorists and rogue states? We expend enormous time and effort to craft and approve environmental treaties, but we know that many of them fail miserably at achieving their purpose. How can we do better? The wealthy countries seemed to have solved the problem of infectious disease long ago, yet tens of millions of people are dying from these diseases every year. In hundreds of decisions made every year in cities, states, and nations, we know that the best evidence and the most recent knowledge are not being taken into account. We all support the production of new scientific knowledge as essential to meeting the world’s needs. If we are serious about results, shouldn’t we be even more supportive of research necessary to make effective use of this knowledge?

Forum – Fall 2010

University futures

In “Science and the Entrepreneurial University” (Issues, Summer 2010), Richard C. Atkinson and Patricia A. Pelfrey remind us of the extent to which the U.S. economy is increasingly driven by science and technology and the central role the U.S. research university plays in producing both new knowledge and human capital. Although policymakers should already be aware that federal support for academic research is critical to economic prosperity, academic leaders would do well to recall that the movement of ideas, products, and processes from universities into application requires diligent guidance.

Atkinson and Pelfrey underscore the imperative for the ongoing evolution of our research universities as well as the continued development of new initiatives to enhance the capacity of these institutions to carry out advanced teaching and high-intensity discovery. The transdisciplinary future technologies development institutes in the University of California system that the authors describe serve as prototypes. The calls for more robust funding, expansion of areas of federal investment, and immigration policies that welcome the best and brightest from around the world equally merit attention.

We can maintain America’s competitive success by working on several fronts simultaneously:

First, advance the integration of universities into coordinated networks of differentiated enterprises, thus expanding our potential to exert impact across a broader swathe of technological areas. Organize research to mount adequate responses at scale and in real time to the challenges that confront us. The need for transdisciplinary organization of teaching and research is obvious, but transinstitutional collaboration among universities, industry, and government both aggregates knowledge and prevents duplication.

Second, accelerate the evolution of institutional and organizational frameworks that facilitate innovation. The pace of scientific understanding and technological adaptation in areas as critical as climate change and renewable energy is lagging; a slow feedback loop among the economy, Congress, and academia may be partly to blame. Rigid organizational structures leave us insufficiently adaptive.

Third, rethink the criteria by which we evaluate the contributions of our institutions. Simplistic methodologies that pretend to establish precise rankings abound, but an alternative scheme might evaluate institutions according to their contributions to selected national objectives. We might seek to determine what an institution has done to help build a more sustainable planet, advance the nation’s position in nanotechnology, or gain a fundamental understanding of the origins of the universe. We might even evaluate the impact of universities in aggregate in their capacity to achieve outcomes we desire in our national innovation system.

Finally, we must come to terms with the concept of outcomes. We may be working toward economic security, national security, and positive health outcomes, but we do so in such a generic way that the nature of the entrepreneurial impact of the university remains fuzzy. We need to define its role, measure its impact, and assess its returns to everything from the general stock of knowledge, to advancing specific technological solutions, to advancing our fundamental understanding of who we are and what it means to be human.

MICHAEL M. CROW

President

Arizona State University

Tempe, Arizona


Richard C. Atkinson and Patricia A. Pelfrey direct us most usefully to new possibilities (for example, in the California Institutes for Science and Innovation) that exist for true collaboration between distinguished institutions of higher learning and the new innovative businesses that are derived from advanced knowledge produced at our great universities. Wise federal and state science and technology policies are required if we are to create proper incentives that realize these collaborative opportunities and their societal benefits.

Therefore I want to focus here, perhaps ironically, on the absence of wisdom: the shortsightedness of our national and state leaders in considering support of these world-class institutions as “discretionary” spending. It is no more so, in fact, than spending on national defense. Indeed, investment in research at our great universities is a form of investment in productive growth and national defense.

In his April 27, 2009, speech to the National Academy of Sciences, President Obama pledged to devote more than 3% of our gross domestic product to R&D. This commitment was followed by one-time spending linked to the U.S. economic stimulus package, but that cannot be mistaken for long-term investments in research at our universities. A year later, we have not witnessed great expansion of federal and state investment in fundamental research; we have, however, witnessed wholesale disinvestments in higher education in virtually every state.

California is the quintessential example of this impulse to disinvest, leaving fewer dollars for its universities than its prisons. Let me suggest what’s at stake by backsliding. Having built the world’s greatest system of public higher education, California now seems determined to destroy it. Californians’ lives would be diminished today without the discoveries born at these great universities. Isolation of the gene for insulin; transplant of infant corneas; the Richter scale; hybrid plants and sturdy vineyards resistant to a variety of viruses, pests, and adverse weather conditions; recombinant DNA techniques that led to the biotechnology industry; the UNIX operating system for computers; initial work leading to the use of stem cells; the discovery of prions as causes of neurodegenerative diseases; and even the nicotine patch are among the thousands of advances.

Perhaps these discoveries would have been made elsewhere, but the thousands of startup companies and new jobs for the skilled and well-educated workforce of California would have been lost, along with hundreds of billions of dollars pumped into the state’s economy from the worldwide sales of these companies.

If California disinvests in its great universities, a downward spiral in quality will follow. It will be difficult for them to hold on to their more creative faculty members when wealthier competitors are ready to pick them off. If those faculty leave, the best students will not come. Hundreds of millions of dollars of the roughly $3.5 billion annual federal grants and contracts will disappear. New discoveries will not be made; new industries will not be born. Californians will have killed the goose that laid the golden egg.

The California legislature and its voters must recognize that it is far more difficult and costly to rebuild greatness than it is to maintain it.

JONATHAN R. COLE

John Mitchell Mason Professor of the University

Columbia University

New York, New York


Science’s influence

Through his years of service as director of the Office of Science and Technology Policy, within the Executive Office of President George W. Bush, John Marburger had more than the customary opportunity to test the authority of science to govern political decisions. “… [In] my conversations with scientists and science policymakers,” he writes in “Science’s Uncertain Authority in Policy” (Issues, Summer 2010), “there is all too often an assumption that somehow science must rule, must trump all other sources of authority.” Indeed, he cites three examples of occasions when good science advice was (1) misinterpreted (anthrax in the mail), (2) overruled (the launch of the space shuttle Challenger), or (3) deliberately corrupted in the interest of making “the administration’s case for war” in Iraq (aluminum tubes).

What is the source of the naïve idea that science must surely triumph over all other sources of conviction when making public policy? It surely comes from the discipline of research processes, an extraordinarily successful track record for science, and a dash of idealism and wishful thinking. Add to that the tendency of scientists to take little interest in how other sectors of society actually make decisions.

Fortunately, most scientists are not passive when their science is ignored, distorted, or used inappropriately by politicians. The best proof: the 15,000 U.S. scientists, including 52 Nobel Laureates, who supported the strenuous efforts of the Union of Concerned Scientists (UCS) in the past decade to defend scientific integrity in public decisions. They confronted the reality that people in positions of political, economic, or managerial power will select technical experts to justify their decisions but nevertheless will base those decisions, as most citizens do, on their own values, objectives, and self-interest. The UCS, together with other scientific institutions such as the National Academies, the American Association for the Advancement of Science, and professional societies, refused to accept inappropriate uses of science and science advice by government officials. Knowing that, as Marburger says, “Science has no firm authority in government and public policy,” scientists also realize that the legitimacy of a democratic government depends on rational decisions, arrived at transparently and with accountability for the consequences.

Science has no “right” to dictate public decisions, but we do have an obligation to try to be heard. Fortunately, in the 2008 election campaign President Obama listened. He appointed John Holdren Assistant to the President for Science and Technology and asked him to “develop recommendations for Presidential action designed to guarantee scientific integrity throughout the executive branch.” When this is accomplished, science’s authority in policy may not be unchallenged, but it need no longer be uncertain.

Success in our democracy depends not only on consistent efforts, both in and out of government, to make government more accountable, transparent in its decisions, and worthy of the public trust. It also requires a commitment by science to provide information to the public in a form it wants and can understand and use.

LEWIS M. BRANSCOMB

Professor adjunct

University of California, San Diego

La Jolla, California


When a former presidential science adviser speaks about the place of science in policy, people listen. This is why John Marburger’s concluding recommendation is particularly troublesome. Half right and half deeply misinformed, it is apt to sow confusion instead of promoting clarity.

Marburger says, “Science must continually justify itself, explain itself, and proselytize through its charismatic practitioners to gain influence on social events.” This prescription is based on his observation that people apparently trust science because they endow its practitioners with exceptional qualities, similar to what the early-20th-century German sociologist Max Weber termed “charismatic” authority. Since law and policy, in Marburger’s view, so often operate “beyond the domain of science,” an important way for scientists to ensure their influence is to rule by charisma—or, to co-opt a phrase from another setting, by shock and awe.

Not only is this prescription at odds with all notions of democratic legitimation, it is also empirically wrong and represents dangerous misconceptions of the actual relationship between law, science, and public policy in a democracy. Marburger claims that “no nation has an official policy that requires its laws or actions to be based on the methods of science.” Later, he adds, “science is not sanctioned by law.” But it is not law’s role to pay lip service to the methods of science, nor to order people to take their marching orders from scientists. Rather, a core function of the law is to ensure that power is not despotically exercised. Acting without or against the evidence, as the George W. Bush administration was often accused of doing, is one form of abuse of power that the law firmly rejects. It does so not by mindlessly endorsing the scientific method but by requiring those in power to make sure their decisions are evidence-based and well-reasoned.

U.S. law can justly take pride in having led the world in this direction. The Administrative Procedure Act of 1946 helped put in place a framework of increasing transparency in governmental decisionmaking. As a result, U.S. citizens enjoy unparalleled access to the documents and analyses of public authorities, opportunities to express contrary views and present counterevidence, and the right to take agencies to court if they fail to act in accordance with science and reason. In the 2007 case of Massachusetts v. EPA, for example, the Supreme Court held that the Bush-era EPA had an obligation to explain why, despite mounting evidence of anthropogenic climate change, it had refused to regulate greenhouse gases as air pollutants under the Clean Air Act. It was not charisma but respect for good reasoning that swayed the judicial majority.

Marburger’s anecdotal examples, each of which could stand detailed critique, conceal an important truth. Science serves democratic legitimacy by promoting certain civic virtues that are equally cherished by the law: openness, peer criticism, rational argument, and above all readiness to admit error in the face of persuasive counter-evidence. So Marburger is right to say that science, like any form of entrenched authority, “must continually justify itself, [and] explain itself.” He takes a giant step backward when he advocates proselytizing by science’s charismatic representatives.

SHEILA JASANOFF

Pforzheimer Professor of Science and Technology Studies

John F. Kennedy School of Government

Harvard University

Cambridge, Massachusetts


John Marburger discusses instances where it was foolish to ignore specific scientific knowledge claims (e.g., pertaining to Iraqi nuclear capabilities). More generally, however, he makes a thoughtful case that science “has no special standing when it comes to the laws of nations,” and that scientists therefore must enter the political fray in order “to gain influence on social events.”

I agree with the basic argument, as would most science policy scholars and practitioners. But I wonder if Marburger’s readers might wish to go a step farther; might wish to reflect on whether democratic society actually facilitates a “political fray” that is capable of melding expert knowledge, public values, and political prudence. Or do shortcomings in democratic design prevent experts from providing all the help they potentially could?

Marburger tactfully skirts around the fact that contemporary democratic practices do not structure expert/lay interactions appropriately, failing, for example, to select and train lay participants to be capable of playing their roles. Ask yourself: Do most legislators on the Science and Budget committees in Congress have either the capacity or the interest to determine how the National Science Foundation expends the chemistry budget, much less the ability to decide whether green chemistry ought to get greater attention? Do you know of systematic mechanisms for ascertaining the competence of candidates for electoral office, or are we governing a technological civilization with jury-rigged methods from previous eras? Requiring actual qualifications for nominees obviously would be contentious, and even the best system would have nontrivial shortcomings. But if it is irresponsible to mint incompetent Ph.D.s in physics or political science, isn’t it all the more important to select and train competent elected officials?

Nor are scientists prepared to play their roles in a commendable deliberative process. Although research practices do a remarkable job of arriving at certified knowledge claims, the contemporary scientific community is riven with systematic biases regarding who becomes a scientist. Gender, ethnic, and class inequalities are obvious in U.S. science, but equally dubious is the dominance of world science by a handful of countries. Certainly there are some harmonies of purpose worldwide, and of course some scientists strive to speak for humanity at large. Yet glaring inequalities remain, with neither the shaping of cutting-edge research nor the advice given to governments being remotely in accord with the spirit of democratic representation of multiple standpoints. Many scientists doubt the importance of this, I know. But they are mistaken: A basic finding of the social sciences is that people’s ideas and priorities are heavily influenced by role and context. Hence, disproportionately white/Asian, upper-middle-class, young, male scientists are not legitimate spokespersons for a global civilization. The long delay in attention to malaria is one manifestation; tens of billions of dollars poured into climate modeling instead of remedial action is another. For 22nd-century science to become radically more representative of humanity, a first step would be simply to acknowledge that a great many perspectives now are being shortchanged in science policy conversations.

For scientists to make the best possible contribution to global governance, we need relatively thoroughgoing political innovation, both within government and within science.

EDWARD WOODHOUSE

Professor of Political Science

Department of Science and Technology Studies

Rensselaer Polytechnic Institute

Troy, New York


John Marburger explains the need for scientists to contribute expertise to policymaking but warns that it is not as easy as it might seem. To be effective, he contends, scientists “need to understand the political process as it is.” Readers inspired by these words should know that a number of institutions have developed ways for scientists to gain this understanding. For instance, the American Association for the Advancement of Science offers Science and Technology Policy Fellowships, placing Ph.D. scientists in Congress or the Executive Branch for a full year or more. The National Academies give graduate students a chance to work closely with its boards and committees for a semester with its Christine Mirzayan S&T Policy Fellowship. And, for graduate students with less time, the Consortium for Science, Policy and Outcomes runs two-week workshops in DC each summer that give young scientists and engineers a first look at the complexities of political decisionmaking.

Through my experience with these programs, I agree with Marburger that perhaps the most important lesson scientists can learn about policymaking is that the scientific method is not the only way to arrive at a decision. The scientific method is an incredibly valuable tool, but in many policy decisions it can only assist; it cannot determine. This understanding is especially important because scientists who disregard it can undermine the “charisma” that Marburger deems so important for the authority of scientists.

In addition to the good track record that Marburger credits, social scientists argue that a key source of the charisma of scientists is that they are often seen as free from the “contamination” of politics. Sociologist Robert K. Merton argued that one of the main reasons why science is a unique form of knowledge is that its practitioners adhere to the norm of disinterestedness. This idea resonates with the public. Unlike politicians, scientists aren’t supposed to have an agenda and therefore can be trusted. Scientists simply want to better understand the world and refuse to let prejudice or personal gain distract from that goal. Political scientist Yaron Ezrahi has written extensively about how useful it can be for politicians to cite the objectivity of science to justify a policy choice, rather than arguing one subjective value over another.

There are times, however, when citizens do not see scientists as objective. When scientific consensus does not support a potential policy, those promoting the policy sometimes question the disinterestedness of scientists. But the perception of bias can also occur when scientists make arguments that extend beyond scientific knowledge. The scientific method cannot be used to determine what types of stem cell research are ethical or how international climate change agreements should be organized. Scientists as citizens certainly should have a say in such matters, but when the public sees scientists as an interest group, the charisma that stems from the ideals of disinterestedness is reduced. Scientists who understand the nuances of the policy process develop ways of balancing these roles. They can speak to what science knows and to what they think is best for the country without conflating the two.

JAMESON M. WETMORE

Consortium for Science, Policy and Outcomes

School of Human Evolution and Social Change

Arizona State University

Tempe, Arizona


Can geoengineering be green?

In their provocative article, “Pursuing Geoengineering for Atmospheric Restoration,” Robert B. Jackson and James Salzman put forth a new objective for the management of Earth. Atmospheric restoration would return the atmosphere “ultimately to its preindustrial condition.” The authors are persuaded that the only responses to climate change are compensation and restoration, and they deeply dislike compensating for a changed atmosphere with other forms of planetary manipulation, notably injecting aerosols into the upper atmosphere.

For the foreseeable future, however, the active pursuit of atmospheric restoration would be a misallocation of resources. It is inappropriate to undertake removal of carbon dioxide (CO2) from the atmosphere with chemicals at the same time as the world’s power plants are pouring CO2 into the atmosphere through their smokestacks—in the case of coal plants, at a concentration roughly 300 times greater than that of the atmosphere itself. First things first. Priority must be given to capture of CO2 emissions at all fossil fuel power plants that the world is not prepared to shut down. As for biological strategies for CO2 removal from the atmosphere, early deployment is appropriate in limited instances, especially where forests can be restored and land and soil reclaimed. But biological strategies quickly confront land constraints.
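For readers who wonder where a factor of roughly 300 comes from, a quick back-of-the-envelope check follows; the input values (a coal flue-gas CO2 concentration of about 12% by volume and an atmospheric concentration of about 390 ppm) are assumptions supplied here for illustration, not figures given in the letter.

# Back-of-the-envelope check of the "roughly 300 times" comparison.
# Assumed values, not from the letter: coal flue gas ~12% CO2 by volume,
# atmosphere ~390 ppm (0.039%) CO2.
flue_gas_co2 = 0.12
atmospheric_co2 = 390e-6
print(f"Concentration ratio: {flue_gas_co2 / atmospheric_co2:.0f}x")  # ~308x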

CO2 capture from the atmosphere with chemicals may become a significant activity several decades from now. The cost of CO2 capture from the atmosphere is highly likely to be lower at that time than it is today. This will be a side benefit of R&D that is urgently needed now to lower the costs of capture from power plants.

Even at some future time when CO2 capture from the atmosphere with chemicals becomes feasible, restoration of the atmosphere is a flawed objective. Imagine success. For every carbon atom extracted as coal, oil, or gas during the fossil fuel era, an extra carbon atom would be found either in the planet’s biomass, in inorganic form on land or in the ocean, or tucked back into the earth deep below ground via geological sequestration. But unless all the carbon atoms were in underground formations, the world’s lands and oceans would differ from their preindustrial predecessors. Why not restore the lands and oceans as well? Why privilege the atmosphere?

Robert Solow, in a famous talk in 1991 at the Woods Hole Oceanographic Institution, provided a vocabulary for dealing with such objectives, invoking strong and weak sustainability. Strong sustainability demands that nothing change. Weak sustainability allows only change that is accompanied by new knowledge that enables our species to function as well in a changed world as in the world before the changes.

Strong sustainability everywhere is impossible. Strong sustainability in selective areas of life while other areas are allowed to change fundamentally is myopic and self-defeating. But the embrace of weak sustainability has its own perils: It readily leads to complacency and self-indulgence. We should not even aspire to atmospheric restoration. This single task could well commandeer all our creativity and all our wealth. A much more diverse response to the threat of climate change is required. It would be more productive for us to acknowledge that we cannot leave this small planet unchanged, but also that we are obligated to invent the policies, technologies, behaviors, and values that will enable our successors to prosper.

ROBERT SOCOLOW

Codirector

The Carbon Mitigation Initiative

Princeton University

Princeton, New Jersey


Since the word “geoengineering” was first introduced by Cesare Marchetti in 1973, the technologies embraced under this heading have attracted both apprehension and curiosity: apprehension about tinkering with Earth’s climate system on a grand scale and curiosity about whether there might indeed be a technical fix for the human addiction to fossil fuels. Robert B. Jackson and James Salzman suggest that we can distinguish between good geoengineering and bad geoengineering. They write of the “promise and peril of geoengineering” and of the “risks and uncertainties.” They then suggest that there are three types of geoengineering that potentially offer “the greatest climate benefits with the smallest chance of unintentional harm.”

Jackson and Salzman thus convey some level of acceptance of geoengineering, and yet they pick delicately from the menu of geoengineering options. Their selections (combined with reducing emissions) focus on “atmospheric restoration” with technologies that meet the criteria of treating the causes of climate change rather than the symptoms, minimizing the chance of harm, and having what they believe to be the highest probability of public acceptance. Previous analysts have looked for technologies that could be implemented incrementally and could be halted promptly if the results were unacceptable.

The first choice of Jackson and Salzman is forest protection and restoration. This seems to me to be a no-brainer, with multiple benefits; but it hardly qualifies as geoengineering, and it does not fully confront the problem of burgeoning greenhouse gas emissions from fossil fuel combustion. It is nonetheless widely agreed that we should be pursuing this goal for many reasons.

Jackson and Salzman then give limited endorsement to research on the industrial removal of CO2 from the atmosphere and the development of bioenergy, combined with carbon capture and storage. It is impossible to provide balanced analysis of these two technologies with the number of words available here, but they give reason for both optimism and serious concern. Surely either would have to be at massive scale to make a difference, would have broad potential for unintentional harm, would have unevenly distributed costs and benefits, and would rely on the collection and disposal somewhere of huge quantities of CO2. And, as the authors note, the time and cost for implementation are much in excess of those for some of the proposals for managing the radiative balance of Earth, short time and small cost being two of the most beguiling characteristics of some geoengineering proposals.

Early in their essay Jackson and Salzman pose the question: “Is geoengineering more dangerous than climate change?” They do not provide a convincing yes-or-no answer to the question, and this is why the discussion of geoengineering is likely to continue. Do we have the knowledge, the resolve, and the wisdom to address one major problem without creating more? Can we envision sustainable paths, or do we simply step forward toward the next limiting constraint?

GREGG MARLAND

Carbon-Climate Simulation Science Group

Oak Ridge National Laboratory

Oak Ridge, Tennessee


Atmospheric “restoration” sounds pretty upbeat, saleable, even redemptive. It is certainly better than its antonym, which would be decline or degradation. But is pursuing geoengineering, defined as modification of the global environment using massive technological fixes, a sound or even rational strategy for attaining this goal? And what is the goal? What former state of the atmosphere is to be restored? While reminding us, twice, that “our climate is already changing,” Robert B. Jackson and James Salzman imply that geoengineering can somehow cure climate change, as if Wally Broecker’s chaotic “climate beast” could be tamed or at least chained.

The article’s opening claim that, until recently, geoengineering was primarily in the realm of science fiction is true only if you ignore the long and checkered history of climate control. To give a few brief nonfictional examples, in 1901 the Swedish scientist Nils Ekholm suggested that, if facing the return of an ice age, atmospheric CO2 might be increased artificially by opening up and burning shallow coal seams—a process that would warm the climate. He also wrote that the climate could be cooled “by protecting the weathering layers of silicates from the influence of the air and by ruling the growth of plants.”

Five decades later, Harrison Brown, the Caltech geochemist, eugenicist, and futurist, imagined feeding a hungry world by increasing the CO2 concentration of the atmosphere to stimulate plant growth: “If, in some manner, the carbon-dioxide content of the atmosphere could be increased threefold, world food production might be doubled. One can visualize, on a world scale, huge carbon-dioxide generators pouring the gas into the atmosphere.”

In 1955, the famous mathematician John von Neumann asked “Can We Survive Technology?” He issued a strong warning against tinkering with Earth’s heat budget. Climate control, in his opinion, like the proliferation of nuclear weapons, could lend itself to unprecedented destruction and to forms of warfare as yet unimagined.

Jackson and Salzman are right to distance themselves from solar radiation management schemes (turning the blue sky milky white), ocean iron fertilization (turning the blue oceans soupy green), and fantasies of a green Sahara or Australian Outback. Forest protection and restoration are fine, but they will not cool the planet significantly. Carbon capture, removal, and long-term storage, however, face daunting thermodynamic, economic, and infrastructural hurdles, not to mention potential high risks to future generations if the CO2 does not stay put.

We need to be avoiding, not pursuing, geoengineering, and “curing climate change outright” is a chimera.

JAMES FLEMING

Science, Technology, and Society Program

Colby College

Waterville, Maine


As chairman of the International Expert Group on Earth System Preservation (www.iesp.de), I want to thank Robert B. Jackson and James Salzman for their excellent, far-sighted, and well-balanced assessment of some of the opportunities, hazards, and risks of geoengineering, for pointing out the obvious discrepancy between urgent needs and socioeconomic barriers, and for providing guidance toward sustainable solutions. I fully support their conclusion that the preservation and restoration of forests, in concert with measures to reduce CO2 emissions, provide the most immediate opportunity for limiting further increase of the CO2 partial pressure in the atmosphere. But we should not narrow down the role of forests to carbon capture and storage.

With respect to climate restoration, a forest (more precisely, a forest ecosystem) has much greater importance. By the interplay of evapotranspiration and condensation processes, forest ecosystems are known to control the air temperature within their own boundaries. Moreover, forest ecosystems have an impact on the temperature and climate conditions at the planetary scale while influencing the global hydrological cycle and thus the level of the partial pressure of the other strong greenhouse gas: water vapor. Calculations of Makarieva, Gorshkov, and Li in 2009 led to the conclusion that forest ecosystems have a strong effect on the water vapor partial pressure in the atmosphere above the canopy and beyond. Investigation of the spatial course of rainfall events perpendicular to the coastline revealed a steady increase of the precipitation rate over a distance of up to 2,500 kilometers, whereas the precipitation rate sharply decreases in areas where in the past the size of forests was significantly diminished. It appears that without a stabilizing biotic impact, desertification and increase of the surface temperature are unavoidable.

At this point we should remember that a moderate mean temperature of about 15°C and the availability of liquid water are the most important preconditions for the existence of life on Earth. If Lovelock’s Gaia theory holds, the concept of preservation and restoration of large tropical and boreal forests becomes a crucial issue, of higher importance than the carbon capture and storage paradigm suggests.

It appears that forest ecosystems constitute our most important life-supporting backbone. Human society is well advised to stop sacrificing them. The preservation and restoration of forest ecosystems should be treated as the most promising instruments of sustainable geoengineering.

PETER A. WILDERER

Institute for Advanced Study

Technical University of Munich

Munich, Germany


Mineral reserves

I wholeheartedly agree with Roderick G. Eggert’s comments in “Critical Minerals and Emerging Technologies” (Issues, Summer 2010), even though he sugarcoats the impact of our nation’s current short-sighted mining regulations.

The discussion on improving regulatory approval for domestic resource development is really a key issue that needs elaboration. The current morass of regulations and environmental zealotism in the United States, in the eyes of investors, has created a psychological and economic image of the United States that is somewhat akin to that of the worst of the third-world nations. The 2009/2010 version of the report produced by the Fraser Institute, a leading free-market think-tank, cites the United States as becoming even less favorable to mining investment, and hence less attractive as a target for development. The introductory letter in the report carries the headline “California Ranks with Bolivia, Lags Behind Kyrgyzstan,” and states: “The worst-performing state was California, which placed 63rd, ranking among the bottom 10 jurisdictions worldwide, alongside regimes such as Bolivia, Mongolia, and Guatemala.” Fred McMahon, coordinator of the survey and the institute’s vice president of International Policy Research, goes on to comment: “California is staring at bankruptcy yet the state’s policies on mining are so confused, difficult, and uncertain that mining investment, which could create much-needed jobs, economic growth, and tax revenue, is being driven away.”

Is it time to ask ourselves whether we are interested in the global environment or just that of our locality? By driving mining away from countries such as the United States, where it can be monitored and held to a reasonable standard of accountability, we are forcing it into areas of the world with few rules and little control over the practices employed.

Imposing an environmental tariff on materials from such countries, as some have proposed, does not appear to be a practical solution, because it will lead to still further job losses in the United States. More industries will move offshore to take advantage of lower costs and the higher availability of raw materials. The magnesium-casting industry is a good example of what happens when a single-country tariff, in this case a protective tariff, is levied.

With such movement go not only jobs and tax base but critical technology developed in the United States. Today, offshore production bases are saying “Don’t worry about the shortages of critical raw materials in the West. Just send us your orders and your best technology and we will build them for you.” What they don’t mention is the loss of the high-paying jobs in areas such as the alternative energy industry, a key component in the current administration’s recovery program. In the defense industry the risk is even greater, putting our ability to ensure our way of life at risk.

I believe it is time we stepped back and took a realistic look at what we want from the mining industry and what the economic impact of a crippled domestic mining industry will be. Only then will we be in a position to determine our economic and political future.

IVAN HERRING

Bonilla-Pedrogo LLC

Royal Oak, Michigan


Transforming conservation

As Alejandro Camacho, Holly Doremus, Jason S. McLachlan, and Ben A. Minteer point out in “Reassessing Conservation Goals in a Changing Climate” (Issues, Summer 2010), a challenge now is how to continue to save species, ecosystem services, and “wild” ecosystems under current and anticipated global warming. Business-as-usual conservation biology, based on setting aside tracts of land to preserve nature as it was found at some past point in time, will not meet its goals when the climatic rug is pulled out from under our preserves.

We agree that the challenge of restructuring conservation biology is daunting, but it is tractable. A committee to “develop … a broad policy framework under the auspices of the National Academy of Sciences,” as Camacho advocates, is an essential step. Focusing such a committee’s mandate on unifying the conservation targets of U.S. governmental agencies can effectively jump-start a new era in conserving nature.

It can also provide a global model because (1) the United States is large and geographically diverse, providing test cases for many biomes; (2) different land-management agencies encompass a wide range of sometimes conflicting goals, but are under one national jurisdiction; (3) America has long valued nature and has been a leader in global conservation; and (4) copious historic and prehistoric data document the natural ecological variability of vast tracts of our continent at time scales that range from tens to thousands of years or longer.

It is no longer appropriate or feasible to set the benchmark for successful conservation as managing for single species or holding an ecosystem to a historical condition. We know from the past that the normal ecological response to climate change is for species to dramatically change geographic distributions and abundances and assemble into new communities. Some species thrive when environments change, others suffer. A more realistic and indeed ecologically more sound overall philosophy is to ensure that species can traverse the landscape as needed in order to track their climate space, and where that is not possible, to help species move using sound science.

This overall philosophy requires developing new standards for land managers—standards based on ecosystem properties rather than the presence of individual species. As an example, in most western American terrestrial ecosystems, the rank-order abundance of individuals within genera of small mammals did not change much during the past several hundred thousand years of dramatically fluctuating climate, but the species within those genera did. Thus, it may not be of much concern if one species replaces another in the same genus, but it may be of great concern if the genus disappears. Likewise, it is already possible to model, using biogeographic principles, what overall species richness in a given climatic and physiographic setting should be. With changed climates, some reserves should see an increase in the number of species, and others should show a loss. Deviations from such expectations would indicate the need for management action.

It may be inevitable that managed relocation be implemented in such cases, and also where it is clear that endangered species simply cannot survive under the climatic regime in their existing preserves. That is a risky business, with the potential to turn what are now reasonably natural ecosystems into elaborate, human-managed gardens and zoos. That is, saving species could destroy the wild part of nature that many regard as its key value.

For that reason, we suggest that the new conservation mandate needs to incorporate the explicit recognition of two separate-but-equal kinds of nature reserves. One—species reserves—would have the primary goal of saving species. Receiving endangered species brought in through managed relocation would be an integral part of the management arsenal. Such reserves would be most logical in places that already have many human impacts. The other—wildland reserves—would have the main goal of mimicking the ecological processes (not necessarily with today’s species) that prevail in times or places where humans are not the landscape architects. Managed relocation simply to save a species would be less desirable there. Prioritization of ecosystem services would be the focus of other government lands.

Whatever strategies eventually are adopted to make conservation biology more compatible with the future, it is essential to initiate action now, given the rapid rate and probable magnitudes of human-caused global climate change.

ANTHONY D. BARNOSKY

Professor of Integrative Biology

University of California, Berkeley

Berkeley, California

ELIZABETH A. HADLY

Professor of Biology and of Geological and Environmental Sciences

Stanford University

Stanford, California


Alejandro Camacho et al. make a welcome contribution to a discussion of the difficult decisions ahead in attempting to conserve biodiversity in the face of rapid global climate change. We would like to comment on two points they make: that traditional reserves will not work and that managed relocation raises new problems regarding environmental justice.

Contrary to what Camacho et al. seem to imply, the idea that conservation measures must go much beyond traditional reserves is not new; it was urged long before the problems posed by climate change became apparent. For over a decade now, systematic conservation planning has focused on the creation of “conservation area” networks, where conservation areas are any habitat parcels that are at least partly managed for the persistence of biodiversity. A dominant theme has been that these networks must be suitably interwoven into the landscape or seascape matrix. The motivation has been partly to prevent conservation areas from becoming islands surrounded by such inhospitable habitat that species’ populations within the areas become isolated and unviable, and partly to incorporate conservation plans into the local cultural geography.

What the specter of climate change has done is underscore the need for this integrative approach. For those species that are capable of adapting to climate change by dispersal, conservation-oriented management of habitats must include units that ensure the required connectivity at the appropriate time. As Camacho et al. note, the time for static conservation planning is over. Luckily, many recently developed decision support software tools (such as ConsNet or Zonation) enable this dynamism by incorporating spatial coherence and other criteria of concern to conservation planners.
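The flavor of such spatial prioritization can be conveyed with a toy example. The sketch below (in Python) greedily assembles a conservation-area network that covers a set of target species while favoring parcels adjacent to those already selected; it is not the algorithm used by ConsNet or Zonation, and the parcel data are hypothetical.

```python
# Toy greedy selection of a connectivity-aware conservation-area network.
# This is NOT how ConsNet or Zonation work internally; it only illustrates
# the idea of weighing species coverage against spatial coherence.

parcels = {  # hypothetical habitat parcels: species present and map neighbors
    "p1": {"species": {"A", "B"}, "neighbors": {"p2"}},
    "p2": {"species": {"B", "C"}, "neighbors": {"p1", "p3"}},
    "p3": {"species": {"C", "D"}, "neighbors": {"p2", "p4"}},
    "p4": {"species": {"D"},      "neighbors": {"p3"}},
}

def select_network(parcels, target_species, connectivity_weight=0.5):
    """Greedily add parcels until all target species are covered, scoring each
    candidate by new species covered plus a bonus for adjacency to the
    already-selected network."""
    selected, covered = set(), set()
    while not target_species <= covered:
        def score(p):
            new_species = len(parcels[p]["species"] - covered)
            adjacency = len(parcels[p]["neighbors"] & selected)
            return new_species + connectivity_weight * adjacency
        best = max((p for p in parcels if p not in selected), key=score)
        selected.add(best)
        covered |= parcels[best]["species"]
    return selected

print(select_network(parcels, {"A", "B", "C", "D"}))  # e.g. {'p1', 'p3'}
```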

Turning to managed relocation (or assisted colonization), its proponents fully acknowledge most of the difficulties raised by Camacho et al., although there has not been the kind of attention to ethical issues that the authors are correct to highlight. The problem here is of social justice: the possibility that relocation will be targeted to regions where human stakeholders have the least power to prevent the reallocation of land to uses stipulated by those in power.

However, even this problem is not new and is related to the problem of creating reserves through enforced human exclusion. As many authors have documented, the creation of national parks in the United States and elsewhere has routinely involved the expulsion of or denial of traditional rights to resident peoples; for instance, the First Nations in North America. In recent years, the shift in focus from traditional reserves to conservation areas has somewhat mitigated this problem. Managed relocation may well reintroduce this problem, although it is unlikely that the scale will be as large as that of the creation of the original national parks.

As Camacho et al. suggest, the only ethically responsible policy is to insist on environmental justice analyses of every proposed relocation. However, there is no rationale for restricting these to relocation policies: They should form part of every environmental policy that we consider.

SAHOTRA SARKAR

Department of Philosophy and Section of Integrative Biology

DAVID M. FRANK

Department of Philosophy

PATRICIA ILLOLDI-RANGEL

Section of Integrative Biology

University of Texas at Austin

Austin, Texas


There can be no doubt that climate change is a game changer, in more ways than one. The response of flora and fauna over the expanse of the Northern Hemisphere has been well documented (earlier leafing, altered timing of migrations, etc.), but the implications for conservation strategies have only begun to be discussed. Camacho et al. will help spur this urgent discussion.

For the past century and a half, the centerpieces of conservation have been parks, wilderness areas, and refuges. Many of these are now public lands, but more and more such conservation efforts have been private initiatives, most notably the Nature Conservancy’s efforts. The premise of both the public and private initiatives has been that large protected areas would be self-replicating examples of what was once presumed to be an orderly recurring cycle of disturbance and recovery. To be sure, there were compelling reasons to doubt this premise long before we began to recognize climate change–induced departures from what had long been regarded as the norm. But until recently, the model of orderly succession held sway, and conservation efforts were directed to setting aside wilderness areas, expanding parks, and expanding refugia. In the face of climate change, areas with fixed boundaries may well more closely resemble prisons than refuges. The ensembles of species contained in our parks, refuges, and designated wilderness areas are almost certainly going to change in ways that many will find deplorable.

Translocation may be, as the authors suggest, a reasonable response to the prospect that protected areas may become dead ends, but there are serious problems with putting all our eggs in that basket. First, we can’t possibly move everything (any more than we can save every endangered species). It seems almost inevitable that our efforts, should we opt for translocation, will focus on charismatic species, which may or may not be biotically significant.

There is also the distinct risk that transplants, if they take to their new home (no small “if”), will become invasive nuisances or worse. There are no sure bets, but the authors are to be congratulated for calling for the urgent initiation of a serious discussion of how conservation needs to be reinvented in a warming world.

JAN E. DIZARD

Professor of Sociology and Environmental Studies

Amherst College

Amherst, Massachusetts


Intelligent transportation

I was pleased to read another article by Stephen Ezell related to the need for action in the deployment of intelligent transportation systems (ITS) in the United States (“Bringing U.S. Roads into the 21st Century,” Issues, Summer 2010). I have regarded him for some time as the most erudite researcher and meticulous analyst, bar none, in the area of ITS, and the article is true to that reputation—a concise and powerful statement of the case for action.

As in his other works, Ezell carefully blends the optimal mix of facts, statistics, and real-world case studies to give a complete view of the growing gap between the United States and other nations around the world in the ITS field, citing earlier successes of the interstate highway system and global positioning system now being supplanted by far more robust development elsewhere. Yet despite his straightforward style, the crispness of his presentation serves to inspire rather than alarm, and throughout there is an air of measured but clear optimism that the United States has all the resources needed to emerge as a leader in the field. Having set up and broken down the key elements of the problem, Ezell concludes with clear, actionable recommendations based on this analysis, suggesting what the ITS community agrees are now the two most pressing issues: the transition of the U.S. Department of Transportation’s role from R&D to implementation, and the application of performance-based metrics.

I endorse these recommendations in the strongest possible terms, based on my own 15-plus years of involvement with the ITS community. I would add that the Federal Advisory Committee Act, with its ability to bring public and private stakeholders together to directly influence U.S. policymaking, supported by working groups of experts from their respective organizations, could and should be leveraged to help drive these transitions.

DAVID E. PICKERAL, JD

Global ITS Development Executive

IBM Corporation

Arlington, Virginia


Stephen Ezell presents a strong argument on how ITS can revolutionize transportation. The author notes how ITS can improve the performance of current infrastructure and mitigate the annual costs from traffic accidents. In doing so, ITS can be an integral part of maintaining infrastructure in a state of good repair. Improved safety, combined with ITS’ potential to reduce congestion (saving time and fuel), can have a huge effect on economic competitiveness. Reducing congestion reduces emissions, which can significantly affect livability and environmental sustainability. Essentially, the author presents ITS as a comprehensive approach to addressing many of the U.S. Department of Transportation’s long-term strategic goals.

The author also successfully demonstrates how the United States lags behind other countries, notably Japan and South Korea, in ITS research and deployment. Indeed, as a percentage of gross domestic product, both Japan and South Korea spend twice as much as the United States. In Japan, drivers can use their mobile phones to access real-time comprehensive traffic information. South Koreans regularly use their mobile phones to access a Web site that provides them with a list of available public transportation options. Improved information of this nature allows drivers not only to choose alternate routes when traffic is bad on their preferred route, but also to consider public transit as an option.

The author urges the federal government to increase spending on ITS by up to $3 billion annually and to speed up research and deployment efforts. For example, he is critical “that it will take five years simply to research and to make determinations about the feasibility and value of IntelliDrive.” Yes, ITS holds immense promise, and in an ideal world it would be nice to accelerate deployment. However, some caution is in order.

The United States is more geographically diverse than Japan or South Korea. Within urban areas, the level of congestion varies greatly. Congestion has increased substantially in higher-growth areas such as Dallas, but much less in lower-growth areas such as Cleveland. Even among less-congested urban areas, some are more congested than others. Given scarce resources, it is prudent to carefully consider where and how to spend dollars on ITS most effectively. It is important to prioritize efforts in the more highly congested areas and corridors that can yield the greatest benefits. Expediting spending solely for the sake of speed can lead to disappointing results.

We also need to link ITS investments to transit, alternate routes, and even telecommuting. Once ITS alerts them to a traffic situation, drivers need frequent and reliable service if transit is to be a viable option. In other cases, opting to telecommute may be better. We also need to link ITS investments to performance-based outcome measures. This is especially so if funding increases. Just as it is prudent to focus investment on where it makes the most sense, it is also prudent to see if those investments actually made a difference. In this respect, the author notes the need for better performance measurement and evaluation.

ANTHONY HOMAN

Senior Economic Policy Advisor

U.S. Department of Transportation

Washington, DC


I find Stephen Ezell’s article very beneficial because it clearly compares the current situations, benefits, and disadvantages of ITS in the United States, Japan, South Korea, and Singapore. His comparisons are useful for understanding what is missing and what needs to be improved.

He writes that nationwide deployment and a single national standard have been benefits in Japan, but I must honestly say that a unified system is more easily adopted nationwide in a country such as Japan, which is geographically isolated from other countries. However, the private companies that provide vehicles, onboard devices, and communication devices seek potential markets globally in order to be successful. To maximize their opportunities in global business, we should actively promote international collaboration; creating a standardized global market will help those globally competitive companies. Reading the article has led me to reflect on whether sufficiently detailed and comprehensive preparations were made in Japan before the current deployment of ITS, in areas such as mid- and long-term roadmaps for nationwide deployment, management and maintenance after deployment, and preserving flexibility for future trends in technology.

Ezell also writes that the strong leadership role of the national government has been a benefit in Japan, but I think that a public/private partnership, rather than the leadership of the national government alone, has been an important key to the successful deployment of ITS. In Japan, a public/private partnership has been ongoing, but I think it is important to strengthen this partnership even further, a view that I believe many countries share.

I agree with his view that the implementation of performance measurement and further improving accountability for results are both important for the effective allocation of surface transportation funding. As is often mentioned in this arena, I understand that there are major challenges in implementing performance measurement, such as which measures should be adopted, how to continuously monitor performance measurement data, and how to link performance measurement to funding allocations. In spite of these challenges, the future direction that the article points out sounds right. ITS technology can be useful in solving many of these challenges in order to realize performance measurement and better accountability.

Ezell also states that investments in ITS deployment deliver superior benefit/cost returns. I think this view is exactly right when looking at certain aspects of ITS, but in regard to other aspects, such as safety improvements, the results are still being debated. In order to increase investments in ITS deployment, it would be important to evaluate all benefits accurately, and to make the evaluated benefits visible and easy to understand for road users and taxpayers.

His article helps us realize the differences among the countries that have interests in ITS and encourages me to think afresh about my own country’s ITS. I will share this beneficial article with my co-workers in Japan, so that they also can use the information for our future activities at MLIT.

MASAHIRO NISHIKAWA

Senior Researcher

Ministry of Land, Infrastructure, Transport and Tourism

Tokyo, Japan


Personal health records

In “Personal Health Records: Why Good Ideas Sometimes Languish” (Issues, Summer 2010), Amitai Etzioni suggests that a “Freudian macroanalysis” (FMA) of proposed policy remedies, seeking “subterranean forces,” can aid our understanding of why some solutions aren’t adopted as quickly as one might expect. To illustrate, he examines the question, “Why aren’t more people using personal health records (PHRs)?” As interesting as his answers are, however, he doesn’t go deep enough, nor does he psychoanalyze the most important party to the decision about whether to use a PHR. I’d like to suggest an expanded analysis.

First, Etzioni is correct in his observations about how much PHRs cost for doctors to use, doctors’ fear of patients using their PHRs to pursue lawsuits, and doctors’ concerns about entering data into PHRs that could confuse or scare patients. But among doctors, those aren’t subterranean concerns at all. In fact, the great majority of doctors readily admit to all of them. An FMA, as I understand it, should seek to uncover hidden causes of actions or emotions—causes that one might be loath to admit play a role (Freud’s hypothesized “Oedipal complex” and “penis envy” constructs are well-known examples).

More important, though, is that Etzioni’s original question wasn’t, “Why don’t doctors make it easier for patients to use PHRs?” That’s an interesting question, and one we have been pursuing in our survey research at the American Medical Association, in collaboration with the Markle Foundation, and it’s clearly related to the general adoption of PHRs by patients. But, to be blunt, if PHRs were really appealing to patients, they would be purchasing and using them with or without their doctor’s help. Yet they are not.

So, what subterranean concerns of patients might be hindering the adoption of PHRs? Here a Freudian analysis seems even more apropos. Terms like denial, avoidance, repression, regression, narcissism, and reaction formation come to mind as potential hidden reasons why most people don’t spend much time obsessing over their health data. The fact is, most people don’t enjoy spending time pondering illness, infirmity, and mortality, even when doing so might be beneficial to them. Or, as one commentator put it, PHRs are not like Quicken (a model for PHRs in the eyes of many, since financial data are also complex and confidential), because tracking one’s blood pressure will simply never be as much fun as tracking one’s investment portfolio.

How might we use such insights to generate greater uptake of PHRs? Data from population surveys show that the individuals most interested in using PHRs are those with chronic conditions or an ill relative. These people have been thrust, most unwillingly, into tracking medications, lab results, and symptoms. For them, PHRs might be a way to help regain a sense of control over their fate and to get better-quality care. But for the rest of us, those who are relatively healthy today, how might we surmount denial, procrastination, and avoidance to convince people that they should be spending time tracking their personal health statistics? And in doing so, is there a risk of moving from denial to obsession? These are combined policy and psychology questions, which deserve much further study.

MATTHEW WYNIA

Director, The Institute for Ethics

American Medical Association

Chicago, Illinois

Clinical Assistant Professor

Infectious Diseases

University of Chicago

Chicago, Illinois


Transforming Education in the Primary Years

When more than two-thirds of students cannot read at grade level and barely three-quarters are graduating from high school on time, it is time to reevaluate not just how well our schools and teachers are doing but whether the entire system needs an overhaul. That is where we find ourselves today. Reading scores on the National Assessment of Educational Progress are embarrassingly low for all children and abysmal for minorities. Worse still, graduation rates, according to the National Center for Education Statistics, are hovering around 60% in some states.

Better early education is a big part of the solution. Governments at the local, state, and federal level must start investing in systems that reach children before kindergarten and get serious about providing children with high-quality instruction in the earliest grades of their schooling. To do otherwise is to waste taxpayer dollars, ignore decades of research, and disregard the extraordinary potential of millions of children who otherwise have little chance of succeeding in school.

This is more than a repeat of the argument for creating universal pre-K. We need a much broader and deeper transformation of the educational system that starts, if parents choose, when children are as young as three years old and continues through the first few grades of elementary school. Early childhood does not stop at kindergarten; it extends through age eight, because children are still learning foundational skills in literacy, numeracy, social competence, and problem solving. A revision such as this requires more than extra funding, retraining teachers, and revamping buildings. It demands a full rethinking of the social contract that is at the core of the public education system.

During the past few years at the New America Foundation, a think tank in Washington, DC, scholars have put forth a series of papers that envision what we call a “next social contract,” a new agreement that sets forth the kind of institutional arrangements that prompt society to share the risks and responsibilities of our common civic and economic life and provide opportunity and security for our citizens. The need for a new contract becomes more apparent each year as the nation is rocked by social and economic shifts: increasing globalization, the aging of the population, and most recently the financial crisis that is reshaping the world economy.

Education has always been critical to the social contract. In fact, primary education is one of the few goods and services, if not the only one, that Americans have decided should be provided to all citizens free of charge. In the 18th century, the nation’s founders realized that an educated citizenry was essential to the success of the experiment in democratic self-government on which they had embarked. Over time, public education has become a foundational piece of the economic social contract too: Do well in school, attend college, and a good job will await you when you graduate. A well-educated U.S. workforce enabled the country’s dominance throughout the 20th century. Expanding access to education has become an important policy tool for advancing social justice, economic opportunity, and global economic competitiveness.

Yet today, despite the success in expanding access to disadvantaged populations of children, the educational system is not producing students who can succeed. Test scores on international exams show that U.S. students are not achieving at the levels of their counterparts in countries such as Finland, Switzerland, Japan, South Korea, Canada, Australia, and others. Among those who have graduated from our high schools, 13% cannot read well enough to conduct basic activities such as reading a newspaper or restaurant menu, according to a 2003 report on adult literacy. Most troubling are the statistics on the low achievement of economically disadvantaged and racial or ethnic minority youngsters. Approximately 84% of African American fourth-graders cannot read grade-level texts well enough to hit the proficiency mark on comprehension tests. With each passing year, their chances of succeeding in high school recede, as do their chances of participating fully in civic life and landing a job that can pull them out of poverty.

Birth to age 8: The crucial years

Every month, new studies in neuroscience and psychology provide insights and warnings about how much of a person’s capacity for learning is shaped from birth to age 8. Young children need to experience rich language interactions with teachers, parents, and other adults who read to them, ask questions of them, and encourage their exploration of myriad subjects. Young children in households with educated parents and well-stocked libraries are more likely to experience those interactions. They are encouraged to participate in conversations and ask questions about what clouds are made of, what they saw at the zoo, what the backhoe was doing at the nearby construction site. Parents with lower education levels rarely have the resources and background knowledge to provide these experiences. For example, a now-classic study, released in 1995, showed that by the time they turn three years old, children from the most disadvantaged families will have heard 30 million fewer words in their lifetimes than children of professional parents. Without intervention, these children will be behind when they arrive at school. The vicious cycle will continue.

More than 40 years ago, when research was just starting to tell this story, federal policymakers decided to invest in early intervention. The answer then was Head Start, a program started in 1965 for children in families at or below the poverty line. It was designed to provide a preschool-like experience combined with health and nutrition services and a strong emphasis on parent engagement. Head Start programs were designed to be managed by local organizations that received federal dollars; they were decidedly not part of a state’s education system.

Head Start continues today, although only about half of eligible children are served, partly because funding is limited. In many areas, wait lists are common; one cannot just squeeze more children into classrooms because teacher/child ratios must be held low to ensure that children get the attention they need. Therefore, expansion is impossible without the money to hire more teachers. Still, even with small class sizes, the quality of the program has also come under fire. Head Start teachers, like most people in the early childhood profession, work for very low wages. This has probably dissuaded college graduates from considering careers in Head Start. In many cases, classes are led by teachers who have not received a strong education themselves and who may be limited in their ability to encourage children’s questions about the stories they hear or about phenomena they observe in the natural and physical world.

Efforts are under way to improve Head Start and raise the credentials of those who teach there. By 2013, for example, more than half of lead teachers must have a bachelor’s degree. From 2005 to 2008, the share of Head Start teachers who were college graduates rose from 40% to 46%.

Despite the hurdles facing Head Start, the program has been shown to make a difference in immunization rates, dental health, and some areas of children’s social and cognitive development. A nationwide longitudinal study, which started collecting data in 2002, showed that children who started Head Start at age four were faring better than their peers on 5 of 15 indicators of school readiness after a year of the program, and those who started at age three were doing better on 11 of 15 indicators.

But according to the most recent installment of the study, by the end of first grade, the evidence of that positive impact had evaporated. Head Start students were doing no better than non–Head Start students. Researchers are now trying to untangle the data to determine what led to the drop-off. Could the low education levels of staff or other quality measures account for it? Did something happen in the early years of public school to stall children’s progress? This latter theory has gained traction as anecdotal evidence has shown that some public school kindergartens are unable to adequately build on what children have already learned in Head Start. Unfortunately, data that would help to get to the bottom of these questions were never collected.

Other preschool programs have produced benefits that last much longer. In fact, the evidence of the effectiveness of high-quality pre-K programs is among the strongest findings in education research. Peer-reviewed, randomized controlled trials in the High/Scope Perry Preschool Project and the Chicago Child Parent Centers Program found that high-quality pre-K programs produced both short-term learning gains for participating students and long-term benefits, including reduced rates of grade retention, special education placement, and school dropout; higher educational attainment and adult earnings; and reduced likelihood of involvement with the criminal justice system. These studies began in the 1960s and 1980s, respectively, and followed children well into adulthood.

More recently, studies of large-scale and high-quality state pre-K programs in Oklahoma and New Jersey have found evidence that these programs also produce significant learning gains for participating children, gains comparable to those found in the Chicago program. Importantly, in each case, the preschool programs are considered part of the education system, opening the door to stronger connections and better alignment between what is taught when children are four and what is taught when they are five, six, and seven. Teachers in these programs also have bachelor’s degrees and receive continuous professional development. In other words, the programs are not provided on the cheap. Research is showing that there is a big difference between the kinds of intellectual, physical, and social explorations that can be guided by a well-qualified teacher and what is expected of someone hired to babysit, keep children’s hands clean, and dole out snacks.

Economists have been investigating whether the investments in high-quality preschool programs lead to financial benefits in the long term, when those preschool-age children grow up. One of the most frequently cited researchers is Steve Barnett, an economist at Rutgers University and codirector of the National Institute for Early Education Research (NIEER). Using his own research and that of colleagues at other institutions, Barnett sees a large return for every dollar invested in preschool, as long as the quality is high. The oft-cited benefit/cost ratio for these high-quality programs is $10 to $1.
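To illustrate how such a ratio is derived, the sketch below (in Python, with purely hypothetical cost, benefit, and discount-rate figures that are not Barnett’s or NIEER’s) discounts a stream of projected per-child benefits back to the year of the investment and divides by the program’s cost.

```python
# Illustrative benefit/cost calculation; all numbers are hypothetical,
# not drawn from Barnett's or NIEER's analyses.

def present_value(benefit_per_year, years, discount_rate):
    """Discount a constant annual benefit stream back to year zero."""
    return sum(benefit_per_year / (1 + discount_rate) ** t
               for t in range(1, years + 1))

program_cost = 10_000      # hypothetical one-year per-child program cost
annual_benefit = 4_300     # hypothetical per-child benefit (earnings, avoided costs)
years_of_benefit = 40      # hypothetical horizon into adulthood
discount_rate = 0.03       # hypothetical real discount rate

pv_benefits = present_value(annual_benefit, years_of_benefit, discount_rate)
print(f"Benefit/cost ratio: {pv_benefits / program_cost:.1f} to 1")
```

The ratio is highly sensitive to the discount rate and the assumed duration of benefits, which is one reason such estimates depend on program quality and on long-term follow-up data.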

James Heckman, a Nobel prize–winning economist at the University of Chicago, has also argued for more investment in the early years. His work has shown that programs will be most cost-effective if they are aimed at infants, toddlers, and children in preschool and the early grades. By the middle years of a child’s schooling, programs tend to produce less of a payoff, with remedial and job training being the least cost-effective.

Policy lags behind research

Given this knowledge, it no longer makes sense to postpone the start of public education until children turn five. More and more parents find it financially necessary or desirable to work full or part-time outside the home when their children are young. Until their children are five years old, most parents are left entirely to their own devices to find and pay for early education services. As a result, nearly half of three- and four-year-olds are not enrolled in nursery or preschool, according to U.S. Census data. Because of the cost of these programs, some of which rival the cost of attending college, children from low- to moderate-income families have much less access than those from more affluent families.

A growing number of states have tried to fill the gaps by creating their own pre-K programs, but they are typically available only to children from low-income families. (Florida, Georgia, and Oklahoma are among the few to offer universal access to pre-K.) But even accounting for federal and state programs combined, public programs are serving just over 40% of four-year-olds and less than half that proportion of three-year-olds, according to NIEER data.

Even when children do have access to preschool, research shows that the quality is highly varied, with many programs providing mediocre instruction that is not tailored to the natural curiosities and motivations of young children. Even relatively wealthy parents are often left in the dark about how to evaluate the programs available to them. This is true in publicly funded and parent-funded programs. The ad hoc patchwork that makes up the fledgling system we have today—Head Start, state-funded pre-K, subsidized child care, school-based pre-K programs, and parent-funded nursery schools—does not provide for consistency in quality standards, early learning experiences, or outcomes for young children. Although quality pre-K can narrow achievement gaps, current arrangements often exacerbate, rather than counter, inequalities.

One might think the picture becomes clearer once children enter kindergarten. But in fact, even kindergarten, which children typically attend in their fifth year, is not a stable part of schooling in the United States today. According to a 2010 report from the Foundation for Child Development, six states—Alaska, Idaho, New Jersey, New York, North Dakota, and Pennsylvania—do not require school districts to provide kindergarten at all, and across the country, one-third of children attend only half-day programs. Yet data on the benefits of full-day kindergarten continue to accumulate. Several reports on the academic outcomes of a national cohort of children who attended kindergarten in the 1998–1999 school year show that full-day attendees were ahead of half-day attendees in reading achievement by the end of the year.

The early grades of elementary school are also in need of reform. Although research shows the imperative of preparing children to read and comprehend grade-level texts by the end of third grade, far too many elementary schools are not up to the task. In-depth observational research in U.S. elementary school classrooms, led by Robert Pianta at the University of Virginia, suggests that only 10% of poor children experience high-quality instruction consistently throughout the elementary years. The same studies show that only 7% of all children have consistently stimulating classroom experiences when both emotional and instructional climate are taken into account.

On their own, these deficiencies seem bad enough. But the problem is even deeper. Current education policies have led to a fragmented approach to early education, with different services provided by different agencies with different funding streams that are not designed to coordinate or interoperate. The educational system typically groups children separately in pre-K settings (with their own multitude of agencies and services, whether at the local, state, or federal level) and K-5 elementary schools. Many elementary school teachers have relatively little training in early childhood development. Yet an elementary education credential typically allows teachers to work in any K-5 grade, even though the skills required to successfully teach first-graders to learn to read are very different from those required to teach fifth-graders to read to learn. What is worse, principals often know little about early childhood development. In order to improve the effectiveness of the schools in serving young children, we must ensure that all educators working with young children have a solid understanding of early childhood development and knowledge about how to create seamless educational experiences.

A new vision for early education

Fixing and extending the primary years of a child’s education is not a silver bullet for the multitude of challenges that must be addressed throughout the education pipeline. But it is a critical first step. The next generation, the workforce of 2030, will not succeed unless today’s children are provided with a healthy, supportive learning environment from the day they are conceived.

In recent years, leading thinkers in child development have developed a new vision, one in which early childhood education is seen as extending through the elementary school years. Jacqueline Jones, the senior advisor on early learning to the U.S. Secretary of Education, is among the proponents of this approach, as is Ruby Takanishi, president of the Foundation for Child Development (an organization that funds part of my work). Instead of focusing on interventions that may affect a child for just one or two years, these experts advocate an approach that spans the continuum of early childhood. This more seamless approach is often called PreK-3rd.

How do we build such a system? The first step must be the expansion of access to high-quality early learning opportunities for all preschool-aged children, starting at age three. To be clear, starting public education at age three does not mean turning the primary grades into college-preparatory machines. It does not mean sticking preschoolers in classrooms that are clearly inappropriate for their age or expecting them to work all day, as if naps were just nuisances in the way of academic training. Instead, an early start means respecting the cognitive, social, and physical needs of young children in a way that is developmentally appropriate. Indeed, we need to elevate those needs to the level they deserve, instead of just assuming that they will be magically met by well-meaning but untrained adults or assuming that children will just absorb knowledge and skills by osmosis.

Pre-K should be a fundamental component of the education system, not an optional add-on. We need to make it universally available to all children whose parents want to enroll them, regardless of family income level or other factors. Americans would never countenance the notion that some children should be denied access to publicly funded third grade or high school based on family income or limitations on available state resources. Participation in pre-K programs, unlike in K-12 schooling, should be voluntary, and parents should have the opportunity to choose among multiple pre-K options in deference to the important role of families as children’s first teachers. But again, any parent who wants to enroll his or her child in pre-K should have that option.

Although providing universal pre-K may appear more costly than targeting pre-K only to low-income youngsters, several facts argue in favor of universal provision. Families with young children often experience considerable fluctuations in income, so eligibility criteria based on family income may lead to disruptions in children’s early learning experiences, undermining public investments in income-based pre-K. Making pre-K universal would also address the needs of moderate-income families, who as noted above currently have the greatest difficulty in obtaining quality early childhood opportunities for their children. For example, a family of four with a household income of $29,000 is too wealthy to qualify for Head Start. Perhaps most important, providing pre-K universally would encourage greater consistency in the early learning experiences that children have before entering school. It would reduce the tremendous variation that currently exists in the skills of entering kindergarteners and allow kindergarten and early-grades teachers to align their curriculum and teaching practices with children’s pre-K experiences.

Providing universal pre-K would also ensure that pre-K programs have the same resources and funding levels as elementary and secondary schools. Most states with publicly funded pre-K programs spend only a fraction on pre-K compared with what the state’s schools spend on K-12 students, even though providing the kind of experience that produces lasting educational benefits requires quality standards and highly skilled teachers—and by extension, funding—comparable to that provided in the elementary and secondary grades. Ideally, pre-K funds should flow to schools and community-based providers on a per-pupil basis through the same school finance system as funding for other elementary and secondary students. Systems of data collection, quality monitoring, and accountability for pre-K programs should be integrated into the larger data and accountability systems used for the entire public education system.

Establishing universal pre-K does not mean that pre-K programs should be just an extension of the public schools. The United States has a rich and diverse network of community-based early childhood education providers, including child care centers, family home care, and Head Start. We must take advantage of the capacity, experience, and unique assets these programs offer by enacting public policies that help them improve the quality of their services and build linkages between community-based pre-K programs and the public schools.

To create seamless transitions, school districts must shore up and improve their kindergarten programs. Kindergarten should not be considered a separate line item in a district’s budget where it can be vulnerable to cutbacks. It should also run for a full day, providing enough time for learning and allowing teachers to introduce children to a full range of subjects, rather than focusing heavily on language and literacy, as many currently do. A full day would also allow teachers to incorporate more time for child-directed and imaginative play, which plays a critical role in developing children’s self-regulation and other essential skills.

In the elementary grades, whether conceptualized as K-3rd or 1st-3rd, there is clearly room for improvement. Many schools, because of a combination of poor understanding of child development and an increased emphasis on early academics, do too little to support children’s social and emotional development during this period. A preK-3rd approach would bring a new focus to the years when children are not only acquiring literacy and language skills but also developing social-cognitive skills such as the ability to self-regulate, defer gratification, focus on a task, and communicate one’s needs and feelings verbally, rather than by acting out. These qualities are as important to an individual’s long-term success in life and the workforce as are academic accomplishments.

To do all of this, schools and their communities must take a much more systematic approach to developing children’s skills, both academic and social-cognitive. Standards, curriculum, formative assessments, and instructional strategies must be aligned with one another, working together to support children’s learning. This alignment must be vertical (from grade to grade) and horizontal (within the grade level), so that all elements are cohesive and children in different classrooms have a common learning experience. Standards must be aligned from grade to grade and over the course of the year. This means that children who learn about, say, triangles and rectangles in kindergarten receive geometry lessons in first grade that do not just have them mindlessly repeating last year’s vocabulary but ask them to build on their knowledge of shapes through new math and engineering exercises.

Effective elementary schools use a clearly articulated curriculum that is simultaneously content-rich, developmentally appropriate, and aligned with student learning goals articulated in the standards. Accordingly, effective preK-3rd educators use developmentally appropriate formative assessments and benchmarks to monitor children’s progress against the curriculum and standards, to inform instruction and identify gaps in children’s knowledge before they fall behind, and to target interventions and supports to struggling youngsters.

Faithful adoption of this approach will require a fundamental rethinking of the culture of teaching and the work that teachers do. Too many public schools operate under an egg-carton model, with teachers working in isolated classrooms, rarely engaging one another or sharing lessons. In preK-3rd schools, teachers are constantly working together, in grade-level and cross-grade disciplinary teams, analyzing student data, regularly communicating about children’s progress, and sharing and refining lesson plans. Teachers have a common language and vocabulary to talk about their goals for students and students’ progress toward those goals.

In implementing the preK-3rd approach, it will be important to monitor progress. Policymakers must develop systems and infrastructure to track the quality of pre-K and kindergarten programs and hold them accountable for results. These systems should measure how well a child is developing socially, emotionally, and cognitively, as well as track the long-term effects of the programs on children’s academic performance. This information must be easily available to teachers and parents and available in the aggregate to policymakers to determine which programs are measuring up to high standards. Only when data show that the education system is finally meeting quality standards—only when most U.S. children are proficient in reading, math, and social-emotional skills by the end of third grade—will we be able to say that the social contract for public education is no longer broken.

In his February 2009 address to a joint session of Congress, President Obama set the goal that “by 2020, America will once again have the highest proportion of college graduates in the world.” But the current focus on high-school reform and policies to expand college access and completion ignores the strong body of evidence that says a student’s chances of college success are often determined long before he or she enrolls in high school. The pathway to college graduation really begins at conception, with babies and toddlers receiving the support of parents, families, and communities. By the time their children are three, parents should have the choice of enrolling their children in publicly funded, accessible, and high-quality learning environments. That high quality must extend through kindergarten and the early grades of elementary school.

In short, high-quality early education is the foundation on which all future learning rests. It is imperative that government invest in such a system. Without it, we will continue to squander the potential of American children to grow into productive, inquisitive, and high-achieving adults, and we will make life harder for all Americans as a result.

From the Hill – Fall 2010

House approves bill to reform offshore oil drilling

After holding dozens of hearings on the Deepwater Horizon oil rig explosion and well rupture in the Gulf of Mexico, the House on July 30 passed a bill to reform offshore oil drilling and restore the Gulf Coast. A Senate bill did not make it to the floor but was expected to receive a vote after the August recess.

The House passed H.R. 3534, the Consolidated Land, Energy, and Aquatic Resources Act of 2009, by a vote of 209 to 193. The bill would remove the current monetary cap on the liability facing companies involved in offshore drilling accidents, impose new fees on oil production, strengthen offshore oil drilling safety standards, and replace the Minerals Management Service (MMS) with three new agencies. The latter is consistent with action already taken by the Department of the Interior.

The bill would direct some revenue from offshore drilling to research and coastal planning efforts, and 10% of the revenues would go to an Ocean Resources Conservation and Assistance Fund, which would provide grants to coastal states and regional ocean partnerships for activities related to the protection, maintenance, and restoration of ocean, coastal, and Great Lakes ecosystems, including a competitive research and education grant program. The funds would also support the development and operation of an integrated ocean observation system.

The House bill contains provisions to better understand chemicals used to break up and disperse crude oil from the surface of water. The bill would institute a temporary moratorium on dispersant use until the Environmental Protection Agency (EPA) develops regulations on their safe application. It would also require the release of information about the chemical ingredients of dispersants, which companies now keep secret for proprietary reasons.

On August 2, the EPA announced the results of a second round of testing on eight dispersants. Phase I testing found that none of the dispersants exhibited significant endocrine-disrupting activity. Phase II testing, which evaluated the toxicity of each dispersant when combined with varying concentrations of crude oil from the Gulf, found that the toxicity of the mixtures was no greater than that of undispersed oil.

At an August 4 hearing of the Senate Committee on Environment and Public Works, Paul Anastas of the EPA testified that more research is needed to better understand the long-term effects of dispersant use. Sen. Sheldon Whitehouse (D-RI) expressed concern that the approval processes for dispersant use did not take safety data into account, a position corroborated by Anastas. Testimony provided by academic scientists in toxicology, oceanography, and environmental science cited dispersant use in the Gulf as “a grand experiment.”

The Senate oil spill bill, The Clean Energy Jobs and Oil Company Accountability Act (S. 3663), is similar in many respects to the House bill, although there are differences. Most important, there is no agreement in the Senate on liability limits or how revenue from oil drilling will be divided.

The Senate bill creates a new interagency research effort focused on oil spill prevention and response; expands the authority of the National Oceanic and Atmospheric Administration to develop oil spill response, restoration, and damage assessment capabilities; and requires an interagency effort to validate oil spill containment and removal methods and technologies.

Senate committee approves competitiveness bill

The Senate Commerce, Science, and Transportation Committee on July 22 passed the America COMPETES Reauthorization Act of 2010 (S. 3605) by a unanimous voice vote. The bill differs from the version passed by the full House on May 28 in a number of ways, although the purpose of both bills is to support science, technology, engineering, and math (STEM) education and R&D and thus help bolster the country’s foundation for economic growth and competitiveness.

The America COMPETES Act, originally approved in 2007, makes investments in science, innovation, and education at three agencies: the National Science Foundation (NSF), National Institute of Standards and Technology (NIST), and the Department of Energy’s (DOE’s) Office of Science. Because many of the provisions related to the DOE and STEM education in the House bill are outside the jurisdiction of the Senate committee, S. 3605 does not include them. If the Senate bill makes it to the floor for debate, which is becoming less likely because there are only a few weeks left in the fiscal year, it is expected that the Energy and Natural Resources Committee will offer an amendment authorizing DOE funding, and the Health, Education, Labor, and Pensions Committee would authorize funding for STEM education activities.

The House bill puts basic research programs at the three agencies on a path to doubling authorized funding levels over 10 years, and the Senate committee’s bill does the same for NSF and NIST.

The House legislation would authorize $7.48 billion for NSF in fiscal year (FY) 2011, with the authorization level rising to $10.16 billion in FY 2015. It authorizes $991 million for NIST for FY 2011, rising to $1.2 billion in FY 2015. DOE’s Office of Science is authorized for $5.2 billion for FY 2011, increasing to $6.9 billion in 2015. In the original draft of the Senate bill, NSF was authorized at $8.25 billion for 2011, $9.07 billion for 2012, and $9.94 billion for 2013; and NIST was authorized at $1 billion for 2011, $1.02 billion for 2012, and $1.13 billion for 2013. During a markup, an amendment by Sen. Lisa Murkowski (R-AK) was passed that cut funding by roughly 10% and put Senate funding more in line with that in the House’s bill. The Senate committee voted to authorize the funding for only three years, instead of the five in the House bill. Both bills also would support the NSF Robert Noyce Scholarships for students studying in science, mathematics, and engineering fields who plan to teach after graduation.

The main difference between the two chambers’ bills is that the Senate strategy is to essentially use the original America COMPETES Act as a template for reauthorizing funding for R&D and education programs, albeit with higher funding levels; whereas the House bill has taken the original legislation as a springboard for creating a set of new efforts to address innovation and education, in addition to putting the three agencies on a doubling path. For example, the House bill would create an NSF R&D grant for high-risk, high-reward research.

However, there are some new efforts in both bills. For example, the House and Senate bills would make the director of NIST an Undersecretary of Commerce and increase the federal government’s share of the Manufacturing Extension Partnership to up to 50%. The legislation would establish a green chemistry basic research program at NSF. The Senate bill gives NSF responsibility for helping to meet cybersecurity challenges.

The Senate committee approved an amendment proposed by Sen. Kay Bailey Hutchison (R-TX) that authorizes $10 million per year for five years for NSF to prepare science and engineering majors to be elementary- and secondary-school teachers. This provision is not in the House bill.

Both bills would authorize the Department of Commerce’s Office of Innovation and Entrepreneurship, an initiative first proposed by the Obama administration in September 2009 to support regional innovation initiatives and provide $100 million in loan guarantees to help small and medium-sized manufacturers use or produce innovative technologies. Unlike the House bill, the Senate’s bill would provide loan guarantees of up to $500 million total for science park infrastructure for the purpose of creating areas where businesses dedicated to scientific research can collaborate.

Additionally, both bills instruct the Office of Science and Technology Policy to create an interagency working group responsible for coordinating federal standards for sharing scientific information and establishing policies for public access to scientific research supported by federal funds.

GAO investigates genetic test companies

The Government Accountability Office (GAO) is continuing to examine direct-to-consumer genetic testing companies. It released the results of its latest investigation at a July 22 hearing of the House Energy and Commerce Committee’s investigations and oversight panel.

Unlike the GAO’s 2006 report, which focused heavily on dietary supplements and lifestyle advice that companies claimed to be personalized to a customer’s genetic profile, the current investigation turned its lens on major, reputable players in the nascent direct-to-consumer genetic test market: 23andMe, Navigenics, DeCode Genetics, and Pathway Genomics. Pathway caused a stir in May 2010 when it made an agreement to sell its test at Walgreens, a major drugstore chain. But the agreement fizzled after the Food and Drug Administration (FDA) said the product appeared to constitute a medical device that must receive federal approval.

During the hearing, the GAO’s Gregory Kutz, managing director of forensic audits and special investigations, described the GAO’s methods. It submitted 10 samples from five donors—some with true donor information and some with false—to the companies. The results varied. Kutz said his own sample revealed that he was either at a high, average, or low risk for prostate cancer depending on which company was analyzing the results.

23andMe General Counsel Ashley Gould responded that it was not surprising that the results were different because each company has different methods for analyzing its data. Gould and representatives from Navigenics and Pathway argued that the different methods do not invalidate the science behind them. Gould called for the government to issue quality assurance standards for direct-to-consumer genetic tests.

Company representatives appeared to be surprised by recordings of telephone conversations between GAO investigators posing as customers and representatives from unidentified gene test companies. In one case, a company employee falsely implied that a customer was doomed to get breast cancer. In another, a customer was encouraged to sneak a sample from her fiancé and surprise him with the results—a practice that is legally restricted in 33 states, Kutz said.

Jeff Shuren, director of the FDA’s Center for Devices and Radiological Health, also testified at the hearing. Although genetic tests have been sold online for years, he said that it was the Walgreens-Pathway deal that prompted the FDA to take another look at regulating the tests. Since then, the agency has sent letters to 20 companies. The FDA is still determining next steps in regulating the young direct-to-consumer genetic test market.

House, Senate committees lay out plans for NASA’s future

House and Senate committees with jurisdiction over the National Aeronautics and Space Administration (NASA) passed authorizing bills that essentially lay out their plans for the agency’s future. Both rebuff some of President Obama’s proposals, especially on commercial space flight.

Sen. Jay Rockefeller (D-WV), chair of the Senate Commerce, Science and Transportation Committee, touted the bill passed by his committee as a compromise with the administration. Passed by the full Senate on August 5, it authorizes $19 billion, $19.45 billion, and $19.96 billion for NASA for FYs 2011–2013. It would allow $1.3 billion over three years to support private-sector efforts to develop spacecraft, an approach encouraged by the administration. It would also extend the life of the space shuttle through FY 2011, a priority for Sen. Kay Bailey Hutchison (R-TX), the committee’s ranking minority member.

The House bill, in contrast, provides just $450 million over the same time period for the commercial spacecraft effort. Many members of the House Science and Technology Committee support the Constellation program that Obama plans to dismantle. Constellation was a plan developed during the Bush administration that would have sent humans first to the Moon and later to Mars. The House bill would also extend the shuttle program.

Both bills call for the immediate development of a heavy-lift launch vehicle—not an Obama priority—and support the White House plan to keep the International Space Station operating until at least 2020.

Meanwhile, 14 Nobel laureates signed a letter supporting the president’s proposed strategy for NASA and criticizing the House authorization bill. The letter said the House bill would leave “substantially underfunded” the areas of technology development, commercial spaceflight, robotic missions, and university and student research.

Federal science and technology in brief

  • The House approved H.R. 4842, a bill to reauthorize the Department of Homeland Security’s Science and Technology Directorate. The bill would double the department’s cybersecurity R&D budget.
  • Coal state Sens. Rockefeller (D-WV) and Voinovich (R-OH) introduced the Carbon Capture and Storage Development Act of 2010 (S. 3589), which would invest $20 billion during the next 10 years to develop carbon capture and storage (CCS) technology. The bill would support large-scale pilot projects and establish a regulatory framework to monitor and govern long-term geological storage of carbon. Meanwhile, on August 12, 2010, the interagency task force on CCS released a report examining the implementation of large-scale CCS technology within 10 years. The report highlights financial, economic, technological, legal, and institutional obstacles to CCS deployment and outlines several recommendations for a long-term plan. These include adopting a carbon price, improving federal coordination, and conducting an analysis of liability associated with CCS technology.
  • The House Committee on Energy and Commerce held its third hearing on antibiotic use in animals. Although scientists from three federal agencies testified that using antibiotics to promote livestock growth contributes to rising antimicrobial resistance, three Republican members argued that current scientific evidence is insufficient to link antibiotic use in animals to the development of resistant strains in humans. Meanwhile, the FDA released draft guidance urging farmers to stop providing nontherapeutic antibiotics to livestock. Antibiotics should be used only in cases where animal health must be protected, Principal Deputy Commissioner Joshua Sharfstein said, and not across the board to promote growth or for other nonmedical reasons. Sharfstein warned that the FDA would consider stricter regulations if the industry fails to comply.
  • Sen. Charles Grassley (R-IA) released a report on ghostwriting, the practice in which researchers sign on as authors of scientific journal articles that were actually prepared by third parties supported by drug or device makers. Grassley has called on the National Institutes of Health (NIH) to adopt policies to ensure the disclosure of third-party involvement in journal articles and has recommended that NIH-funded research run only in publications with disclosure policies.
  • Rep. John Dingell (D-MI), architect of a major food safety bill that passed the House last July, sent a letter to Sen. Dianne Feinstein (D-CA) in frustration over her desire to add to the Senate version of the bill (S. 510) a ban on the use of the controversial chemical bisphenol A in food and drink containers. Industry groups have threatened to pull support for the bill over the proposed ban, and Dingell believes Feinstein’s efforts will hurt the bill’s chances of passing the Senate. The bill would strengthen the FDA’s authority to police the nation’s food supply.
  • The EPA proposed new regulations to cut fine-particle pollution and ground-level ozone (smog) that drifts across the borders of eastern states. The regulation, called the transport rule, would reduce sulfur dioxide emissions from power plants by 71% and nitrogen oxide emissions by 42% by 2014, compared with 2005 levels. The proposal would replace the 2005 Clean Air Interstate Rule, which a federal appeals court ordered the EPA to revise.
  • On July 19, President Obama signed an executive order establishing a National Policy for the Stewardship of the Ocean, Coasts, and Great Lakes. The Executive Order adopts the Final Recommendations of the Interagency Ocean Policy Task Force, including establishing an interagency National Ocean Council composed of representatives of nearly two dozen federal agencies and offices. The order lays out 10 broad policy objectives for using, protecting, restoring, and increasing scientific understanding of the oceans, coasts, and Great Lakes, and outlines the mechanisms for promoting those goals. It also calls for marine and coastal spatial plans based on ecosystem management in each of nine regions.
  • The 2009 State of the Climate report released by the National Oceanic and Atmospheric Administration (NOAA) on July 28 found that the past decade was the warmest on record and that Earth has been warming during the past 50 years. “For the first time, and in a single compelling comparison, the analysis brings together multiple observational records from the top of the atmosphere to the depths of the ocean,” said NOAA Administrator Jane Lubchenco. “The records come from many institutions worldwide. They use data collected from diverse sources, including satellites, weather balloons, weather stations, ships, buoys and field surveys.”

“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science (www.aaas.org/spp) in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

Is the Smart Grid Really a Smart Idea?

It is hard to quarrel with the idea that it is good to be smart. That presumably is why the proponents of some radical changes in the design of the U.S. electrical system came up with the name “smart grid.” The Obama administration has signed on. So have members of Congress from both parties and state utility regulators all over the country. Propelled by promises of greater energy efficiency and reduced greenhouse gas emissions, the smart grid is on a roll.

The smart grid has the potential to bring the United States a more stable, economical, and environmentally friendly electrical system. Unfortunately, it is far from the unalloyed plus portrayed to the public. The cost will be high: Although the economic stimulus program approved by Congress last year included $4.5 billion to help create the smart grid, the full build-out will cost at least a couple of hundred billion dollars more. The potential savings will justify the cost only if the smart grid brings sweeping changes in the way consumers use and pay for electricity. But these changes have the potential to saddle them with unnecessarily high prices, force them to bear unnecessary risks, and make their local utility company an uninvited participant in the intimate details of their everyday lives. These potential changes deserve a thorough airing before the United States commits to such large investments in the name of smartness.

The meaning of dumbness

The nation’s electrical system is an extremely complex network. At its base are generating plants, ranging from windmills to nuclear reactors. Some of these generators, such as big coal-fired plants and nuclear power stations, produce electricity at a steady rate, day in and day out. This baseload power, constantly available, is usually the cheapest type of electricity. Other types of generators, such as wind, solar, and hydroelectric plants, function every day, but their output varies with environmental conditions. A third type of generating resource is a peaking plant, which can be switched on at the times of highest demand. In the United States, peakers usually burn natural gas, but sometimes diesel oil, heating oil, or other fuels. Peakers are often expensive sources of electricity. They are usually uneconomically small and spend most of their time unused, representing wasted investment capital. Some, particularly gas-fired plants, are relatively clean power sources; others, such as those fueled by oil, can be highly polluting.

These generating facilities may be owned by the regulated utilities that distribute power to individual customers, by separate companies (merchant generators) that produce electricity but do not sell it directly to end users, or by industrial plants (cogenerators) that make electricity as a byproduct of producing hot water or steam. Regardless of where they obtain their electricity, utilities devote considerable effort to forecasting how much they will need. A utility knows its customers’ consumption patterns, the weather forecast, and special factors that may have short-term effects on electricity use. The utility’s planners use these data to estimate how many kilowatts of electricity its customers will want each hour of the day for several days ahead.

Once it has forecast demand, the utility looks for the cheapest way to obtain the power it expects to need. There are nearly 18,000 generating plants nationwide, owned by hundreds of different entities. The utility will probably obtain most of its electricity from its own baseload plants or from a merchant generator with which it has a long-term contract. If its expected needs are higher than its baseload supply plus the power it expects to produce from intermittent sources, the utility has several choices. It can switch on some of its peaker plants. It can go into the wholesale market, a well-established arrangement in which utilities bid for electricity offered by sellers with power to spare. It can also mount a public relations campaign to encourage energy conservation. At the other extreme, if the utility expects demand for electricity to be low, its generating plants may produce excess power that it can try to sell in the wholesale market. The baseload plants will run in any event, because a temporary shutdown is too costly and time-consuming.
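
To make the sequence concrete, the following is a minimal sketch, in Python, of the hour-by-hour merit-order logic just described: baseload runs regardless, intermittent output and peakers cover what they can, and the wholesale market absorbs any shortfall or surplus. The capacities in the example are invented for illustration, not actual utility data.

    # Illustrative sketch of the hourly supply decision described above.
    # All capacities are invented; real dispatch planning is far more detailed.
    def plan_supply(forecast_mw, baseload_mw, intermittent_mw, peaker_mw):
        """Return a rough plan for meeting one hour of forecast demand."""
        plan = {"baseload": baseload_mw, "intermittent": 0.0, "peakers": 0.0,
                "wholesale_buy": 0.0, "wholesale_sell": 0.0}
        remaining = forecast_mw - baseload_mw          # baseload runs in any event
        if remaining <= 0:
            plan["wholesale_sell"] = -remaining        # surplus offered on the wholesale market
            return plan
        plan["intermittent"] = min(remaining, intermittent_mw)
        remaining -= plan["intermittent"]
        plan["peakers"] = min(remaining, peaker_mw)    # switched on only if still short
        remaining -= plan["peakers"]
        plan["wholesale_buy"] = max(remaining, 0.0)    # bid for whatever is left
        return plan

    # Example: a hot afternoon hour with 10,500 MW of expected demand.
    print(plan_supply(forecast_mw=10_500, baseload_mw=7_000,
                      intermittent_mw=1_500, peaker_mw=1_200))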

Benefits of a smart grid

At its core, the smart grid is about capital utilization. A large amount of capacity is seldom or never called into service: U.S. generating plants can make about 16% more electricity than they expect to need even on the hottest summer day. This reserve margin has actually increased slightly in recent years. Regulators require utilities to keep excess capacity because demand is so difficult to forecast accurately and because a lack of power can have large economic and social costs. But this cushion comes at a cost to ratepayers, who must pay for plants and transmission lines that rarely operate. The smart grid is intended to help shave off the demand peaks and fill in the valleys, allowing more efficient use of extremely expensive generation and transmission assets and reducing the need for new plants.

The term “smart grid” incorporates a number of different concepts; not all utilities’ smart-grid programs are identical. But certain key features are common to all visions of the smart grid.

  • The grid has two-way integrated communications, both between a utility and the consumer and along the transmission and distribution grid. Most electric meters used in the United States today allow no communication whatsoever. The somewhat smarter meters now being installed in many areas permit radio communication so the utility can read the meter from outside the property but do little else besides recording the number of kilowatt-hours consumed. True smart meters would allow constant two-way communication between utility and customer. Along the transmission grid and local distribution lines, two-way communication could inform a central office of changes in the flow of electricity at any metered location.
  • The grid incorporates sensing and measurement technologies that can monitor equipment health, grid integrity and congestion, and energy theft. Many utilities now are unaware of power outages until a customer calls in. Once the grid has two-way communications capability, sensing technology could provide instant notification of a power outage. It could detect that a transmission line is becoming overloaded. It could also sense that a particular circuit is carrying more electricity than is being measured by meters, helping to identify customers who have tapped into the line illegally or circumvented their meters.
  • The grid has advanced transmission and storage components that make the use of generation and transmission infrastructure more efficient. Technologies within these categories include such things as high-temperature superconducting cable; distributed generation, which means small-scale generation close to consumption locations; electricity storage, so that power generated by intermittent sources at times of low demand could be saved and transmitted to customers when demand is high; and transformers capable of remote monitoring.
  • The grid has diagnostic and control devices and software that can identify and propose precise solutions to specific grid disruptions or outages. These devices and software would make for a big improvement over the current situation, in which it may take considerable time for the utility to identify the location of a problem on the grid and understand the cause.
  • The grid includes interfaces and decision-support tools that can help better manage the electrical system. The smart grid will provide system managers with far more operating data than have been available in the past, and software tools will apply that information in decisionmaking.

These various communications and diagnostic devices will make electricity distribution more efficient and reliable. They will improve utilities’ ability to spot and resolve problems. The advanced systems also will make it easier for utilities to incorporate electricity from intermittent sources, such as wind generators and rooftop photovoltaic solar installations, into their supplies, because they will be better able to predict when and where those alternative sources will be available. Although there are many questions about how the individual components will work and whether they will work together, a system with these components would have substantial social benefits.

Need for demand management

These social benefits, however, offer relatively meager financial benefits to electric utilities. Smart systems by themselves do not address the most expensive problem utilities face: the need to install and maintain large amounts of generation and transmission capacity that are used only a few days, and perhaps only a few hours, each year. If utilities are to obtain significant financial benefit from the smart grid, they need to make use of the smart grid’s capabilities to manage demand. Demand management is an integral part of the smart-grid concept, but it has the potential to create considerable friction between utilities and their customers.

The logic of demand management is simple. Today, almost all residential customers and most nonresidential customers pay for power according to a pre-established price schedule. The prices may vary by season, or the price per kilowatt-hour may be different for the first 500 kilowatt-hours used in a month than for the next 500. Retail electricity prices, however, generally do not change over the course of a day, and they do not vary with supply and demand in the electricity market. If a utility’s published rate schedule calls for households to pay $0.10 per kilowatt-hour, that will be the price whether the electric system is awash in excess capacity or is so overloaded that a blackout threatens.
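
For comparison with what follows, this is what that static, pre-established schedule amounts to in practice; a short Python sketch using the $0.10 first-tier rate and 500-kilowatt-hour tier from the example above, with an invented $0.13 second-tier rate.

    # Static tiered billing as described above: the rate never responds to
    # market conditions. The $0.10 first-tier rate and 500 kWh boundary follow
    # the example in the text; the $0.13 second-tier rate is invented.
    def monthly_bill(kwh_used, tier1_rate=0.10, tier2_rate=0.13, tier1_kwh=500):
        first = min(kwh_used, tier1_kwh)
        rest = max(kwh_used - tier1_kwh, 0)
        return first * tier1_rate + rest * tier2_rate

    print(monthly_bill(800))   # 500 * 0.10 + 300 * 0.13 = $89.00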

Whereas households and small businesses buy power from their utilities at firm prices, the utilities themselves are constantly buying and selling in the wholesale electricity market, where utilities and generators trade with one another. In the wholesale market, the price is set only for short periods, often as little as 15 minutes. In other words, utilities face what is known as dynamic pricing, which changes frequently depending on supply and demand, whereas the prices they charge their customers are static. In the extreme case, when power is in extremely short supply, a utility may pay more to purchase electricity in the wholesale market than it receives for selling that electricity to retail customers. The end users of electricity receive no price signal to cut back on consumption.

The smart grid gives utilities an opportunity to alter this situation. The savings are potentially large: the Electric Power Research Institute, a utility-industry organization, estimated in 2009 that demand management alone could shave 4.6% off peak summertime demand by 2020 and 7% by 2030. Over the next decade, the projected savings from demand management are greater than those from increased efficiency in electricity consumption.

Demand management requires that electric meters at each home, farm, factory, or office be replaced by far more advanced devices that incorporate communications and data-processing capabilities. These smart meters could inform customers of price changes in advance or in real time, enabling a utility to change prices as often as regulators permit. The meter might even be linked to a display screen in the customer’s kitchen or office. The customer could be informed of an upcoming change in the price of electricity and could then choose to schedule electricity-using activities at a time when the price will be low.

The smart meter is only the first stage in demand management. Over time, the meter could be linked to a new generation of “intelligent” appliances or business equipment. These devices could be set to operate only under price conditions selected by the customer, or they could be controlled by the utility itself by means of the smart meter. A family might program its electric dryer to run only when electricity costs are low, and to shut off should the price rise above $0.12 per kilowatt-hour. Utilities could develop new businesses monitoring individual electrical devices; the smart meter might observe, for example, that a supermarket’s freezer is consuming more electricity than it was designed to use and might contact the store to suggest checking whether a seal is broken. Another new line of business might involve managing consumption: A household or business might ask its utility to hold the electric bill below $100 per month, and the utility would turn lights, appliances, and cooling and heating systems down or off at high-price times to keep the bill below the specified level.
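
A minimal sketch, again in Python and with invented prices, of the price-threshold behavior just described: a smart appliance runs only while the broadcast price stays at or below the limit the customer chose, such as the $0.12-per-kilowatt-hour dryer threshold.

    # Sketch of a price-responsive appliance; the $0.12 threshold follows the
    # dryer example in the text, and the price feed is hypothetical.
    class SmartAppliance:
        def __init__(self, name, max_price_per_kwh):
            self.name = name
            self.max_price = max_price_per_kwh
            self.running = False

        def on_price_update(self, price_per_kwh):
            """Called whenever the smart meter relays a new retail price."""
            should_run = price_per_kwh <= self.max_price
            if should_run != self.running:
                self.running = should_run
                state = "starting" if should_run else "pausing"
                print(f"{self.name}: ${price_per_kwh:.2f}/kWh -> {state}")

    dryer = SmartAppliance("dryer", max_price_per_kwh=0.12)
    for price in [0.09, 0.11, 0.14, 0.10]:   # hypothetical sequence of price signals
        dryer.on_price_update(price)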

These possibilities have created an unusual amount of excitement in the staid electric industry. Future relationships between utilities and their customers may be very different from what they are today. But when people who have spent their careers in regulated monopolies begin using such phrases as “rich transactive environment” and “consumer enablement” to describe the future, it is worth asking why.

Smart-grid complications

The smart grid, linked to smart meters, will facilitate far more complex pricing systems than household and small-business electricity users are accustomed to. Some observers have described the aim of the smart grid as providing variable pricing. But this description is inaccurate, or at least incomplete. Variable pricing implies different prices at different times of the day or year. The pricing structure permitted by the smart grid is quite different. It may involve not just variable pricing but real-time dynamic pricing, allowing retail electricity prices to vary constantly according to supply and demand conditions in the wholesale power market at any given moment.

Real-time dynamic pricing can lead to high price volatility as market conditions change. There might be advance notice: The utility could notify customers that the present price of $0.12 will treble from 2 p.m. to 4 p.m. because of anticipated high demand. But it is just as possible that a customer currently paying a low rate per kilowatt-hour could experience a sudden price jump due to unexpected events, such as an explosion at a generating plant or a lightning strike that disrupts a transmission line. Prices would be fixed only for very brief periods, perhaps 15 minutes at a time.

Real-time dynamic pricing has the potential to save money for consumers who are able to shift their electricity consumption to times when rates are low. But it could harm small users of electricity in three different ways. First, household and small-business customers will have to devote time and effort to keeping abreast of the price situation or risk paying significantly more to operate electrical equipment than they anticipated. Second, consumers who are unable to shift their consumption in response to price changes could face extremely high electricity bills. This increase would result because bills would be based not on the utility’s average cost of electricity, but on the marginal cost, which is usually higher and which is the only cost that fully reflects supply and demand conditions in the wholesale market. Third, the risk of changes in wholesale electricity prices would effectively be transferred from producer to consumer.

Each of these factors represents a major change from the customary treatment of small electricity users. At present, electricity consumers’ information costs are minimal: a price per kilowatt-hour appears on the monthly bill, and no further price information is required. A sudden spike in the price utilities pay for power does not increase customers’ bills. The risk of changes in the wholesale electricity price is borne by the utility, not by the residential and small-business customer. Under dynamic pricing, all of that would change, to the customer’s disadvantage.

Some utility customers can handle these risks. A large factory or a big office building can purchase hedges in the futures market, plan ahead to close certain operations when electricity prices are high, or sign long-term fixed-price supply contracts. Households and small businesses can do none of these things; a pizzeria cannot turn off the oven for a few hours just because the cost of electricity has soared. And consider the price risks facing the household that turns on its dishwasher before going out for the evening and later learns that during its absence, the price of electricity rose from $0.12 per kilowatt-hour to $0.65, making the cost to wash the dishes far higher than anticipated. Small electricity users would have few realistic ways to control their costs under this scenario, aside from programming their smart meters to turn appliances off whenever the price gets above a certain level. The most realistic alternative would be for the customer to hand control of electricity consumption to the electric company, which would use smart-grid technology to reach into a house or business at times of high demand and reduce consumption directly by turning electrical devices down or off.

For utility engineers who regard the smart grid as another tool for optimizing the efficiency of the electrical system, utility control over users’ power consumption would be the most desirable outcome. The reason is that real-time dynamic pricing, by itself, may not fully achieve the goal of flattening demand peaks. From the utilities’ point of view, it is not enough to know that higher prices will curb demand. Optimal asset use requires a high degree of certainty about the extent to which demand will change in response to a change in the price of electricity; if the demand response is not highly predictable, utilities will still need to maintain copious spare capacity, and their basic financial goal in building the smart grid will not be accomplished. Only direct utility control of power consumption will provide enough certainty about peak demand for utilities to radically reduce their reserve generating capacity.

The idea that a utility should be able to turn off part of a customer’s power consumption is not new. Utilities do this now by offering what is known as interruptible power to some large users. The user gets a special low rate, in return for which the utility can disconnect it whenever the utility faces a lack of power, a process known as load shedding. The smart meter will make it technically possible for a household to purchase power on an interruptible basis. It will even allow the utility to micromanage load shedding by first turning off the electric water heater, then the clothes dryer, then turning up the thermostat by two degrees. This could occur in the order determined by the customer. But as houses and businesses fill up with smart equipment, it also is possible that state regulators could give a utility the authority to override thermostats or turn appliances off without customer consent. The virtue of this approach, from a utility’s point of view, is that it would know precisely how much load it could shed at each of a million homes and businesses. With that knowledge, it could safely reduce its reserve generating capacity and avoid purchasing power at high prices, thereby lowering its costs by more than enough to pay for smartening up the grid.
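
The staged load shedding just described can be pictured as working down an ordered list of actions until enough demand has been shed. The Python sketch below is illustrative only; the kilowatt figure assigned to each step is an assumption, not a measured value.

    # Staged load shedding: work down the customer-chosen order until the
    # requested reduction is reached. The kW relieved per step is assumed.
    SHED_STEPS = [
        ("turn off electric water heater", 4.5),
        ("turn off clothes dryer", 5.0),
        ("raise thermostat 2 degrees", 1.5),
    ]

    def shed_load(kw_needed):
        shed, actions = 0.0, []
        for action, kw in SHED_STEPS:
            if shed >= kw_needed:
                break
            actions.append(action)
            shed += kw
        return shed, actions

    print(shed_load(kw_needed=6.0))
    # -> (9.5, ['turn off electric water heater', 'turn off clothes dryer'])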

Half a loaf

What seems optimal to utility engineers, though, may not seem ideal to electricity consumers. Households and small businesses will want to control their electricity consumption to maximize their own welfare, not to optimize a utility’s cost function. For the utility industry, this raises the prospect that the smart grid will not adequately modulate demand and hence will not lead to the reduction in reserve generating capacity that is necessary to make the investments in smart technology financially worthwhile.

There is perhaps a better way. Right now, an important subset of utility customers—namely, large commercial and industrial users of electricity—is eager to have the smart grid. These customers are equipped to handle the risks of dynamic pricing, and they are sophisticated enough to figure out how to maximize their profits by turning off various sources of power demand when rates rise. Some of these industrial and commercial customers already purchase electricity on an interruptible basis in order to take advantage of the cost savings that come with it. Many of them would sign up for real-time dynamic pricing immediately if it were possible to do so. They want the smart grid now.

These big power users probably account for about a fourth to a third of all electricity consumption. Finding ways to smooth their demand is low-hanging fruit. It can be picked without installing smart meters at 140 million locations around the country. The potential for smoothing demand is not as large as in the household and small-business sectors, but the hurdles to adoption are much lower. In this area, unlike the household sector, smart-grid investments entail low political risk and a reasonable probability of a decent return.

And what of residential and small-business customers? State utility regulators need to begin thinking of ways to protect the interests of those customers as the smart grid is gradually put into place. There are many ways to expand the use of price signals in the electricity market while stopping short of dynamic real-time pricing. For example, customers might receive a simple price schedule that takes advantage of the smart meter’s ability to change prices during the course of the day. A scheme with one price per kilowatt-hour from 7 a.m. to 2 p.m., a much higher price until 8 p.m., and then a very low price until the morning would be easy for customers to remember, and they would soon get used to cooling the house in the morning and dialing back the air conditioner in the afternoon. Confronting customers with a price change every 15 minutes, in contrast, will not encourage this simple rule-of-thumb behavior to conserve electricity. Dynamic pricing at the retail level is a recipe not for customer enablement but for customer confusion and paralysis.
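
As a concrete illustration, the sketch below encodes the three-period rule-of-thumb schedule suggested above; the period boundaries come from the example in the text, while the rates themselves are invented placeholders. The same three-number rule is what would make the overnight vehicle charging discussed below an easy habit to form.

    # Three-period time-of-use schedule as described above. The hours follow
    # the example in the text; the rates are invented placeholders.
    def time_of_use_rate(hour):
        """Price per kWh for a given hour of the day (0-23)."""
        if 7 <= hour < 14:      # 7 a.m. to 2 p.m.: moderate daytime rate
            return 0.10
        if 14 <= hour < 20:     # 2 p.m. to 8 p.m.: much higher peak rate
            return 0.25
        return 0.04             # 8 p.m. to 7 a.m.: very low overnight rate

    # A customer only needs to remember three numbers; for example, a
    # 10 kWh overnight electric-car recharge would cost:
    print(10 * time_of_use_rate(23))   # $0.40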

One of the biggest advantages of this modest approach is that it does not shift price risk to the consumer. That is as it should be. Utilities have the knowledge, skills, and financial capability to protect against the risk that a spell of hot weather will cause wholesale power prices to spike; that an unscheduled outage at a generating plant will lead to a temporary shortage of electricity; and that, on occasion, the wholesale price of power may even exceed the retail price. Individual households and small businesses, on the other hand, have absolutely no means of protecting against such price risks. These are risks that utilities should bear.

A similarly simple pricing model could be used to achieve one of the greatest potential benefits of the smart grid: the integration of electric vehicles into the nation’s vehicle fleet. Electric vehicles have the potential to strain the electrical system under the flat-rate pricing schemes widely used today, because drivers have no incentive not to recharge their batteries at times when power demand is already high. A dynamic pricing structure would leave drivers unsure whether they should recharge their batteries when they park for several hours at a charging point in an office complex or a shopping center. With an easily remembered static pricing rule, on the other hand, drivers would be aware that daytime recharging is costly and nighttime recharging cheap, and would be likely to develop the simple habit of recharging overnight whenever possible.

Most important of all, using the smart grid to enable households and small businesses to make sensible and comprehensible choices about their own electricity consumption will avert the backlash that is likely to come if the public sees the smart grid as a means of manipulating behavior and forcing people to do things they do not wish to do. Regrettably, customer sensitivity is often weak in the utility industry, where most customers have no choice but to deal with their local electric monopoly. In the case of the smart grid, though, customer sensitivity is paramount. A virulent public reaction against dynamic pricing could impede the adoption of smart-grid technologies, delaying the many public benefits that the smart grid can bring.

Both federal and state governments have roles to play in protecting consumers during the transition to the smart grid. Although most of the money to build the smart grid will be put up by utilities themselves, federal grants and coordination efforts are vital. As they support the smart grid, the Department of Energy and the Federal Energy Regulatory Commission need to make clear that dynamic real-time pricing for small electricity users is not part of the project. State utility commissions need to encourage experimentation with other pricing models, including the rule-of-thumb approach described above, that will promote conservation and reduce peak consumption while at the same time protecting household and small-business users. Equally important, state regulators need to make sure that, as large power users shift to real-time dynamic pricing, the utilities’ fixed costs for reserve generation and transmission are not shifted onto small customers covered by static pricing schemes.

These policies may mean slowing the pace of deployment, because a smart grid without dynamic pricing will produce a smaller return on utilities’ investments. But even though a smart grid without dynamic real-time pricing does not offer the perfect price signals that engineers and economists find so alluring, it will bring many social benefits while protecting the interests of electricity users and reducing potential political backlash. Although dynamic pricing may be the theoretical ideal, in the case of the smart grid, the best may be the enemy of the good.

The Dismal State of Biofuels Policy

Biofuels policy in the United States remains controversial and much debated. In the months since BP’s catastrophic deep-water oil rig explosion, the international debate over energy, ever inclined to drift on the winds of current events, has been captured by the fiasco in the Gulf of Mexico and the environmental destruction caused by the errors of BP, Transocean, and Halliburton. Not knowing how to respond, politicians, including President Obama, have called for new, non–petroleum-based “clean” energy.

The pall cast over the U.S. energy future by the ruptured well in the Gulf has worked in odd ways. Petroleum extraction and consumption, on which the entire industrial enterprise is staked for now and for the reasonably foreseeable future, carry real risks. We already knew about the air pollution and climate hazards of an oil economy, but the risks of undersea extraction were largely ignored. One reaction to what happened in the Gulf has been increased interest in alternative energy sources such as ethanol. Yet the irony is that well before the BP disaster, the use of petroleum-based nitrogen fertilizer to grow corn, nearly a third of which now goes to make ethanol, had already polluted the Gulf of Mexico. After the surge in corn plantings in 2007 and 2008, largely for use in ethanol production, the hypoxic or “dead zone” in the Gulf caused by the runoff of nitrogen reached its largest extent in 25 years. The BP spill simply added to the Gulf’s ecological problems.

Meanwhile, the non–market-based stimulus to the U.S. biofuels sector shows no sign of abating, despite the fact that even if every bushel of corn produced domestically were dedicated to biofuels, it would support only about 15% of vehicular energy demand. Congressionally mandated usage requirements have diverted corn from food and feed uses to fuel uses regardless of supply conditions or price levels. Far from moderating this policy, President Obama has called for an increase in the currently mandated level of biofuel production, from 36 billion gallons by 2022, including 15 billion gallons of corn ethanol, to 60 billion gallons by 2030.

According to estimates by Earth Track founder Douglas Koplow, if current laws are maintained until 2022, the biofuels industry will receive more than $60 billion per year in subsidies, more than six times the $9.5 billion in support received in 2008. Cumulative subsidies between 2018 and 2022 are expected to total $420 billion. If the Obama plan to require 60 billion gallons by 2030 comes to pass, subsidies in that year would be $125 billion, and cumulative support from 2008 to 2030 would be in excess of $1 trillion.
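
A back-of-the-envelope check of the multiples implied by these figures (values in billions of dollars per year, taken from the estimates cited above):

    # Koplow's figures, in billions of dollars per year.
    support_2008 = 9.5
    support_2022 = 60.0           # under current law
    support_2030 = 125.0          # under the proposed 60-billion-gallon mandate

    print(support_2022 / support_2008)   # about 6.3, i.e. "more than six times" 2008 support
    print(support_2030 / support_2008)   # about 13 times 2008 support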

Congress intended that corn would be used for only 15 of the 36 billion gallons of mandated ethanol production. The rest is supposed to come from cellulosic alternatives based on feedstocks such as switchgrass. However, the 2010 mandated cellulosic blend had to be scaled back by 95% because the cellulosic fuel was commercially unavailable.

In May 2010, the U.S. Department of Agriculture (USDA) issued a critical assessment of the difficulties of cellulosic alternatives. U.S. production capacity for cellulosic fuels was estimated to be only 10 million gallons in 2010, compared to the 100 million gallons mandated for the year. In addition, the cost of producing cellulosic ethanol is estimated to be three to four times that of corn ethanol. The report noted that the cost of growing feedstocks for cellulosic plants is probably underestimated and that “dedicated energy crops would need to compete with the lowest value crop such as hay, which has had a price exceeding $100 per ton since 2007.” The leading cellulosic ethanol producer, Fiberight, is expected to have a production capacity of 130 barrels a day in 2010. Even a small oil refinery produces 60,000 barrels per day. The USDA estimates that production capacity for cellulosic biofuel will be 291.4 million gallons by 2012, compared with a mandated 1,000 million gallons.
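
Putting those capacity figures on a common footing makes the shortfall plain. In the sketch below, the 42-gallons-per-barrel conversion is the standard U.S. figure; the other numbers come from the estimates above.

    GALLONS_PER_BARREL = 42   # standard U.S. conversion

    # Fiberight's expected output versus even a small oil refinery, in gallons per day.
    fiberight_gal_per_day = 130 * GALLONS_PER_BARREL           # about 5,460
    small_refinery_gal_per_day = 60_000 * GALLONS_PER_BARREL   # about 2.5 million
    print(fiberight_gal_per_day, small_refinery_gal_per_day)

    # Cellulosic capacity as a share of the mandate, per the USDA estimates.
    print(10 / 100)        # 2010: 10 million gallons of capacity vs. 100 million mandated
    print(291.4 / 1000)    # 2012: 291.4 million gallons projected vs. 1,000 million mandated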

In our previous assessments of biofuels policy, we have emphasized the effects of displacing corn for feed and food use by fuel use and the consequent upward pressure on corn prices. A cursory examination of corn prices shows that, after 2007, corn prices achieved highs in 2008 and have now settled at a higher plateau in the range of $3.50 to $4.00 per bushel. Early in June 2010, the Food and Agriculture Organization (FAO) of the United Nations noted that although the FAO Food Price Index fell from 174 points in January 2010 to 164 points in May 2010, it remained 69% higher than in 2004. The specific role of biofuels in rising food costs is significant: The inflexibility of mandates helps cause price spikes when supplies are tight, and the increasing diversion of corn to fuel over time pushes up long-term prices. Biofuels policy has done little to provide more flexibility in response to such price effects. Nor has it confronted the fundamental question of an efficient distribution infrastructure, because ethanol is water soluble and cannot be moved in petroleum-dedicated pipes or containers.

U.S. biofuels policy has also had disproportionate effects globally. The decision in 2007 to double the corn-based ethanol mandate from 7.5 to 15 billion gallons pushed global use of grains for fuel sharply upward because the United States accounts for nearly 90% of world ethanol consumption. As a result, grains used for biofuels reached 125 million tons in 2009–2010, with annual growth rates in the three preceding years of 15, 24, and 36% as compared to non-industrial demand growth averaging about 2% per year. Roughly 8% of global grain production is now committed to heavily subsidized fuel use.

Yet corn-based ethanol produces little net energy gain—20 to 30% by most credible estimates. And its effects on greenhouse gas emissions are seen as increasingly troublesome, both because of heavier nitrogen fertilizer use adding to nitrous oxide emissions and because of pressure to convert new lands to cropping, resulting in a carbon debt measured in decades or centuries, depending on the land converted and the conversion method used.

Finally, current U.S. biofuels policy is neither a cost-effective strategy for reducing greenhouse gases nor a reliable bridge to the supposed promise of second-generation biofuels. Koplow estimates the cost of reducing carbon emissions through U.S. ethanol policy at $500 per ton, which is among the most expensive of all available options. And the current structure of the U.S. ethanol industry—fermentation plants clustered largely in the heart of the Corn Belt—is ill-suited to a cellulosic-based strategy because it will probably require the chemically based breakdown of cellulose and lignin from wood or grass products grown on marginal lands far from the Corn Belt or the use of waste by-products or algae, also sourced far from existing plants.

The current dismal state of U.S. biofuels policy should prompt a return to the drawing board. Consider the following:

  • In a newly debt-burdened United States, the excessive cost of current biofuels policy will prove troublesome, relying as it does on operating subsidies to specific companies and commodities rather than on public goods investments accessible to all and favoring the least-cost solutions.
  • In a national debate over the appropriate regulatory role of government, biofuels policy will look increasingly disruptive, relying as it does on mandates that displace market signals with unrealistic political goals that have already spiked food prices, driving tens of millions of people in poor countries into debilitating hunger.
  • In a reexamination of alternative energy sources, biofuels will be found wanting on two scores, being both an inefficient replacement for petroleum as an energy source and a high-cost strategy for reducing greenhouse gas emissions.
  • In a reconsideration of environmental pollution problems, biofuels will emerge as a major contributor to pollution through nitrogen runoff, as a threat to biodiversity through pressure on land conversion and water scarcity, and as a worrisome contributor to nitrous oxide emissions, which are almost 300 times more potent as a greenhouse gas than carbon dioxide.
  • In an assessment of long-term solutions, biofuels will look less attractive because of the need for a costly separate distribution system to avoid problems from their water solubility.
  • As more is understood about next-generation biofuels, corn-based ethanol will come to seem more like a barrier than a bridge, given the likely differences in location, processing technologies, and feedstocks for the two industries and the necessity of overcoming current ethanol and farm subsidies to launch a new industry.
  • As skepticism mounts over the technological and economic feasibility of next-generation biofuels and as commercial production proves elusive and its costs increasingly burdensome, the whole biofuels pathway will be reexamined.
  • As practical, medium-term solutions become more attractive, attention will probably shift away from quickly replacing petroleum-based liquid fuels to finding more efficient ways to improve fuel use. Already-rising fuel economy means that U.S. gasoline use probably peaked in 2007 and will decline in the future. A McKinsey study showed that most reductions in carbon emissions between now and 2030 can come cheaply and from existing technologies; the decarbonizing of the U.S. economy will be gradual and incremental in nature.

How could policy changes be made in a thoughtful way? A first step certainly would be to freeze the mandates and end their escalation. The ethanol industry’s share of transport fuel usage can be grown more reliably through market-based competition than through mandate-based sourcing shifts. This is especially true given all of the technological and economic uncertainties surrounding next-generation biofuels.

Second, the blender’s tax credit should continue to be lowered in measured but clearly committed steps. The credit is paid to those who blend ethanol with petroleum fuels and acts as an indirect price support for corn. It was recently reduced from 51 to 45 cents per gallon. Further, policymakers should consider replacing the credit with a subsidy that varies inversely with corn prices. It would rise to promote surplus disposal when corn prices fall but be phased out as corn prices rise. This would make corn-based ethanol more of a balancer than a destabilizer in food policy.
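
One way to picture such a counter-cyclical credit is sketched below. Every parameter except the 45-cent maximum, which matches the current credit, is an invented illustration rather than a figure proposed here: the payment per gallon shrinks linearly as corn prices rise and disappears above a trigger price.

    # Sketch of a blender's credit that varies inversely with corn prices:
    # the full credit is paid at low corn prices and phases out linearly as
    # prices rise. The floor and phase-out prices are invented for illustration.
    def blender_credit(corn_price_per_bushel, max_credit=0.45,
                       floor_price=3.00, phase_out_price=5.00):
        if corn_price_per_bushel <= floor_price:
            return max_credit
        if corn_price_per_bushel >= phase_out_price:
            return 0.0
        share = (phase_out_price - corn_price_per_bushel) / (phase_out_price - floor_price)
        return round(max_credit * share, 4)

    for price in [2.50, 3.75, 4.50, 5.50]:   # dollars per bushel
        print(price, blender_credit(price))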

Third, the biofuels market needs to be opened and internationalized to promote greater efficiency and competitiveness. This would reward low-cost producers while also providing more breadth and diversity to the potential market. The United States, because of its dominance in ethanol consumption, needs to set the tone and direction if it is serious about ethanol as a transport fuel.

In addition to putting ethanol on a more realistic foundation, these changes would free up substantial resources for research into a broader array of clean energy alternatives. These should include not just new energy sources but also improved energy storage and distribution technologies and more efficient energy usage approaches. Dedicating the current $10 billion annual cost of the biofuels program (let alone the sixfold increase in spending that is looming under a continuation of the current course) to such a three-pronged research agenda should produce more usable, cost-effective energy and climate change solutions.

This approach requires moving government out of its current role of distorting or replacing market signals and back into its more appropriate role of investing in public goods to create new market opportunities. Although the transition may be difficult, the resulting policy course should enjoy more broad-based support.