An ocean of worry

Although Orrin Pilkey and Rob Young are skeptical of mathematical models that specify how high and how quickly sea level will rise, they have no doubt that the oceans are expanding. Because of the long time lag, they are also certain that sea level will continue to climb for decades, regardless of efforts to forestall global warming. We must therefore accept the inevitability of coastline retreat and plan for at least a seven-foot rise by 2100. Failure to do so, the authors warn, will exacerbate an impending crisis.

As Pilkey and Young demonstrate, the rising sea threatens not only beaches, coastal wetlands, mangrove swamps, and coral reefs, but also the global economy and perhaps even the international political system. The inundation of low-lying neighborhoods in coastal cities is all but certain. Several major financial centers, including Miami and Singapore, are especially imperiled. Equally worrisome is the advance of seawater into the agriculturally vital deltas of eastern and southern Asia, a process that will undermine global food security and eventually generate streams of environmental refugees. Entire countries composed of low-lying atolls, such as Tuvalu and the Maldives, may be submerged altogether, forcing their inhabitants to relocate to higher islands or to the continents. Finding havens for such displaced nations will, to say the least, present its own economic and geopolitical dilemmas.

The Rising Sea outlines several strategies that could be deployed to protect vulnerable communities from the mounting waters, but none are cost-effective. Seawalls and other forms of coastal armoring can only temporarily defend limited areas at great expense; shorelines so hardened will eventually turn into wave-battered capes as neighboring stretches of coast retreat. Replenishing eroded beaches with sand hauled in from elsewhere, another commonly favored strategy, is similarly dismissed as exorbitant, ecologically destructive, and doomed to failure as the sea continues to rise.

Facing the inevitable, Pilkey and Young find only one viable approach: wholesale retreat from beaches and other low-lying coastal areas. We must pull back, they argue, and allow the sea to reclaim land and infrastructure alike. In some cases buildings might be moved, but most structures along the shoreline must be abandoned. Alternative approaches, they demonstrate, are always extraordinarily expensive and can only be provisional. Arguing in hard-headed economic terms, the authors emphasize the conservative nature of their overarching proposal.

Pilkey and Young realize that few self-styled conservatives will warm to their prescriptions. As they show in a chapter aptly entitled “A Sea of Denial,” economic interests threatened by climate change strive to manufacture doubt about the underlying environmental processes. Although a handful of reputable scientists do deny global warming as well as its concomitant sea-level rise, their arguments continue to be discredited. Journalists who frame the resulting controversies as “unsettled debates” thus do the public a disservice, elevating crank theories into mainstream positions. With the issues so clouded, and given the enormous inconvenience of accepting the reality of climate change, few policymakers, let alone voters, have grasped the severity and certainty of the threat. As a result, the necessary adjustments will be difficult to enact, making our eventual reckoning with the sea all the more painful and costly.

Few scholars are as well-prepared as Pilkey and Young to confront the challenges posed by the rising sea. Pilkey, the James B. Duke Professor of Geology Emeritus at Duke University, has written, co-written, or edited more than 30 books on shoreline processes and policy. He has devoted much of his career to the study of the overbuilt North Carolina barrier islands, documenting the futility of engineering protection from storm surges. In 1985, Pilkey founded Duke’s Program for the Study of Developed Shorelines, which has explored the intersection of environmental and developmental processes in coastal zones throughout the world. In 2006, the program moved to Western Carolina University as leadership passed to Pilkey’s former student and current co-author, Young. Young has also intensively analyzed the clogged vacation islands of the North Carolina coast, while gaining broad expertise in wetland ecosystems, hurricane dynamics, and landscape evolution.

Their knowledge and experience have served them well in crafting a book of global scope. The Rising Sea takes on a host of contentious issues, ranging from the science of climatology to the politics of coastal planning to the economics of engineering, all the while taking into account the perverse psychology of a populace loath to acknowledge the truth when doing so proves disruptive. Although insistent, Pilkey and Young are never unduly alarmist. The seven-foot rise in sea level by 2100 that they advise us to expect is a rather modest figure. The actual rise, they readily grant, could be far greater. Yet they have steered clear of writing a panicky ecodisaster tome that might make a big impression but would risk discrediting the environmental movement. In certain respects, The Rising Sea is actually optimistic. “Sea-level rise,” the authors write, “does not have to be a natural catastrophe. It could be seen as an opportunity for society to redesign with nature.”

A complex equation

At first glance, sea-level rise is a straightforward phenomenon: As global temperatures increase, glaciers melt, pouring additional water into the oceans. Pilkey and Young, however, painstakingly show that the process is anything but simple. To begin with, the sea is not actually level, because of local variations in gravitational attraction. More significant complexities are introduced by the fact that some coastlines are sinking while others are rising, impeding the effort to establish a global baseline. The growth of the oceans, moreover, also stems directly from increasing temperature: As water warms, it expands. Roughly half of the sea-level rise that has occurred during the past several decades has been caused by thermal expansion. During the next century, however, glacial melting will almost certainly become a much more significant contributor. In particular, the possible attenuation or even disintegration of the Greenland and West Antarctic ice sheets could result in a much greater than anticipated rise.

The fate of the ice caps remains the subject of considerable scientific controversy. As Pilkey and Young show, such uncertainty makes the accurate prediction of sea-level rise impossible. And if we cannot say how much or how quickly the oceans as a whole will expand, we certainly cannot predict how far the shoreline will retreat in any particular place. Local shoreline processes vary tremendously in accordance with their geological circumstances; a delta starved of sediment by dam construction, for example, will usually experience substantial land loss regardless of changes in sea level.

Such inherent variability has not prevented coastal engineers from deploying a precise mathematical model, the Bruun Rule, for predicting shoreline retreat. As Pilkey and Young demonstrate, this so-called rule, based on absurdly simple postulates, does not work as advertised. In many environments, the Bruun Rule severely underestimates the rate at which the sea advances. But coastal engineers, the authors imply, are often blinded by their own interest in minimizing threats. Working closely with beach developers, engineers typically favor simple approaches that mollify their clients. The coastal engineering establishment, and in particular the Army Corps of Engineers, comes across as obstructionist, obtuse, and profligate. Indeed, one of the authors’ concluding recommendations is simply to “get the Corps off the shore.”
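
For readers who have not encountered it, the Bruun Rule is usually presented in a simplified form along the following lines (an illustrative sketch only; the exact notation and refinements vary from source to source and are not given in the review):

$$ R = \frac{L}{B + h_*}\, S $$

where R is the horizontal retreat of the shoreline, S the rise in sea level, L the cross-shore width of the active beach profile, B the height of the berm or dune, and h_* the closure depth beyond which sediment exchange is assumed to be negligible. Because L/(B + h_*) is commonly on the order of 50 to 100, even this bare-bones formula implies tens of feet of retreat for every foot of sea-level rise; Pilkey and Young’s point is that real shorelines are far messier than its tidy postulates allow.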

Pilkey and Young’s additional proposals are equally forceful. Because most barrier islands will be doomed by a mere three-foot rise, further development on them is indefensible. More generally, the authors argue that beachfront construction should be limited to small and, ideally, moveable structures. They call for the discontinuation of government programs, including disaster relief, that encourage building in vulnerable areas. “People who insist on building adjacent to eroding shorelines,” they imply, should be counted as “fools rather than victims.” More contentious yet is their suggestion that control over the beachfront be stripped from local governments. According to Pilkey and Young, local governments are usually beholden to property owners and developers determined to protect their investments regardless of cost or long-term feasibility. They argue that authority should be passed to higher levels of government, although they do not specify how this might be accomplished.

The planner’s perspective

Timothy Beatley, author of Planning for Coastal Resilience, would probably concur with most of Pilkey and Young’s proposals. He too is convinced that rising sea levels will require massive social and economic adjustments, including the withdrawal of human habitation from the most vulnerable beachfront areas. Not surprisingly, Pilkey finds much to admire in Beatley’s book, endorsing it on the back cover as a “critical addition to the library of anyone concerned with the future of the world’s coasts.”

But although the authors share many core concerns, the two books bear little resemblance to each other. Beatley is scarcely concerned with geological processes, climatological predictions, or political maneuvering. Instead he focuses on policies that coastal communities could enact to increase their resilience in the face of potential disturbances, including but not limited to those caused by the rising sea. His recommendations focus on infrastructure and buildings, but they range widely, often turning to issues of social organization. Thus he suggests that coastal resilience can be enhanced by “nurturing critical social networks and institutions” and promoting a “diverse local economy.”

Most of Beatley’s recommendations are laudable and all are well-meaning, but many come across as lackluster. Bold section headings give obvious advice such as “Plan Ahead for a Resilient Recovery and Growth,” “Guide Growth and Development Away from High-Risk Locations,” and “Think Holistically.” Beatley’s presentation is similarly uninspired, favoring bullet-point lists, typologies, and belabored definitions of key terms. The entire first chapter bypasses coastal issues altogether to scrutinize the concept of “resilience.” Elsewhere one encounters idealistic platitudes, including the proposition that we replace our “excessive individualism” with “an ethic of helping others.” As a result, Planning for Coastal Resilience sometimes reads more like a sermonizing textbook for a course in green community planning than a trade book aimed at a broad audience.

Most disappointingly, Planning for Coastal Resilience actually has little to say about specifically coastal issues. Although Beatley’s case studies focus on seaside communities, almost all of his policy recommendations are equally applicable to noncoastal communities. He repeatedly advocates universally relevant green practices such as building green roofs and creating pedestrian-friendly communities. Nor is there anything specifically coastal in the author’s emphasis on disaster preparedness, given that most inland communities are similarly vulnerable to a variety of natural and not-so-natural calamities. Of course, sea-level rise does present coastal communities with an extraordinary threat, as Pilkey and Young show so well, but Beatley’s discussion of the phenomenon is limited to a few pages.

Perhaps it is unfair to criticize a book on the basis of what it promises to be rather than what it actually is. As a brief textbook on eco–community planning with special reference to coastal areas, Planning for Coastal Resilience has much to recommend it. But anyone striving to grasp the monumental challenges posed by the advance of the ocean, as well as the complexities of the political debates and scientific controversies that surround it, would be well advised to turn to The Rising Sea. Bruce Babbitt does not exaggerate by much in his blurb for Pilkey and Young’s book, calling it “a must read for all Americans.”


Martin W. Lewis is a senior lecturer in the department of history at Stanford University, Stanford, California.

New Bedfellows

During the past few decades, a growing amount of mental real estate has been devoted to discussing art and science, and the level of attention increased markedly in 2009 as we marked the 50th anniversary of C. P. Snow’s influential lecture The Two Cultures, the 150th anniversary of the publication of The Origin of Species, and Darwin’s 200th birthday. Innumerable conferences, symposia, and festivals celebrated not only Darwin’s science but the links between his thinking and literature, the arts, and popular culture. We have examined under a microscope every aspect of the social, political, cultural, ethical, and visual environment surrounding the forefather of modern biology in order to better understand his work as well as his humanity. The cumulative result of this exploration is reflected in the insights found in Imagining Science: Art, Science, and Social Change.

As Curtis Gillespie suggests in the foreword to the book, once you crack open the topic and dig around, the range of ideas and perspectives on the world is amazingly vast. It’s the consensus of most scientists that bioscience is to the 21st century what the field of physics was to the 20th century. If our fundamental understanding of nature has altered that much, how can that thought not have a significant ripple effect in every other field of study or critical thought? Combine that with technological advances and you have a wealth of things to think about. Artists love this stuff, philosophers live for it, and ethicists, lawyers, and policymakers have no choice but to face it.

Imagining Science is a collection of essays and visuals by artists, scientists, and social commentators that “explore the complex legal, ethical, social, and aesthetic concerns about advances in biotechnology, such as stem cell research, cloning, and genetic testing.” The editors are brothers, Sean and Timothy Caulfield, who seem almost by birthright to be perfectly suited for this task. Sean is a printmaker and Canada Research Chair and professor of printmaking in the Department of Art and Design at the University of Alberta. Timothy is the research director of the Health Law Institute, a professor in the Faculty of Law and the School of Public Health at the University of Alberta, and a senior health scholar with the Alberta Heritage Foundation for Medical Research. Their combined expertise guided their excellent selection of contributors to provide a thoughtful and accurate mapping of the larger conversation about bioscience, technology, art, and social concerns.

“The ability to imagine science through art allows the reader to explore feelings of anxiety, fear or uncertainty through visual language,” says Sean Caulfield. “It is crucial that thinkers from a variety of disciplines work together in order to ensure that we maintain a broad and open perspective when addressing important issues. As biotechnology continues to challenge our minds, stretch ethical boundaries and reach new limits, it is more important than ever for us to unite our artistic and scientific communities as we continue our quest in understanding and Imagining Science.” This book makes a valuable contribution to that mission.

The essay “Making Art, Making Policy” by law professor Lori Andrews and lawyer, artist, and community activist Joan Abrahamson sets a strong opening tone for the book by providing examples of how art and literature have directly affected policymaking. They suggest that “an alternative way to explore the ethical, legal and social impact of technologies like genetics is to analyze the issues artists predict will arise with the technologies.” The importance of the creative voice is reiterated in Bartha Maria Knoppers’s essay “Policymaking and Poetry,” in which she suggests that the truly personal voice, free of ideology and political maneuvering, “can contribute to the construction of universal principles.” Conrad Brunk, a professor of philosophy and an expert in the ethical aspects of environmental and health risk management, points out that the level of risk deemed acceptable by experts is not always the same as that accepted by the general public. In “The Evocative Image: Art and the Perception of Risk” he states that the artist’s role is to call our attention to the complexity of moral and ethical issues that determine what is “safe” technology.

The essays aim to move beyond simplistic views of science, such as the popular media image of the scientist as a Dr. Frankenstein, a madman driven to take technology to its limits no matter what the ethical and moral concerns. Ethicist Cynthia B. Cohen attempts to dispel myths and raise awareness of the role that scientists play in the ethical and policy issues raised by stem cell research, human cloning, and the creation of chimeras. In “Half Human, Half Beast?: Creating Chimeras in Stem Cell Research” she points out the leadership role that the science community has taken in self-regulating research in this area. As an artistic counterpoint to these concerns, the work produced by Adam Zaretsky’s Transgenic Pheasant Embryology Art and Science Laboratory at the University of Leiden offers a platform for further discourse on the use of new biological methods. This is one of several programs represented or at least briefly mentioned throughout the book that deal with “wet-work,” or artwork that employs hands-on biotechnological procedures. In fact, the first program to formally train artists in the laboratory alongside science students was founded only in 2000, at the School of Anatomy and Human Biology at the University of Western Australia. That program, SymbioticA, now called the Centre of Excellence in Biological Arts, was founded by cell biologist Miranda Grounds, neuroscientist Stuart Bunt, and artist Oron Catts and remains an innovative learning and research environment. As bizarre as this approach to artmaking may seem at first, lawyer and ethicist Hank Greely’s essay “Within You, Without You” addresses the possibility that the biological world is even stranger than our often oversimplified understanding of it. Greely argues that whereas popular views of science tend to strive for clear definitions and structure, art may in fact be better at communicating the true nature of our biological complexities.

Science and art are different epistemologies, and the benefit of the discourse lies in the perception gained from their differences. It’s often a question of exploring the effectiveness or ineffectiveness of the language used. Communication expert Edna Einsiedel’s “Of Ladders, Trees and Webs: Genomic Tales Through Metaphor” suggests that understanding the power of metaphor, the mainstay of artist and writer, may well be the key to understanding the link between popular discourse and scientific thinking. In contrast, Jai Shah’s essay warns of the misunderstandings that can arise when metaphors are misinterpreted. Essays by geneticist Jim Evans and artist David Garneau, along with a joint essay by writer and creative “sci-art” practitioner Anna Hayden and genetic scientist Michael Hayden, take up the inherent problems of comparing art and science. Rather than arguing for the superiority of art or science, they maintain that neither field of exploration is complete without the complement of the other. In “Believing the Impossible” Gail Geller challenges the reader to cultivate a higher tolerance for uncertainty and curiosity. Economist Peter Phillips reminds us that the role of the artist has evolved from the time of the Renaissance, when artists relied on the patronage of the elite, to an independent social position with the freedom to criticize.

Imagining Science makes clear that the art/science interface is becoming a productive field of study with a growing group of its own theorists, critics, curators, and historians. To those already entrenched in the debate, Imagining Science offers a fresh perspective, summarizing the hot topics. For the uninitiated, the collection of words and images is an inviting introduction. Although the large format and high-quality images might lead one to treat this volume as a coffee table art book, it would be a mistake to let it gather dust after a quick perusal. It deserves to be read closely and considered carefully. Imagining Science should be a springboard to further exploration of the rich interaction of science with other powerful social forces and institutions.

Charles Darwin and the Human Face of Science

The latest success in Charles Darwin’s victory lap marking his 200th birthday and the 150th anniversary of the publication of On the Origin of Species is a starring role in a major motion picture. Creation, a film by Jon Amiel starring real-life husband and wife Paul Bettany and Jennifer Connelly as Charles and Emma Darwin, will open in the United States in January. In what might be a relief to those who have taken their own voyage of the Beagle through a seemingly nonstop series of Darwin conferences and symposia, the film focuses not on the science itself, though there is a little of that, nor on the religious opposition to Darwin’s ideas, though that theme does appear in the context of Charles’s relationship with Emma, but on the personal demons with which Darwin must wrestle before he can write and publish his groundbreaking book.

The movie, which is based in part on the book Annie’s Box by Darwin’s great-great-grandson Randall Keynes, is a highly dramatized and somewhat fictionalized treatment of this period of Darwin’s life. This is not the elder sage Darwin with the biblical beard, but a 40-something Darwin wracked by grief and guilt over the death of his 10-year-old daughter Annie, plagued by mysterious physical and psychological ailments, uncertain about his relationship with his wife, and anxious about the inevitable conflict between his scientific findings and the religious orthodoxy of the day. The drama revolves less around the evidence and analysis of the science than around Darwin’s sanity and his ability to put his ideas into a book.

Fellow scientists Joseph Hooker (Benedict Cumberbatch) and Thomas Huxley (Toby Jones) make brief appearances, encouraging Darwin to finish and publish his book. Huxley is portrayed as a latter-day Richard Dawkins, eager to take on the religious leaders who put beliefs before scientific evidence. His one brief outburst will provide vicarious thrills for many defenders of evolution who secretly yearn to lash out at religious anti-intellectualism. In exhorting Darwin to publish his findings and advance the cause of rigorous science, Huxley spews his venom: “Clearly the almighty cannot claim to have authored every species in under a week. You’ve killed God, sir. And I for one say good riddance to the vindictive old bugger…. Science is at war with religion, and when we win, we’ll finally be rid of those damned archbishops and their threats of eternal punishment.”

For those of us who want to see science become a more integral part of the culture and to have scientists perceived as human beings, Creation demonstrates that, as always, one must be careful what one wishes for. We want science to be perceived as an ordinary human activity conducted by flesh-and-blood people, but that sometimes means that we want scientists to be seen reading bedtime stories to their kids, attending football games, doing the laundry. It’s easy to object to the caricature of the mad or impersonal scientist, but what about the philandering, neurotic, egocentric, greedy, abusive, or racist scientist? You can be sure that if we have more scientists in movies and TV, we will have more unlikable scientists. Just look at the Mozart of Amadeus, or any number of artists, novelists, and musicians that appear in film and fiction.

The Darwin who appears in Creation is not a repulsive figure, but he is a flawed human being who is certainly not a model of mental stability. He is not unfeeling, but his overwrought feelings clearly damage his relationships with his wife and children. In the film, his psychological tensions threaten to prevent him from publishing his landmark book. Even after he conquers his own ambivalence about finishing the book, he remains uncertain about whether to publish it. In a sequence designed to squeeze as much tension as possible out of events, he gives the completed manuscript to the earnestly religious Emma to read and to decide whether to publish or destroy it. She pulls history’s most portentous all-nighter reading the manuscript. When Darwin awakes, she is outside poking a fire. Is this the writer’s nightmare to end all writers’ nightmares? Well, we know it’s not, and she soon presents him with the manuscript wrapped and addressed to the publisher. He then carries it to the gate, where he drops it in the back of the mail carrier’s horse-drawn cart. Perhaps it’s a reflection of my own anxieties, but seeing the only copy of On the Origin of Species sitting in the back of an open cart gave me a chill. Where were his copy machine and his external hard drive?

As my father used to say when I asked him if one of his tall tales was true, “Of course not, you idiot, but it could be.” Director Jon Amiel and screenwriter John Collee were creating drama, not writing history. The point is not that this is exactly what happened in Darwin’s life, but that the personal dramas to which we all are subject can affect scientists and the path of science. Over time, we believe that the self-correcting mechanisms of science as a discipline will correct human errors and missteps, but there is no harm in acknowledging that the path of science is not bounded at all times by reason, evidence, and objectivity. And particularly if we want to see science win a role in the popular arts, we must be prepared to tolerate some dramatic license. (The paparazzi might be harder to countenance.)

What many in the scientific community are having a bit more trouble swallowing at the moment is the fallout from the hacked email communication among climate scientists. Although it is excruciating to listen to climate change naysayers who have never bothered to learn any of the details of climate science suddenly become close readers of every nuance and detail of these off-the-cuff exchanges, it would be foolhardy to defend every word. There might not be anything of import that is objectionable in the emails, and there is certainly nothing that undermines the major findings of climate science. But it isn’t necessary to defend every word written by every scientist. Good intentions and a good education do not imbue us with perfect judgment or character. We know that we have to be willing sometimes to censure the words and behavior of colleagues we respect and agree with. Yes, some climate scientists can be petty, arrogant, impatient, rigid, and blinkered. They are human, and the culture of science has evolved to deal with human error, whether intentional or not. Those who support the consensus in climate science should be at the forefront of condemning any behavior that does not support the most rigorous science.

We say that we want nonscientists not only to be familiar with the results of science but also to understand the messy, contentious, error-strewn process by which science advances. This includes the ordinary human failings as well as honest errors of judgment or interpretation. Let’s admit whatever mistakes were made, put them in the context of the larger achievements of climate science, and get back to the daunting challenge of learning as much as we can about our glorious, befuddling planet—and its even more perplexing inhabitants.

Forum – Winter 2010

U.S./Russia nuclear cooperation

Linton F. Brooks’s “A Vision for U.S.-Russian Cooperation on Nuclear Security” (Issues, Fall 2009) is devoted to one of the most important issues between Russia and the United States, the resolution of which is essential to our joint future and the future of the world in general.

The article correctly underscores the replacement of Cold War proliferation threats with an increased threat from radiological terrorism, because knowledge about nuclear technologies allows their quick development by a significant number of countries at minimal expense. But this widening circle of countries developing peaceful nuclear activities is also able to create the scientific and technical precursors to accessing nuclear weapons.

The article also notes that the collective experience with cooperation between Russia and the United States over more than a half century allows one to speak of the possibility of creating a long-term partnership to strengthen global security. The resolution of these problems is not possible alone, even for the most powerful country, but together Russia and the United States can be world leaders in this process. However, for this partnership to be a reality, one must establish the necessary conditions for mutual understanding. Among the conditions listed by Brooks are the following: information exchange, dialogue, joint situation analysis, open and frank discussion of differences in threat perceptions, and a desire to understand one another.

In addition to these fundamental considerations, one must add missile defense, because its specific boundaries have yet to be defined. Where is missile defense necessary to address the defense of Europe and the United States from the potential launch of ballistic missiles by certain countries, and where might it be the catalyst for a new arms race?

The article provides a positive evaluation of the developing Russian-U.S. Strategic Arms Reduction Treaty discussions, which speaks to the fact that both sides will support having nuclear forces of nearly equal size. When both sides had several thousands of nuclear warheads, that principle was justified. But today, there is a new task: to go to even lower levels.

In this case, it would be reasonable to adhere to principles of equal security, including nuclear and non-nuclear elements and the positioning of forward-based capabilities along the borders of nonfriendly countries.

With respect to the nuclear threat from Iran and North Korea, it would be wise to hold direct talks with them, starting with a guarantee by the United States that no steps would be taken toward regime change if they do not act as aggressors. Such talks should aim to attract these countries to participate in the international division of labor surrounding the development of peaceful nuclear technology in exchange for strict adherence to the Non-Proliferation Treaty (NPT) and the Additional Protocol. The NPT played, and continues to play, a positive role, but it has deficiencies.

The steps proposed to strengthen the nonproliferation regime should be supported, and we should jointly consider how to implement them through international agreements, obligatory for all countries that are developing potentially dangerous peaceful nuclear technologies regardless of whether they are members of the NPT or not.

Although the elimination of nuclear weapons in the near term is not highly likely, as the author notes, that question requires joint discussion: What contemporary role do nuclear weapons play in the post–Cold War world, and why are they necessary for the United States, for Russia, and for other countries? In what way can we move toward a nuclear-free world?

The article correctly notes the role of nuclear energy in satisfying the growing world demand for energy. This requires the joint development of general rules and norms that countries wishing to develop nuclear technologies should follow. This is true from the perspective of safety as well as from the perspective of potential diversion of this technology to military purposes. Adhering to safety criteria is the job of all states because, as the author rightly notes, “a nuclear reactor accident anywhere in the world will bring this renaissance to a halt.”

It goes without saying that Brooks’s proposals about the exchange of information to support the security of nuclear arsenals and nuclear materials, the development of joint criteria for safety and the technical means for adherence to these criteria within the limits of the nonproliferation regime, and the creation of an international system for determining the sources of nuclear material should be supported.

The article establishes the necessity of strengthening scientific and technical cooperation to fight nuclear terrorism, controlling the reduction of nuclear weapons and materials, and detecting undeclared nuclear activities, as well as supporting physical security, materials control and accounting of nuclear materials, and increased safety at nuclear reactors.

Brooks does not ignore the possible obstacles to cooperation between the United States and Russia, as it is possible that significant tensions in political relations may remain. But these obstacles should be addressed, because nuclear disarmament and nonproliferation are keys to strengthening strategic stability and security and are of fundamental interest to both countries and the entire world community.

Ambassador Brooks’s proposals point the way toward the resolution of these problems.

LEV D. RYABEV

Advisor to the Director General

State Atomic Energy Corporation (Rosatom)

Moscow, Russian Federation


Protecting the youngest

The article by Jack P. Shonkoff on early childhood policy (“Mobilizing Science to Revitalize Early Childhood Policy,” Issues, Fall 2009) is mostly persuasive. Even so, I have three nits to pick. The first is that Shonkoff’s argument that early childhood policy needs to be “revitalized” is misleading. There has been a productive and energetic debate about early childhood policy at the federal level since the beginning of the war on poverty. Over these four-plus decades, both Republicans and Democrats have supported a host of laws that created large child care and preschool programs. More recently, 40 states have created their own high-quality prekindergarten programs on which they spend about $4 billion annually. The nation now spends $26 billion in federal and state funds on child care and preschool every year, not counting the $4 billion in stimulus funds being spent in 2009 and 2010.

Second, at the risk of being labeled a troglodyte, the claim that brain research shows how much we need early childhood programs is unpersuasive. I could not count the number of times I’ve heard people who don’t know a dendrite from a synapse announce that “As all the brain research now shows…” and then proceed to make the traditional claim that early childhood is vital to subsequent development. Behavioral research by educators and developmental psychologists has long been persuasive in showing that early experience contributes greatly to child development and that high-quality early childhood programs can boost the development of poor children and produce lasting effects. Most policymakers fully understand this fact, as the $26 billion in spending on early childhood programs demonstrates. The real need now is for those who understand brain development to create and test specific activities (or even curriculums) that ensure that brain development proceeds according to plan in children who live in poverty or other difficult environments. To the extent that these activities, which are already under way, supplement the curriculums developed on the basis of behavioral research, brain science will have made a concrete contribution to the early childhood field.

Third, for anyone concerned with early childhood policy, I think the greatest need right now is to figure out how to maximize the return on the $26 billion we already spend. At the very least, we need to figure out how to coordinate Head Start and state pre-K programs to maximize the number of children who receive a high-quality program. Moreover, although this point is contentious, the research seems to show that we’re getting a lot more return from state pre-K spending than from Head Start. Perhaps we should find ways to allow more competition at the local level and the flexibility to award federal funds, including Head Start, to programs that produce the best results, as measured by standardized measures.

I am sympathetic with Shonkoff’s call for innovation and reform in early childhood education, but doubt that appeals to the recent explosion of understanding of brain development and function have much to contribute to the spread of high-quality programs through the more effective use of the resources already committed to preschool programs.

RON HASKINS

Co-director

Center on Children and Families

The Brookings Institution

Washington, DC


The science of early childhood, together with its contribution to children’s educational achievement and economic productivity, is the reason why it is time to step beyond public discussion and into greater public investment. That’s one big point that Jack P. Shonkoff articulates, and an important one.

Two other standout points from his article bear underscoring and action. First, the fact is that “early” means “early.” That means that the investment and practice strategies have to reach children from infancy and become part of a continuum of early childhood services that link to robust educational opportunities from kindergarten and beyond. Consider how we approach this age-span issue in the world of public education. We do not engage in public debate about eliminating educational opportunities for all children, although we acknowledge that strategies and opportunities may vary depending on the age and needs of the child. We do insist on the educational continuum. This broad-based thinking must likewise inform the work of early childhood stakeholders so that we do not inadvertently pit the needs of one age group of children against another.

Second, Shonkoff notes another scientific given: the known relationship between children’s cognitive development and the development of their executive function. The most successful adults exhibit initiative, persistence, and curiosity, and can and do get along with others. Early childhood leaders have long known that children’s development demands a comprehensive approach to early learning, and the science reinforces this. As the traditional public education community seeks to develop what are known as the “common core” of standards starting with kindergarten, early childhood educators and public education leaders should be mindful that the best traditions in early childhood education include a comprehensive understanding of children’s developmental domains. The early childhood standards across the states move beyond conventional understandings of cognition and into executive function, approaches to learning, and children’s social-emotional development. We must take care to insist that the basic integrity of this approach is protected as the federal government, states, and education advocates join together to appropriately push for the common core in education reform.

We would best serve the needs of children and the broader society if we could move off the discussion about “whether” to commit public funding for early childhood and focus instead on drilling deeper and more creatively to hone the practices and supports for early childhood practitioners so that we can offer the best array of early childhood services in all communities.

HARRIET DICHTER

Deputy Secretary, Office of Child Development and Early Learning

Pennsylvania Departments of Education and Public Welfare

Harrisburg, Pennsylvania


Jack P. Shonkoff’s article raises a call to action for the early childhood field. There is no doubt that the period of early childhood is receiving attention from all sectors—communities, states, federal leaders, researchers, and philanthropy—but the risk is that we will not deliver on that energy and momentum. In the National Conference of State Legislatures’ work with legislatures, it is clear that policymakers are eager for the best information possible to inform their decisions about investments in young children. Lawmakers are considering multiple issues ranging from child care and preschool opportunities to home visiting and children’s health, in an effort to support learning and healthy development, particularly for young children at risk. There has been much progress in creating greater understanding of the importance of the early years and in investing in available policy options. But the door is open for new ideas.

Shonkoff points to the role of science in informing the policies and programs of the future. Neuroscience and child development research have provided a key knowledge base that has contributed to the broad interest in early childhood policy. During the past several years, the National Conference of State Legislatures has engaged groups of legislators to work with the Center on the Developing Child to help scientists be more effective at communicating neuroscience and child development research to a policymaker audience. What legislators have shown us is that because they grapple with policy issues daily in their legislatures, they can take a scientific finding and think of multiple ways to apply it to their state context. Scientific research on how the brain works, how it develops, and the multiple impacts of adversity makes the case for the importance of addressing children’s needs early. The research also provides real clues about possible policy directions, but this emerging science base has rarely been applied to the design of specific interventions or to program evaluations that take the science, apply it, and test it. And even when there are program evaluation data, debates about methodology or effect sizes can create the impression that the research isn’t strong enough. Cost/benefit data have also contributed to the significant attention to early childhood issues but similarly lack answers about which program elements are most critical to fund.

Policymakers then continue to ask key questions, particularly about the children Shonkoff refers to as experiencing “toxic stress”: What works? What is the most effective intervention to invest in? These are the “how” questions Shonkoff is urging the scientific community and the early childhood field to answer better. And during these tough economic times, when states have had to close more than $250 billion in budget gaps, the answer can’t be to expand funding for everything and hope for the best. To get to implementation, policymakers need interventions that can be brought to scale, a strong research base to back up their choices, and an accurate assessment of cost. With these in hand, the next phase of science-driven innovation can make the leap Shonkoff calls for.

STEFFANIE E. CLOTHIER

Program Director

NCSL Child Care and Early Childhood Education Project

National Conference of State Legislatures

Denver, Colorado


Cloning DARPA

Erica R. H. Fuchs’s thoughtful and persuasive tutorial on making ARPA-E successful (“Cloning DARPA Successfully,” Issues, Fall 2009) rests on two unprovable assumptions: first, that DARPA has been successful, and second, that the reasons for that success are knowable and replicable. Significant evidence and widespread consensus support both assumptions. But they are assumptions nonetheless.

DARPA’s success is more often asserted than proved. Most such assertions, like those of Fuchs, cite a handful of historic technological developments to which DARPA contributed. Seldom, however, do such claims enumerate DARPA’s failures or attempt to compute a rate of success. When I was conducting my own research on DARPA, the success rate I heard most often from those in a position to know was 15%. Who can say if that does or does not qualify as success in DARPA’s admittedly high-risk/high-payoff style of research?

The landmark study of U.S. computer development conducted by the National Research Council, Funding a Revolution: Government Support for Computer Development (1999), concluded that computer technology advanced in the United States because it enjoyed multiple models and sources of government support. DARPA was just one of several federal agencies and several research-support paradigms that made a difference. Different agencies played more or less critical roles at various stages. DARPA can justly claim computer development as one of its success stories, but so too can other agencies and other models. Nor is there widespread agreement on the reasons for DARPA’s success.

Fuchs’s argument that that success “lies with its program managers” comports well with my research. But I also discovered, at different times, weak program managers, strong office managers, and even decisive DARPA directors. I doubt that any single cause explains all of DARPA’s success, or even its failures.

Still, if I had to bet on one factor being most important, I would be inclined to embrace Fuchs’s program managers. ARPA-E will probably succeed at some meaningful level if it can recruit and empower good program managers. Even more surely, it will fail if it cannot.

ALEX ROLAND

Professor of History

Duke University

Durham, NC

Alex Roland is coauthor, with Philip Shiman, of Strategic Computing: DARPA and the Quest for Machine Intelligence, 1983-1993 (MIT Press, 2002).


Nanoscale regulation

The increasingly rapid pace of technology change of all kinds presents modern societies with some of their most pressing challenges. Rapid change demands foresight, vision, adaptability, and creativity, all combined with a healthy degree of prudence. Such capabilities are difficult to come by in the complicated and often messy world of modern governance.

The class of innovations known generally as nanotechnology is typical of the new issues that are challenging government and society. These are not new materials but extremely small versions of existing ones, which exhibit different properties and have new applications. That nanotechnologies offer benefits in the areas of health, environmental protection, energy efficiency, and many others is clear at this stage in their development. That they may, at the same time, pose risks to health and the environment and that those risks are highly uncertain, also are clear. Nanotechnology poses a need for making risk/benefit calculations in the midst of scientific uncertainty and conflicts over values; this will challenge government and others in the coming decades.

In “Nanolessons for Revamping Government Oversight of Technology” (Issues, Fall 2009), J. Clarence Davies has identified the central problem in anticipating and managing the effects of new technologies: We are addressing a 21st-century set of problems with institutions and strategies that were designed for an earlier era. In that earlier era, we generally wanted proof of harm before acting to prevent or minimize it. Once the problems became apparent, theoretically omniscient government regulators would determine the best technology or other solution for managing it. These solutions then were applied to the sources of the problems through rules; these sources would either comply or face government sanctions.

This kind of strategy, and the combative relationships that typically accompanied it, made some sense for environmental problems that were relatively predictable, for sources that were readily identifiable, and when government could develop at least a modest level of expert knowledge. Issues such as nanotechnology are different. They are dynamic and ever-changing, their effects are extremely difficult to predict, and government simply does not have the capacity to be able to anticipate and manage them without a different relationship with those who are developing and applying the technology.

At a broad level, nanotechnology and other issues require two capabilities from government. One is to integrate problems and solutions; the other is to be able to adapt to rapid change. Davies proposes a promising (although politically challenging) approach to the integration issue. He would consolidate administrative authority and scientific resources from six existing agencies into a new super-regulator. This would primarily be a science agency with a strong regulatory component. There is much to recommend such a change, especially if it brings a higher degree of integration to the task of anticipating and managing new technologies. Whether it meets the adaptability challenge—that of making government and others more nimble, creative, and collaborative—is another matter. Still, half a loaf is better than none. This proposal offers a thoughtful starting point for deciding how to manage new technologies before they manage us.

DANIEL J. FIORINO

School of Public Affairs

American University

Washington, DC


Energy innovation

In “Stimulating Innovation in Energy Technology” (Issues, Fall 2009), William B. Bonvillian and Charles Weiss have made a valuable contribution by characterizing the challenges surrounding the entry of new energy technologies into the market and articulating the case for taking a holistic view of the energy innovation process. They present a thoughtful outline of the various energy technology pathways and a logical approach to assessing the necessary policies, institutions, financing, and partnerships required to support the emergence of technological alternatives.

The appeal for a technology-neutral, attributes-based approach to policy design is very consistent with findings and recommendations of the Council on Competitiveness. For example, as part of a 100-day energy action plan for the 44th president (September 2008), the council proposed the formation of a Cabinet-level Clean Energy Incentives working group to construct a transparent, nondiscriminatory, long-term, and consistent investment framework to promote affordable clean energy. In addition, the council called for the working group to take into account full lifecycle costs and environmental impact, regulatory compliance, legal liability, tax rates, incentives, depreciation schedules, trade subsidies, and tariffs.

Bonvillian and Weiss’s ideas are also consistent with the underlying recommendations of the council’s comprehensive roadmap to achieve energy security, sustainability, and competitiveness: Drive: Private Sector Demand for Sustainable Energy Solutions, released at the National Energy Summit on September 23, 2009. As the authors point out, the implications of adopting such an approach are large and politically challenging, not to mention an immense analytical undertaking. But business as usual, whether in terms of our energy production, consumption, or the underlying policies that drive both, is an option we can no longer afford. A new framework for policy design will not only facilitate U.S. energy security, sustainability, and competitiveness, it will inform public understanding and raise the quality of the policy debate.

SUSAN ROCHFORD

Vice President, Energy & Sustainability Initiatives

Council on Competitiveness

Washington, DC


Health care for the future

In “From Human Genome Research to Personalized Health Care” (Issues, Summer 2009), Gilbert S. Omenn makes a compelling and comprehensive case for the advent of personalized health care and for its implications for research and policy development. The necessary research agenda must go further and faster to address the professional, social, and financial implications of this “disruptive technology.” The objective is to address the final stage of translation and to minimize the adoption interlude.

The policy agenda should be expanded to encourage research into the implications for health services, including the organization of delivery sites, especially exploring the possibilities for developing organizationally and physically adaptive hospital environments. The implications for the workforce are profound, including the organization of specialties, scope of practice, medical education, and, most challenging, continuing medical education. The design of adaptive payment mechanisms must be explored to ensure that financing is not a barrier to adoption. And we must face the legal issues from a strong research base.

This translation agenda is as challenging and critical to the future of personalized health care as is the clinical research agenda. The Agency for Healthcare Research and Quality (AHRQ) should be added to Omenn’s list of federal agencies that are the important players. AHRQ has a history of effectively engaging the disciplines that must be mobilized to move the translation agenda.

GARY L. FILERMAN

Senior Vice President

Atlas Research

McLean, Virginia


Climate policy for industry

In their Fall 2009 article entitled “Climate Change and U.S. Competitiveness,” Joel S. Yudken and Andrea M. Bassi correctly summarize the challenges to the U.S. steel industry from climate change legislation. They suitably identify the industry as both energy-intensive and globally competitive, and as unable to pass on to customers the costs resulting from a cap-and-trade system mandate. And they are correct in stating that certain policy measures are necessary in U.S. climate change legislation to maintain the competitiveness of the domestic steel industry while preventing a net increase in global greenhouse gas emissions.

As Congress considers climate change legislation, it is important to remember that the U.S. steel industry has the lowest energy consumption per ton of production and the lowest CO2 emissions per ton of production in the world. Domestic steelmakers already have reduced their energy intensity by 33% since 1990. Further, the steel industry is committed to CO2 reduction via increased recycling; sharing of the best available technologies; developing low-carbon technologies; and the continued development of lighter, stronger, and more durable steels, which enable our customers to reduce their CO2 emissions.

Given the accurate identification by Yudken and Bassi of the U.S. steel industry as energy-intensive and foreign trade–exposed, it is essential that U.S. climate change legislation contain provisions that minimize burdens on the industry.

First, we need a sufficient and stable pool of allowances to address compliance costs arising from emissions, until new technologies enter widespread use (at around 2030) and foreign steelmakers are subject to comparable climate policies and costs.

Further, a provision to offset compliance costs associated with an expected significant increase in energy costs is essential. All forms of energy—coal, natural gas, biomass, and electricity—have the potential to suffer dramatic cost increases, due to the hundreds of billions of dollars to be spent on new generation and transmission technologies for renewable energy.

Finally, it is necessary that legislation contain an effective border adjustment provision that requires imported energy-intensive goods to bear the same climate policy–related costs as competing U.S. goods. To avoid undermining the environmental objective of the climate legislation, a border adjustment measure should take effect at the same time that U.S. producers are subject to increased (and uncompensated) costs and should apply to imports from all countries that do not have in place greenhouse gas emissions reduction requirements comparable to those adopted in the United States.

If the principles above are adopted, the resulting climate policy will reduce greenhouse gases globally while permitting U.S. steelmakers and other domestic manufacturers to continue to operate competitively in the global marketplace.

LAWRENCE KAVANAGH

Vice President, Environment and Technology

American Iron and Steel Institute

Washington, DC


Science’s rightful place

Daniel Sarewitz’s Summer 2009 article “The Rightful Place of Science” correctly questions the meaning and implications of the Democrats’ rhetorical embrace of science. Sarewitz demonstrates that it is impossible to separate science and politics and warns that turning science into a political weapon could ultimately hurt both the Democratic and Republican parties and the prospects for good policymaking overall. I share his concerns, and would go farther to argue that even if both parties agreed to make science central to the policymaking process, it could jeopardize both scientific independence and democratic participation.

Scientists and engineers pride themselves on offering an independent and neutral source of evidence and expertise in the policymaking process. If they begin to advise the policy process more directly, they are likely to be subject to greater expectations and scrutiny. They will increasingly be asked to extrapolate their findings far beyond the heavily controlled environments in which they do their work to the messy “real world” in which politicians govern and citizens live. While they will receive credit for the policies that succeed, they will also have to take more blame for those that fail.

Scientists and engineers may also be pressured to shift their R&D agendas to fit the instrumental needs of government, rather than pursuing the projects that they (and their colleagues) find important and interesting. Finally, because of the higher stakes of the research, scientific practices may also be subject to more examination from interest groups and citizens, perhaps leading to more difficulties, particularly in ethically controversial areas. Overall, this increased visibility could eventually erode the special status and credibility that scientists and engineers have enjoyed for decades and put them in the same political game as everyone else.

Making science more central to policy could also diminish opportunities for public engagement. Already, citizens can only participate on a very limited basis in discussions about important science and technology policy issues that affect our daily lives—including energy, environment, and health care—because of their highly technical nature. If we increase the prominence of science in these debates, it will not, as Sarewitz has demonstrated, remove values and politics from the policymaking process. Rather, the inclusion of more scientific evidence and expertise on these issues will simply mask these aspects and make it far more difficult for citizens to articulate their concerns and weigh in on the process. This exclusion should be of particular concern to the scientific community, because a pluralistic and adversarial process is as fundamental to the development of good policy as it is to the development of good science.

It seems that a wiser course of action would be to begin a serious national conversation in which we speak honestly about the potential benefits and limits of using science in policymaking. We must also simultaneously acknowledge the relative roles of science and values in policymaking, and develop a better approach to balancing them so that we can have a process that is both rational and democratic.

SHOBITA PARTHASARATHY

Co-Director, Science, Technology, and Public Policy Program

Ford School of Public Policy

University of Michigan

Ann Arbor, Michigan


You have published several recent pieces regarding the interface between science and policy: Michael M. Crow’s “The Challenge for the Obama Administration Science Team” (Winter 2009), Frank T. Manheim’s letter regarding that article (“A necessary critique of science,” Summer 2009), and Daniel Sarewitz’s “The Rightful Place of Science” (Summer 2009). Although I agree with the general message that the interface can and should be improved, these articles missed some key aspects of the science/policy interface.

Crow bemoaned “a scientific enterprise whose capacity to generate knowledge is matched by its inability to make that knowledge useful or usable.” It needs to first be recognized that scientific research is very rarely (if ever) going to produce a definitive answer to a policy question, even one that sounds relatively well-defined and scientific (such as “what is the safe level of human exposure to chemical X in water?”). Many of the differences of opinion about how scientific results are used have nothing to do with the scientific method used by researchers and everything to do with data set sizes, how the results are extrapolated, what assumptions they are combined with to make them policy-relevant, and how one wishes to define terms such as “safe,” “adequate margin of safety,” etc. In addition, policy development often involves complex questions of cost, risk, the handling of uncertainty, parity, values, coalition-building, and so on that are not resolved well by science; although some people have tried to do this, there are inherently subjective aspects of these issues. As a result, policies that are purported to be “based on science” are often (perhaps necessarily) analogous to movies that are “based on a true story”: based on some factual information, but with other things thrown in to make them appealing and/or understandable to a wider audience— and tragically, often later mistaken by the public (and politicians) for the true thing. It is unclear whether Crow’s recommendation to “[increase] the level and quality of interaction between our institutions of science and the diverse constituents who have a stake in the outcomes of science” would have a positive effect or the negative effect of politicizing science (as some would argue has already happened).

Sarewitz addresses head-on the issue of science-related policy requiring more than just science. As he has pointed out in his book Frontiers of Illusion (which I recommend), what often happens is that complex “science-based” policy issues end up with qualified scientists on opposite sides of the debate. (Alternatively, I have seen people who are not qualified scientists simply cite scientific articles that came to one conclusion and omit those that came to the opposite conclusion.) This has the dual negative effect of not only making science irrelevant to policy but also shaking people’s faith in scientific integrity. With respect to Sarewitz’s question regarding the rightful place of science, I think that there is ample evidence that it should not just be “nestled in the bosom of the Democratic Party” (or, for that matter, any one political party). More than one person has attributed the demise of Congress’ Office of Technology Assessment around the time of the 1994 “Republican Revolution” at least in part to the fact that it failed to engage with Republican technophiles such as Newt Gingrich. Though there may be examples of Republican congressmen who lack respect for science, this is hardly a party platform.

So, how best to move forward? I have four suggestions: First, there is an issue of variability in the standards of various technical journals, some of which publish articles with attention-grabbing titles only to have the caveats buried somewhere in the text. Though some scientists may be aware of which journals are “better” than others, politicians and the public cannot be expected to understand this. Within much of the scientific publishing community, there needs to be a more earnest attempt to both (1) conduct good-quality, balanced literature searches, and (2) require authors to be more careful in the conclusions they draw and better explain why their conclusions are different from those published previously.

Second, there needs to be a clearer distinction between scientific information and policy judgments, and better awareness that policies are hardly ever based solely on science. Within the policy realm, science would be best served if scientists with opposing conclusions were to separate the issues on which they agree from those on which they do not. Given the inherent complexity of scientific issues, these should really be communicated in writing, rather than orally, so that ideas and context can be communicated fully, with citations and without interruptions. A two-step process, whereby these communications are first made and discussed within the scientific community and then concisely and accurately “boiled down” (possibly with the assistance of a third party), might be useful for making this information more accessible to the public and/or policymakers.

Third, there is a need to understand that scientists are not the only people who need to be consulted with respect to the science/policy interface. When dealing with almost any policy issue with technical components (whether it be health care, energy, the environment, the military, etc.), economic information is important. However, economic analyses are often done very simplistically (and often inaccurately) by people who are not intimately familiar with the actual economics of the situation at hand. The people who are may not be scientists or professional economists either; instead, they are likely to be private-sector entities who have actually had to deal with the economics of industrial-scale questions of scale-up, implementability, and reliability. Though these entities are also likely to have vested interests in policy outcomes, there is a need to at least engage them in a more useful manner than is currently done.

Fourth, with respect to the issue of research funding (raised by Crow, Manheim, and Sarewitz in his book), there needs to be a better understanding of not only what can be expected of scientific research (and its limitations), but also what is needed at the policy level. There are really countless issues or potential knowledge gaps that are important, and it appears to my eyes that too much research gets funded on the basis of how important a specific project sounds in and of itself, without a better, holistic understanding of what research is most policy-relevant and what other information (and/or additional research) needs to be combined with that research in order to make it policy-relevant. Prioritization needs to be evaluated with respect to important specifics (such as asking which questions are likely to be answered with the research outcomes), and not just done at a high level (such as identifying which subject areas to focus on). There needs to be recognition that the usefulness of a given study may evaporate entirely if the budget is cut; the utility is not proportional to the budget. Furthermore, there is a need for research planning and funding to be less compartmentalized. In some cases, to address a key problem there may need to be a large coordinated research effort, with specific understanding and identification of how the results of several projects need to be tied together. When the associated policy implications affect multiple stakeholders with different knowledge bases and areas of expertise, there may be a need to have jointly funded research. Currently, these are all factors that are largely missed, and as a result it seems that a significant portion of research dollars is effectively wasted on projects that are “orphaned” or otherwise lost in obscurity.

TODD TAMURA

Tamura Environmental, Inc.

Petaluma, California


The third way

I am afraid my good friend David H. Guston (in the Fall 2009 Forum) read my article in the Summer issue entitled “A Focused Approach to Society’s Grand Challenges” too fast, or I was not sufficiently clear in my description of the two policy tools the government needs to accelerate progress toward the solution of grand challenges. One is indeed an approach to research like that described by Donald Stokes in Pasteur’s Quadrant: an organized program of research (not development) that not only embraces the identified problems that need to be overcome but also looks at economic, political, and social issues such as unintended consequences.

The other approach, which Gerald Holton, Gerhard Sonnert, and I call “Jeffersonian science,” and which I call in the Summer issue “the third way,” is entirely different. It is in fact a public investment in pure science, with unsolicited proposals, probably performed primarily in university laboratories. In this approach, the only connection between the basic research undertaken by the scientists and the government’s commitment to make progress toward solving a grand challenge is the mechanism for allocating funds to be invested in basic science. This mechanism requires the government to seek the advice of the best-informed scientists to help it select subfields of science that might, if basic science research is accelerated, provide new discoveries, new tools, and new understanding. The objective is to allow good science in selected disciplines to make progress toward solving grand challenges far faster than would occur by chance under the conventional means of allocating basic research funds.

The scientists funded by this mechanism would have their proposals evaluated just like those for any other basic research project, on the basis of their intellectual promise. The example I cited in my essay was the way Rick Klausner, then director of the National Cancer Institute at the National Institutes of Health, chose to attack his grand challenge: cure cancer. Rather than spend all his resources on studying cancer cells themselves, he invested in sciences that might promise a whole new approach: immunology, cell biology, genetics, etc. For Guston to suggest that this approach would threaten to “derail even the most intelligent and wise of prescriptions” is an irony indeed. When has the careful allocation of new funds to pure science ever derailed any policy for science, wise or otherwise?

LEWIS M. BRANSCOMB

La Jolla, California

Perennial Grains: Food Security for the Future

Colorful fruits and vegetables piled to overflowing at a farmer’s market or in the produce aisle readily come to mind when we think about farming and food production. Such images run counter to those of environmental destruction and chronic hunger and seem disconnected from the challenges of climate change, energy use, and biodiversity loss. Agriculture, though, has been identified as the greatest threat to biodiversity and ecosystem function of any human activity. And because of factors including climate change, rising energy costs, and land degradation, the number of “urgently hungry” people, now estimated at roughly 1 billion, is at its highest level ever. More troubling, agriculture-related problems will probably worsen as the human population expands—that is, unless we reshape agriculture.

The disconnect between popular images of farming and its less rosy reality stems from the fact that fruits and vegetables represent only a sliver of farm production. Cereal, oilseed, and legume crops dominate farming, occupying 75% of U.S. and 69% of global croplands. These grains include crops such as wheat, rice, and maize and together provide more than 70% of human food calories. Currently, all are annuals, which means they must be replanted each year from seed, require large amounts of expensive fertilizers and pesticides, poorly protect soil and water, and provide little habitat for wildlife. Their production emits significant greenhouse gases, contributing to climate change that can in turn have adverse effects on agricultural productivity.

These are not the inevitable consequences of farming. Plant breeders can now, for perhaps the first time in history, develop perennial versions of major grain crops. Perennial crops have substantial ecological and economic benefits. Their longer growing seasons and more extensive root systems make them more competitive against weeds and more effective at capturing nutrients and water. Farmers don’t have to replant the crop each year, don’t have to add as much fertilizer and pesticide, and don’t burn as much diesel in their tractors. In addition, soils are built and conserved, water is filtered, and more area is available for wildlife. Although perennial crops such as alfalfa exist, there are no commercial perennial versions of the grains on which humans rely. An expanding group of plant breeders around the world is working to change that.

Although annual grain crops have been with us for thousands of years and have benefited from many generations of breeding, modern plant breeding techniques provide unprecedented opportunities to develop new crops much more quickly. During the past decade, plant breeders in the United States have been working to develop perennial versions of wheat, sorghum, sunflowers, and legumes. Preliminary work has also been done to develop a perennial maize, and Swedish researchers see potential in domesticating a wild mustard species as a perennial oilseed crop. Relatively new breeding programs in China and Australia include work to develop perennial rice and wheat. These programs could make it possible to develop radically new and sustainable farming systems within the next 10 to 20 years.

Currently, these efforts receive little public funding, in marked contrast to the extensive public support for cellulosic ethanol technologies capable of converting perennial biomass crops into liquid fuels. Yet perennial grain crops promise much larger payoffs for the environment and food security and have similar timelines for widespread application. Public research funds distributed through the U.S. Department of Agriculture (USDA) and the National Science Foundation (NSF) could greatly expand and accelerate perennial grain breeding programs. Additionally, the farm bill could include support for the development of perennial breeding programs.

The rise of annuals

Since the initial domestication of crops more than 10,000 years ago, annual grains have dominated food production. The agricultural revolution was launched when our Neolithic ancestors began harvesting and sowing wild seed-bearing plants. The earliest cultivators had long collected seed from both annual and perennial plants; however, they found the annuals to be better adapted to the soil disturbance and annual sowing they had adopted in order to maintain a convenient and steady supply of grains harvested from the annual plants.

Although some of the wild annuals first to be domesticated, such as wheat and barley, were favored because they had large seeds, others had seeds comparable in size to those of their wild perennial counterparts. With each year’s sowing of the annuals, desirable traits were selected for and carried on to the next generation. Thus, selection pressure was applied, albeit unintentionally, to annual plants but not to perennials. Evidence indicates that selection pressures on wild annuals quickly resulted in domesticated plants with more desirable traits than their wild relatives. The unchanged wild perennials probably would have been ignored in favor of the increasingly large, easily harvested seeds of the modified annual plants.

The conversion of native perennial landscapes to the monocultures of annual crops characteristic of today’s agriculture has allowed us to meet our increasing food needs. But it has also resulted in dramatic changes. Fields of maize and wheat require frequent, expensive care to remain productive. Compared to perennials, annuals typically grow for shorter lengths of time each year and have shallower rooting depths and lower root densities, with most of their roots restricted to the surface foot of soil or less. Even with crop management advances such as no-tillage practices, these traits limit their access to nutrients and water, increase their need for nutrients, leave croplands more vulnerable to degradation, and reduce soil carbon inputs and provisions for wildlife. These traits also make annual plants less resilient to the increased environmental stress expected from climate change.

Even in regions best suited for annual crops, such as the Corn Belt, soil carbon and nitrogen levels decreased by 40 to 50% or more after conversion from native plants to annuals. Global data for maize, rice, and wheat indicate that they take up only 20 to 50% of the nitrogen applied in fertilizer; the rest is lost to surrounding environments. Runoff of nitrogen and other chemicals from farm fields into rivers and then coastal waters has triggered the development of more than 400 “dead zones” that are depleted of fish and other sea dwellers.

Annual crops do, however, have some advantages over perennial crops in terms of management flexibility. Because they are short-lived, they offer farmers opportunities to quickly change crops in response to changing market demands as well as environmental factors such as disease outbreaks. Thus, annual grain production will undoubtedly be important far into the future. Still, the expanded use of perennial grain crops on farms would provide greater biological and economic diversity and yield additional environmental benefits.

Perennial advantages

Developing new crop species capable of significantly replacing annuals will require a major effort. During the past four decades, breeders have had tremendous success in doubling, tripling, and even quadrupling the yields of important annual grains, success that would seem to challenge the notion that a fundamental change in agriculture is needed. Today, however, these high yields are being weighed against the negative environmental effects of agriculture that are increasingly seen around the world. And with global grain demand expected to double by 2050, these effects will increase.

The development of perennial crops through breeding would help deal with the multiple issues involving environmental conservation and food security in a world of shrinking resources. We know that perennials such as alfalfa and switchgrass are much more effective than annuals in maintaining topsoil. Soil carbon may also increase 50 to 100% when annual fields are converted to perennials. With their longer growing seasons and deeper roots, perennials can dramatically reduce water and nitrate losses. They require less field attention by the farmer and fewer pesticide and fertilizer inputs, resulting in lower costs. Wildlife benefit from reduced chemical inputs and from the greater shelter provided by perennial cover.

There are other benefits as well. Greater soil carbon storage and reduced input requirements mean that perennials have the potential to mitigate global warming, whereas annual crops tend to exacerbate the problem. With more of their reserves protected belowground and their greater access to more soil moisture, perennials are also more resilient to temperature increases of the magnitude predicted by some climate change models. Although perennials may not offer farmers the flexibility of changing crops each year, they can be planted on more-marginal lands and can be used to increase the economic and biological diversity of a farm, thereby increasing the flexibility of the farming system. Perhaps most important in a crowded world with limited resources, perennials are more resilient to social, political, health, and environmental disruptions because they don’t rely on annual seedbed preparation and planting. A farmer suffering from illness might be unable to harvest her crop one season, but a new crop would be ready the next season when she recovers. Meanwhile, the soil is protected and water has been captured.

The increased use of perennials could also slow, reverse, or prevent the increased planting of annuals on marginal lands, which now support more than half the world’s population. Because marginal lands are by their nature fragile and subject to rapid degradation, large areas of these lands now being planted with annuals are already experiencing declining productivity. This will mean that additional marginal lands will be cultivated. This troubling reality makes the development of crops that can be more sustainably produced a matter of necessity. Developing perennial versions of our major grain crops would address many of the environmental limitations of annuals while helping to feed an increasingly hungry planet.

Perennial possibilities

Recent advances in plant breeding, such as the use of marker-assisted selection, genomic in situ hybridization, transgenic technologies, and embryo rescue, coupled with traditional breeding techniques, make the development of perennial grain crops possible in the next 10 to 20 years. Two traditional approaches to developing these crops are direct domestication and wide hybridization, which have led to the wide variety of crops on which humans now rely. To directly domesticate a wild perennial, breeders select desirable plants from large populations of wild plants with a range of characteristics. Seeds are collected for replanting in order to increase the frequency of genes for desirable traits, such as large seed size, palatability, strong stems, and high seed yield. In wide hybridization, breeders cross an annual grain such as wheat with one of its wild perennial relatives, such as intermediate wheatgrass. They manage gene flow by making a large number of crosses between the annual and perennial plants, selecting offspring with desirable traits and repeating this cycle of crossing and selection multiple times. Ten of the 13 most widely grown grain and oilseed crops are capable of hybridization with perennial relatives.

The idea that plants can build and maintain perennial root systems and produce sufficient yields of edible grains seems counterintuitive. After all, plant resources, such as carbon captured through photosynthesis, must be allocated to different plant parts, and more resource allocation to roots would seem to mean that less can be allocated to seeds. Fortunately for the breeder, plants are relatively flexible organisms that are responsive to selection pressures, able to change the size of their resource “pies” depending on environmental conditions, and able to change the size of the slices of the resource pie. For example, when plant breeders take the wild plant out of its resource-strapped natural environment and place it into a managed environment with greater resources, the plant’s resource pie can suddenly grow bigger, typically resulting in a larger plant.

Many perennial plants, with their larger overall size, offer greater potential for breeders to reallocate vegetative growth to seed production. Additionally, for a perennial grain crop to be successful in meeting our needs, it may need to live for only 5 to 10 years, far less than the lifespan of many wild perennials. In other words, the wild perennial is unnecessarily overbuilt for a managed agricultural setting. Some of the resources allocated to the plant’s survival mechanisms, such as those allowing it to survive infrequent droughts or pest attacks, could be reallocated to seed production, and the crop would still persist in normal years.
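
A schematic way to state this allocation argument (the symbols are illustrative shorthand, not taken from the breeding literature) is to write seed yield as the product of a plant’s total captured resources and the fraction allocated to seed:

\[
Y_{\text{seed}} = A \times f_{\text{seed}}
\]

Here A is the size of the resource “pie” and f_seed is the slice devoted to grain. Breeders can raise yield by enlarging A (a managed, resource-rich environment and a longer growing season) or by increasing f_seed (reallocating reserves that are overbuilt for a managed field), so a substantial perennial root system does not by itself preclude acceptable grain yields.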

Breeders see several other opportunities for perennials to achieve high seed yield. Perennials have greater access to resources over a longer growing season. They also have greater ability to maintain, over longer periods of time, the health and fertility of the soils in which they grow. Finally, the unprecedented success of plant breeders in recent decades in selecting for the simultaneous improvement of two or more characteristics that are typically negatively correlated with one another (meaning that as one characteristic increases, the other decreases, as is typical of seed yield and protein content) can be applied to perennial crop development.

Although current breeding efforts focused on developing perennial grain crops have been under way for less than a decade, the idea isn’t new. Soviet researchers abandoned their attempts to develop perennial wheat through wide hybridization in the 1960s, in part because of the inherent difficulties of developing new crops at the time. California plant scientists in the 1960s also developed perennial wheat lines with yields similar to the then–lower-yielding annual wheat cultivars. At the time, large yield increases achieved in annuals overshadowed the modest success of these perennial programs, and the widespread environmental problems of annual crop production were not generally acknowledged.

In the late 1970s, Wes Jackson at the Land Institute revisited the possibility of developing perennial grain crops in his book New Roots for Agriculture. In the 1990s, plant breeders at the Land Institute initiated breeding programs for perennial wheat, sunflowers, sorghum, and some legumes. Some preliminary genetics work and hybridization research have also focused on perennial maize. Washington State University scientists have initiated a perennial wheat breeding program to address the high rates of erosion resulting from annual wheat production in eastern Washington. In 2001, some of those perennial wheat lines yielded 64% of the yield produced by the annual wheat cultivars grown in the region. Scientists at Kansas State University, the Kellogg Biological Station at Michigan State, the University of Manitoba, Texas A&M, and the University of Minnesota are carrying out additional plant breeding, genetics, or agronomic research on perennial grain crops.

The potential for perennial crops to tolerate or prevent adverse environmental conditions such as drought or soil salinity has attracted interest in other parts of the world. The conversion of native forests for annual wheat production in southwest Australia resulted in the rise of subsurface salts to the surface. This salinization threatens large areas of this non-irrigated, semi-arid agricultural region, and scientists there believe perennial crops would use more of the subsurface water, keeping salts from rising to the surface, while also producing high-value crops. During the past decade, Australian scientists have been working to develop perennial wheat through wide hybridization and to domesticate a wild perennial grass for the region. More recently, plant breeders at the Food Crops Research Institute in Kunming, China, initiated programs to develop perennial rice to address the erosion problems associated with upland rice production. It is believed that perennial rice would also be more tolerant of the frequent drought conditions of some lowland areas. Scientists at the institute have also been evaluating perennial sorghum, sunflower, and intermediate wheatgrass for their potential as perennial grain crops.

Vision of a new agriculture

The successful development of perennial grain crops would have different effects on the environment, on life at the dinner table, and on the farm. Producing grains from perennials rather than from annuals will have large environmental implications, but the consumer will see little if any difference at the dinner table. On the farm, whether mechanically harvested from large fields or hand-harvested in the parts of the world where equipment is prohibitively expensive, perennial grains 20 to 50 years from now will also look much the same to the farmer. The addition, however, of new high-value perennial crops to the farm would increase farmers’ flexibility.

Farmers could use currently available management practices, such as no-till or organic approaches, but with a new array of high-value perennial grain crops. These would give farmers more options to have long rotations of perennial crops or rotations in which annuals are grown for several years followed by several years of perennials. Crop rotation aids in managing pests, diseases, and weeds but is often limited by the number of profitable crops from which farmers can choose. There are also opportunities to simultaneously grow annual and perennial grain crops or to grow multiple species of perennials together because of differences in rooting characteristics and growth habits. And because perennial grains regrow after seed harvest, livestock can be integrated into the system, allowing for greater use of the crops and therefore greater profit.

Although the environmental and food-security benefits of growing perennial grain crops are attractive, much work remains to be completed. For the great potential of perennial grain crops to be realized, more resources are needed to accelerate plant breeding programs with more personnel, land, and technological capacity; expand ecological and agronomic research on improved perennial germplasm; coordinate global activities through germplasm and scientist exchanges and conferences; identify global priority croplands; and develop training programs for young scientists in ecology, perennial plant breeding, and crop management.

Where, then, should the resources come from to support these objectives? The timeline for widespread production of perennials, given the need for extensive plant breeding work first, discourages private-sector investment at this point. As has occurred with biofuel production R&D, large-scale funding by governments or philanthropic foundations could greatly accelerate perennial grain crop development. As timelines for the release and production of perennial grain crops shorten, public and philanthropic support could increasingly be supplanted by support from companies providing agricultural goods and services. Although perennial grain crops might not initially interest large agribusinesses focused on annual grain crop production, the prospect of developing a suite of new goods and services, including equipment, management consulting, and seeds, would be attractive to many entrepreneurial enterprises.

Although public support for additional federal programs is problematic given the current economic conditions, global conditions are changing rapidly. Much of the success of modern intensive agricultural production relies on cheap energy, a relatively stable climate, and the public’s willingness to overlook widespread environmental problems. As energy prices increase and the costs of environmental degradation are increasingly appreciated, budgeting public money for long-term projects that will reduce resource consumption and depletion will probably become more politically popular. Rising food and fuel prices, climatic instability that threatens food production, and increased concern about the degradation of global ecological systems should place agriculture at the center of attention for multiple federal agencies and programs.

The USDA has the greatest capability to accelerate perennial grain crop development. Most important would be the use of research funds for the rapid expansion of plant breeding programs. Funds for breeding could be directed through the Agricultural Research Service and the competitive grant programs. Such investments directly support the objectives of the National Institute of Food and Agriculture (NIFA), created by the Food, Conservation and Energy Act of 2008, which will be responsible for awarding peer-reviewed grants for agricultural research. Modeled on the National Institutes of Health, NIFA has objectives that include enhancing agricultural and environmental sustainability and strengthening national security by improving food security in developing countries.

As varieties of perennial grain crops become available for more extensive testing, additional funds will be needed for agronomic and ecological research at multiple sites in the United States and elsewhere. This would include support for the training of students and scientists in managing perennial farming systems. Currently, less than $1.5 million directly supports perennial grain R&D projects around the world. USDA funds will provide less than $300,000 annually over the next few years through competitive grant awards, primarily for the study and development of perennial wheat and wheatgrass. Much of the rest is provided by the Land Institute.

Once the suitability of a perennial grain crop is well established, support from federal programs for farmers might be needed to encourage the initial adoption of new crops and practices. Farm subsidies, which are distributed through the USDA and now primarily support annual cropping systems, could be used to encourage fundamental changes in farming practices, such as those offered by perennial grain crop development. Public funds supporting the Conservation Reserve Program (CRP) could be redirected toward transitioning CRP lands, once the federal contracts have expired, to perennial grain production. The CRP, initially established in the 1985 farm bill, pays farmers to remove highly erodible croplands from production and to plant them for 10 years with grasses, trees, and other long-term cover to stabilize the soil. Some 36 million acres are enrolled in the program, and most are unsuitable or marginal for annual grain crop production but would be suitable for the production of perennials.

One obstacle to supporting programs necessary to achieve such long-term goals is the short timeframes of current policy agencies. The farm bill is revisited every five years and focuses primarily on farm exports, commodities, subsidies, food programs, and some soil conservation measures. Thus, it is poorly suited to deal with long-term agendas and larger objectives. Its short-term objectives can change with the political fortunes of those in charge of approving the bill. The Land Institute’s Jackson has proposed a 50-year farm bill to serve as a compass for the five-year bills. This longer-term agenda would focus on the larger environmental issues and on rebuilding and preserving farm communities. In the near term, Jackson proposes that, during an initial buildup phase, the federal government should fund 80 plant breeders and geneticists who would develop perennial grain, legume, and oilseed crops, and 30 agricultural and ecological scientists to develop the necessary agronomic systems. They would work on six to eight major crop species at diverse locations. Budgeting $400,000 per scientist-year for salaries and research costs would add less than $50 million annually to the farm bill, a blip in a bill that will cost taxpayers $288 billion between 2008 and 2012.
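
As a rough check on that estimate, using only the figures given above:

\[
(80 + 30)\ \text{scientist-years} \times \$400{,}000 = \$44\ \text{million per year}
\]

This is indeed below $50 million and, measured against the roughly $57.6 billion the current farm bill will cost each year from 2008 through 2012, amounts to well under one-tenth of one percent of annual farm bill spending.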

Some limited federal money has already been awarded for research related to perennial grain through the USDA’s competitive grants programs. Most recently, researchers at Michigan State University received funding to study the ecosystem services and performance of perennial wheat lines obtained from Washington State University and the Land Institute. The Green Lands Blue Waters Initiative, a multi-state network of universities, individual scientists, and nonprofit research organizations, is also advocating the development of perennial grain crops, along with other perennial forage, biofuel, and tree crops.

Although agriculture has traditionally been primarily the concern of the USDA, it now plays an increasingly important role in how we meet challenges—international food security, environmental protection, climate change, energy supply, economic sustainability, and human health—beyond the primary concerns of that agency. Public programs intended to address these challenges should consider the development of perennial grain crops a priority. For example, programs at the NSF, Department of Energy (DOE), Environmental Protection Agency, and the National Oceanic and Atmospheric Administration and U.S. international assistance and development programs could provide additional incentives through research programs or subsidies.

Currently, no government funding agencies, including the USDA, specifically target the development of perennial crops as they do for biofuels. In the 2009 federal economic stimulus package alone, the DOE was appropriated $786.5 million in funds to accelerate biofuels research and commercialization. The displacement of food crops by biofuel crops recently played a significant role in the rise of global food prices and resulted in increased hunger and social unrest in many parts of the world. Although some argue that biofuel crops should be grown only on marginal lands unsuited for annual food crops, perennial crops have the potential to be grown on those same lands and be used for food, feed, and fuel.

Substantial public funding of perennial grain crops need not be permanent. As economically viable crops become widely produced, farmers and businesses will have opportunities to market their own seeds and management inputs just as they do with currently available crops. Although private-sector companies may not profit as much from selling fertilizers and pesticides to farmers producing perennial grains, they will probably adapt to these new crops with new products and services. The ability of farmers to spread the initial planting costs over several seasons rather than meet these costs each year opens up opportunities for more expensive seed with improved characteristics.

Although the timelines for development and widespread production of perennial grain crops may seem long, the potential payoffs stretch far into the future, and the financial costs are low relative to other publicly funded agricultural expenditures. Adding perennial grains to our agricultural arsenal will give farmers more choices in what they can grow and where, while sustainably producing food for the growing population.

From the Hill – Winter 2010

Climate change debate remains deadlocked

Although the House passed a climate change bill in 2009, the Senate remains deadlocked, an outcome that effectively took the steam out of the international climate change talks that were scheduled for December 7 to 18 in Copenhagen.

The Copenhagen talks were aimed at forging a successor agreement to the Kyoto Protocol. But even before the talks began, it was announced that only an interim agreement would be sought in 2009, with negotiations for a new treaty to reduce the greenhouse gas (GHG) emissions that are causing climate change delayed until 2010.

At a November 4 hearing of the House Committee on Foreign Affairs, Todd Stern, the special envoy for climate change at the State Department, said that U.S. expectations for Copenhagen included a restatement of the need to act and an agreement on language for mitigation assistance, forest protection, and adaptation. Stern said that any meaningful international agreement would require that developing countries reduce emissions below business-as-usual levels. Vulnerable countries should receive financial and technological assistance to meet these goals, he said.

Limits on emissions and foreign aid to achieve these cuts in developing countries, particularly those with sizable emissions such as China and India, have been key points of contention in the negotiations. In recent bilateral meetings, the United States appears to be making progress with China. The two countries recently established several partnerships on clean energy, and both countries have announced carbon emissions reduction goals. Stern was optimistic in the hearing, noting that countries such as China will probably do more on their own to reduce emissions than they are willing to agree to in an international treaty.

Meanwhile, the Senate has begun to advance climate legislation, though much work remains. After a week of hearings with more than 50 witnesses that covered R&D needs, green jobs, nuclear and renewable energy, national security, and economics, the Senate Environment and Public Works Committee Democrats reported out the Clean Energy Jobs and American Power Act (S. 1733) on November 5. The bill aims to reduce GHG emissions through a cap-and-trade program. The bill was approved 10 to 1, with one Democrat, Sen. Max Baucus (D-MT), voting no. The Republicans refused to participate.

Five other committees have jurisdiction over climate change legislation, and several have begun to hold hearings. On November 10, the Senate Finance Committee held a hearing examining the impact of climate change legislation on U.S. jobs. Committee Chair Baucus said he was committed to passing “meaningful, balanced climate change legislation,” but that Congress must “work to minimize any job losses.” As a result, he said that any climate bill would have to include a border tax adjustment to protect U.S. manufacturing from unfair competition from abroad.

Sen. John Kerry (D-MA), lead sponsor of S. 1733, announced bipartisan negotiations led by himself and Sens. Lindsey Graham (R-SC) and Joe Lieberman (I-CT) to develop a compromise bill. The bill could include text from the Clean Energy Act of 2009, introduced by Senate Republican Conference Chairman Lamar Alexander (R-TN) and Sen. Jim Webb (D-VA) to increase nuclear power production, carbon capture and storage, solar power, nuclear waste recycling, and green buildings.

Fiscal year 2010 budget work continues

As is usually the case, Congress once again failed to complete its budget work by the beginning of the new fiscal year on October 1, although the bills that have been passed and signed by the president generally reflect continuing good times for R&D spending. However, President Obama expressed concern about the budgets for the National Science Foundation (NSF) and the National Institute of Standards and Technology (NIST), pointing out that they were a total of more than $200 million below his request.

The Agriculture, Rural Development, Food and Drug Administration and Related Agencies appropriation bill, signed into law on October 16, provides significant increases for R&D spending in various U.S. Department of Agriculture (USDA) programs, including $1.3 billion for the Agricultural Research Service, up 6.3% over fiscal year (FY) 2009, and $808 million for the National Institute of Food and Agriculture (NIFA), a 12.2% increase. The Agriculture and Food Research Initiative, which is part of NIFA, would receive a $61 million or 30.3% increase.

The president signed both the Homeland Security and Energy and Water Development appropriations bills into law on October 28. Science and technology (S&T) at the Department of Homeland Security (DHS) will receive $1 billion, up $74 million from last year. The Energy and Water bill provides $4.9 billion for the Department of Energy’s Office of Science, an increase of 2.7% ($131 million) over FY 2009 levels.

The Department of the Interior, Environment, and Related Agencies bill, signed by the president on October 30, provides $1.1 billion for the U.S. Geological Survey, which is $68 million or 6.5% more than in FY 2009 and $14 million or 1.3% more than the president’s request. The Science and Technology program in the Environmental Protection Agency (EPA) will receive $846 million, which is $56 million (7.1%) more than in FY 2009 and $4 million (0.5%) more than the president’s request.

Three appropriation bills—Defense; Commerce and Justice, Science and Related Agencies; and Transportation, all with significant R&D components— have been passed by the House and Senate and are waiting to be discussed in conference committee.

On November 5, the Senate passed its version of the Commerce and Justice, Science, and Related Agencies bill. The bill includes $11.2 billion for the National Aeronautics and Space Administration, which is $611 million more than the House approved; $5.2 billion for NSF, which is $14 million less than the House; $700 million for the National Oceanic and Atmospheric Administration (NOAA), which is $22 million more than the House; and $672 million for NIST, which is $96 million more than the House.

The Senate passed its version of the Defense appropriations bill on October 6. The Senate version would provide less than the House for research, development, test, and evaluation programs ($78.5 billion instead of $80.2 billion). The biggest differences between the two bills are in the appropriation for the Navy, with the House approving $20.2 billion, $1 billion more than the Senate. The Navy programs of greatest contention are the VH-71A Executive Helicopter (House: $485 million; Senate: $30 million) and the Joint Strike Fighter (House: $2 billion; Senate: $1.7 billion), which involves disagreement about the development of an alternative engine for the aircraft.

In the Senate Transportation, Housing and Urban Development and Related Agencies appropriation bill, the Department of Transportation would receive $100.1 billion, $1.3 billion less than the House version of the bill and $2.2 billion less than the president’s request.

Health care bills advance with S&T provisions

The massive health care reform bills being debated in Congress include some lesser-known proposals of interest to the S&T community. Specifically, the bills contain provisions on comparative effectiveness research, generic biologic drugs, and financial conflicts of interest between doctors and drug makers.

The House-passed bill would establish a Center for Comparative Effectiveness Research within the Agency for Healthcare Research and Quality to conduct, support, and synthesize research that compares medical interventions. The center would be funded through a trust fund administered by the Treasury Department. The proposed Senate bill would create a nonprofit corporation called the Patient-Centered Outcomes Research Institute to handle comparative effectiveness research. It would also be funded by a Treasury trust fund.

Both bills address generic biologic drugs, creating a regulatory pathway that allows a 12-year period of data exclusivity for developers of a drug. Data exclusivity, which is separate from patent protection, is the protection of clinical test data that is provided to the Food and Drug Administration during the approval process. The decision to use a 12-year time frame is a victory for the biotech industry and runs contrary to the preferences of the administration and House Energy and Commerce Chairman Henry Waxman (D-CA), who sought a seven- and five-year period, respectively.

Both bills include provisions to require drug and device makers to disclose gifts to physicians. These provisions are borrowed from a bill introduced by Sen. Chuck Grassley (R-IA) called the Physician Payments Sunshine Act. Grassley has shown a strong interest in policing conflicts of interest among researchers and medical professionals. Most recently, he has questioned the practice of ghostwriting in medical journals, in which medical writers, often sponsored by biotech companies, write an article that bears a scientist’s name.

New tuna quotas called insufficient

The International Commission for the Conservation of Atlantic Tunas (ICCAT), a body of 48 nations, has decided to decrease the annual catch quota for Atlantic bluefin tuna, a decision that environmental groups and the Obama administration said was insufficient to stem the continuing declines in tuna stocks, which have dwindled to an estimated 18% of preindustrial levels.

ICCAT said the quota would be reduced from 19,950 tons to 13,500 tons, a level it claimed was consistent with recommendations from its scientific advisory committee. Atlantic bluefin have been declining because of illegal overfishing, accidental bycatch, pollution, habitat destruction, and probably global warming.

Environmental groups wanted a zero catch limit, whereas the United States sought a quota of about 8,000 tons, a target supported by its scientific advisers. According to a statement released by NOAA, “The ICCAT agreement on eastern Atlantic and Mediterranean bluefin tuna is a marked improvement over the current rules, but it is insufficient to guarantee the long-term viability of either the fish or the fishery.” If stocks continue to decline, NOAA supports “the commitment to set future catch levels in line with scientific advice, shorten the fishing season, reduce capacity, and close the fishery.”

The United States and other countries have announced their support for including the Atlantic bluefin tuna in the Convention on International Trade in Endangered Species of Wild Fauna and Flora, known as CITES, but some Mediterranean countries and Japan oppose the move. In addition, because CITES has an opt-out clause, simply listing a species under it does not guarantee the species’ survival.

Major players in the bluefin market include the European Union, Japan, Canada, the United States, North Africa, and Mexico. With the exception of Canada, most countries do not reach quota levels because of the scarcity of the fish. In Japan, which consumes up to a quarter of bluefin, one tuna can fetch a price of more than $100,000.

In order to continue to feed the world’s bluefin appetite, some companies have begun to explore “ranching,” which entails trapping young tuna and keeping them until they are grown and ready for market. Although some have said this has reduced overfishing, others claim this practice hurts the bluefin population by preventing spawning.

At the meeting, ICCAT also adopted measures to protect threatened shark species and swordfish.

Congress examines geoengineering of climate system

On November 5, the House Science and Technology Committee held the first congressional hearing on geoengineering, the deliberate large-scale modification of Earth’s climate systems to counteract climate change. Committee Chair Bart Gordon (D-TN) explained that his decision to hold this hearing “should not in any way be misconstrued as an endorsement of any geoengineering activity.” Instead, he said, “Geoengineering carries with it a tremendous range of uncertainties, ethical and political concerns, and the potential for catastrophic environmental side effects. But we are faced with the stark reality that the climate is changing, and the onset of impacts may outpace the world’s political and economic ability to avoid them.”

Gordon said that the hearing will be the first part of a joint effort to examine issues surrounding geoengineering by the House and the House of Commons of the United Kingdom. The two bodies will hold parallel hearings, and the chairman of the Commons committee will testify before the Science and Technology Committee in a spring 2010 hearing on domestic and international governance issues surrounding geoengineering.

Ken Caldeira, senior scientist with the Carnegie Institution of Washington, outlined two main categories of geoengineering activities. The first, solar radiation management, would add tiny particles to the stratosphere to reflect the Sun’s rays back into space. This technique is intended to mimic the cooling that occurs after volcanic eruptions. Caldeira said this technique is not perfect and introduces many new risks, but “could address most climate changes in most places most of the time.” The second category is removing carbon dioxide from the atmosphere and storing it, for example, in the ground. With some exceptions, Caldeira noted, carbon removal techniques are unlikely to carry significant risk, but cost is likely to be the primary constraint. Caldeira called for more R&D in both of these areas, a recommendation echoed by Lee Lane of the American Enterprise Institute.

University of Southampton Professor John Shepherd testified on a report he chaired, Geoengineering the Climate: Science, Governance and Uncertainty. Released in September by the UK Royal Society, the report concluded that geoengineering techniques to reverse the effects of climate change are probably technically feasible, but that it is necessary to study major uncertainties about the effectiveness, costs, and environmental effects that may occur. The report calls for mitigation and adaptation but notes that “geoengineering methods could . . . potentially be useful to augment continuing efforts to mitigate climate change.” Shepherd’s testimony emphasized that geoengineering is not a magic bullet and that cutting GHG emissions must be the highest priority in addressing climate change.

Alan Robock, a professor at Rutgers University, agreed that global warming is a problem and reducing emissions should be the primary response. He concluded that “Geoengineering should only be used in the event of a planetary emergency and only for a limited time, not a solution to global warming.”

Senate committee passes biosecurity bill

On November 4, the Senate Homeland Security and Governmental Affairs Committee approved the Weapons of Mass Destruction Prevention and Preparedness Act, aimed at improving the security and safety of research on select agents conducted in high-containment laboratories.

The WMD bill, introduced by Chairman Joseph Lieberman (I-CT) and Ranking Member Susan Collins (R-ME), would require the Department of Health and Human Services (HHS) and the USDA to create a new kind of designation for select agents and pathogens that pose the greatest threat of a biological attack. It would also require that new personnel reliability measures be implemented at institutions that conduct research on high-risk pathogens.

The bill responds to recommendations issued by the Commission on the Prevention of Weapons of Mass Destruction Proliferation and Terrorism in its report A World at Risk. Hearings have been held in both the House and Senate during the past year, but only the Senate has introduced and passed a measure to address some of the report’s recommendations. The WMD Commission recommended that the regulation of registered and unregistered high-containment laboratories (called BSL 3 and BSL 4) be consolidated under one agency, preferably DHS or HHS.

The legislation, however, does not follow that recommendation precisely; instead, it creates a tiered system for listing select agents based on the level of risk to national security and public health. Thus, the Lieberman/Collins bill would require HHS and the USDA to create a “Tier 1” designation for select agents and pathogens that pose the greatest threat. The new tier would be based on intelligence and risk assessment analysis conducted by DHS and the intelligence community.

The bill also requires that DHS establish new biosecurity standards for laboratories that conduct research on these Tier 1 pathogens and would use a negotiated rulemaking process to allow consultation with the scientific research community before finalizing the standards. The standards are to address personnel reliability and background checks, education programs and staff training, and the conduct of risk assessments. Furthermore, DHS, in partnership with HHS and the USDA, would inspect the Tier 1 laboratories to ensure that they are in compliance with the new standards.

Although research on select agents is already conducted in high-containment laboratories, a central concern of the WMD Commission was unregistered research facilities in the private sector. These labs have the necessary tools to handle anthrax, for example, or to synthetically engineer a more dangerous version of an agent, but whether they have implemented appropriate security measures is unknown. To the commission, this represents a serious gap in oversight that needs to be addressed, an issue highlighted at a recent hearing before the Senate Homeland Security and Governmental Affairs Committee. Gregory Kutz of the Government Accountability Office (GAO) testified that a GAO assessment “found significant differences in perimeter security” at five of the nation’s BSL-4 labs. However, he emphasized that the two of the five BSL-4 labs that lacked sufficient basic security controls, such as cameras, in a similar assessment done in 2008 have made progress in addressing the security gaps.

To address the issue of private-sector laboratories, the Lieberman/Collins bill would require HHS to create and maintain a database of laboratories and facilities that conduct research that could pose a health and safety threat to the public or to animals and agriculture, but do not necessarily pose an imminent security threat. The criteria for defining which laboratories are required to register would be established by HHS in cooperation with DHS and the USDA.

Finally, the legislation would also authorize DHS to award grants for improving laboratory security at Tier 1 facilities and would outline a detailed response and countermeasure strategy in the event of a biological attack.

At the final markup session, Sen. Carl Levin (D-MI) was the lone dissenting vote, arguing that it was premature to pursue a legislative approach to securing laboratory safety because of a pending executive branch interagency report on the subject.

Odds and ends

  • The Science and Technology Committee unanimously approved H.R. 4061, the Cybersecurity Enhancement Act of 2009, on November 18. The bill expands NSF scholarships to increase the size and skills of the cybersecurity workforce and increases R&D, standards development, coordination, and public outreach at NIST related to cybersecurity.
  • The House approved H.R. 3585, the Solar Technology Roadmap Act, on October 22. Sponsored by Rep. Gabrielle Giffords (D-AZ), the bill would establish a Solar Technology Roadmap Committee to advise the Secretary of Energy and set research, development, and demonstration objectives. The bill would authorize $350 million for fiscal year 2011, ramping up to $550 million in 2015.
  • The White House announced a series of partnerships involving leading companies, foundations, teachers, and scientists and engineers to motivate students to excel in science and math. The “Educate to Innovate” campaign has received an initial commitment of more than $260 million from private-sector partners.
  • The Energy and Environment Subcommittee of the House Science and Technology Committee heard testimony on H.R. 3650, the Harmful Algal Blooms and Hypoxia Research and Control Amendments Act of 2009 and reported it favorably to the full committee. The bill is designed to focus research funding on the mitigation and prevention of harmful algal blooms, which affect the Great Lakes, the Gulf of Mexico, the Chesapeake Bay, and marine coastlines. These toxic algae, a variety of which exist in fresh and salt water, present risks to human health by affecting drinking water, marine food sources, and recreation. Witnesses at the hearing said the main cause of algal blooms was excess nutrients in water bodies, most commonly nitrogen and phosphorus from agriculture and urban runoff.
  • The EPA and the Departments of Agriculture, Commerce, Defense, and Interior released seven draft reports outlining efforts to protect and restore the Chesapeake Bay, the largest estuary in the United States. Developed in response to a May 12 executive order, these reports will be integrated into a coordinated strategy designed to be finalized by May 12, 2010.
  • The House passed the Coral Reef Conservation Act Reauthorization and Enhancement Amendments of 2009 (H.R. 860), which promotes international cooperation to protect coral reefs and codifies the U.S. Coral Reef Task Force. The bill would extend existing grants programs supporting coral reef monitoring and assessment, research, pollution reduction, education, and technical support.
  • On September 22, the EPA announced a final rule that requires companies to track and report their GHG emissions data. Fossil fuel and industrial GHG suppliers, motor vehicle and engine manufacturers, and facilities that emit 25,000 metric tons or more of carbon dioxide–equivalent per year will be required to report GHG emissions data. These sources cover approximately 85% of U.S. emissions.
  • Interior Department secretary Ken Salazar signed an order that establishes a framework to coordinate climate change science and resource management strategies across the department. The framework includes a new Climate Change Response Council, eight regional Climate Change Response Centers, and a network of Landscape Conservation Cooperatives.

“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science (www.aaas.org/spp) in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

Intangible Assets: Innovative Financing for Innovation

Finding funding for a new business or idea is almost always challenging. With the recent near-collapse of the financial system, however, funding innovation is even more difficult. Credit to businesses has tightened dramatically. The market for initial public offerings is moribund, and venture capital has been reduced to a trickle. As a result, the “valley of death” between a promising idea and a marketable product appears to be even more of an unbridgeable chasm. For many innovative companies, funding to move from a promising new concept to commercialization is simply not there.

One sign of hope is the emerging practice of providing funding to companies on the basis of their intellectual property (IP) and other intangible assets. Although IP, effective management, worker know-how, and business methods are widely recognized for their role in propelling the growth of the U.S. economy, the country is still largely failing to acknowledge the real value of these intangible assets and to provide innovative companies with the funding they need to capitalize on them.

In the United States, more than $1 trillion annually is invested in the creation of intangible assets, and in 2005 their total value was estimated at $9.2 trillion. However, only a portion of that value shows up in company financial reports. Likewise, intangible assets rarely merit consideration in the financial system. As a result, companies are unable to obtain the capital that they could use for business innovation and expansion.

Currently, companies can raise money based on their physical and financial assets. Such assets can be easily bought and sold, borrowed against, and used to back other financial instruments. They provide companies with a source of the investment funding that the U.S. economy needs in order to grow and prosper.

In contrast, the $9.2 trillion in intangible assets is largely hidden and therefore unavailable for financing purposes. A huge opportunity cost is imposed on the U.S. economy when such a large source of potential financing is locked up. Because intangible assets are not generally available as a source of investment and risk capital, innovative companies may face higher capital costs or even a dearth of capital to fund new ideas. Unable to use their intangible assets as a financial tool, prospective borrowers face a system that does not understand their true revenue potential and is unable to judge operational risks appropriately. New ideas never gain traction, remaining unexplored or undeveloped. Economic potential goes untapped and is therefore wasted.

Rays of hope

The picture is not entirely gloomy. As industry has invested capital in R&D to create new technology and advance other creative activities, a niche market of firms specializing in intangibles-based financing is springing up. Some intangible assets (traditional IP consisting of patents, trademarks, and copyrights) have been used in sale, leasing, equity, equity-debt, debt, and sale-leaseback transactions to finance the next round of innovation.

The easiest way for companies to raise funds using their IP is through sales or licensing. In recent years, we have seen the emergence of an entire marketplace devoted to IP, including public auctions run by ICAP OceanTomo. Numerous patent brokers and Web-based marketplaces augment the vast network of technology transfer offices that seek to sell or license IP. The sale of IP creates upfront cash for a company, whereas licensing creates a future revenue stream. The difference is important if one is trying to fund the next generation of R&D and needs that upfront cash.

This is where the financial system comes in. Financing is the process of granting a security interest (ownership in case of default) in an asset in exchange for capital. The standard method is through traditional debt financing, in which the asset is pledged as collateral, and revenue streams are used to pay off the loan. For example, in 1884, Lewis Waterman borrowed $5,000 backed by his fountain pen patent to start his business. However, it remains rare and difficult to use intangible assets in this way. More likely, intangible assets and IP will be, knowingly or unknowingly, wrapped into an overall loan package.

The newer phenomenon of securitization—a variation on the long-standing practice of securitizing mortgages and other consumer debt—is another way of obtaining financing using IP. Known as royalty interest securitizations, these deals are backed by an existing royalty stream. In this case, the IP is sold to a holding company that pays for it by issuing bonds backed by the IP’s revenues. The IP owner gets the upfront cash, and the bond holders are paid off over time with the royalties.
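
As a rough illustration of the mechanics, the sketch below (written in Python, with purely hypothetical figures) treats the upfront cash in a royalty-interest deal as the present value of the royalty stream, discounted at the yield demanded by the bond holders; real transactions add fees, reserve accounts, and credit enhancements that are not modeled here.

# Hypothetical sketch of sizing a royalty-interest securitization.
# The royalty amount, term, and investor yield are illustrative only.

def upfront_proceeds(annual_royalty, years, investor_yield):
    """Present value of a level royalty stream discounted at the bond yield."""
    return sum(annual_royalty / (1 + investor_yield) ** t
               for t in range(1, years + 1))

# A license paying $12 million a year for 8 years, sold to a holding company
# that issues bonds priced to yield 9%:
cash_today = upfront_proceeds(12_000_000, 8, 0.09)
print(f"Upfront cash to the IP owner: ${cash_today:,.0f}")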

In a variation known as revenue interest securitization, no cash flow has yet been derived from an existing license or royalty agreement. The investor is willing to step into the process early to fund commercialization with the expectation that future licensing and product sales will generate revenue. In such cases, the investor may require an equity position as well. The deal might also be structured to ramp up funding when the company meets certain benchmarks; this is especially true in health care, where there are well-established regulatory and commercial milestones.

According to published reports, deals such as these have increased dramatically in recent years. In 2000, two publicly announced deals (one royalty-interest transaction and one revenue-interest transaction) totaled $145 million in investments. Contrast that with the 2007–2008 period, when there were 27 publicly announced transactions (19 royalty-interest transactions, five revenue-interest transactions, and three hybrid transactions using multiple financing techniques, including royalty financing) totaling $3.3 billion.

There is also a set of private-equity firms that targets investments in companies for which IP and intangible assets are critical. These firms are not necessarily targeting raw or undeveloped IP assets for the purpose of monetizing the IP itself through licensing. Rather, they are looking for early-stage or startup companies whose IP assets are integral to their intended markets. In essence, these firms screen their deals by looking for critical IP assets and the overall cash flow the companies generate. These models often use a hybrid approach to equity investing similar to the venture debt market.

Why it is so hard

Given these examples, why hasn’t IP-backed financing made its way into the financial mainstream? The answer is simple: Many lenders and investors still do not feel comfortable with these assets. They question how the assets should be accurately valued and financially projected.

Financial markets use a number of factors to determine the suitability of an asset, including valuation, asset recognition (accounting), separability, transferability, risk, and liquidity. To effectively use IP and intangible assets in the financial system, quantifiable metrics of their characteristics must be available so that financial markets can calculate those assets’ behavior over time. The markets often need to replicate the past performance of the asset in question or compare it with another like asset or set of assets that acts in predictable ways. Although many complex models serve to support valuation estimates in the market today, there is no one standard model of assessing intangibles.

Similarly, asset recognition for accounting purposes remains a hurdle. IP and other intangibles are still not considered on the balance sheet or given due credit for playing a vital role on the income statement. According to generally accepted accounting principles (GAAP), only intangibles purchased from outside the company can be included in a company’s financial statement. Internally generated intangibles are specifically excluded. Thus, a patent acquired by buying another company is counted on the financial books, whereas a patent on technology developed in house is not.

Separability and transferability are also issues, even with formal assets such as patents, copyrights, and trademarks. Although these assets can be separated from a company and transferred to new owners, even the most straightforward form of licensing sometimes requires a side agreement on the transfer of know-how. Likewise, in the securitization of brands and trademarks, the management and servicing agreement is a key feature, even with a steady royalty stream to underpin the value.

The perceptions of risk (in some cases exacerbated by actual events, such as the subprime mortgage meltdown) have also greatly hampered the use of intangibles in capital markets. The thinness of the market creates a lack of information that in turn increases uncertainty and feeds the perception of higher risk. Investors and lenders therefore tend to overestimate the risk of default on securities and loans collateralized by IP.

Some perceived risks are real. For example, it is estimated that in cases of loan default, it may take twice as long to liquidate IP as inventory and accounts receivable: two assets for which an asset-backed lending market already exists. The prospects of recovering a substantial part of the funds are also seen as poor. A recent Fitch ratings report on Toys R Us’s debt illustrated this concern. The Toys R Us debt structure includes, in part, a secured term loan based on the company’s IP and debt backed by real estate. The IP-secured term loan portion of the debt is listed as less than 10% recoverable, whereas the real estate debt is rated as 70 to 90% recoverable.

To account for this associated risk, bankers offer loans only with high discount rates and often underestimate the potential cash flows. Similarly, lenders embrace very conservative underwriting standards. IP with a positive cash flow might get a loan with a 40% loan-to-value (LTV) ratio, whereas IP with only future implied value would be at a 10% LTV.
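
The combined effect of a conservative valuation and a low loan-to-value ratio can be illustrated with a simple calculation; the appraisal, haircut, and LTV figures below are hypothetical and stand in for the underwriting practices just described.

# Illustrative only: how a valuation haircut plus a low LTV cap shrinks the
# capital that can be raised against the same IP portfolio.

def ip_loan_proceeds(appraised_value, valuation_haircut, ltv):
    """Loan size after the lender discounts the appraisal and applies an LTV cap."""
    lender_value = appraised_value * (1 - valuation_haircut)
    return lender_value * ltv

portfolio = 50_000_000  # hypothetical appraised value of the IP
loan_cash_flowing = ip_loan_proceeds(portfolio, 0.30, 0.40)  # IP with positive cash flow
loan_future_value = ip_loan_proceeds(portfolio, 0.30, 0.10)  # IP with only implied value
print(f"Loan against cash-flowing IP: ${loan_cash_flowing:,.0f}")
print(f"Loan against future-value IP: ${loan_future_value:,.0f}")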

Finally, assets must at least be perceived as liquid. The recent seizure of the financial system highlighted the importance of liquidity. In the case of asset-backed financing, securitization is an offshoot of collateralization, and collateralization often requires the backstop of a working primary market in the asset. Markets for the sale and lease of IP have existed for some time, but the regularization of these markets is a relatively recent development that continues to evolve.

In the end, a major reason not to use IP-backed loans may be cost. High discount rates and low LTV ratios mean a higher cost of capital. In addition, each IP and intangible asset financing deal seems to be a unique, one-off event employing differing models to determine the assets’ value, thereby driving up transaction costs. The cost of capital for borrowers using IP and intangible assets may simply make their use prohibitive. Any cheaper source of capital will be much more attractive.

Overcoming the barriers

Turning IP-backed financing from an exotic, one-off transaction into a routine mechanism by which innovative companies can raise funds will require changes in industry standards and government policies, including technology policy. But it also means going well beyond the boundaries of what is normally considered technology policy.

To start, we should examine the current IP marketplace. Lenders and investors want a level of assurance that, in case of default, they will reclaim some of their money. That requires a robust market for the collateral that the lender can access to liquidate the asset. An asset with a 10 to 40% recapture rate is naturally going to attract only the most risk-tolerant investors. Although the markets for IP have been evolving, we should look carefully at public policies that will accelerate this development. For example, the federal government should review its technology transfer policies and procedures to facilitate and streamline the process. We should also look at the licensing process with an eye toward reducing transaction costs and using standardized documents. Government agency use of emerging IP marketplaces should likewise be encouraged. The recent public sale of some National Aeronautics and Space Administration patents at an ICAP OceanTomo auction is an example of how the government can use its own patent assets to help expand the IP marketplace.


In addition, the process of using IP as collateral must be streamlined and standardized. Here, the federal government can be a lead player. The U.S. Small Business Administration (SBA) plays a vital role in financing new and small businesses through loan guarantee programs such as the 7(a) Program. The SBA recently revised its standard operating procedure for the 7(a) Program to make it clear that loans can be used for the acquisition of intangible assets when buying a business. However, the rules are unclear as to whether intangible assets can be used as collateral. Intangible assets, especially IP, must be incorporated into SBA lending policies. The SBA should work with commercial lenders to extend SBA underwriting standards to cover the use of intangible assets as collateral.

Beyond underwriting standards, the establishment of a specific IP-backed lending program should be considered. Other nations, such as China and Thailand, have already developed special programs for IP-backed lending. The United States could set up a similar pilot program run by SBA lending experts. Technical support could be provided by the SBA’s Office of Technology, which coordinates the Small Business Innovation Research program, and the U.S. Commerce Department’s National Institute of Standards and Technology, which runs the Technology Innovation Program and other science- and technology-related initiatives. Such a direct lending program would be a step beyond SBA’s current loan guarantee programs. Direct lending is necessary to jump-start the process, but once the process of using IP as collateral is fully accepted, the program could convert to loan guarantees.

Key to both the creation of SBA underwriting standards for IP collateralization and a direct lending program is standardized valuation methodologies. IP is routinely valued for a number of reasons, such as purchases and licensing agreements, transfer tax considerations, damages and awards in infringement cases, financial accounting statements of acquired assets, and merger and acquisition due diligence. As a result, consulting firms, litigation specialists, and companies employ countless methodologies and models. As long as IP valuation is seen as an art rather than a science, lenders and investors will continue to view such investments as risky.

Work is being done on the topic. For example, the International Valuation Standards Council is in the process of issuing Guidance Note No. 4, “Valuation of Intangible Assets” (due out in January 2010). Such activities provide a solid foundation on which the SBA and financial regulators can build.

Larger issues of financial reform must also be addressed. As Congress, the Obama administration, and regulatory agencies work through reform of the financial sector, they must be cognizant of the hidden role of IP in the marketplace. Banking regulators such as the Federal Reserve should collect data on whether and to what extent lending institutions are using IP as loan collateral, both explicitly and implicitly. Given the intangible assets that can be wrapped up in the catch-all category of liens on all assets, regulatory agencies should also ask how lending institutions value the intangible assets for purposes of assessing collateral and determining underwriting standards—specifically, valuation and LTV ratios.

Such information is useful not only to foster the use of IP-backed financing but to promote the safety and soundness of the financial sector. Failure to explicitly include intangible assets may have three consequences. First, it may underestimate the amount of collateral that a lending institution has to call on in case of default. Second, it may show a weakness in the lending institution’s ability to recapture that collateral value, because the lending institution may be dealing with an asset it does not understand. Third, there may be a systemic failure to properly price loans, insofar as the lending institution cannot properly value the intangible assets or applies exceedingly low LTV ratios that do not accurately reflect the risk but are a function of the lending institution’s lack of information. The result is a higher cost of capital, especially for borrowers in the knowledge and technology fields with extensive intangible assets. Regulations affecting lending, such as bank capital standards, should therefore be reviewed to take into account IP-backed lending. The international Basel II Capital Accords might, for instance, be examined for their impact on intangible assets.

Finally, policymakers need to make financial statements more transparent. Workable financial markets require consistent, accurate, and useable information on prices and values. However, investors and creditors are increasingly forced to make decisions in the dark; intangibles play an increasingly important role in U.S. businesses, yet the means of understanding the nature and behavior of these assets fail to keep pace. Limited insight into intangible holdings slows financial activity and restrains U.S. enterprises, which need ready access to capital to innovate, grow, and sustain themselves.

To achieve greater transparency, accounting standards must be modified to better account for intangibles. As a first step, the Financial Accounting Standards Board and the International Accounting Standards Board should reinstate their research project on expanded disclosure guidelines for intangibles. There is no reason to continue to treat internally generated intangibles differently from the same type of intangible purchased from outside. In addition, the Securities and Exchange Commission should create a safe harbor in financial statements for corporate reporting of intangible assets.

The policymakers who are now grappling with the issue of financial reform are neglecting the critical use of intangible assets as a financing mechanism. One-half to two-thirds of companies’ value consists of their intangible assets. There are a number of examples in which intangible assets have been directly used as financial tools, either through securitization or as lending collateral. More commonly, banks are implicitly underwriting loans based on the value of intangible assets. But these assets are not explicitly recognized in the underwriting process and are therefore not taken into account as part of banking supervision or financial market regulation.

Now is the time to create new means of financing innovation. The deals that have been done demonstrate that IP and other intangibles are viable assets to secure capital. Unlike other “exotic” financing vehicles, however, intangible-asset financial products are built on some of the most basic financing mechanisms. Far from being exotic, they use traditional techniques in new ways to help companies innovate and grow. There is plenty of opportunity to harness the power of intangibles. All we need now is the will to develop and use this innovative method of financing innovation.

On the Trails of the Glaciers

On the Trails of the Glaciers is a multidisciplinary project, combining photography and science, to study the effects of climate change on the glaciers of the most important mountains of the world.

The project’s primary goal is to produce new photographs that reproduce the views of remote mountains recorded early in the 20th century by illustrious explorer-photographers. These new images give scientists and investigators the basis for comparative observations on the state of the largest glaciers in the world, which are valuable and extremely sensitive indicators for assessing the current climate and how it has changed over time.

The project mounted its first expedition in 2009 to mark the 100th anniversary of the Duke of Abruzzi’s 1909 expedition to Karakorum. The main goal of the original scientific and mountain-climbing expedition was to climb K2. Although the group could not reach the peak, it did make it to the top of the 7,500-meter Bride Peak, the highest elevation anyone had achieved at that time. This expedition became famous for the abundant scientific data it collected and for Vittorio Sella’s stunning photographs of this remote and starkly beautiful region. Twenty years later, Aimone di Savoia, the Duke of Spoleto, led another important expedition with the geographical objective of exploring the basin of the Baltoro glacier. Several Italians were involved in the expedition, including geologist Ardito Desio and photographer and cameraman Massimo Terzano, who recorded still images as well as producing a film documentary.

Inspired by the centenary celebration, On the Trails of the Glaciers assembled a team of mountaineers, scientists, and photographers to retrace the steps of the two historic expeditions and to produce new versions of the photographs taken by Vittorio Sella and Massimo Terzano. Through direct comparison of old and new pictures the team was able to highlight differences that the slow flow of time made imperceptible even to the most careful viewer.

The scientific observations of the glaciers’ extent, the moraines, snowfall, avalanches, and general geomorphologic status have been directed by two of the world’s leading experts in the field of glaciology: Claudio Smiraglia, professor at the University of Milan and former president of the Italian Glaciological Committee, and Kenneth Hewitt, research associate at Wilfrid Laurier University in Waterloo, Canada, and founder of the university’s Cold Regions Research Centre. Hewitt, a leading authority on the Karakorum glaciers, has been in constant contact with the team, providing guidance on where to take photos and what scientific data to collect. Smiraglia is currently analyzing the data gathered during the expedition. Geologist and alpinist Pinuccio D’Aquila has been with the crew on Baltoro to perform direct analysis and to collect data on the glaciers’ extent.

I was the project leader and took the photos during the expedition. I had previously worked in the region in 2004 as official photographer of the climbing-scientific expedition commemorating the 50th anniversary of the first ascent of K2. For many years I’ve been studying photographic archives, maps, texts, and travel diaries that the early explorers left us from the original expedition. This research made it possible for me to identify the exact geographical sites from which previous photographers took their shots. Using the most modern digital technologies, combined with traditional large-format imaging techniques, my aim was to produce images that were not only of scientific and environmental significance but also of high aesthetic quality.

For several technical reasons, the decision to record all the images on traditional film, and in particular in large format, was almost mandatory. I needed to achieve the same magnification ratio obtained by the earlier photographers, and I wanted to obtain the best ultra-high-resolution images possible with today’s equipment. Even today’s best digital SLR cameras cannot match the quality of large-format film cameras. I therefore used large-format 4” × 5” and 6 × 17 cm cameras to photograph the glaciers. Digital cameras were used to document the expedition, and these digital images were sent daily to a web portal, where the public could follow the progress of the expedition in real time.

Our team also included a film crew that is producing a full-HD documentary of the expedition. A travelling photo exhibit and a book are planned for 2010.

I hope that this project will help us to better understand the transformations that are occurring on our planet due to climate change. Today more than ever it is essential to understand the complexity and fragility of our ecosystem so that we can act effectively on the principles of environmental protection and sustainable development.

Further information can be found at onthetrailoftheglaciers.com.

Changing Climate, More Damaging Weather

The weather varies, but climate change affects the frequencies with which particular weather occurs, including the frequencies of extreme weather, such as heavy storms, heat waves, and droughts. More frequent weather extremes will underlie the most serious physical and economic effects of climate change. Prudent programs to adapt to current and future climate change must take these changing probabilities into account when making risk assessments and devising adaptation measures.

Federal agencies and other bodies charged with estimating the probabilities of such extreme weather events have been deriving their estimates from historical frequency data that are assumed to reflect future probabilities as well. These estimates have not yet adequately factored in the effects of past and future climate change, despite strong evidence of a changing climate. They have relied on historical data stretching back as far as 50 or 100 years, which may be increasingly unrepresentative of future conditions. As a result, the risks of damage from climate change based on these estimates may be badly understated.

These backward-looking probability estimates may be understating future frequencies and risks, and this might affect assessments of possible adaptive investments. Our analysis shows that government and private organizations that use these probability assessments in designing programs and projects with long expected lifetimes may be investing too little to make existing and newly constructed infrastructure resistant to the effects of changing climate. New investments designed to historical risk standards may suffer excess damages and poor returns. We need to accelerate research programs that link climate change to future probabilities of extreme weather events and, despite remaining uncertainties, embody the findings in estimates disseminated to the public.

Sources of error

Over the past half-century, temperatures and precipitation in the United States have gradually increased, more of the precipitation has fallen in heavy storms, sea level and sea surface temperatures have risen, and other aspects of climate have also changed. A scientific consensus agrees that such changes will continue for many decades, whatever reductions of greenhouse gas emissions are achieved. But these gradual changes are not the most threatening. Organisms and ecosystems can tolerate a range of weather conditions, and buildings and infrastructure are designed to do so as well. Within this range of tolerance, weather variability causes little damage, and if change is sufficiently gradual, many systems can adapt or be adapted.

When weather varies outside this range of tolerance, however, damages increase very disproportionately. As floodwaters rise, damages are minimal as long as the levees hold, but when levees are overtopped, damages can be catastrophic. If roofs are constructed to withstand 80 mile per hour (mph) winds, a storm bringing 70 mph winds might damage only a few shingles, but if winds rose to 100 mph, roofs might come off and entire structures be destroyed. Plants can withstand a dry spell with little loss of yield, but a prolonged drought will destroy the entire crop. The most alarming risks of damage from climate change arise from an increasing likelihood of such extreme weather events, not from a gradual change in average conditions.

Unfortunately, even if weather conditions do not become more volatile as the climate changes (though they might), a shift in average conditions alone will change the probability of weather events that are far removed from average conditions. For example, as more rain falls in heavy storms, the probability rises that deluges will occasionally occur that result in extreme flooding and disastrous damages. As average temperatures rise, the likelihood of an extreme heat wave rises too.
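
The statistical point can be made with a toy calculation: even a modest shift in the average multiplies the probability of outcomes far out in the tail. The normal distribution and the temperature figures in the sketch below are illustrative assumptions, not a climate model.

# Toy example: a 2-degree shift in mean summer temperature and its effect on
# the probability of exceeding a fixed "heat wave" threshold. All values are
# hypothetical.
from scipy import stats

mean_temp, std_dev, threshold = 75.0, 5.0, 90.0  # degrees Fahrenheit (assumed)

p_before = stats.norm.sf(threshold, loc=mean_temp, scale=std_dev)
p_after = stats.norm.sf(threshold, loc=mean_temp + 2.0, scale=std_dev)

print(f"P(exceed {threshold:.0f}F) with historical mean: {p_before:.5f}")
print(f"P(exceed {threshold:.0f}F) after a 2F mean shift: {p_after:.5f}")
print(f"Relative increase: {p_after / p_before:.1f}x")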

Weather-frequency estimates have not yet come to grips with the changing probabilities of extreme weather. The methodologies in use are typically backward-looking and conservative. The frequencies with which specific weather events occur are estimated from measurements in the historical record going back decades. A probability distribution with a similar mean, variance, and skewness is then fitted to these frequencies. The fitted distribution can then be used to estimate the likelihood of extreme weather, even though there are few, if any, such events in the historical record.

Estimating the probability of extreme, very infrequent, weather events in this way is inherently difficult, because there are so few such events in the measured record. Extrapolating from the occurrence of rarely observed events to the probability of even more extreme events beyond the historical record is unavoidably uncertain.
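
A bare-bones version of this frequency-analysis procedure is sketched below: a Weibull distribution is fitted to a synthetic 100-year record of annual maximum wind speeds, and annual probabilities and return periods are then read off for events rarer than anything observed. The data and the choice of distribution are illustrative, not drawn from any agency’s records.

# Fit a distribution to a synthetic historical record of annual maxima, then
# extrapolate to events beyond the observed range. Entirely illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
annual_max_wind = rng.weibull(2.0, size=100) * 60.0  # 100 years of annual maxima (mph)

# Fit a two-parameter Weibull (location fixed at zero) to the record.
shape, loc, scale = stats.weibull_min.fit(annual_max_wind, floc=0)

for wind in (100, 120, 140):  # speeds at or beyond the edge of the record
    annual_prob = stats.weibull_min.sf(wind, shape, loc=loc, scale=scale)
    print(f"{wind} mph: annual probability {annual_prob:.4f}, "
          f"return period {1 / annual_prob:.0f} years")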

When climate is changing, an even more serious problem lies in assuming that the future will be like the past and projecting probabilities estimated from historical data into the future. Not only are agencies charged with assessing weather distributions assuming that the estimated probability distributions are stationary, they are also ignoring measured trends in historical weather patterns.

They do so for two main reasons. The first is uncertainty about whether an apparent trend is real and long-lasting, a poorly understood cyclical phenomenon that will be reversed, or a string of random events. The second is the dilemma of giving more weight to recent observations, which might better represent current conditions but would provide less data with which to estimate a probability distribution representative of extreme and unlikely events.

Uncertainty about future climate conditions affecting particular localities and weather phenomena is the main reason why weather probability estimates are still based on historical data, despite strong scientific and empirical evidence that the future will not be like the past. Conservative agencies retain methodologies and estimates that are likely to be erroneous rather than make use of scientific projections of future conditions that are still quite uncertain, especially at a regional or local geographic scale. The question bedeviling weather forecasters is “If the future will not be like the past, what will it be like?” Climate models are still unable to provide highly reliable answers to this question.

Nonetheless, weather probability estimates become increasingly outdated as time passes or when projected further into the future. They provide unreliable guidance for the design, placement, and construction of infrastructure that will be in place for many decades and vulnerable to extreme weather throughout its useful life. By producing underestimated future risks, they also provide unreliable guidance for investment and program decisions to make existing infrastructure and communities more resistant to extreme weather. As a result, according to the 2009 National Research Council report Informing Decisions in a Changing Climate, “Government agencies, private organizations, and individuals whose futures will be affected by climate change are unprepared, both conceptually and practically, for meeting the challenges and opportunities it presents. Many of their usual practices and decision rules—for building bridges, implementing zoning rules, using private motor vehicles, and so on—assume a stationary climate—a continuation of past climatic conditions, including similar patterns of variation and the same probabilities of extreme events. That assumption, fundamental to the ways people and organizations make their choices, is no longer valid.”

This is a problem of broad and significant scope. Among the public- and private-sector organizations that are exposed to increasing but underestimated risks are:

  • Local, state, and federal disaster management agencies
  • Local, state, and federal agencies that finance and build public infrastructure in vulnerable areas as well as those that own and operate vulnerable infrastructure
  • Private investors and owners of vulnerable buildings and other physical property
  • Property and casualty insurers
  • Creditors holding vulnerable infrastructure directly or indirectly as collateral
  • Vulnerable businesses and households

Clearly, this listing encompasses a large proportion of the U.S. economy, and the vulnerable regions extend over a large part of the country, including coastal regions subject to hurricanes, storm surges, and erosion; river basins subject to flooding; and agricultural areas subject to wind, storm, and drought damage.

These underestimated risks should not be neglected in any program of adaptation to climate change. Research is under way to address this problem but should be accelerated, and efforts to improve climate change forecasts at regional and local scales should be intensified. More emphasis should be placed on forecasts of the likelihood of extreme weather events. Even while these efforts are under way, however, agencies responsible for weather probability assessment should update their estimates, incorporating the best available scientific climate projections that provide guidance regarding future conditions. Uncertainties in these projected weather frequencies should be frankly acknowledged and explained. In addition to their best estimates, agencies should also present plausible uncertainty bands around those probabilities. Finally, critical agencies should be encouraged or directed to use these revised probability estimates in their risk assessments and investment planning as an important step toward anticipatory adaptation to climate change.

Devastating hurricane hits New York City

To put this statistical analysis in concrete terms, imagine the impact of a major hurricane hitting New York City. The New York metropolitan region extends across three states and encompasses an extraordinarily dense concentration of infrastructure, physical assets, and business activity. The value of just the insured coastal property in the New York, Connecticut, and New Jersey region was almost $3 trillion in 2006. A major hurricane reaching New York could produce storm surges of 18 to 24 feet. Low-lying regions, including Kennedy Airport and lower Manhattan, would flood. Subway and tunnel entrances would be submerged, as would many essential roads. High winds would do severe damage, partly by blowing dangerous debris through city streets. The risk is real enough that the city government in 2008 created the New York City Panel on Climate Change and the Climate Change Adaptation Task Force to develop a strategy for preparing for extreme weather events.

We have prepared a case study to illustrate to what extent hurricane probabilities may be underestimated, how economic damage risks may consequently also be underestimated, how these risk assessments can be updated and projected into the future based on relevant scientific information, and how these updated risk assessments might be used to improve decisions on investments in adaptation. The analysis is relevant not only to all coastal areas vulnerable to hurricanes but also to inland areas susceptible to floods, droughts, and other extreme weather.

The starting point is the probability assessment carried out by the National Hurricane Center (NHC), an office within the National Oceanic and Atmospheric Administration. The methodology used for New York City and other coastal regions counts the occurrence of hurricanes of specific intensities (defined in terms of maximum sustained wind speeds) striking within a 75-mile radius during the historical record of approximately 100 years. NHC scientists fitted a particular probability distribution, the Weibull distribution, to these observed frequencies, and the probabilities of hurricanes of various intensities were then calculated from the fitted probability distribution. There were no actual observations of the most severe hurricanes in the historical record for the New York region, so those probabilities were extrapolations based on the fitted distribution. The results, expressed as the expected return periods, are shown in Table 1 for various categories of hurricanes.

These probability estimates were constructed in 1999. It is questionable whether these estimates were valid in that year, because there has apparently been an upward trend in intense hurricanes in the North Atlantic over at least the past 35 years. The number of category 4 and 5 hurricanes in the North Atlantic increased from 16 during the period 1975–1989 to 25 from 1990–2004. Consequently, the earlier years in the historical record used to compute frequencies might not have been representative of the final years.

There is good reason to believe that this increasing frequency of stronger hurricanes in the North Atlantic is linked to climate change through the gradual rise in sea surface temperatures. Warming ocean waters provide the energy from which more intense hurricanes are developed and sustained. According to a recent study, a 3° Celsius (3°C) increase in sea surface temperature would raise maximum hurricane wind speeds by 15 to 20%.

Measurements throughout the oceans have found a rising trend in sea surface temperatures at a rate of approximately 0.14°C per decade. The rate of warming is apparently increasing, however, and the North Atlantic warming has been faster than the global average. According to a recent examination, in the 28-year period from 1981 to 2009, warming in the North Atlantic has averaged 0.264°C per decade, roughly twice the global average. Rising sea surface temperatures in the North Atlantic, the driving force behind the increasing frequency of intense hurricanes, explain why backward-looking historical probability estimates, such as those generated with the NHC’s approach, probably do not provide adequate guidance with respect to current and future risks.

This problem is compounded by the rising trend in sea level, itself partly the result of increasing ocean temperatures. Higher sea levels and tides raise the probability of flooding driven by hurricane-force winds. In the North Atlantic between New York and North Carolina, sea level has also risen more rapidly than the global average, at rates between 0.24 and 0.44 centimeters per year.

These scientific findings and measurements can be used to project hurricane frequency estimates into the future. The trend in sea surface temperature, linked to the relationship between sea surface temperature and maximum wind speed, provides a way to forecast changes in the intensity of future hurricanes. High and low estimates can define a range of future probabilities. Though there are considerable uncertainties inherent in forecasts based on this approach, the results are arguably more useful than static estimates based on historical data that fail to incorporate any relevant information about the effects of climate change. At a minimum, this approach can provide a quantitative sensitivity analysis indicating by how much existing estimates may be underestimating future risks.

Table 2 displays some results, based on both the higher and lower estimates of sea surface temperature trends and the relationship between sea surface temperature and maximum wind speeds. The table shows the estimated return periods for hurricanes striking the New York metropolitan region, based on the Weibull distribution fitted by the NHC in 1999. (Figures differ slightly from those in Table 1 for less intense storms because of curve-fitting variances.) In addition, it presents return periods for 2010, 2020, and 2030, estimated by indexing the scale parameter of the probability distribution to a time trend based on the rate of temperature change and its effect on maximum wind speeds. The ranges shown for the decades from 2010 to 2030 are based on the high and low estimates of the rate of sea surface temperature increase.
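
The projection step can be sketched as follows. The baseline Weibull parameters are hypothetical, since the NHC’s fitted values are not reproduced here, while the sea surface temperature trend and the wind-speed sensitivity are taken from the ranges cited above; the output is meant only to show the structure of the calculation, not to reproduce Table 2.

# Index the scale parameter of a fitted wind-speed distribution to a sea
# surface temperature (SST) trend. Baseline parameters are hypothetical.
from scipy import stats

shape, base_scale = 2.0, 60.0        # assumed fit to the historical record (mph)
sst_trend_per_decade = 0.264         # degrees C per decade, North Atlantic (upper estimate)
wind_gain_per_3C = 0.175             # midpoint of the 15 to 20% increase per 3 degrees C

def return_period(wind_mph, decades_ahead):
    """Return period after inflating the scale parameter for projected warming."""
    warming = sst_trend_per_decade * decades_ahead
    scale = base_scale * (1 + wind_gain_per_3C * warming / 3.0)
    return 1.0 / stats.weibull_min.sf(wind_mph, shape, scale=scale)

for year, decades in (("2000", 0), ("2010", 1), ("2020", 2), ("2030", 3)):
    print(f"{year}: return period for 130 mph winds is about "
          f"{return_period(130, decades):.0f} years")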

The effects of climate change will increase the probability that New York will be struck by a hurricane, especially the more severe hurricanes. By 2030, the probabilities of category 4 and category 5 hurricanes striking the New York metropolitan region are likely to have increased by as much as 25 and 30%, respectively. These changing probabilities have dramatic economic implications.

Paying the price

A professional risk management consultancy recently estimated that a category 3 hurricane with a landfall in the New York metropolitan region would probably result in losses of approximately $200 billion in property damage, business losses, and other effects. According to the NHC’s 2000 estimates, there is only a 1.5% chance of that happening in any year. However, this may be a very misleading portrayal of the economic risk.

A more complete assessment makes use of a tool common in the insurance industry: the loss exceedance curve, which represents the annual probability of a loss equal to or greater than specified amounts. It summarizes the probabilities of hurricanes of various intensities and estimates of the damages they would create. The loss exceedance probability at $200 billion, for example, represents the chance that a hurricane loss of that amount or more, potentially running into the trillions of dollars, might occur. To construct such a loss exceedance function for the New York region, one needs not only the probabilities of category 1 to 5 hurricanes but also the damages that they would inflict.

A recent study by Yale economics professor William Nordhaus, based on hurricanes recorded throughout the United States, investigated the relationship between maximum wind speeds and resulting damages. Shockingly, this study found that damages increase as the eighth power of the wind speed: If a hurricane with wind speeds of 50 mph would cause $10 billion in damages, then one with maximum winds of 100 mph would cause not twice the damages but more than $2.5 trillion. The reasons for this dramatic escalation are threefold. First, higher winds will obviously do more damage to everything in their path; second, more intense hurricanes are likely to affect a wider area; and third, their winds are likely to persist at damaging speeds, although not at the maximum, for longer periods of time.
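
Combining the two ingredients just described, a wind-speed probability distribution and the eighth-power damage law, yields a crude, static version of the loss exceedance relationship plotted in Figure 1. The Weibull parameters below are hypothetical, and the damage law is calibrated to the $10 billion at 50 mph example in the text. Because damages rise monotonically with wind speed, the probability of winds at or above a given speed equals the probability of losses at or above the corresponding damage amount.

# Crude loss exceedance table: hypothetical wind-speed distribution combined
# with damages that scale as the eighth power of maximum wind speed.
from scipy import stats

shape, scale = 2.0, 60.0        # assumed Weibull fit for annual maximum winds (mph)
damage_at_50mph = 10e9          # $10 billion, from the example in the text

def damage(wind_mph):
    """Damages scale as the eighth power of maximum wind speed."""
    return damage_at_50mph * (wind_mph / 50.0) ** 8

for wind in (80, 100, 120, 140):
    loss = damage(wind)
    prob = stats.weibull_min.sf(wind, shape, scale=scale)  # P(winds >= this speed)
    print(f"winds of {wind} mph or more: loss about ${loss / 1e9:,.0f} billion, "
          f"annual probability {prob:.4f}")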

The loss exceedance curve implied by this relationship is plotted in Figure 1 for the year 2000 and for subsequent decades, using the higher estimate of sea surface temperature increase. On the horizontal axis, damages are marked in hundreds of billions of dollars. On the vertical axis are the probabilities of hurricane losses of those amounts or more. One striking feature that is immediately apparent is that the exceedance curve is “fat-tailed”: Probabilities decline slowly as heavy losses mount. As maximum wind speeds increase, damages mount very rapidly, offsetting the declining probability of the more intense storms. The probability of losses exceeding a trillion dollars is not half the probability of losses exceeding $500 billion, but substantially more than that. This illustrates how vulnerable to catastrophic hurricane damage the New York metropolitan region is now.

The second feature that Figure 1 illustrates is that the probabilities of large losses shift upward over time, as climate change makes intense hurricanes more likely. By 2030, the probability of hurricane damages exceeding amounts in the range of $100 billion to $500 billion could be 30 to 50% greater than current estimates assume. Rising sea surface temperatures and rising sea levels increase the economic risks to coastal cities. In the absence of effective adaptation measures, the risks of catastrophic losses will very likely continue to rise over coming decades. If New York could insure itself against these catastrophic damages, the actuarially fair annual premium would double over this period.

Risks to investors

Investors in infrastructure projects vulnerable to hurricane damage, whether buildings, roads, or other structures, face greater risks than they realize and are likely to experience rates of return from their investments that are dramatically below those that they anticipate. Infrastructure projects are designed and engineered to withstand extreme weather, so that it would take an extremely unlikely event to cause major damage. There is a tradeoff between an extra margin of safety and the additional cost required to achieve it. Civil engineers and planners are trained to estimate and base decisions on such tradeoffs, often going beyond what is strictly required by building codes and other regulations.

Unfortunately, in assessing these tradeoffs, civil engineers and planners are still relying on historical frequency estimates and are making the same assumptions that the future will be like the past, despite climate change. Thought leaders in the engineering profession have only recently begun weighing alternative approaches to climate change issues.

An infrastructure project with a 40-year lifetime that would be expected to earn a 12% return on investment if historical risks persisted would instead earn an expected return of only 3.9% because of the rising risk of damage. Moreover, if past frequencies of extreme weather events are projected into the future without taking into account the effects of climate change, the economic value of investments in adaptation and prevention is dramatically underestimated. Imagine that, at an additional investment cost, the project could be strengthened to withstand an additional 10 mph of maximum wind speed without any additional damage. The payoff from this adaptation investment would be a lower risk of hurricane damage and a higher expected income return. Suppose further that such an investment in adaptation would just break even if the historical hurricane frequencies were projected into the future over the project’s anticipated lifetime. Under these assumptions, adaptation would be considered uneconomic, since it would yield no positive return on investment.
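
The structure of such a calculation, though not the authors’ actual model or their 12%, 3.9%, and 58% figures, is sketched below: the same project is evaluated once with a static historical storm probability and once with a probability that rises over its 40-year life. Every input is a hypothetical assumption.

# Illustrative comparison of expected annual returns under static versus
# rising storm probabilities. All inputs are hypothetical.

def expected_annual_return(gross_return, loss_if_hit, hazard_probs):
    """Average return per year after subtracting expected storm losses.

    loss_if_hit is the fraction of the investment lost when a damaging
    storm strikes in a given year.
    """
    yearly = [gross_return - p * loss_if_hit for p in hazard_probs]
    return sum(yearly) / len(yearly)

lifetime = 40
static_probs = [0.015] * lifetime                            # historical estimate
rising_probs = [0.015 * 1.02 ** t for t in range(lifetime)]  # assumed 2% annual rise

print(f"Expected return, static risk: {expected_annual_return(0.12, 1.0, static_probs):.3f}")
print(f"Expected return, rising risk: {expected_annual_return(0.12, 1.0, rising_probs):.3f}")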

If the effects of climate change were taken into account by anticipating the increasing probabilities of more extreme storms striking the region, the economics of investing in adaptation and prevention would appear much more attractive. The supposedly break-even adaptation project would earn an expected 58% return on the investment. With few exceptions, however, private investors and public agencies at the local, state, and federal levels still rely on static, historically based probability estimates of extreme weather events and have not yet incorporated the effects of climate change into these estimates when evaluating the economics of adaptation investments. As a result, they are grossly underestimating the economic case for investments in adaptation. This is one reason why adaptation has lagged and is proceeding so slowly.

Facing up to the future

Every year the United States is hit with hurricanes, floods, droughts, and other weather-related disasters such as wildfires and pest outbreaks. These cause many billions of dollars in damages, loss of life, and disruption or displacement of entire communities. Some of these losses can be avoided if preventive and anticipatory actions are taken. If the risks of extreme weather events are underestimated, however, the pace and extent of preventive activities will lag.

Ignoring the effects of climate change on future probabilities of extreme weather events could significantly underestimate future risks to vulnerable communities, infrastructure, and investments. Deriving such probabilities from historical records going back many decades, with no adjustment for changes in climate extending inevitably into future decades, is likely to produce faulty estimates for planning and investment decisions. Climate change will affect the frequency with which many forms of extreme weather will occur.

The effects of climate change on weather and storm patterns are still uncertain, particularly at local and regional geographical scales. Uncertainty does not justify paralysis. It should be incorporated into estimates of future risks by establishing plausible ranges for key variables and parameters. Adhering to estimates almost certain to be wrong while waiting for uncertainties to be resolved provides misleading information for current decisions. The resulting decision errors can be very costly.

Public- and private-sector agencies responsible for providing estimates of weather risks are now grappling with the problems of incorporating the effects of climate change, but progress is slow and the bias is toward conservatism: sticking to the historical record until an alternative is clearly established. Moreover, much of the current research into this issue is narrowly focused and is not connected to adaptation program planning. Leadership in the responsible agencies is needed to ensure that their frequency estimates, to the extent now possible, reflect current and future probabilities, not past historical conditions, and that their estimates are frequently updated to incorporate new information about climate change effects.

Rising Tigers, Sleeping Giant

Asia’s rising “clean technology tigers”—China, Japan, and South Korea —are poised to out-compete the United States for dominance of clean energy markets due to their substantially larger government investments to support research and innovation, manufacturing capacity, and domestic markets, as well as critical related infrastructure. Government investment in each of these Asian nations will do more to reduce investor risk and stimulate business confidence than currently proposed U.S. climate and energy legislation, which includes too few aggressive policy initiatives and allocates relatively little funding to directly support U.S. clean energy industries. Even if climate and energy legislation passed by the U.S. House of Representatives becomes law, China, Japan and South Korea will out-invest the United States by a margin of three-to-one over the next five years, attracting much if not most of the future private investment in the industry. Global private investment in renewable energy and energy efficient technologies alone is estimated to reach $450 billion annually by 2012 and $600 billion by 2020, and could be much larger if recent market opportunity estimates are realized. For the United States to regain economic leadership in the global clean energy industry, U.S. energy policy must include more direct and coordinated investment in clean technology R&D, manufacturing, deployment, and infrastructure.

The governments of China, Japan, and South Korea are investing heavily to develop clean technology manufacturing and innovation clusters in order to gain a “first-mover” advantage in key clean energy sectors. Direct government investments will help China, Japan, and South Korea form industry clusters, like Silicon Valley in the United States, where inventors, investors, manufacturers, suppliers, universities, and others can establish a dense network of relationships and an attractive business environment that can create an enduring competitive advantage for these nations.

Asia’s clean tech tigers are already on the cusp of establishing advantage over the United States in the global clean tech industry. This year China will export the first wind turbines destined for use in a U.S. wind farm, for a project valued at $1.5 billion. With no domestic manufacturers of high-speed rail technology, the United States will rely on companies in Japan or other foreign countries to provide rolling stock for any planned high-speed rail lines. And all three Asian nations lead the United States in the deployment of new nuclear power plants. The United States relies on foreign-owned companies to manufacture the majority of its wind turbines, produces less than 10 percent of the world’s solar cells, and is losing ground on hybrid and electric vehicle technology and manufacturing. As the Rising Tigers report demonstrates, the United States lags far behind its economic competitors in clean technology manufacturing. Should this gap persist, the United States risks importing the majority of the clean energy technologies necessary to meet growing domestic demand.

Since emerging clean energy technologies remain more expensive than conventional alternatives and face a variety of non-price barriers, public sector investments in clean energy will be a key factor in determining the location of clean energy investments made by the private sector. The direct and targeted public investments of China, Japan, and South Korea are likely to attract substantial private investment to clean energy industries in each country, perhaps more so than the market-based and indirect policies of the United States.

As trillions of dollars are invested in the global clean energy sector over the next decade, clean tech firms and investors will invest more in those countries that offer support for infrastructure, R&D, a trained workforce, guaranteed government purchases, deployment incentives, lower tax burdens, and other incentives. In China, for example, state and local governments are offering firms free land and R&D money, and state-owned banks are offering loans to clean tech firms at much lower interest rates than those available in the United States.

There are historic examples of the United States catching up to competitors who have surged ahead. Through sustained federal military-related support for aviation technology development and deployment, the United States raced past Europe in aerospace and became a world leader in civil and military aviation after trailing Europe for years. During the space race, the United States quickly met and then surpassed the Soviet Union after the Soviets launched the Sputnik satellite, putting a man on the moon twelve years later through a sustained program of direct investment in innovation and technology. The United States has consistently been a leader in inventing new technologies and creating new industries and economic opportunities. It remains one of the most innovative economies in the world and is home to the world’s best research institutions and most entrepreneurial workforce. The challenge will be for the United States to aggressively build on these strengths with robust public policy and government investment capable of establishing leadership in clean technology development, manufacturing, and deployment, and to do so before China, South Korea, and Japan fully establish and cement their emerging competitive advantages.

The following figures and tables are taken from Rising Tigers, Sleeping Giant: Asian Nations Set to Dominate the Clean Energy Race by Out-investing the United States, a report by the Breakthrough Institute of Oakland, California, and the Information Technology and Innovation Foundation of Washington, DC. The full report is available at http://thebreakthrough.org/blog/rising_tigers.pdf.

The clean energy investment gap

China, South Korea, and Japan will invest a total of $509 billion in clean technology (solar, wind, and nuclear power; energy efficiency; advanced vehicles; high-speed rail; and carbon capture and sequestration) over the next five years (2009-2013), whereas the United States will invest $172 billion, a sum that assumes the passage of the proposed American Clean Energy and Security Act and includes current budget appropriations and recently enacted economic stimulus measures.
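
These totals are the basis of the three-to-one margin cited earlier; a minimal arithmetic check, using only the five-year figures quoted above, reproduces the ratio:

```python
# Back-of-the-envelope check of the clean energy investment gap,
# using the report's five-year totals (2009-2013) quoted above.
asia_total_bn = 509  # China + Japan + South Korea, in billions of dollars
us_total_bn = 172    # United States, in billions of dollars (assumes ACES passage)

gap_bn = asia_total_bn - us_total_bn
ratio = asia_total_bn / us_total_bn

print(f"Absolute gap: ${gap_bn} billion over five years")
print(f"Investment ratio: roughly {ratio:.1f} to 1")  # ~3.0, the report's three-to-one margin
```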

Research and innovation

The United States currently invests slightly more money in research and development than does Japan and has an advantage over China and South Korea. However, each Asian competitor is moving to close the innovation funding gap. Furthermore, as a percentage of each nation’s gross domestic product, Japan and South Korea out-invest the United States on energy innovation by a factor of two-to-one. The United States secures 20.2% of the world’s clean energy patents, more than any other country, but Japan is a close second.

Clean energy manufacturing

The United States has fallen behind its economic competitors, especially China, in the capability to manufacture and produce clean energy technologies on a large scale. The United States trails China and Japan in solar PV, China in wind, and all three Asian nations in nuclear. The three Asian nations now have their own domestic nuclear reactor designs, whereas the United States has seen a decline in nuclear engineering facilities and does not have the large heavy forging capacity necessary to produce full nuclear reactor sets.

The United States is currently being aggressively challenged by its Asian competitors in the development of plug-in hybrid and electric vehicles, and is lagging behind in the production of the advanced batteries that will power them. China, Japan, and South Korea collectively manufacture over 80% of the world’s lithium-ion batteries. The United States does not manufacture any truly high-speed trains, and all future plans for high-speed rail deployment may require international imports.

Domestic clean energy markets

The United States currently leads China, Japan, and South Korea in the domestic market development of two of the six technologies, wind power and carbon capture and storage (CCS) technology. With respect to advanced vehicles, domestic markets are still nascent, and a clear world leader has yet to emerge.

The United States currently lags behind its competition in market development for nuclear power and high-speed rail (HSR). Despite having the world’s largest installed nuclear power capacity, the United States has no new nuclear power plants under construction, whereas China leads with seventeen. Whereas the United States has no HSR capacity to speak of and is still years away from breaking ground on the nation’s first true high-speed line, each of its three Asian competitors has a large and growing domestic market for this clean technology.

Using University Knowledge to Defend the Country

Everyone understands that the United States will need new ideas to meet the threat of terrorism, and indeed, history shows the way. Seventy years ago, the country’s scholars ransacked their respective disciplines for the ideas that won World War II. Academic ideas continued to produce key technologies, including hydrogen bombs and intercontinental ballistic missiles, well into the Cold War. Much work was done through the National Academies, most notably when a National Research Council report persuaded the Navy to launch its massive Polaris submarine program.

Still, that was a long time ago. How well is government using today’s academic insights to fight terrorism? Three years ago I asked 30 specialists to review what academic research has to say for a comprehensive volume entitled WMD Terrorism: Science and Policy Choices (MIT Press, 2009). As expected, we found a large, insightful body of literature, much of which drew on disciplines, from nuclear physics to game theory, that government could never have sorted out for itself. The really striking thing, however, was how often U.S. policy failed to reflect mainstream academic insights.

There are a variety of instances in which homeland security policy seems to be at odds with mainstream academic research. I will focus on three: managing public behavior after an attack with weapons of mass destruction (WMD), mitigating economic damage from a so-called dirty bomb, and designing cost-effective R&D incentives. Although these examples are important in their own right, they also point to a more systematic problem in the way federal agencies fund and use homeland security research.

Managing public behavior after a WMD attack. Washington insiders routinely claim that certain attacks on the U.S. homeland—the setting off of improvised explosive devices, assaults by suicide bombers, or the release of radioactive isotopes from a dirty bomb—would cause psychological damage out of all proportion to any physical impact. For example, Washington counterterrorism consultant Steven Emerson told CNBC in 2002 that a dirty bomb that killed no one at all would trigger “absolutely enormous” panic, deal “a devastating blow to the economy,” and “supersede even the 9/11 attacks.” Training scenarios such as the U.S. government’s Topoff exercises are similarly preoccupied with containing public panic. According to an article in the March 14, 2003, Journal of Higher Education, the first Topoff exercise featured widespread civil unrest after a biological attack and National Guard troops shooting desperate civilians at an antidote distribution center. As Monica Schoch-Spana of Johns Hopkins University remarked, government exercises frequently stress this image of a “hysterical, prone-to-violence” public.

The cultural roots of this expectation are deep. Indeed, scenes of rioting and chaos go back to the world-ending plague depicted in Mary Shelley’s 1826 novel The Last Man. Since then, the theme has become a reliable stock ingredient for 20th-century science fiction, from Philip Wylie’s Cold War–era nuclear holocaust novel Tomorrow to Richard Preston’s 1997 bioterrorism novel The Cobra Event. From the beginning, governments have shown a distinct readiness to believe such scenarios. Indeed, British Prime Minister Stanley Baldwin was already persuaded in the 1930s that public panic after air raids would deal a “knockout blow” to modern societies.

But is any of this true? Mary Shelley’s predictions notwithstanding, academics who study real disasters have repeatedly found that the public almost never panics. Confronted with this evidence, today’s homeland security officials usually argue that WMD are different. But this too is wrong. Social scientists know a great deal about how civilian populations have responded to the use of WMD, including, for example, atomic weapons at Hiroshima, wartime firestorms in Germany, and the 1918 influenza pandemic. In all of these cases, victims were overwhelmingly calm and orderly and even risked their lives for strangers. Indeed, antisocial behaviors such as crime often declined after attacks. Social psychology studies of public attitudes toward various WMD agents, particularly radioactivity, conducted since the 1970s have further reinforced the conclusion that the public would remain calm. The problem, so far, has been persuading authorities to listen. When researchers such as Carnegie Mellon University’s Baruch Fischhoff complain that U.S. policymakers are “deliberately ignoring behavioral research” and “preferring hunches to science,” their frustration seems palpable.

Facts, as Ronald Reagan (and John Adams before him) used to say, “are stubborn things.” Here, alas, facts have not been stubborn enough. It is bad enough for government officials to waste time and money on exercises that simulate a myth. The deeper problem is that such myths can become self-fulfilling. Cooperation, after all, requires a certain faith in others. Without that, even rational people will eventually decide that every man for himself is the right strategy. U.S. leaders have traditionally fought such outcomes by reminding Americans that the real enemy is fear itself, that communities are strong and can be trusted to work together. The problem is that too many officials don’t really believe this. As University of Colorado sociologist Kathleen Tierney has remarked, New Orleans’ citizens met Hurricane Katrina with courage and goodwill. This, however, did not prevent fragmentary and mostly wrong initial reports of looting from prompting a debilitating “elite panic.” The result was a hasty shoot-to-kill policy and, even worse, a systematic reallocation of government effort away from badly needed rescue operations to patrolling quiet neighborhoods. As things stand, federal leaders could easily end up repeating this error on a larger scale.

Mitigating the economic impact of dirty bombs. Almost by definition, terrorism requires weapons that have consequences far beyond their physical effects. Here, the archetypal example is a dirty bomb that spreads small amounts of radioactivity over a wide area. In this situation, medical casualties would probably be minimal. Instead, the main repercussions would involve cleanup costs and, especially, the economic ripple effects generated by government-ordered evacuations.

Eight years after September 11, economists know a great deal about how a dirty bomb would affect the economy. For example, University of Southern California Professors James Moore, Peter Gordon, and Harry Richardson and their collaborators have performed detailed simulations of dirty bomb attacks on Los Angeles’ ports and downtown office buildings. Depending on the scenario, they found that losses from business interruptions, declining property values, and so forth would range from $5 billion to $40 billion. Their models, however, are extremely sensitive to estimates of how long specific assets, notably freeways, remain closed. In practice, Moore et al. assume that first responders will ignore these economic effects and indiscriminately close all facilities within a predefined radius. Politically, this seems like a sensible prediction. But is it good policy? Government security experts frequently joke that dirty bombs are mainly effective as “weapons of mass disruption.” If so, it would make sense to use Moore et al.’s models to evaluate different evacuation plans that balance economic damage against increased medical risk. This could be done, for example, by allowing port workers and truck drivers to operate critical assets for limited periods of time so that critical parts continue flowing to distant factories. Ten years ago, authorities had no practical way to know which assets to keep open. Moore et al.’s work potentially changes this. The problem, for now, is that most public officials don’t know about it.
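
The sensitivity to closure duration can be illustrated with a toy calculation. The sketch below is not Moore et al.’s model; the daily loss rate and the fraction of losses avoided by limited operation are hypothetical placeholders, chosen only to show how heavily the estimated damage depends on how long assets stay closed and on whether critical facilities are allowed to keep operating at all:

```python
# Toy illustration only: cumulative business-interruption loss as a function
# of closure duration, under hypothetical parameters (not Moore et al.'s values).
DAILY_LOSS_BN = 0.25             # assumed loss per day of full closure, in $ billions
PARTIAL_OPERATION_SAVINGS = 0.4  # assumed fraction of daily losses avoided by limited operation

def interruption_loss(closure_days: int, partial_operation: bool) -> float:
    """Cumulative loss in $ billions for a given closure length and policy."""
    factor = (1 - PARTIAL_OPERATION_SAVINGS) if partial_operation else 1.0
    return closure_days * DAILY_LOSS_BN * factor

for days in (10, 30, 120):
    full = interruption_loss(days, partial_operation=False)
    limited = interruption_loss(days, partial_operation=True)
    print(f"{days:3d} days closed: full shutdown ${full:.1f}B vs. limited operation ${limited:.1f}B")
```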

There is also a deeper problem: The power of dirty bombs comes from Americans’ fear of radiation. Yet many scholars argue that this fear is exaggerated. If so, government should be able to educate citizens so that dirty bombs become less frightening and also less appealing to terrorists in the first place. This, of course, is a notoriously hard problem. Thirty years ago, the nuclear power industry hired the country’s best lawyers and public relations firms to explain radiation risk and failed miserably. But that was before social psychologists developed sophisticated new insights into the public’s often subconscious response to nuclear fear. Many scholars, notably including University of Oregon psychologist Paul Slovic, think that new communication campaigns that targeted these “mental models” could significantly reduce public anxiety. So far, however, homeland security agencies have shown only minimal interest.

Designing government R&D incentives. Everyone recognizes that defending the United States will require dozens of new technologies. So far, though, the record has been discouraging. Five years ago, Congress appropriated $5.6 billion for a “Bioshield” program aimed at developing new vaccines to fight biological weapons. Despite this, almost nothing has happened. Beltway pundits like to explain this failure by saying that drug discovery is expensive and more money is needed. This, however, cannot be squared with reliable estimates that put per-drug discovery costs at just $800 million. Instead, the real problem seems to have been an attempt by Congress to micromanage how the money was spent by, for example, specifying that procurement agencies could not sign contracts to buy drugs that were still under development. This, however, meant asking drug companies to invest in R&D without any assurance that the government would offer a per-unit price that covered their investment. Economists have long known that such arrangements can and do give government negotiators enormous bargaining leverage. Indeed, drug companies can even end up manufacturing drugs at prices that fail to cover their R&D costs. Although this might sound like a good deal for the taxpayer, drug companies were too smart to invest in the first place.

Predictably, Congress has since passed a second statute (originally Bioshield II, now the Pandemic and All Hazards Preparedness Act of 2006) that offers “milestone” payments so that companies are no longer required to make all of their R&D investments up front. This, however, creates a much more fundamental problem. When Bioshield II was first being debated, Sen. Joe Lieberman (I-CT) told his colleagues that even the new incentives might not be sufficient. Instead, Lieberman warned that “only industry can give us a clear answer to these questions,” and this would require a process of “government listening and industry speaking.” Of course, Lieberman must have known that such a process would be like asking a used car salesman how much money he “needed” to close a deal. The problem was that neither Lieberman nor anyone else in Congress knew much about drug development costs. If you don’t know how much something costs, what price should you offer?

Congress never really answered this question. Instead, it outsourced Lieberman’s dilemma to the executive branch by creating a new expert body, the Biomedical Advanced Research and Development Agency (BARDA), with broad powers to design whatever incentives seemed best under the circumstances. What Congress did not explain, of course, is exactly how BARDA should go about choosing between, say, offering drugmakers a prize for the best vaccine and hiring them to do contract research. The obvious Washington temptation, of course, would be for BARDA to treat the decision as an ideological fight with, say, Reaganites favoring prizes and Obamaites calling for government-funded research. No private-sector firm would do business this way. Its shareholders could and would demand that their CEO develop a sharp-pencil business case for choosing one strategy over the other. Taxpayers should expect no less.

We already know what our hypothetical CEO would do: hire a topflight academic economist to consult. After all, economists who study “mechanism design” problems have written literally thousands of mathematically rigorous papers about how to design incentives when seller costs are unknown. Government should similarly seek out and listen to these insights.

A systemic problem

The foregoing examples are just that: examples. In truth, many more insights could be mined. Possible places to look include the extensive survey literature on terrorist movements around the world; the literature on why small fringe groups persist and, especially, how they fail; and the largely forgotten history of civil defense and firefighting operations in World War II. In at least a few cases, government’s need is urgent. Nature reported on September 3, 2009, that companies manufacturing artificial DNA that can be used to make smallpox were locked in a standards war over how much to scrutinize incoming orders. Political scientists and economists have been studying when and how government can intervene in this type of industrial controversy since the 1950s. At least potentially, government could use these insights to tip the private sector to whichever outcomes it deems best for the nation. For now, though, many agencies do not seem to understand that intervention is even an option.

What has gone wrong? The basic problem is that homeland security agencies have fallen into the habit of framing research questions first and only later asking for advice. Here, the underlying rationale seems to be that university faculty are smart people who can correctly answer any question posed to them. The problem with this approach, useful though it can be, is that it scales poorly. Indeed, universities have famously rejected such methods of establishing truth—variously labeled “arguments from authority” and “scholasticism”—since the 11th century. And historically, government experiments with telling academics what to study have usually failed. During World War I, U.S. agencies inducted thousands of scientists into the Army and told them what problems to work on. Despite Thomas Edison’s leadership, the system produced few results and was widely seen as a waste of money.

There is, of course, a better way. Although no one denies the role of individual brilliance, the real power of university research comes from community. The discovery of atomic fission, for instance, would never have been possible without 40 years of sustained effort by hundreds of scientists. Similar stories could also be told for radar, operations research, acoustic torpedoes, and most other wartime triumphs. In each case, the key step was realizing that relevant academic knowledge existed and was ripe for exploitation. Crucially, however, no government R&D czar could know this in advance. In the first instance, at least, the decision to pursue a particular research question had to originate with the researchers themselves. Government’s crucial contribution was in knowing how to listen.

Notoriously, this is more easily said than done. In the end, success will depend on government R&D managers’ readiness to become better consumers of university knowledge. Still, talent isn’t everything. Institutional arrangements can markedly improve the chances of success. During World War II, individual faculty members were repeatedly urged to submit research ideas. The resulting proposals were then allowed to percolate upward through a series of peer-review committees until only the best survived. Today, something similar could be accomplished by creating faculty advisory boards that collectively represent a wide assortment of disciplines. In addition to contributing their own ideas, members would monitor their respective disciplines for new opportunities and emerging ideas. (Practically all biotechnology companies maintain academic advisory boards for exactly this reason.) Finally, government could offer “blue sky” grants that give applicants broad discretion to pick their own research topics. Once again, however, peer review would be needed to reject the inevitable bad ideas and identify gems.

Agencies, of course, can and should feel free to ask academics specific questions. The problem, for now, is that the pendulum has swung too far. During the past four years, I have been asked several times to organize “tabletop exercises” in which Coast Guard officers, say, are asked to respond to a simulated terrorist attack. The question is what I or any academic could bring to such an exercise. Does the Coast Guard really think that we can tell it where to put its machine guns? Such examples suggest that homeland security’s current emphasis on predefined, focused topics is becoming wasteful. At the same time, government’s record of eliciting the kinds of really big ideas that won World War II has been spotty.

Until recently, this analysis might have been seen as a criticism of the Bush administration. Today, however, Congress has increasingly delegated R&D incentives to new and presumably more sophisticated specialist agencies like BARDA. Furthermore, the White House is now in the hands of self-described “smart power” advocates. However tentative, these are encouraging signs. Americans have a long history of using university knowledge to defend the country. It is time that the nation reconnect with that tradition.

Calming Our Nuclear Jitters

The fearsome destructive power of nuclear weapons provokes understandable dread, but in crafting public policy we must move beyond this initial reaction to soberly assess the risks and consider appropriate actions. Out of awe over and anxiety about nuclear weapons, the world’s superpowers accumulated enormous arsenals of them for nearly 50 years. But then, in the wake of the Cold War, fears that the bombs would be used vanished almost entirely. At the same time, concerns that terrorists and rogue nations could acquire nuclear weapons have sparked a new surge of fear and speculation.

In the past, excessive fear about nuclear weapons led to many policies that turned out to be wasteful and unnecessary. We should take the time to assess these new risks to avoid an overreaction that will take resources and attention away from other problems. Indeed, a more thoughtful analysis will reveal that the new perceived danger is far less likely than it might at first appear.

Albert Einstein memorably proclaimed that nuclear weapons “have changed everything except our way of thinking.” But the weapons actually seem to have changed little except our way of thinking, as well as our ways of declaiming, gesticulating, deploying military forces, and spending lots of money.

To begin with, the bomb’s impact on substantive historical developments has turned out to be minimal. Nuclear weapons are, of course, routinely given credit for preventing or deterring a major war during the Cold War era. However, it is increasingly clear that the Soviet Union never had the slightest interest in engaging in any kind of conflict that would remotely resemble World War II, whether nuclear or not. Its agenda emphasized revolution, class rebellion, and civil war, conflict areas in which nuclear weapons are irrelevant. Thus, there was no threat of direct military aggression to deter. Moreover, the possessors of nuclear weapons have never been able to find much military reason to use them, even in principle, in actual armed conflicts.

Although they may have failed to alter substantive history, nuclear weapons have inspired legions of strategists to spend whole careers agonizing over what one analyst has called “nuclear metaphysics,” arguing, for example, over how many MIRVs (multiple independently targetable reentry vehicles) could dance on the head of an ICBM (intercontinental ballistic missile). The result was a colossal expenditure of funds.

Most important for current policy is the fact that contrary to decades of hand-wringing about the inherent appeal of nuclear weapons, most countries have actually found them to be a substantial and even ridiculous misdirection of funds, effort, and scientific talent. This is a major if much-underappreciated reason why nuclear proliferation has been so much slower than predicted over the decades.

In addition, the proliferation that has taken place has been substantially inconsequential. When the quintessential rogue state, Communist China, obtained nuclear weapons in 1964, Central Intelligence Agency Director John McCone sternly proclaimed that nuclear war was “almost inevitable.” But far from engaging in the nuclear blackmail expected at the time by almost everyone, China built its weapons quietly and has never made a real nuclear threat.

Despite this experience, proliferation anxiety continues to flourish. For more than a decade, U.S. policymakers obsessed about the possibility that Saddam Hussein’s pathetic and technologically dysfunctional regime in Iraq could in time obtain nuclear weapons, even though it took the far more advanced Pakistan 28 years. To prevent this imagined and highly unlikely calamity, damaging and destructive economic sanctions were imposed and then a war was waged, and each venture has probably resulted in more deaths than were suffered at Hiroshima and Nagasaki combined. (At Hiroshima and Nagasaki, about 67,000 people died immediately and 36,000 more died over the next four months. Most estimates of the Iraq war have put total deaths there at about the Hiroshima-Nagasaki levels, or higher.)

Today, alarm is focused on the even more pathetic regime in North Korea, which has now tested a couple of atomic devices that seem to have been fizzles. There is even more hysteria about Iran, which has repeatedly insisted it has no intention of developing weapons. If that regime changes its mind or is lying, experience suggests it is likely to find that, except for stoking the national ego for a while, the bombs are substantially valueless and a very considerable waste of money and effort.

A daunting task

Politicians of all stripes preach to an anxious, appreciative, and very numerous choir when they, like President Obama, proclaim atomic terrorism to be “the most immediate and extreme threat to global security.” It is the problem that, according to Defense Secretary Robert Gates, currently keeps every senior leader awake at night.

This is hardly a new anxiety. In 1946, atomic bomb maker J. Robert Oppenheimer ominously warned that if three or four men could smuggle in units for an atomic bomb, they could blow up New York. This was an early expression of a pattern of dramatic risk inflation that has persisted throughout the nuclear age. In fact, although expanding fires and fallout might increase the effective destructive radius, the blast of a Hiroshima-size device would “blow up” about 1% of the city’s area—a tragedy, of course, but not the same as one 100 times greater.

In the early 1970s, nuclear physicist Theodore Taylor proclaimed the atomic terrorist problem to be “immediate,” explaining at length “how comparatively easy it would be to steal nuclear material and step by step make it into a bomb.” At the time he thought it was already too late to “prevent the making of a few bombs, here and there, now and then,” or “in another ten or fifteen years, it will be too late.” Three decades after Taylor, we continue to wait for terrorists to carry out their “easy” task.

In contrast to these predictions, terrorist groups seem to have exhibited only limited desire and even less progress in going atomic. This may be because, after brief exploration of the possible routes, they, unlike generations of alarmists, have discovered that the tremendous effort required is scarcely likely to be successful.

The most plausible route for terrorists, according to most experts, would be to manufacture an atomic device themselves from purloined fissile material (plutonium or, more likely, highly enriched uranium). This task, however, remains a daunting one, requiring that a considerable series of difficult hurdles be conquered in sequence.

Outright armed theft of fissile material is exceedingly unlikely not only because of the resistance of guards, but because chase would be immediate. A more promising approach would be to corrupt insiders to smuggle out the required substances. However, this requires the terrorists to pay off a host of greedy confederates, including brokers and money-transmitters, any one of whom could turn on them or, either out of guile or incompetence, furnish them with stuff that is useless. Insiders might also consider the possibility that once the heist was accomplished, the terrorists would, as analyst Brian Jenkins none too delicately puts it, “have every incentive to cover their trail, beginning with eliminating their confederates.”

If terrorists were somehow successful at obtaining a sufficient mass of relevant material, they would then probably have to transport it a long distance over unfamiliar terrain and probably while being pursued by security forces. Crossing international borders would be facilitated by following established smuggling routes, but these are not as chaotic as they appear and are often under the watch of suspicious and careful criminal regulators. If border personnel became suspicious of the commodity being smuggled, some of them might find it in their interest to disrupt passage, perhaps to collect the bounteous reward money that would probably be offered by alarmed governments once the uranium theft had been discovered.

Once outside the country with their precious booty, terrorists would need to set up a large and well-equipped machine shop to manufacture a bomb and then to populate it with a very select team of highly skilled scientists, technicians, machinists, and administrators. The group would have to be assembled and retained for the monumental task while no consequential suspicions were generated among friends, family, and police about their curious and sudden absence from normal pursuits back home.

Members of the bomb-building team would also have to be utterly devoted to the cause, of course, and they would have to be willing to put their lives and certainly their careers at high risk, because after their bomb was discovered or exploded they would probably become the targets of an intense worldwide dragnet operation.

Some observers have insisted that it would be easy for terrorists to assemble a crude bomb if they could get enough fissile material. But Christoph Wirz and Emmanuel Egger, two senior physicists in charge of nuclear issues at Switzerland’s Spiez Laboratory, bluntly conclude that the task “could hardly be accomplished by a subnational group.” They point out that precise blueprints are required, not just sketches and general ideas, and that even with a good blueprint the terrorist group would most certainly be forced to redesign. They also stress that the work is difficult, dangerous, and extremely exacting, and that the technical requirements in several fields verge on the unfeasible. Stephen Younger, former director of nuclear weapons research at Los Alamos Laboratories, has made a similar argument, pointing out that uranium is “exceptionally difficult to machine” whereas “plutonium is one of the most complex metals ever discovered, a material whose basic properties are sensitive to exactly how it is processed.” Stressing the “daunting problems associated with material purity, machining, and a host of other issues,” Younger concludes, “to think that a terrorist group, working in isolation with an unreliable supply of electricity and little access to tools and supplies” could fabricate a bomb “is farfetched at best.”

Under the best circumstances, the process of making a bomb could take months or even a year or more, which would, of course, have to be carried out in utter secrecy. In addition, people in the area, including criminals, may observe with increasing curiosity and puzzlement the constant coming and going of technicians unlikely to be locals.

If the effort to build a bomb was successful, the finished product, weighing a ton or more, would then have to be transported to and smuggled into the relevant target country where it would have to be received by collaborators who are at once totally dedicated and technically proficient at handling, maintaining, detonating, and perhaps assembling the weapon after it arrives.

The financial costs of this extensive and extended operation could easily become monumental. There would be expensive equipment to buy, smuggle, and set up and people to pay or pay off. Some operatives might work for free out of utter dedication to the cause, but the vast conspiracy also requires the subversion of a considerable array of criminals and opportunists, each of whom has every incentive to push the price for cooperation as high as possible. Any criminals competent and capable enough to be effective allies are also likely to be both smart enough to see boundless opportunities for extortion and psychologically equipped by their profession to be willing to exploit them.

Those who warn about the likelihood of a terrorist bomb contend that a terrorist group could, if with great difficulty, overcome each obstacle and that doing so in each case is “not impossible.” But although it may not be impossible to surmount each individual step, the likelihood that a group could surmount a series of them quickly becomes vanishingly small. Table 1 attempts to catalogue the barriers that must be overcome under the scenario considered most likely to be successful. In contemplating the task before them, would-be atomic terrorists would effectively be required to go through an exercise that looks much like this. If and when they do, they will undoubtedly conclude that their prospects are daunting and accordingly uninspiring or even terminally dispiriting.

It is possible to calculate the chances for success. Adopting probability estimates that purposely and heavily bias the case in the terrorists’ favor—for example, assuming the terrorists have a 50% chance of overcoming each of the 20 obstacles—the chances that a concerted effort would be successful come out to be less than one in a million. If one assumes, somewhat more realistically, that their chances at each barrier are one in three, the cumulative odds that they will be able to pull off the deed drop to one in well over three billion.
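
The arithmetic behind these odds is easy to reproduce. The sketch below simply compounds a fixed per-barrier success probability across the 20 hurdles, treated as independent, as the scenario assumes:

```python
# Reproduces the cumulative odds quoted above: 20 barriers, each with the
# same (deliberately generous) probability of being overcome.
N_BARRIERS = 20

def one_in(p_per_barrier: float, n: int = N_BARRIERS) -> float:
    """Return the 'one in X' denominator for clearing all n barriers."""
    return 1 / (p_per_barrier ** n)

print(f"50% chance per barrier: about 1 in {one_in(0.5):,.0f}")       # ~1 in 1,048,576
print(f"1-in-3 chance per barrier: about 1 in {one_in(1 / 3):,.0f}")  # ~1 in 3.5 billion
```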

Other routes would-be terrorists might take to acquire a bomb are even more problematic. They are unlikely to be given or sold a bomb by a generous like-minded nuclear state for delivery abroad because the risk would be high, even for a country led by extremists, that the bomb (and its source) would be discovered even before delivery or that it would be exploded in a manner and on a target the donor would not approve, including on the donor itself. Another concern would be that the terrorist group might be infiltrated by foreign intelligence.

The terrorist group might also seek to steal or illicitly purchase a “loose nuke” somewhere. However, it seems probable that none exist. All governments have an intense interest in controlling any weapons on their territory because of fears that they might become the primary target. Moreover, as technology has developed, finished bombs have been outfitted with devices that trigger a non-nuclear explosion that destroys the bomb if it is tampered with. And there are other security techniques: Bombs can be kept disassembled with the component parts stored in separate high-security vaults, and a process can be set up in which two people and multiple codes are required not only to use the bomb but to store, maintain, and deploy it. As Younger points out, “only a few people in the world have the knowledge to cause an unauthorized detonation of a nuclear weapon.”

There could be dangers in the chaos that would emerge if a nuclear state were to utterly collapse; Pakistan is frequently cited in this context and sometimes North Korea as well. However, even under such conditions, nuclear weapons would probably remain under heavy guard by people who know that a purloined bomb might be used in their own territory. They would still have locks and, in the case of Pakistan, the weapons would be disassembled.

The al Qaeda factor

The degree to which al Qaeda, the only terrorist group that seems to want to target the United States, has pursued or even has much interest in a nuclear weapon may have been exaggerated. The 9/11 Commission stated that “al Qaeda has tried to acquire or make nuclear weapons for at least ten years,” but the only substantial evidence it supplies comes from an episode that is supposed to have taken place about 1993 in Sudan, when al Qaeda members may have sought to purchase some uranium that turned out to be bogus. Information about this supposed venture apparently comes entirely from Jamal al Fadl, who defected from al Qaeda in 1996 after being caught stealing $110,000 from the organization. Others, including the man who allegedly purchased the uranium, assert that although there were various other scams taking place at the time that may have served as grist for Fadl, the uranium episode never happened.

As a key indication of al Qaeda’s desire to obtain atomic weapons, many have focused on a set of conversations in Afghanistan in August 2001 that two Pakistani nuclear scientists reportedly had with Osama bin Laden and three other al Qaeda officials. Pakistani intelligence officers characterize the discussions as “academic” in nature. It seems that the discussion was wide-ranging and rudimentary and that the scientists provided no material or specific plans. Moreover, the scientists probably were incapable of providing truly helpful information because their expertise was not in bomb design but in the processing of fissile material, which is almost certainly beyond the capacities of a nonstate group.

Khalid Sheikh Mohammed, the apparent planner of the 9/11 attacks, reportedly says that al Qaeda’s bomb efforts never went beyond searching the Internet. After the fall of the Taliban in 2001, technical experts from the CIA and the Department of Energy examined documents and other information that were uncovered by intelligence agencies and the media in Afghanistan. They uncovered no credible information that al Qaeda had obtained fissile material or acquired a nuclear weapon. Moreover, they found no evidence of any radioactive material suitable for weapons. They did uncover, however, a “nuclear-related” document discussing “openly available concepts about the nuclear fuel cycle and some weapons-related issues.”

Just a day or two before al Qaeda was to flee from Afghanistan in 2001, bin Laden supposedly told a Pakistani journalist, “If the United States uses chemical or nuclear weapons against us, we might respond with chemical and nuclear weapons. We possess these weapons as a deterrent.” Given the military pressure that they were then under and taking into account the evidence of the primitive or more probably nonexistent nature of al Qaeda’s nuclear program, the reported assertions, although unsettling, appear at best to be a desperate bluff.

Bin Laden has made statements about nuclear weapons a few other times. Some of these pronouncements can be seen to be threatening, but they are rather coy and indirect, indicating perhaps something of an interest, but not acknowledging a capability. And as terrorism specialist Louise Richardson observes, “Statements claiming a right to possess nuclear weapons have been misinterpreted as expressing a determination to use them. This in turn has fed the exaggeration of the threat we face.”

Norwegian researcher Anne Stenersen concluded after an exhaustive study of available materials that, although “it is likely that al Qaeda central has considered the option of using non-conventional weapons,” there is “little evidence that such ideas ever developed into actual plans, or that they were given any kind of priority at the expense of more traditional types of terrorist attacks.” She also notes that information on an al Qaeda computer left behind in Afghanistan in 2001 indicates that only $2,000 to $4,000 was earmarked for weapons of mass destruction research and that the money was mainly for very crude work on chemical weapons.

Today, the key portions of al Qaeda central may well total only a few hundred people, apparently assisting the Taliban’s distinctly separate, far larger, and very troublesome insurgency in Afghanistan. Beyond this tiny band, there are thousands of sympathizers and would-be jihadists spread around the globe. They mainly connect in Internet chat rooms, engage in radicalizing conversations, and variously dare each other to actually do something.

Any “threat,” particularly to the West, appears, then, principally to derive from self-selected people, often isolated from each other, who fantasize about performing dire deeds. From time to time some of these people, or ones closer to al Qaeda central, actually manage to do some harm. And occasionally, they may even be able to pull off something large, such as 9/11. But in most cases, their capacities and schemes, or alleged schemes, seem to be far less dangerous than initial press reports vividly, even hysterically, suggest. Most important for present purposes, however, is that any notion that al Qaeda has the capacity to acquire nuclear weapons, even if it wanted to, looks farfetched in the extreme.

It is also noteworthy that, although there have been plenty of terrorist attacks in the world since 2001, all have relied on conventional destructive methods. For the most part, terrorists seem to be heeding the advice found in a memo on an al Qaeda laptop seized in Pakistan in 2004: “Make use of that which is available … rather than waste valuable time becoming despondent over that which is not within your reach.” In fact, history consistently demonstrates that terrorists prefer weapons that they know and understand, not new, exotic ones.

Glenn Carle, a 23-year CIA veteran and once its deputy intelligence officer for transnational threats, warns, “We must not take fright at the specter our leaders have exaggerated. In fact, we must see jihadists for the small, lethal, disjointed, and miserable opponents that they are.” Al Qaeda, he says, has only a handful of individuals capable of planning, organizing, and leading a terrorist organization, and although the group has threatened attacks with nuclear weapons, “its capabilities are far inferior to its desires.”

Policy alternatives

The purpose here has not been to argue that policies designed to inconvenience the atomic terrorist are necessarily unneeded or unwise. Rather, in contrast with the many who insist that atomic terrorism under current conditions is rather likely—indeed, exceedingly likely—to come about, I have contended that it is hugely unlikely. However, it is important to consider not only the likelihood that an event will take place, but also its consequences. Therefore, one must be concerned about catastrophic events even if their probability is small, and efforts to reduce that likelihood even further may well be justified.

At some point, however, probabilities become so low that, even for catastrophic events, it may make sense to ignore them or at least put them on the back burner; in short, the risk becomes acceptable. For example, the British could at any time attack the United States with their submarine-launched missiles and kill millions of Americans, far more than even the most monumentally gifted and lucky terrorist group. Yet the risk that this potential calamity might take place evokes little concern; essentially it is an acceptable risk. Meanwhile, Russia, with whom the United States has a rather strained relationship, could at any time do vastly more damage with its nuclear weapons, a fully imaginable calamity that is substantially ignored.

In constructing what he calls “a case for fear,” Cass Sunstein, a scholar and current Obama administration official, has pointed out that if there is a yearly probability of 1 in 100,000 that terrorists could launch a nuclear or massive biological attack, the risk would cumulate to 1 in 10,000 over 10 years and to 1 in 5,000 over 20. These odds, he suggests, are “not the most comforting.” Comfort, of course, lies in the viscera of those to be comforted, and, as he suggests, many would probably have difficulty settling down with odds like that. But there must be some point at which the concerns even of these people would ease. Just perhaps it is at one of the levels suggested above: one in a million or one in three billion per attempt.
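
Sunstein’s figures follow from straightforward compounding. A minimal sketch, under the strong assumption that the yearly probability is constant and independent from year to year:

```python
# Cumulative risk from a fixed yearly attack probability, compounded over a horizon.
P_YEARLY = 1 / 100_000  # Sunstein's assumed yearly probability

def one_in_over(years: int, p_yearly: float = P_YEARLY) -> float:
    """Return the 'one in X' denominator for at least one attack within the horizon."""
    p_cumulative = 1 - (1 - p_yearly) ** years
    return 1 / p_cumulative

print(f"Over 10 years: about 1 in {one_in_over(10):,.0f}")  # ~1 in 10,000
print(f"Over 20 years: about 1 in {one_in_over(20):,.0f}")  # ~1 in 5,000
```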

As for that other central policy concern, nuclear proliferation, it seems to me that policymakers should maintain their composure. The pathetic North Korean regime mostly seems to be engaged in a process of extracting aid and recognition from outside. A viable policy toward it might be to reduce the threat level and to wait while continuing to be extorted, rather than to carry out policies that increase the already intense misery of the North Korean people.

If the Iranians do break their pledge not to develop nuclear weapons (a conversion perhaps stimulated by an airstrike on their facilities), they will probably “use” any nuclear capacity in the same way all other nuclear states have: for prestige (or ego-stoking) and deterrence. Indeed, suggests strategist and Nobel laureate Thomas Schelling, deterrence is about the only value the weapons might have for Iran. Nuclear weapons, he points out, “would be too precious to give away or to sell” and “too precious to waste killing people” when they could make other countries “hesitant to consider military action.”

It seems overwhelmingly probable that, if a nuclear Iran brandishes its weapons to intimidate others or to get its way, it will find that those threatened, rather than capitulating to its blandishments or rushing off to build a compensating arsenal of their own, will ally with others, including conceivably Israel, to stand up to the intimidation. The popular notion that nuclear weapons furnish a country with the capacity to dominate its region has little or no historical support.

The application of diplomacy and bribery in an effort to dissuade these countries from pursuing nuclear weapons programs may be useful; in fact, if successful, we would be doing them a favor. But although it may be heresy to say so, the world can live with a nuclear Iran or North Korea, as it has lived now for 45 years with a nuclear China, a country once viewed as the ultimate rogue.

Should push eventually come to shove in these areas, the problem will be to establish orderly deterrent and containment strategies and to avoid the temptation to lash out mindlessly at fancied threats. Although there is nothing wrong with making nonproliferation a high priority, it should be topped with a somewhat higher one: avoiding policies that can lead to the deaths of tens or hundreds of thousands of people under the obsessive sway of worst-case scenario fantasies.

In the end, it appears to me that, whatever their impact on activist rhetoric, strategic theorizing, defense budgets, and political posturing, nuclear weapons have had at best a quite limited effect on history, have been a substantial waste of money and effort, do not seem to have been terribly appealing to most states that do not have them, are out of reach for terrorists, and are unlikely to materially shape much of our future.

Better U.S. Health Care at Lower Cost

In the United States, the amount of money spent on health care by all sources, including government, private employers, and individuals, is approximately $7,500 a year per person. In other advanced industrial nations, such as Germany, the bill is roughly one-third less. Yet scores on health care quality measures in the United States are not generally higher than in other wealthy countries and compare poorly on multiple measures. Nor is higher spending buying greater user satisfaction, as chronically ill U.S. patients, who are in most frequent need of care, are generally less satisfied with their care than are their counterparts in other wealthy countries.

This picture can be changed. As the Institute of Medicine’s Roundtable on Evidence-Based Medicine has found through a series of three workshops, there is substantial evidence that the United States can attain better health with less money. Compared with other wealthy nations, the United States’ higher levels of health care spending are primarily attributable to higher prices for products and services rather than to higher service volumes. Nonetheless, opportunities exist to lower both volume and unit price of services without jeopardizing quality. Some methods for cutting excess costs are incorporated into one or another of the health care reform plans that have been proposed by both political parties. But no plan takes full advantage of the range of cost-cutting tools and enabling public policies that the roundtable estimates would lower per capita health care spending by double-digit percentages while protecting or raising quality of care.

Though the need for change cannot be overstated, trends have been running in the wrong direction. Unsustainable growth in U.S. health care spending—growth in spending on health care in excess of the growth in gross domestic product—outpaced other industrialized nations by 30% from 2000 to 2006, without evidence of a proportionally higher health dividend. Such excess growth is crowding out other spending priorities of federal and state governments and of employers. For example, although education is key to maintaining the nation’s standard of living in an increasingly competitive world, average state spending on Medicaid recently eclipsed state spending on public education. In the private sector, rapid growth in health care spending is suppressing growth in wages, employment, and corporate global competitiveness.

Rising health care costs also wreak havoc among non-affluent Americans who do not qualify for Medicaid or assistance through the Children’s Health Insurance Program. A recent Kaiser Family Foundation poll found that 53% of respondents reported that their family had decreased its use of medical care in the past 12 months because of cost concerns. In addition, 19% reported serious financial problems due to medical bills, 13% had depleted all or most of their savings, and 7% were unable to pay for basic necessities such as food, housing, or heat.

Fixing this situation will require the U.S. health industry to more rapidly develop and adopt innovations that improve value without lowering quality of care or slowing biomedical progress. In essence, it must become a learning health care system, drawing continuously on insights from outcomes research and internal organizational performance assessment to more rapidly improve. What new public policies would enable our health care system to meet the implied annual productivity goal of generating more health with fewer dollars?

Inventory of wasted spending

There are five broad categories of waste in health care: providing services that are unlikely to improve health, using inefficient methods to deliver useful services, charging noncompetitive prices for services and products, inducing or incurring excess administrative costs in the health care and health insurance sectors, and missing opportunities to lower net spending via illness and injury prevention. Collectively, these streams of embedded waste represent a double-digit percentage opportunity to reduce per capita health care spending while improving clinical outcomes and patients’ experience of their care. Estimates of savings from policies to address them are included in an upcoming roundtable report.

Among illustrative problems, it is estimated that more than 3 million preventable serious adverse events occur in hospitals annually, with over half attributable to hospital-acquired infections and adverse drug events. Conservatively estimated, avoiding preventable defects in hospital care can produce net annual savings of $16.6 billion.

Medical imaging is ripe for waste reduction. Although extremely beneficial in a growing number of clinical circumstances, imaging too often is used in cases where it is likely to add no clinical value and may harm patients through radiation exposure—a fact rarely discussed with patients. As an illustration of the short-term gains possible, at Virginia Mason Medical Center, which operates in a market with a long-standing tradition of efficient use of health care services, physicians treating patients with back pain recently achieved a 30% reduction in MRI use while speeding patients’ recoveries.

As with imaging, almost all service categories are marked by excess use, as when laboratory tests are performed without a clinical rationale. Some of these tests lead to further tests, such as cardiac catheterizations, that carry substantial risk of serious complications. What causes excess use of services? Failure to rapidly access prior medical records is one prominent cause. For example, it is estimated that $8.2 billion in annual spending is due to duplicative testing in hospitals, most often because physicians cannot readily obtain prior test results. Another source is that hospitals in some areas have too many beds and too many affiliated medical specialists who, in turn, are more inclined to order services of unproven value in order to fill available capacity. This phenomenon is demonstrated in studies where large variations in service use relative to population size and illness occur among hospitals in the same metropolitan area.

Health service and product prices that are not determined by robust market competition cause substantial wasted spending. In one study, introducing competitive bidding for durable medical equipment, such as wheelchairs and oxygen equipment, lowered prices offered to Medicare by more than 25% for many of the products. Mergers among insurers, among hospitals, and among physician groups—a growing trend—more often than not boost prices due to monopoly or oligopoly pricing power. Noncompetitive pricing resulting from hospital mergers is now estimated to account for approximately 0.5% of annual health care spending.

Administrative waste imposes excess direct and indirect costs on health care consumers, clinicians, and health plans alike. Patients waste time filling out the same paper forms repeatedly or waiting for doctors with poorly managed schedules. This, in turn, lowers U.S. workforce productivity. Health system productivity losses accrue from the excessive time that physicians and their staffs spend on valueless paperwork, much of it the result of a failure to standardize billing and insurance-related activities. These activities consume roughly 43 minutes a day per physician on average (more than three hours per week, or nearly three weeks per year), and the value of this time translates into approximately $31 billion per year nationwide. The amount of time that physicians and staff members spend on various administrative tasks results, in large measure, from requirements imposed by third-party payers, often insurance companies. But variation in payers’ requirements of providers has been shown to add little or nothing to health care value. Indeed, roughly $26 billion of the total spent annually on administrative costs is attributable to differences in payer billing rules that do not add any value, according to estimates by the Massachusetts General Hospital Physicians Organization.
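
As a rough back-of-the-envelope check, not a calculation taken from the article or the roundtable report, the per-physician time figures follow from the 43-minutes-a-day estimate once some workload assumptions are made; the workdays, weeks, and work-week hours below are illustrative assumptions only.

# Illustrative sketch: reproduce the per-physician time burden implied by the
# 43-minutes-per-day figure. Workdays per week, weeks per year, and hours per
# physician work week are assumptions for illustration, not article figures.
MINUTES_PER_DAY = 43
WORKDAYS_PER_WEEK = 5      # assumed
WEEKS_PER_YEAR = 48        # assumed
HOURS_PER_WORK_WEEK = 60   # assumed length of a typical physician work week

hours_per_week = MINUTES_PER_DAY * WORKDAYS_PER_WEEK / 60    # about 3.6 hours
hours_per_year = hours_per_week * WEEKS_PER_YEAR             # about 172 hours
work_weeks_per_year = hours_per_year / HOURS_PER_WORK_WEEK   # just under 3 weeks

print(f"{hours_per_week:.1f} hours per week")
print(f"{hours_per_year:.0f} hours per year")
print(f"{work_weeks_per_year:.1f} physician work weeks per year")

Scaling such a figure to the national $31 billion estimate would require further assumptions about physician and staff counts and the hourly value of their time, which the article does not spell out.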

These examples illustrate the diversity and magnitude of waste that could be trimmed without loss of health or reductions in the quality of patients’ experience of their care. Fortunately, there is an extensive inventory of tools available for trimming this waste.

Waste-trimming tools

Electronic health records (EHRs) can be both a waste-trimming tool and an enabler of other tools. If implemented successfully nationwide, EHRs could yield an estimated $77 billion in annual savings while improving health outcomes. An EHR provides an easily accessible view of a patient’s health information generated by all encounters in any care-delivery setting. Such information typically includes the patient’s demographics, past medical history, physical examination findings, progress notes, medications, immunizations, laboratory data, and radiology reports. Giving clinicians access to full information enables them to order and provide safer, faster, and better-coordinated care.

More important for sustained gains in the efficiency of U.S. health care, EHRs also enable clinicians to apply the tools of systems engineering to continuously improve how they provide care on a routine basis. Systems engineering applications can lower annual U.S. health care spending by a conservatively estimated $62 billion over the next ten years and greatly improve the safety and quality of care. For example, 40 million people are hospitalized each year. Reengineering the discharge process, by better educating patients about what they need to do after leaving the hospital and by improving communication among inpatient clinicians, outpatient clinicians, and at-home caregivers, can dramatically reduce readmissions, yielding savings of nearly $400 per hospitalization. Exemplars such as Virginia Mason and ThedaCare in Appleton, Wisconsin, report average reductions of over 30% in the cost of each clinical service that they systematically reengineer. Similarly, reengineering health insurer administrative activities, for example by instituting electronic funds transfers to providers and standardizing payer credentialing of clinical providers, could yield savings of $332 billion over the next decade.

Expanding the availability of palliative care for serious advanced illness and of quality end-of-life care would save at least $6 billion annually. Such care brings patients, families, and treating physicians together to discuss pain control and other quality-of-life issues as well as the likely outcomes of available treatment options. When patients and family members are given objective information about the likely benefits and risks of available options, they more often choose less invasive and less costly treatments, often sparing the patient great suffering from treatments that offer little or no additional longevity.

What would happen if all health providers were strongly motivated to attain the levels of quality and cost that are now generally accepted as benchmarks of high performance? In one estimate, hospitals’ production cost per admission (a common standard of comparison) and mortality rates would both drop by approximately 15%. Two plausible methods of motivating such attainment have been demonstrated: tying the amount that consumers pay toward health insurance, or toward care at the time of service, to the comparative cost-effectiveness of the insurers or clinicians they select; and tying the amount paid to clinicians or clinician organizations to the comparative cost-effectiveness of the care they deliver. An example of the latter is bundled payment that is conditioned on meeting high standards of care quality. In bundling, a clinician or clinician organization such as a medical group or hospital is paid an all-inclusive amount for treating a patient with a given illness or injury rather than being paid a fee for each service provided. Such a payment method shifts clinician focus from service volume to service value. According to one projection based on serving commercially insured patients, if hospitals agreed to accept bundled payments geared to high-value care benchmarks for 13 common types of treatment, such as year-long care for asthma, diabetes, and heart disease, national spending might be reduced by up to $167.5 billion. There is evidence that payment methods geared to service value rather than service volume can improve health outcomes as well. In a Medicare study of 225 hospitals that committed to this form of payment for patients with heart attacks, hospitals reduced mortality by an estimated 4,200 deaths and increased use of proven methods to prevent hospital-acquired infections from 69.3% to 92.6%.
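
To make the contrast concrete, here is a toy sketch, entirely hypothetical and not drawn from the projection cited above, of how a payer's cost differs under fee-for-service versus a bundled payment for a year of chronic illness care; every price and service count in it is an invented assumption.

# Toy illustration of fee-for-service vs. bundled payment for one patient-year
# of chronic illness care. All service counts and prices are invented
# assumptions for illustration; they are not figures from the article.
fee_schedule = {"office visit": 120, "lab panel": 45, "imaging study": 400}
services_delivered = {"office visit": 6, "lab panel": 8, "imaging study": 2}

# Under fee-for-service, the payer's cost grows with every service ordered.
fee_for_service_cost = sum(fee_schedule[s] * n for s, n in services_delivered.items())

# Under bundling, the payer pays one all-inclusive amount (assumed here),
# contingent on the provider meeting agreed quality-of-care standards.
bundle_price = 1500            # assumed all-inclusive annual payment
quality_standards_met = True
assumed_quality_penalty = 150  # invented reduction if standards are not met
bundled_cost = bundle_price - (0 if quality_standards_met else assumed_quality_penalty)

print(f"Fee-for-service cost: ${fee_for_service_cost}")  # 1880 with these assumptions
print(f"Bundled payment cost: ${bundled_cost}")          # 1500 regardless of volume

The point of the sketch is simply that the bundled amount does not rise with the number of services ordered, which is why such a payment method shifts the provider's attention from volume to value.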

Public policy opportunities

After baseline waste has been trimmed via methods such as those described above, continuous improvement in the health industry’s ability to deliver valuable health care services would then offer a perpetual means of improving health and bringing growth in health care spending into closer alignment with growth in the nation’s gross domestic product. To attain this vision, the nation will need a coordinated set of new policies across a range of fronts. Many of these policy options were described almost a decade ago in the Institute of Medicine’s pioneering report Crossing the Quality Chasm. But even as its recommendations found fertile soil in some quarters—for example, in the evolution of the National Quality Forum—most remain unimplemented. Three broad, largely unexploited policy options are pivotal.

First, the federal government should strengthen antitrust policies to ensure that no health industry participants are able to opt for noncompetitive price increases and tepid annual gains in quality and cost-efficiency. Second, stronger policies are needed to promote comprehensive health-relevant societal changes. These include enabling chronically ill people to easily access condition-specific and treatment-specific performance comparisons of providers and treatment options; including nutrition education and health-promoting lifestyles in public school curricula; and ensuring that people in all communities have safe, accessible places to play and exercise. The government should give special attention to reducing the burden of obesity on health care spending by prioritizing programs likely to reduce it. In the near term, the president should issue an executive order requiring that an obesity impact statement be developed whenever federal funding is being considered for a project that might significantly affect obesity. As with environmental impact statements, these documents could help encourage all recipients of federal funds to support obesity reduction.

However, the greatest opportunity to improve health system efficiency probably lies in the enactment of federal policies to harmonize the influence that the nation’s health care payers have on the health care decisions of patients and their clinicians. Five facets of harmonization are especially important:

Standardizing measurements of comparative performance. Units of useful performance comparison include multi-component health care systems, hospitals, physician groups, individual clinician-led care teams, treatment options, and treatment delivery methods. Measurement efforts should include clinical outcomes, patients’ experience of care, and combined consumer and payer spending per treatment episode or per year of chronic illness care. Sweden knows, for each hospital and surgeon, how many of its citizens can walk without pain five years after hip-joint replacement. The United States has no such information for the vast majority of the care it is buying.

Fortunately, national momentum to standardize clinical performance measurement across payers is rising, and collaboratives of multiple stakeholders are converging on standardized performance measurements for public reporting and performance improvement. Groups such as the National Quality Forum, the National Committee for Quality Assurance, and the National Priorities Partnership have begun to act, but much faster progress is needed. Once performance measurements are trustworthy and easily accessible, all payers should use the same standardized set to assess providers’ performance.

To help drive evolution of clinical performance measurements, payers should be encouraged to test new measures. But when doing so, payers should fully disclose to consumers and their providers the specifications of the measure being assessed, the rationale behind the measure, and the expected duration of the test. These stipulations have been endorsed in a 2008 “patient charter” by groups such as the American Medical Association and AARP, as well as by multiple other stakeholders. If new measurements require providers to collect new data, the payer should in some cases offer providers an incentive as a temporary bridging step. Past experience suggests the benefits of this approach. Several years ago, when the Centers for Medicare and Medicaid Services tied public quality reporting to its hospital payment, there was an immediate positive hospital response, substantially advancing standardized comparisons of hospital performance. The resulting comparisons have been incorporated in Hospital Compare, a national performance comparison tool that is now widely used by clinicians, hospital managers and their boards, payers, and consumers.

Standardizing payer methods for administrative interactions with providers. Existing multi-stakeholder efforts in administrative simplification provide a solid foundation for standardizing payer interactions with health care providers. Promoting spread of these efforts, which ultimately can ease providers’ administrative burdens, is a ripe opportunity for reducing waste.

For example, the Committee on Operating Rules for Information Exchange, a collaboration of more than 100 industry stakeholders, has been developing and promulgating operating rules and national standards for electronically exchanging data that enable providers to access administrative information before or at the time of service. The data covered include patient eligibility and benefits verification, patient financial liability for various services, patient deductibles, and co-pays. In addition, the Workgroup on Electronic Data Interchange, through its Strategic National Implementation Process, has defined standards for Health ID Cards, and several payers have implemented these standards. The payers have produced millions of magnetic stripe ID cards to enable electronic eligibility determination and provide accurate co-payment information at the point of care. However, adoption is lagging badly in physician offices, where photocopying of magnetic stripe ID cards remains a common practice. Also, the Council for Affordable Quality Healthcare, a nonprofit alliance of health plans, has developed the Universal Provider Datasource, a Web-based electronic service for collecting provider data used in credentialing, claims processing, quality assurance, and emergency response, and in providing such member services as directories and referrals. Approximately 760,000 physicians and other health care professionals in more than 500 organizations now use the system. Providers can enter information free of charge into a central, secure database, then authorize health care organizations to access it, greatly reducing or eliminating redundant paperwork.

Such policies to speed transaction automation and electronic connectivity are essential to moving all stakeholders from today’s cumbersome and error-prone administrative processes to a world of standardized workflows that will enable more attention to the care of patients. Congress now needs to legislatively mandate a timetable by which all parties will adopt uniform standards to support more efficient administrative transactions.

Standardizing payment methods that give providers robust incentives to improve the value of the care they deliver. Experts widely agree that “quality-blind” fee-for-service payments encourage services that may have little or no likelihood of improving health and that can sometimes cause harm. Payer attempts to discourage such services, or to withhold payment for them in individual cases, have often proved unsatisfactory, owing to the sparseness of comparative effectiveness research and the limits on applying such research to specific patients. Promising solutions include providing higher payments for primary care, bundling payments into a single all-inclusive payment, having payers share with providers the savings that result from more efficient care, and conditioning providers’ access to higher payment on their forming better-organized systems of care delivery and management. To avoid repeating the managed care backlash of the 1990s, all such proposed provider payment reforms should include methods of safeguarding or improving quality of care.

One simple example of such a better-organized form of care is the “medical home,” in which health care is organized around the relationship between the patient and a single clinician-led team and, when appropriate, the patient’s family. A more complex form is the accountable care organization (ACO). A typical ACO might include a hospital, primary care physicians, specialists, and potentially other service providers. Services would still be billed on a fee-for-service basis, but the ACO’s clinicians would coordinate care for their shared patients with the goal of meeting targeted reductions in quality flaws and in total annual spending per patient. Because all components of an ACO assume joint accountability for the value of care, they would share in any cost savings, provided that quality of care also improves. An example is an organization such as Kaiser Permanente that integrates care delivery with health insurance, thereby transferring to clinicians accountability for improvements in patient health, customer service, and the total annual cost of patient care.

The power of such value-based provider payment methods in speeding clinical performance improvements and cost reductions hinges on their adoption by all or most payers. Absent such standardization, the strength of the signal to clinicians and other health industry participants is lost. The most important place to begin may be standardization of provider payment methods and improvement incentives used by Medicare, state Medicaid and Children’s Health Insurance Programs, and large commercial insurers, since together they provide the majority of the health industry’s revenue.

One way to entice care providers to support a shift to more performance-dependent payment methods would be to offer national tort reform to clinical service providers in exchange for their acceptance of a major revenue-neutral shift to performance-sensitive payment methods. This societal tradeoff also would serve to reduce “defensive medicine,” the ordering of tests, imaging, and follow-up visits solely to minimize accusations of not doing “everything possible” to protect patients’ health. It is estimated that at least $20 billion could be saved each year by implementing well-crafted tort reform that ensures faster and more reliable payments to patients suffering harm, especially when causal factors are difficult to pinpoint.

Standardizing payer incentives for patients to improve the value of the care they receive. Payers have a number of avenues for encouraging patients to seek high-value care. Providing patients with information on the comparative value of the choices they can make will be key, since many patients are instinctively wary of “cheap” care. For example, payers can structure their reimbursements to reward patients who choose high-value providers, identified as those clinicians, hospitals, and health systems that offer higher-quality care at a relatively low total cost per episode of acute care or per year of chronic illness and preventive care. Payers also can structure their plans to encourage consumers to choose higher-value treatment options or to adhere to physician-recommended treatment. In one estimate, if “value-tiered” provider networks induced all consumers with commercial health insurance to select providers who rank favorably on both low total cost of care and quality, per capita spending on health care would decline by roughly 10%. Such incentives include encouraging consumers to select a health insurance plan that includes only higher-value providers, to select higher-value providers participating in their plan, and to select higher-value treatment options such as generic drugs. The size and direction of the incentives would need to be tailored to fit different beneficiary populations, since large financial penalties for selecting a low-value care option would be unfairly coercive for non-affluent patients.

Coordinating methods for assisting patients and providers to improve health care value. Payers, especially private payers, should be given incentives to pool their efforts to assist patients and providers to improve the value of health care. Examples of possible approaches include:

Private and public payers and physicians in their networks should jointly communicate to consumers the value of having a personal physician who is accountable for coordinating care and approaching regional benchmark performance on measures of quality, service, and low total cost of care.

All payers should contribute to supporting joint obesity prevention efforts, given the evidence about the mounting economic and health burden of this national epidemic.

Payers should collaborate to ensure that all patients with advanced illness have access to and coverage for accredited palliative care programs in all communities.

Payers should converge on a common method of providing clinicians and smaller institutional providers with information on how to improve the value of their clinical services. This should include joint funding by all payers to support efforts by the nation’s highest-value providers to coach their colleagues on how to replicate their success most rapidly.

Within federal privacy guidelines, payers should agree to standards for sharing with providers the identity of other providers involved in their patients’ care to enable providers to better coordinate the care they provide.

Building public-private partnerships

Although most of the health reform debate over reducing health care spending has focused on government-sponsored health benefits programs, more than half of all national health expenditures originate in the private sector. The private sector therefore has a central role to play in lowering per capita health care spending and raising quality of care. Private groups, especially employer- or union-managed self-insured health plans, have the ability to make management decisions about health benefit policies without the rancor that public payers and large insurers often face. Anyone who doubts the seriousness of this problem for public payers need only examine the current public debate around health care reform. Private health benefits sponsors’ unique ability to rapidly test and adopt successful methods makes them natural leaders in payer innovation. But the private sector needs help in disseminating its successes more rapidly. Medicare and Medicaid, because of their concentrated purchasing power, are uniquely positioned to accelerate the spread and impact of successful private-sector innovations in purchasing. Now is the moment to legislate an explicit partnership that combines the strengths of both purchasing sectors in a better-harmonized pursuit of their shared goal of more health for fewer dollars.

To accelerate such cross-payer harmonization, Congress should create safe harbors from antitrust challenges for employers, unions, health insurers, and providers who collaborate to attain one or more of the five key facets of harmonization described above. Congress also should direct the Centers for Medicare and Medicaid Services to develop explicit all-payer harmonization plans. If nonfederal payers are reluctant to collaborate, Congress should then consider inducements, such as changes in federal tax policy.

Given the complexity of the nation’s health care system and how many factors can influence policy success or failure, payers around the country should be given fairly broad latitude, at least initially, in their efforts to move toward greater harmonization. Payer harmonization carries risks as well as benefits, so the more the various parties can test diverse ideas on a modest scale, the better. Action should be expected first in the pooling of health insurance claims data to facilitate public reporting of standardized all-payer measurements of clinician and clinician-organization performance on quality and total cost of care. Such data will also supply the information required to evaluate the effects of payer harmonization initiatives and pinpoint which harmonization methods generate the greatest net benefit under various circumstances. In 2007, Senators Judd Gregg (R-NH) and Hillary Clinton (D-NY) jointly introduced the Medicare Quality Enhancement Act, which detailed a set of public policies that could enable this key initial step. The bill would have made Medicare claims data available to enable consumers and clinicians to understand the relative total cost of care and quality of individual health care providers, while also safeguarding patient privacy. Its timing, coming in the last year of a Congress with little appetite for significant changes, was two years premature. The concept has been supported in the current reform debate by a bipartisan collection of legislators including Senators Warner (D-VA), Cornyn (R-TX), Specter (D-PA), Collins (R-ME), and Lieberman (I-CT). Public comparisons of the value of care delivered by providers are the oxygen of value-improvement efforts.

With national health care reform under intense debate and with an unequivocal need for quick and dramatic action to control costs and improve quality, we face an unprecedented opportunity to accelerate the evolution of a learning health care system that attains ever higher levels of population health at much lower growth rates in health care spending. Compared with the potential for progress, current reform proposals from both political parties appear underpowered. Although fear of vilification by advocates of the status quo understandably inhibits legislative efforts to attain more health with much less money, the impending exhaustion of the Medicare Trust Fund and rising middle-class anxiety about losing insurance coverage may provide the voter pressure needed to overcome the inertia.

The IOM’s series of roundtables provided ample evidence that the biggest barrier to progress is not lack of effective waste-trimming tools. Rather, it is the vulnerability of elected officials to accusations of impairing a service that is both consciously and unconsciously equated with protection from death and suffering. We can and must navigate through this political minefield so that we can accelerate efforts to produce better U.S. health care at a lower cost.

A pox on Smallpox

Smallpox is a severe viral disease that claimed hundreds of millions of lives during the course of history. A uniquely human affliction, it killed a third of its victims and left the survivors disfigured with pockmarks and sometimes blind. From 1966 to 1977, a global vaccination campaign under the auspices of the World Health Organization (WHO) eradicated smallpox from the planet in one of the greatest public health achievements of the 20th century. Since then, the absence of the disease has saved an estimated 60 million lives. In a dark irony, however, eradication created a new vulnerability with respect to the potential use of the smallpox virus as a biological weapon.

Smallpox—The Death of a Disease is an authoritative behind-the-scenes history of smallpox eradication and its aftermath by D. A. Henderson, the U.S. physician who directed the global campaign. At its best, Henderson’s memoir is a compelling human story about overcoming adversity in pursuit of a noble cause. The reader vicariously experiences the deep emotional lows and exhilarating highs of participating in a hugely ambitious international effort whose ultimate success depended on the vision, dedication, and teamwork of a relatively small group of individuals. Although the technical recounting of the eradication program is at times overly detailed for the general reader, the book provides valuable lessons for anyone interested in global health or the management of large and complex multinational projects. Those seeking a balanced, objective treatment of smallpox-related issues should look elsewhere, however. Henderson pulls no punches in assigning credit or blame where he considers it due and in conveying strong personal views on a variety of controversial topics.

The book weaves Henderson’s personal history with that of the smallpox virus. An Ohio native, he attended Oberlin College and then went to medical school at the University of Rochester. In 1955, he was drafted for two years of military service and offered a position with the Epidemic Intelligence Service of the Communicable Disease Center (CDC), later renamed the Centers for Disease Control and Prevention. Although Henderson had no particular interest in infectious diseases, tracking down outbreaks sounded more interesting than doing routine physicals, so he accepted the offer. At the CDC, he was trained in “shoe leather epidemiology,” or the investigation of epidemics by collecting data and interviewing patients. Eventually, he became deeply engaged by public health and decided to devote his career to it.

At the time, smallpox imposed a major burden of illness and death on the developing world. Although the last case in the United States had been in 1949, the CDC considered it almost inevitable that a traveler from a country where smallpox was endemic would reintroduce the disease. Several aspects of smallpox made it theoretically susceptible to eradication, including the easily diagnosed facial rash and the lack of an animal reservoir. In 1958, the Soviet Union had persuaded WHO to launch a smallpox eradication program, but it was seriously underfunded and made little headway.

After Henderson was promoted to section chief at the CDC, he submitted a proposal to the U.S. Agency for International Development for a five-year program of smallpox eradication and measles control in 18 countries of West Africa. Unexpectedly, President Lyndon Johnson decided to fully fund the project in late 1965 and later expanded it. This U.S. engagement tipped the balance in favor of a decision the following year by WHO member states to launch an intensified global smallpox eradication program, although the resolution passed by only two votes. Because the WHO director-general believed that eradication was impossible, he insisted on putting an American in charge so that when the program failed, the United States would be held responsible for the debacle. Henderson was chosen as the sacrificial lamb, and in October 1966, aged 39 and with only 10 years of experience in public health, he flew to Geneva to lead what looked to be a quixotic effort.

At that time, smallpox was endemic in 31 countries, was being routinely imported into 12 more, and was causing between 10 and 15 million cases a year, with 2 million deaths. At WHO Headquarters, the smallpox program was housed in three modest rooms and given a small staff and a shoestring budget. In addition to a skeptical boss, Henderson faced a dysfunctional WHO bureaucracy with six passive and uncooperative regional offices. To overcome these obstacles, Henderson had to be resourceful, pragmatic, and wily, circumventing the regional offices when it was necessary to get things done. He also recruited a small but talented group of epidemiologists with the determination to persevere against great odds.

Smallpox—The Death of a Disease describes the myriad political, logistical, and organizational challenges that the WHO eradication campaign had to overcome during its 11-year history, including wars, floods, refugee crises, and unresponsive governments. Despite repeated setbacks, the team of public health practitioners from several countries persisted in their efforts, inspired by the goal of conquering an ancient scourge and buoyed by an ethos of collegiality and teamwork. A major theme of the book is the improvisational and unorthodox way in which Henderson and his colleagues worked around technical and bureaucratic obstacles. Even so, the ultimate success of the campaign hung in the balance until the last moment.

Two technological advances were essential to the success of smallpox eradication: the development of a stable, freeze-dried vaccine (containing the related but benign vaccinia virus) that did not require refrigeration in tropical countries, and a simple but elegant method of inoculating the vaccine into the recipient’s skin with a bifurcated needle. The basic strategy also evolved over time. The initial approach was mass vaccination, designed to reach at least 80% of the population. But epidemiologists soon discovered that the rapid detection of smallpox outbreaks, followed by the isolation of patients and the vaccination of family members and other contacts in the immediate vicinity, created a “firebreak” of immune people that halted the further spread of the disease. This discovery led Henderson to augment mass vaccination with a targeted strategy called “surveillance-containment.” Because national health ministers had known only mass vaccination, however, they found the new approach hard to accept.

The greatest challenges of the eradication campaign arose in India and Bangladesh because of the high population density and the mobility of migrant laborers and refugees. During the fall and winter of 1972–1973, a massive smallpox epidemic broke out in three impoverished states of northern India with a total population of 189 million. WHO’s strategy was mass vaccination combined with intensive surveillance-containment, including monthly searches for smallpox cases in thousands of villages. This effort involved up to 150,000 health workers and required eight tons of forms and other documentation.

In 1974, as the outbreak in northern India continued to spiral out of control, the program staff faced a deeply discouraging period when it seemed that all their efforts would come to naught. Henderson describes a meeting at which the country team members were exhausted, having worked seven-day weeks for several months in sweltering heat. Four had serious medical ailments, yet “the only problem they would discuss was where to find the additional resources to keep the program going.” They persisted, and gradually the number of smallpox outbreaks began to wane; the last case in India was found in May 1975.

The final battles of the eradication campaign took place in Ethiopia and Somalia, where health workers faced armed villagers who resisted vaccination, uncooperative government officials who suppressed information about outbreaks, and widely dispersed groups of nomads who had to be tracked down and vaccinated. A 30-year-old Somali cook named Ali Maow Maalin was the world’s last case of natural smallpox, the endpoint in a continuing chain of transmission extending back at least 3,500 years.

The remainder of the book is devoted to the post-eradication period, including the painstaking two-year process of verifying that no cases of smallpox remained hidden in some remote corner of the globe. Henderson also addresses the contentious debate over whether to destroy the remaining laboratory stocks of the smallpox virus. After eradication, WHO gradually reduced the number of labs worldwide that possessed the virus from 75 to two authorized repositories: one in the United States and one in Russia. Although the U.S. government initially supported a plan to destroy the smallpox virus stocks, in late 1994 the Pentagon began pushing to retain the live virus for the development of improved defenses against its possible use as a biological weapon. Driving this concern were reports from Soviet defectors that Moscow had maintained a vast clandestine biological warfare program in violation of international law, including the production of smallpox virus as a strategic weapon—a shocking betrayal of the goals of the WHO eradication campaign. Other countries, such as North Korea and Iran, were also suspected of retaining illicit stocks of the virus. Because the routine vaccination of civilians against smallpox had ended in the early 1980s, the world’s population was increasingly vulnerable to a deliberate attack, yet only limited supplies of smallpox vaccine were available.

In the mid-1990s, Henderson reengaged with smallpox issues on a number of fronts, including efforts to train physicians and public health experts about the diagnosis, treatment, and control of the now-unfamiliar disease. After 9/11 and the anthrax letter attacks, the Bush administration appointed him to direct a new office of public health preparedness, where he spearheaded a crash program to procure 200 million doses of smallpox vaccine by the fall of 2003. Henderson disagreed with a plan proposed by Vice President Dick Cheney to vaccinate the entire U.S. population against smallpox, arguing that the risk of serious side effects from the vaccine far outweighed the low probability of a military or terrorist attack with the virus. Over his objections, the White House proceeded with the voluntary vaccination of some 450,000 front-line health workers, but the program collapsed when fewer than 40,000 agreed to be vaccinated.

Henderson discusses at length his opposition to research with the live smallpox virus for the development of medical countermeasures such as antiviral drugs, which began in 1999 at the two WHO-authorized repositories and has continued since then. He questions the value of such research, noting that in tests with infected animals, no drug candidate has been effective when administered after the appearance of fever and rash. In his view, destroying the known stocks of the smallpox virus would set a powerful moral standard for the international community. Although it would be impossible to verify that no hidden caches still existed after the known stocks had been destroyed, WHO member states could formally agree that any scientist or country found to possess the smallpox virus would be deemed guilty of a crime against humanity and subjected to severe sanctions. Proponents of defensive research with the live virus consider Henderson’s proposal dangerously naïve and believe that the development of anti-smallpox drugs is both necessary and feasible. In an effort to resolve this policy debate, WHO is conducting a major review of the smallpox research program for discussion at the next annual meeting of member states in May 2010, with a final resolution of the issue scheduled for the subsequent annual meeting in 2011.

Henderson is also skeptical about the current WHO campaigns to eradicate Guinea worm and polio (launched in 1986 and 1988, respectively), neither of which has succeeded after more than 20 years of effort. “At this time, I don’t believe we have either the technology or the commitment to pursue another eradication goal,” he writes. “More useful and contributory would be to build and sustain effective control programs that are adapted to the social and public health needs of each country.” This view conflicts with that of WHO Director-General Margaret Chan, who recently affirmed that polio eradication remains one of the organization’s top priorities.

Although his critics portray Henderson as a curmudgeon, no one can deny the sincerity of his opinions or the magnitude of his achievements. Indeed, his powerful intellect, unbending will, impatience with bureaucratic ineptitude, and contrarian nature, all of which are on display in this intriguing memoir, were key to the success of the smallpox eradication campaign.


Jonathan B. Tucker is a senior fellow specializing in biological and chemical weapons at the James Martin Center for Nonproliferation Studies and the author of Scourge: The Once and Future Threat of Smallpox (Atlantic Monthly Press, 2001).

Embracing uncertainty

This book is a pudding with a theme. The pudding is a mélange of observations on the cultural, economic, and political implications of the Internet, genetic engineering, nanotechnology, space exploration, and various other technologies. The chef, David D. Friedman, a self-described “anarchist-anachronist-economist,” concocted his creation with humility and humor, flagging problems while touting benefits, hammering home his arguments with historical antecedents, and spicing his observations with personal stories. The pudding’s theme is libertarianism. Friedman has a strong predisposition for bottom-up decentralized policies and against national governments and international organizations. Within this larger framework, Future Imperfect has three subthemes that are at once its strengths and weaknesses.

First, from Friedman’s libertarian perspective, a significant virtue of technological change is that it often renders existing regulations, statutes, and conventions unenforceable or irrelevant. For example, information technologies have radically reduced the cost of information duplication, undercutting copyright law. The Internet permits anonymous distribution of information, undercutting libel laws and vitiating national controls over international flows of security-sensitive technologies. Cloning, in vitro fertilization, and surrogate motherhood enable physicians to create children using methods that render inapplicable conventional laws on marriage and definitions of child support obligations.

Many, if not most, observers see this process of de facto deregulation through technology development as a curse, and they favor addressing the problem by strengthening laws covering new technologies or, in some cases, limiting the development of technologies that undercut laws. In contrast, Friedman sees this technological disruption as a blessing, providing an escape route from regulatory tyranny and inefficiency, and he recommends that society take advantage of such opportunities to restore freedoms by scrapping laws and replacing them, if needed, with voluntaristic market solutions.

Friedman notes, for example, that the Internet’s present design has reduced the ability of government to detect and punish malicious mischief by hackers, and he recommends turning to private criminal justice systems for solutions. Rather than having society rely on state prosecutors and the criminal courts, individuals could use civil tort actions to recover the costs of crime and to deter criminals from misconduct, or they could form private societies for the prosecution of crimes. Voluntary societies for prosecution, as were used in Great Britain in the 18th and 19th centuries, might be a model for collective action by individual computer users who fear malicious attack today.

In another example, Friedman notes that information technologies have weakened copyright protection for authors and patent protection for inventors, and he suggests that individual and collective self-help may work better than state-sanctioned intellectual property rights in this realm. Authors may protect themselves by encrypting their works when published online, by joining together to sanction violators, and by giving away unprotectable intellectual content to stimulate private-sector demand for services, such as lectures and consulting, that they can provide at a profit.

Although these creative examples of voluntary actions are a useful prod to readers to think hard about the need for state action, they face some basic questions. Do private solutions work in the specific examples that Friedman cites? Under what general conditions will private solutions work? The idea of substituting private tort actions for public prosecution of criminals runs directly into several problems. First, individual victims seeking redress for crimes are unlikely to pursue tort actions unless punitive damages far in excess of actual damages are allowed and unless criminals possess deep enough pockets to make good on damages awarded. Second, victims of crimes may not have the financial, legal, or emotional resources to mount tort actions. Inequality in financial resources produces biases in the current system of civil actions, and these biases may be even more pronounced if the state-based criminal law system is supplanted by private civil actions. Third, in the tort system, information on harms is often hidden. Tort actions under current civil law often result in private settlements that include nondisclosure clauses and gag rules. These provisions seal off public access to hard information on actions, damages, and blame that may be needed to defend the public interest in reducing cyber criminality. Similar problems arise in other alternatives to state action.

Friedman’s support for bottom-up defenses of intellectual property parallels positions taken by Roderick T. Long in “The Libertarian Case Against Intellectual Property Rights” (in Formulations, autumn 1995) and by Jacob Loshin in “Secrets Revealed: How Magicians Protect Intellectual Property Without Law” (in Law and Magic: A Collection of Essays, Carolina Academic Press, 2008). Indeed, the idea of protecting intellectual property without the state commands broad appeal among anarchist libertarian technologists. But Friedman and other bottom-uppers generally fail to consider how this approach to particular technological issues will scale up to general solutions to generic intellectual property and commons problems. For example, Friedman does not systematically discuss to what extent the transaction costs associated with self-help solutions to the protection of intellectual property differ from the nontrivial costs associated with traditional copyright and patent protections. Nor does he discuss the preconditions for private collective action that must be satisfied if collusive sanctioning strategies against infringement are to work. In sum, Friedman and others have yet to define the general conditions that limit the effectiveness of both libertarian and state-centric responses to such problems. The literature in economics on the sources of market failure, in political science on the sources of governmental failure, and in law and economics on the limits of voluntaristic approaches to enforcement is germane and could enrich this discussion.

As his second subtheme, Friedman notes that some emerging technologies pose security, safety, and environmental risks that have the potential, in extremis, to end life as we know it. At a minimum, most of these technologies may disrupt economic, political, and social systems. But Future Imperfect is uneven in its analysis of such risks. The author is at his best in discussing the Internet and computing, offering well-informed and clear analysis of encryption, authenticity, and privacy. He is weaker on genetic engineering, revisiting familiar ground on its medical and legal issues. He notes, for example, that DNA sequencing and synthesis are advancing at exponential rates, but he does not discuss the implications of revolutionary improvements in the tools that engineers use to modify evolved biological systems. Nor does he systematically consider the legal, economic, or security implications of modularization in biology, via which biological engineers seek to create Lego-like biological parts that may be assembled into useful devices with greater reliability, at lower cost, with greater speed, and by a larger pool of people than in traditional genetic engineering. To be fair, no book of reasonable length can cover more than a minuscule subset of implications of more than a handful of technologies. But Future Imperfect falls short of what might be considered an acceptable minimum of analytical quality.

For his thematic hat trick, Friedman takes note of the pervasive uncertainty associated with making projections, warns against excessive confidence in extrapolating trends, and suggests that the mitigation of uncertain future costs through present public expenditures is foolish. The opening paragraphs poke fun at a former (unnamed) member of the presidential cabinet for fearing that “in a century or so, someone would turn a key in the ignition of his car and nothing would happen because the world had run out of gas.” To Friedman, such a prediction is likely to prove as accurate “as if a similar official, 100 years earlier, had warned that by the year 2000 the streets would be so clogged with horse manure as to be impassable.” Friedman notes that he cannot know what the world will look like in 100 years, but that he knows for sure that transportation in that world will rely on better technologies than internal combustion engines and better power sources than gasoline. He then recommends deferring costly present actions to contain uncertain future risks, such as climate change, suggesting that future risks may be exaggerated, that technological advances may reduce the costs of mitigation, and that efforts to mitigate may have unforeseen negative consequences. In effect, Friedman turns pervasive uncertainty into another libertarian argument against government action.

But consider in more detail this illustrative case and general argument. A century ago, the streets of some major cities, including New York, were in fact clogged with manure, and futurologists of that time were worried about the health, transportation, and aesthetic implications of such a “brown goo” scenario. The scenario was avoided when governments and companies invested in electric trams, subways and railways, gasoline-powered horseless carriages, and the infrastructure to carry horseless carriages. The brown goo projection was wrong because governments, private companies, and individuals took actions to avoid that future. What appears, post hoc, as a forecasting error is a result of effective governmental and private action.

The mere existence of uncertainty is not justification for public policy inaction. But pervasive and irreducible uncertainty ensures that most public policies will be based on incorrect assumptions. As a rule of thumb, public policies will be wrong not because of stupidity or bias or corruption, but because the information needed to make right choices is not available up front. How can public policies and international agreements take account of uncertainty? They should be designed with provisions for updating, revision, and self-correction.

Indeed, policies should be viewed as experiments that generate information on political, economic, biological, or technical conditions that may, in turn, be used to update policies. Well-designed regulations and international agreements in areas marked by uncertainty will harvest information generated by policy experience, and policymakers can use this information to revise and update policies. For example, the U.S. Clean Air Act contains provisions for gathering information to update standards and quotas, and the Montreal Protocol on Substances That Deplete the Ozone Layer calls for gathering information on a number of issues, including the behavior of sources and sinks of troublesome compounds and the status of chemical substitutes, that may help in revising goals and quotas. To be sure, it may often prove difficult to pass self-correcting statutes and regulations and to negotiate adaptive international agreements, but the pitfalls of allowing uncertainties to block policy action are, in most cases, too great. The old adage “a stitch in time saves nine” remains a relevant reminder for policymakers.

In the end, Friedman deserves praise for his boldness in challenging conventional wisdom, for his clarity in characterizing technologies, for his narrative gifts in laying out potential opportunities and problems, and for his respect for the intrinsic uncertainty associated with evaluating the social, political, economic, and legal implications of technologies. As readers engage with Future Imperfect, they should be prepared to contest assumptions and evaluate arguments even as they learn.


Kenneth A. Oye is associate professor of political science and engineering systems at the Massachusetts Institute of Technology.

Russian Science Odyssey

The complex rivalries and connections between Soviet and U.S. science and technology were a subject of intense worldwide interest during the era of “big science” and throughout the Cold War. Science in the New Russia: Crisis, Aid, Reform is exceptionally important because it not only reviews those historical legacies but also offers a detailed account of developments in Russian science policy and international cooperation since the fall of the U.S.S.R. in 1991. The volume is the result of an extended professional collaboration between Loren Graham, who has published definitive works on the history of Soviet and Russian science and taught at the Massachusetts Institute of Technology and Harvard University, and Irina Dezhina, an internationally recognized scholar who is currently affiliated with the Institute of the World Economy and International Relations of the Russian Academy of Sciences in Moscow. The book is also useful because it summarizes much of Dezhina’s original research that has been published only in Russian-language sources, as well as internal evaluations of major international reform programs that often are not publicly available. Graham and Dezhina have also worked together, along with numerous other U.S., Russian, and European colleagues whose work is acknowledged in the volume, to shape and guide many of the large-scale international science reform projects described here. However, their insightful and thoroughly comparative analyses transcend the details of any specific reform or assistance program and remain rigorous and balanced throughout.

The authors ground their analysis in the historical legacies of the Soviet science establishment and offer a balanced assessment of the Soviet system’s very real strengths (most notably, massive state investment, a vast network of institutions and personnel, and high-level theoretical research in key fields such as mathematics, plasma physics, seismology, and astrophysics) and its enduring weaknesses (such as the chronically weak links between theoretical work and applied research, the negative effects of party-state political interventions and censorship, and the system’s rigid generational hierarchies). The most important and persistent legacy of Soviet science is the seemingly dysfunctional separation between its three organizational “pyramids”: theoretical and advanced research is still dominated by the Russian Academy of Sciences and its research institutes (with similar academy structures in agriculture, medicine, and pedagogy); technical and applied research is divided among economic or industrial branch ministries, much of it lost as enterprises have been privatized or have experienced cuts in state funding; and a third pyramid of state universities and specialized professional institutes focuses on undergraduate and graduate education, often with only weak research capacity. Finally, another enduring legacy was the pervasive militarization of science in the Soviet system, which bred yet more organizational barriers and a lack of transparency.

Although the late 1980s witnessed a great intellectual opening and the exposure of this system to the full effects of internationalization, the collapse of the Soviet Union in 1991 led to catastrophe. The circulation of scientific elites within the formerly highly interdependent socialist bloc ended abruptly, and state funding plunged, in some cases to as little as 20% of former budgets. The authors develop and detail an array of interconnected themes: the existential threats to the survival of Soviet science in the early and mid-1990s, along with the massive brain drain out of the profession and/or out of the region; the unprecedented international assistance programs that sought to alleviate these crises; and the fitful attempts to comprehensively rethink and reform post-Soviet research institutions and Russian science policy.

The most useful aspect of the volume for international readers will be the authors’ detailed descriptions and evaluations of the massive and historically unprecedented international assistance programs that sought first to support and then to transform post-Soviet science, higher education, and research. These programs, funded by an array of governments and multilateral organizations, amounted to several billion U.S. dollars (when related programs in energy, nuclear nonproliferation, and agricultural research are included), and offered support in the form of individual grants, international travel and long-term professional exchanges, the purchase of scientific equipment and publications, collaborative research projects, and institutional support. These reform efforts had the enduring effect of establishing the principles of competitive grant funding and peer review in Russian science and higher education, regardless of the legitimate concerns that remain about how such mechanisms are actually implemented. The authors offer detailed descriptions of the most important of these programs, such as the George Soros–funded International Science Foundation in the mid-1990s, which paved the way for much of what followed in its insistence on open competition and peer review; the European-funded International Association for the Promotion of Cooperation with Scientists from the Independent States of the Former Soviet Union (1993–2007); the International Science and Technology Center (1992 to the present), which began with a focus on military conversion and later shifted toward commercialization and technology transfer as the Russian government restored funding for defense research; and an array of private foundation–funded programs.

Perhaps the most important of these foundation-funded public/private programs, in which both Graham and Dezhina have played key roles, is the still-functioning Basic Research and Higher Education program (BRHE), operated by the Civilian Research and Development Foundation (CRDF) and co-funded by the Russian Ministry of Education and Science, the John D. and Catherine T. MacArthur Foundation, and the Carnegie Corporation of New York. BRHE (along with a parallel program in the social sciences) was unique, and arguably uniquely influential, in its emphasis on co-funding and joint governance, its focus on building research capacity within leading Russian universities to overcome the academy/university divide, and its targeted support for young researchers and faculty members. The influence of BRHE can be seen in the increasing willingness of Russian partners to assume a leading role in funding and sustaining the program as well as in the growing replication of BRHE’s model of interdisciplinary Research and Education Centers in many other leading Russian universities. In a profound sense, the authors’ detailed and balanced analysis of the long-term beneficial effects of these international programs constitutes a convincing refutation of critics within Russia who depict such aid programs as deliberate attempts to degrade or to prey on Russian science and technology as well as of critics in the West who have depicted such programs as ineffective or ultimately wasted.

Finally, the authors complete their key themes of crisis, aid, and reform with a detailed analysis of the attempts within Russia to reform research institutions and science policy since the early 1990s. These attempts at reform, which took place amid conditions of acute economic crisis and brutal political conflict, were fitful and arguably largely unsuccessful in the 1990s, but they have continued and become increasingly coherent and comprehensive since 2001. These years witnessed the formation of two new funding mechanisms, the Russian Foundation for Basic Research and the Russian Foundation for the Humanities, modeled in part on international examples, even as debates continue about the adequacy of their budget funding; attempts to clarify patent law and intellectual property rights; the creation of dozens of technology parks and literally hundreds of technology transfer centers; the establishment of new funding mechanisms for technological innovation and entrepreneurship; and increasingly ambitious attempts to create a “national innovation system” based on market principles and in partnership not only with state economic enterprises but now with new private businesses as well. Efforts were also launched to disseminate new information technologies throughout Russian education and government and to restore state funding for scientific research, most notably through a series of massive federal grants for innovative university projects and investments in areas such as aerospace and aircraft design, biotechnology, and nanotechnology.

Thus, the authors come to a cautious optimism about the future of Russian science and its ability to contribute both to national recovery and to global cooperation. A key turning point came after 2001, when the highest reaches of the political leadership seemed to recognize the necessity of a coherent science policy and adequate state or public investment for the competitiveness of the Russian economy and began ambitious efforts to better coordinate venture capital with technological innovation, as well as to link together the research capacity of Academy institutes with the educational programs of universities.

Overall, this is an exceptionally thorough and useful book, which highlights the remarkable progress that has been made in Russian science in less than 20 years; illuminates the very real potential of mutually beneficial international cooperation; provides a clear roadmap of the equally real challenges that remain in science policy and professional practice; and suggests that after many years of isolation, Russian science might soon reclaim its status in the world community and thereby just possibly be able to more directly contribute to the resolution of our common scientific and technological challenges in the 21st century.


Mark Johnson is a visiting associate professor of history and education at Colorado College.

Water Woes

True to his title, in Unquenchable, legal scholar and water expert Robert Glennon describes a cross-section of the nation’s water problems and offers several prescriptions. But Glennon is never dusty and dry; his new book is readable and entertaining. Water experts will find much to learn and admire here, but readers need not know much about water concerns to find the book informative, amusing, sometimes alarming, and altogether a good update on the highest-profile water management issues of our times.

The book’s topic is lack of water, and it focuses on the technical and policy challenges of providing reliable water supplies for three essential but competing interests: the thirst of ever-growing population centers, support for the other life forms that make up our aquatic ecosystems, and the industrial and energy applications undergirding the economy.

The book’s chapters cut a broad swath through many water supply, demand, and quality issues. They include: the human right to water; groundwater contamination and overdraft; the legal system for allocating water rights in the western United States, known as the “prior appropriation doctrine”; the latest plans to transfer water from some parts of the country to others; water reuse; water conservation techniques and practices; combined sewer overflows; privatization; water pricing issues and options; and agricultural uses and new agricultural technologies.

It’s an extensive list, but Unquenchable provides adequate coverage without bogging down in excessive technical detail. Chapters generally are presented as specific case studies that explore fundamental scientific, historical, and legal aspects of water issues in a particular U.S. locale. Glennon’s expertise in water law and administration is apparent in discussions that are relevant and clearly presented. As water resources professionals will attest, water sciences, engineering, and policymaking are replete with thoughtful, creative, and colorful people; the book provides interesting personal descriptions of many of these professionals, augmented nicely with the liberal use of quotes.

With each chapter, the diversity of water problems and the breadth of water management options become increasingly evident. The reader learns about groundwater overdraft in many parts of the country; agriculture-to-urban water transfers in southern California and Utah; and water conservation efforts, or the lack of them, in industry, agriculture, and U.S. households. A particularly striking example is the growth of Las Vegas, the uses of water there, and the many efforts directed at augmenting and conserving its water supply. Even though the city has implemented many conservation schemes, city water officials continue to harbor visions of ambitious water transfer projects, such as the idea of diverting the Mississippi River to the desert Southwest.

An important point that emerges from Unquenchable is that the nation’s water problems are a combination of issues unique to each region’s culture, laws, geology, and hydrology, along with larger, more general water management challenges that are common across the nation. It is not always easy to distinguish between specifically local water issues and the more general ones, but it is necessary to do so if lessons and policies for more beneficial and effective water management are to be identified. Glennon’s efforts to distill broad lessons from his specific case studies and to identify policy options for better water management may be the best parts of the book.

He makes it clear that the nation’s water supply challenges will not be resolved through traditional methods such as building dams, creating reservoirs, channeling rivers, and pumping groundwater. Last century’s approach to water supply development reflected the conventional wisdom and opportunities of its time, but circumstances have changed. With the benefit of hindsight, the experiences of the 20th century—some positive, some negative—have prompted second thoughts about how best to manage rivers and other water resources. Today, proposals for new dams generate strong opposition; the best dam sites have been used; and many existing reservoirs, such as Lake Powell on the Colorado River, are well below full storage levels. As Glennon states in his introduction, “Business as usual just won’t cut it.”

The book outlines numerous creative water conservation technologies and policies that are helping reduce per-capita use. These conservation measures, whether they be water harvesting or state-of-the-art plumbing fixtures, will be important to the wiser use of water. The book explains that there are instances in which conservation measures clearly represent savings and economic incentives for adoption; for instance, low-flush toilets often pay for themselves in the long run. But in other instances there are barriers to conservation. Proposals for moderate increases in household water prices or changes in rate structures to encourage more efficient use often meet political and social resistance.

Beyond conservation

The book offers several conservation-related policy and economic solutions in the final chapter, “Blueprint for Reform.” At the same time, Glennon is frank about the limits of conservation. He notes that although many water-stressed communities have implemented ambitious conservation programs, some need to reduce demand even more. He writes, “The reality is that reusing, desalinating, and conserving water may help to alleviate our crisis but will not solve it.”

The book documents many cases where water has been transferred among different uses, typically from lower (e.g., irrigating alfalfa) to higher (e.g., most urban water uses) economic uses. The ingenuity behind some of these exchanges is impressive. Clearly, water transfers will be an important component of future management regimens in the United States, especially in the West.

But Glennon also points out, correctly, that these transfer measures have their limits. For example, agriculture is the largest user of water in the nation, but water used by agriculture is often important for local food production. In addition, the large amounts of water allocated to agriculture are not necessarily easily sold or transferred to municipal and industrial users. That’s because of what Glennon calls “third-party” considerations. One example of these third-party effects would be the harm to businesses in a rural farming community, such as farm implement dealers, if the water that supported irrigated agriculture were sold and diverted to urban uses. Another would be a downstream ecosystem that depended on stream flows lost when an upstream user sold and diverted its water rights to another user. Such outcomes can have wide-reaching negative consequences for people and ecosystems, and because they raise real equity concerns, they can serve to inhibit water transfers.

One conclusion is that the nation’s policies, laws, and attitudes for allocating and using water resources have their roots in an earlier era. Glennon points out that past practices may not reflect today’s supply, demand, and quality realities. Not only do institutional rigidity and inertia make change difficult, but many interest groups benefit from current allocations and arrangements and contest proposed changes that are not in their favor.

Furthermore, and unfortunately, the nation’s water management apparatus may have taken some backward steps. For instance, the federal Water Resources Council, an interagency group established to facilitate cross-agency dialogue and communication on water management topics, was eliminated from the nation’s budget in 1983 and has not met since. “No effective mechanism currently ensures that the federal agencies work together toward sensible water quality and supply outcomes,” Glennon writes.

Another conclusion is that better management of the nation’s water resources will require shifts in how elected officials and citizens alike think about water. As the book’s introduction states, “Water is a valuable, exhaustible resource, but as Las Vegas did until just a few years ago, we treat it as valueless and inexhaustible.” If the nation is to move to a more sustainable water development path, we will have to purge ourselves of some long-held but wrongheaded beliefs: for example, that water supplies are unlimited; engineers always will find a way to deliver a bountiful supply; and rivers and other water sources have an infinite capacity to accept and assimilate pollutants.

A related theme is that major U.S. economic and social decisions historically have been made with an assumption that water resources would always be able to accommodate additional uses. Regarding water quality, this assumption is reflected in problems of groundwater pollution and estuarine nutrient pollution. Water quality goals often have been trumped by industrial or agricultural production goals. Regarding water supply and demand, a good example is seen in the nation’s population and immigration policies. Unquenchable explains that today’s population growth and immigration policies have the nation on track to add 120 million more people by 2050. This growth would be driven primarily by immigrants and their children and thus represents a conscious federal decision to increase population. The effects of this growth on water demand obviously would, as the book explains, have “immense consequences for our culture, economy, and environment.” Yet such policies are set with little regard for water supply, demand, and quality.

Glennon’s book represents an important contribution to the popular water resources literature and is an excellent overview of contemporary national water supply challenges for expert and novice alike. His call for actions to deal with “America’s water crisis” is divided broadly in two. One level is nuts-and-bolts activities such as more realistic water pricing and policies, better organizational cooperation, and the adoption of new technologies. The other level is more philosophical and cultural. Our water resources base, including aquatic ecosystems and groundwater, will have to be accorded higher value in national policy decisions, he argues, and the limits of our water systems must be better reflected in other sectors. As this book explains, successful changes at both levels are often difficult to accomplish.

There have been other popular and scientific national water policy assessments, and some of Glennon’s policy prescriptions have been offered previously. This is not a criticism. U.S. water history shows that policy perceptions and changes often occur only after a particular finding or recommendation has been driven home for years—or even decades. Unquenchable is upbeat, yet at the same time sends a sobering message. As Glennon observes, national water resources trends show some troubling signs. Substantial policy, economic, and sociocultural changes will be necessary if we are to manage our life-giving water better.

Stimulating Innovation in Energy Technology

Energy technology poses a special challenge to the U.S. innovation system. Fossil fuels are deeply embedded in the economy and the political system. To the user, they are usually cheap, convenient, efficient, and available in huge quantities. They benefit from public investments in infrastructure as well as direct subsidies through the tax system. The industries that produce and sell them are major employers and benefit from the public expectation of low-cost and readily available energy. New energy technologies seeking to enter the marketplace thus face a far from level playing field.

But fossil fuels, of course, are not really cheap; their economic, environmental, and geopolitical costs place a heavy burden on the nation. The effort to reduce this burden amply justifies a program of the size and scope, although not the form, of the Manhattan Project, the Marshall Plan, or the Apollo Project. President Obama’s $39 billion energy stimulus program and his April address to the National Academy of Sciences make it clear that he regards support for innovation in energy technology as an essential element of this administration’s efforts to deal with the country’s addiction to fossil fuels. It constitutes a recognition that market forces alone, even if augmented by a carbon charge or cap-and-trade regime, will not generate the pace and scope of innovations in energy supply and efficient end use that are needed to overcome the huge built-in preferences for existing energy technologies.

For the past several decades, the nation’s investment in energy technology research has been pathetically inadequate compared to the $1.5 trillion that the energy sector contributes to the U.S. economy. Public-sector funding fell by half between its peak in 1980 and 2005, and private-sector funding has followed a similar path. Even the 2005–2008 flood of venture capital was directed in significant part to investments in existing technologies that are already benefiting from subsidies supported by powerful interest groups in spite of their dubious environmental or economic value—a “no lobbyist left behind” policy that runs counter to the urgent need for innovation leading to a sustainable energy future.

This underfunding of research, combined with the oscillating prices of energy and the history of subsidies to politically favored technologies (renewable energy for Democrats, nuclear power and fossil fuels for Republicans, coal for both), has left a multitude of technologies at all stages of development. These need to be given a chance to emerge, but no single technology has any special claim on support. This means that innovation policies need to be as technology-neutral as possible so that technological alternatives can compete on their own technoeconomic merits.

An integrated analysis

The most difficult step in the development and deployment of new technology in energy will be the launch of these technologies into extremely complex and competitive markets of enormous scale. Any program of government support for innovations in these technologies should therefore be organized around the most likely bottleneck to their introduction to the market. This goes well beyond the longstanding focus of government programs on basic research and the “valley of death” between research and late-stage development.

We therefore advocate an integrated consideration of the entire innovation process, including research, development, deployment, and implementation, in the design of policies to encourage innovation in energy technology. Only such an overview will make it possible to identify gaps in existing federal institutions for the support of the overall process of innovation. Such an examination is an essential element in the design of any program to stimulate innovation in energy or in any other complex established technology.

These considerations have led us to a new framework for innovation policy in complex established technology areas. It requires a four-step gap analysis: classifying promising innovations according to likely obstacles to their market launch, identifying policies needed to overcome these obstacles, identifying gaps in existing institutions and programs that prevent them from overcoming these obstacles, and recommending new institutions or policies to fill these gaps. We believe that a similar approach is likely to be a useful starting point for the design of innovation policy in sectors of comparable complexity to energy. Although the problem of the valley of death remains important, the analytical framework we suggest helps to evaluate and overcome an even larger problem in such complex sectors: the stage of market launch.

The first step of this analysis is the assessment of a large number of promising energy technologies, based on the likely bottlenecks in their launch path, and the classification of these technologies into groups that share the same likely obstacles to market launch. A complex sector such as energy is home to a great range of established and potential technologies in a variety of separate or connected market sectors. Each technology will follow a different route to emergence at scale, but some may share common features. Categorizing common technology-emergence pathways allows the design of support instruments appropriate to each category; without rigorous and careful categorization, workable support mechanisms will simply not emerge, and gaps in and barriers to implementation are inevitable. With this in mind, we have identified the following energy technology pathways:

  • Experimental technologies. This category includes technologies requiring extensive long-range research. The deployment of these technologies is sufficiently far off that the details of their launch pathways can be left to the future. Examples include hydrogen fuel cells for transport; genetically engineered biosystems for CO2 consumption; and, in the very long term, fusion power.
  • Potentially disruptive technologies. These are innovations that can be launched in niche markets that are apart from established systems. In these markets, such innovations face limited initial competition, may expand from this base as they become more price-competitive, and can then challenge established incumbent or “legacy” technologies. Examples include wind and solar technologies, which are building niches in off-grid power and LED lighting.
  • Secondary technologies (uncontested launch). This group includes secondary (component) innovations that will face market competition immediately on launch from established component technologies that perform more or less the same function. These innovations can be expected to be acceptable to recipient industries if the price is right. On the other hand, they must face the rigors of the tilted playing field, such as a competing subsidy, or the obstacle of a major cost differential without the advantage of an initial niche market. Examples include advanced batteries for plug-in hybrids, enhanced geothermal, and on-grid wind and solar.
  • Secondary technologies (contested launch). These are secondary innovations that in addition to facing the same barriers as the uncontested technologies have inherent cost disadvantages and/or can be expected to face economic, political, or other nonmarket opposition from recipient industries or environmental groups. Examples include carbon capture and sequestration, biofuels, and fourth-generation nuclear power.
  • Incremental innovations in conservation and end-use efficiency. The implementation of these innovations is limited by the short time horizons of potential buyers and users, who typically refuse to accept extra initial costs unless the payback period is very short. Examples include improved internal combustion engines, improved building technologies, efficient appliances, improved lighting, and new technologies for electric power distribution.
  • Improvements in manufacturing technologies and processes. These are improvements in the ways in which products are manufactured that can drive down costs and improve efficiency, enabling the new products to compete in the market more quickly. These investments are likely to be inhibited by the reluctance of cautious investors to accept the risk of increasing production capacity and driving down manufacturing costs in the absence of an assured market.

The second step of our analysis requires classifying support policies for the encouragement of energy innovation into technology-neutral packages and matching them to the technology groupings developed in the first step. In other words, once we have identified the different launch pathways by which new technologies can arrive in a market at scale, we can match them with the best support policies; a schematic pairing of categories and policy packages follows the list below. The policy elements include:

  • Front-end technology nurturing. Technology support on the innovation front end, before a technology is close to commercialization, is needed for technologies in all six of the launch-pathway categories described above. This includes direct government support for long- and short-term R&D, technology prototyping, and demonstrations.
  • Back-end incentives. Incentives (carrots) to encourage technology transition on the back end as a technology closes in on commercialization may be needed to close the price gap between emerging and incumbent technologies. Whereas experimental technologies are in too early a stage to need incentives, and many disruptive technologies may be able to emerge out of technology niches into a competitive position without further incentives beyond R&D support, other categories will probably require carrots. These include secondary technologies facing both uncontested and contested launch, incremental innovations in technology for conservation and end use, and technologies for manufacturing processes and scale-up. Carrots may also be relevant to some disruptive technologies as they transition from niche areas to more general applicability. These incentives include tax credits of various kinds for new energy technology products, loan guarantees, low-cost financing, price guarantees, government procurement programs (including military procurement for quasi-civilian applications such as housing), new-product buy-down programs, and general and technology-specific intellectual property policies.
  • Back-end regulatory and related mandates. Regulatory and related mandates (sticks), also on the back end, may be needed in order to encourage component technologies that face contested launch and also some conservation and end-use technologies. These include standards for particular energy technologies in the building and construction sectors, regulatory mandates such as renewable portfolio standards and fuel economy standards, and emission taxes.
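
To make the second step concrete, the sketch below, which is ours rather than the authors’, pairs the six launch-pathway categories with the front-end and back-end policy packages just described. The category labels and instrument names are shorthand stand-ins, and the pairings follow our reading of the text rather than any official classification.

    # Illustrative sketch only: pairing launch-pathway categories with
    # technology-neutral policy packages, following the discussion above.
    # All labels are shorthand stand-ins, not official terminology.

    FRONT_END = {"R&D support", "prototyping", "demonstration"}

    CARROTS = {"tax credits", "loan guarantees", "price guarantees",
               "government procurement", "buy-down programs"}

    STICKS = {"efficiency standards", "portfolio mandates", "emission taxes"}

    POLICY_PACKAGES = {
        "experimental": FRONT_END,                            # too early for back-end incentives
        "potentially disruptive": FRONT_END,                  # niche markets may carry them forward
        "secondary (uncontested launch)": FRONT_END | CARROTS,
        "secondary (contested launch)": FRONT_END | CARROTS | STICKS,
        "incremental end-use efficiency": FRONT_END | CARROTS | STICKS,
        "manufacturing and scale-up": FRONT_END | CARROTS,
    }

    def support_package(category):
        """Return the set of policy instruments matched to a launch-pathway category."""
        return POLICY_PACKAGES[category]

    if __name__ == "__main__":
        for category, package in POLICY_PACKAGES.items():
            print(category, "->", sorted(package))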

In the energy sector, a system of carbon charges, such as a cap-and-trade program, may make many of the back-end proposals listed above less necessary insofar as it would induce similar effects through pricing mechanisms.

The third step is an institutional gap analysis that consists of a survey of existing institutional and organizational mechanisms for the support of innovation, with the objective of determining what kinds of innovations (as classified by the likely bottlenecks in their launch paths) do not receive federal support at critical stages of the innovation process and what kind of support mechanisms are needed to fill the gaps thus identified.

The fourth step identifies the new institutions and organizational mechanisms needed to fill the gaps identified in the third step: namely, those needed for translational research, for technology financing, and for roadmapping.

Institutional gaps

Our analysis identified at least four separate gaps in current institutional arrangements for the promotion of energy innovation. First, there has been no strong program in the Department of Energy or elsewhere that is explicitly devoted to translational research. By this we mean supporting breakthrough research tied to needed energy technologies and then translating the technologies that derive from the breakthroughs to the prototype stage in a connected and integrated fashion and with commercialization in mind.

A second gap concerns the financing and management of commercial demonstrations of large-scale, engineering-intensive technologies with careful monitoring to ascertain technical feasibility, environmental performance, safety, and costs. Such demonstrations are essential to the development and deployment of technologies for carbon capture and sequestration, a technology essential to the future of coal, and of enhanced “hot rocks” geothermal, one of the more promising technologies now at the prototype/demonstration stage. These technologies will require multiple demonstrations, carrying price tags upward of nine figures.

A third gap concerns the financing of investments in improved manufacturing technology and processes and energy efficiency, especially investments in manufacturing cost-cutting and production scale-up, including conservation and efficiency technology.

Underlying these three gaps is a fourth, the need to encourage and facilitate technological collaboration between government and industry across the board, and specifically in collaborative technology-roadmapping exercises.

The first of the gaps we identified has now been filled, at least in principle, by the establishment and funding of the Advanced Research Projects Agency-Energy (ARPA-E), authorized in the America COMPETES Act in 2007 and funded at $400 million in the American Recovery and Reinvestment Act passed by Congress and signed by the president in February 2009. The details of the institutional design of ARPA-E are critical to its effectiveness, and these are now under review and implementation inside the Department of Energy, with a core staff in place and an initial $150 million grant solicitation issued.

To bridge the second and third gaps, we recommend the establishment of a government corporation able to recruit private-sector engineering and financing expertise capable of operating outside the limits of government procurement systems. The corporation would be able to finance demonstrations of engineering-intensive technologies as well as accelerated manufacturing scale-up of promising technologies and investments in conservation technology. Energy legislation pending in the House and Senate proposes comparable financing institutions.

To bridge the fourth gap, we recommend a roadmapping exercise, led by private industry in collaboration with the government and academic experts, that looks at each technology element and its possible and preferred evolution pathways. It would then tie each pathway to the right elements of a menu of support for research, development, and demonstration, combined with mechanisms for government support for implementation and deployment, as well as demand-oriented policies providing incentives or regulatory standards to encourage or require adoption. Because we lack even an energy technology strategy at this time, such a strategy should be a first priority, with a roadmap to evolve from it.

International implications

Global warming, energy security, and economic competitiveness are inextricably linked, which greatly complicates any purely national effort to stimulate innovation in energy technology. Both global warming and energy security are inherently international problems, to which national solutions can at best offer partial answers. What is more, both issues raise tricky questions of ethics and international relations. Although China recently surpassed the United States to become the world’s leading emitter of carbon dioxide, it will be many decades before the aggregate carbon contribution of China, India, and the rest of the developing world to the atmosphere catches up with that of the presently industrialized economies. It is difficult for the United States to lecture the Chinese peasant or Indian oxcart driver on the virtues of energy conservation from the seat of its metaphorical SUV.

Research collaboration raises somewhat analogous issues. U.S., European, and Japanese companies rightfully see innovation as a source of future market competitiveness, but so do Chinese and Indian firms, which are making major investments in research and manufacturing capacity. On the other hand, it is important to the future of the planet that developing countries, especially India and China, adopt sustainable energy technologies as quickly as possible.

Because of the historic strength of its innovation system, the United States will probably need to play a significant role in energy innovation if global progress is to be made in coming decades. However, there should be an international dimension to collaboration on innovation. The key is to maintain a sound balance between commercialization and collaboration, with commercial competition prevailing unless there is market failure or delay, in which case government can play a role. We will need to enlist capitalism in the energy cause and to take care that our policies encourage competitive firms to enter this field. At the same time, basic and precompetitive R&D present particular collaboration opportunities, and bi- and multilateral collaborations may offer participating nations expanded innovation resources and opportunities for market entry they would not have on their own. The Pew Center on Global Climate Change and the Asia Society Center on U.S.-China Relations have issued a joint report proposing the following priority areas for U.S.-China collaboration on energy and climate change: deploying low-emission coal technologies, improving energy efficiency and conservation, developing an advanced electrical grid, promoting renewable energy, quantifying carbon emissions, and financing low-carbon technologies.

There is a strong case to be made for international collaboration at the technology implementation stage, particularly for developing nations. The World Bank’s Clean Development Fund, through which developed nations underwrite implementation in developing nations, will support this stage. Another partial answer could be linkage mechanisms in cap-and-trade systems whereby greenhouse gas reductions implemented outside a nation’s borders can be credited in its national cap-and-trade compliance system. If structured properly, this could not only promote the most economically efficient investments but also offer leverage to encourage a global effort incorporating developing nations.

This cooperative approach will need to be complemented by frank and constructive dialogue concerning the many perverse national subsidies and other policies, however politically entrenched, that contribute to environmentally and economically unsustainable energy production and use in virtually every country. Technical discussions along these lines should complement the even more politically charged international negotiations over emissions targets that now constitute a major feature of international environmental diplomacy. In the end, however, in a competitive and expanding global economy and in an economic sector as vast as energy, there should be market enough for all to share.

From principle to practice

The new integrated framework that we propose has implications beyond policy theory; it also leads to a different logic for the practical design of technology policy legislation. Compared to our framework, the current U.S. legislative process for energy technology innovation is exactly backward. Today’s preferred strategy, as reflected in the 2005 and 2007 energy bills, is to create legislation for each technology separately and to provide a different incentive structure for each. We argue that the incentive structure should be legislated first in such a way as to preserve the fundamental technology neutrality needed in this complex technology area.

Where complex technology sectors such as energy are involved, Congress needs to legislate standard packages of incentives and support across common technology launch areas, so that some technology neutrality is preserved and the optimal emerging technology has a chance to prevail. Particular technologies can then qualify for these packages based on their launch requirements. It is important to get away from the current legislative approach of unique policy designs for each technology, which is often based on the legislative clout behind that particular technology rather than the critical attributes of the technology itself.

The implications of our proposed approach are large, and the politics needed to implement it will not be easy, especially given the full plate of issues confronting the nation’s political leaders and institutions. Huge sums of money will be involved, and the dangers of the pork barrel will be serious. It will not be simple to keep energy technology innovation efforts apart from disruptive political tampering. As with so many other issues facing the nation, presidential and congressional leadership, combined with grassroots support, will be required for this to work. As Machiavelli observed in 1513, “There is nothing more difficult to carry out, nor more doubtful of success, nor more dangerous to handle, than to initiate a new order of things.”

Nevertheless, it may be easier to gain political support for spending money on research and innovation, as the Obama administration’s energy stimulus package suggests, than for imposing increased costs on energy and CO2 emissions at the levels required. Accelerating the supply side of energy innovation, as efforts to impose a cap-and-trade regime are deliberated and gradually implemented, will be critical because technology supply will assure industry, consumers, and markets that putting a price on energy demand will work and be affordable. There is little time to lose.

Archives – Fall 2009

DAVID MANN, Liquid Gravity, Oil, alkyd, acrylic on panel, 30 × 72 inches, 2001. Collection of the National Academy of Sciences.

Liquid Gravity

Central to the paintings by artist David Mann is the relationship between natural phenomena and the technological means of imaging them. Within his luminous abstractions, Mann constructs fantastical spaces that appear to hover simultaneously between the microscopic and the cosmic. The technologies that allow us to visualize the natural world invisible to the human eye serve as inspiration for the forms that fill his canvases.

David Mann is represented by McKenzie Fine Art, New York City (www.mckenziefineart.com).

Nanolessons for Revamping Government Oversight of Technology

In recent decades, the capabilities of U.S. federal agencies responsible for environmental health and safety have steadily eroded. The agencies cannot perform their basic functions now, and they are even less able to cope with the new challenges being created by rapid advances in science and technology. Nanotechnology provides a perfect case for considering ways to redesign the way in which government oversees science and technology in order to protect public health and safety.

Nanotechnology involves working at the scale of single atoms and molecules. The nanoscale is roughly 1 to 100 nanometers. For comparison, there are 25.4 million nanometers in an inch, or 10 million nanometers in a centimeter. Nanoscale materials can have different chemical, physical, electrical, and biological characteristics than their larger-scale counterparts, and they often behave differently than do conventional materials, even when the basic material (say, carbon or silver) is the same.

Their tiny size and novel characteristics offer considerable promise for developing valuable new products. Indeed, many observers suggest that almost every area of human activity will be affected by future nanotechnologies. Medicine, food, clothing, defense, national security, environmental cleanup, energy generation, electronics, computing, and construction are among the leading sectors that will be changed by nanotechnology innovations.

On the flip side, however, the novel characteristics of nanomaterials mean that risk assessments developed for ordinary materials may be of limited use in determining the health and environmental risks of the products of nanotechnology. Although there are no documented cases of harm attributable specifically to a nanomaterial, a growing body of evidence points to the potential for unusual health and environmental risks. This is not surprising. Nanometer-scale particles can get to places in the environment and the human body that are inaccessible to larger particles; and as a consequence, unusual and unexpected exposures can occur. Nanomaterials also have a much larger ratio of surface area to mass than do ordinary materials. It is at the surface of materials that biological and chemical reactions take place, and so it is to be expected that nanomaterials will be more reactive than bulk materials. Such new exposure routes and increased reactivity can be useful attributes, but they also carry the potential for health and environmental risk.

Starting from scratch

The government’s current system for overseeing technologies and their spinoffs was designed to deal with the problems of steam engines in a precomputer economy; that is, yesterday’s world. It was based on assumptions that most problems are local, that programs can be segmented and isolated from each other, and that technology changes slowly. These assumptions no longer hold, if they ever did.

Given this woeful state, combined with the diverse challenges that nanotechnology and other rapidly advancing fields often pose, it would be a mistake to imagine that simply tinkering with the existing oversight system will suffice. Only a complete overhaul will succeed. The government needs to adopt a new organization that will operate under new legal authority and be equipped, and funded, to use new tools that will provide the knowledge and flexibility required for effective oversight. At the same time, however, new oversight requirements should be applied with a constant awareness of the need to encourage technological innovation and economic growth.

As a first step, the government should create a new agency, which might be called the Department of Environmental and Consumer Protection, in light of its expanded mission. It would incorporate six existing agencies: the Environmental Protection Agency (EPA), the U.S. Geological Survey (USGS), the National Oceanic and Atmospheric Administration (NOAA), the Occupational Safety and Health Administration, the National Institute for Occupational Safety and Health, and the Consumer Product Safety Commission. It would also incorporate a number of new units devoted to particular tasks that now get short shrift or are scattered across agencies. Many of the components would be allowed to operate with a good deal of independence. The success of the new organization would depend greatly on the degree to which it could strike a good balance between the integration and independence of the components.

The agency would differ from existing agencies in that it would be a science agency with a strong regulatory component rather than a regulatory agency with a science component. The emphasis on science is necessary to deal with the rapidly changing technologies that require oversight.

The agency would be among the smaller federal cabinet departments, but not the smallest. It would employ approximately 43,500 full-time equivalent personnel and have an annual budget of roughly $18 billion. This would make it 10 times larger than the Department of Education and 4 times larger than the Department of Housing and Urban Development. However, it would be half the size of the Treasury Department and a quarter the size of the Department of Homeland Security.

Importantly, the agency would be significantly larger than the EPA or any of the other federal oversight agencies. One advantage of housing oversight functions in a larger organization is that it increases overall flexibility. In the smaller agencies, resources are devoted primarily to ensuring their survival and to the performance of the minimal required functions; they have limited ability to anticipate and respond to new problems or to consider new ways of doing things. Also not to be overlooked, the current small size of the regulatory agencies makes them vulnerable to becoming even smaller. “The large get larger” seems to be the organizational analog to the rich getting richer.

Within this broad framework, the agency would focus oversight on products and pollution, and do so in a more integrated way. For example, its activities would include:

Product regulation. The current government oversight system tries to focus on both materials and products, often varying by agency, but this hodge-podge serves neither goal very well. Materials can be thought of as raw ingredients with their own sets of properties. But the way in which a material is used or how it is combined with other materials often determines whether adverse effects will occur. Therefore, focusing on materials alone will not provide a sufficient basis for evaluating risk. Products, on the other hand, are the items made from the materials and sold to public consumers, manufacturers, or others. A product may go through multiple stages, with each stage being a separate product. For example, carbon nanotubes (one product) can be combined with plastic in a compound used for car bodies (a second product), and that compound can be used in a finished automobile (a third product). Because the specific characteristics of specific products are likely to determine the adverse effects that might occur, future oversight will need to focus primarily on products.

Such a product-focused system must incorporate at least two principles. First, oversight should encompass the life cycle of the product through manufacture, use, and disposal. (Transportation is also part of the life cycle, but it can be regulated separately by the Department of Transportation.) Second, the degree of oversight (that is, the stringency of regulatory requirements) should be related to the anticipated harm the product will cause. This is a function of the severity of anticipated harm and the likelihood that it will occur. The government is not likely to have detailed and current information about the composition of a product, its intended use, or its anticipated effects. Only the manufacturer will be able to know or obtain this information on a timely basis. Thus, the government inevitably must depend on the manufacturer to test the product and accurately report relevant information. The government should put in place strong penalties for distorting, concealing, or failing to obtain required data, in order to deter such behavior.

One approach to engaging manufacturers would be to require them to develop a sustainability plan (SP) for each of their products. The plan would contain a summary of known information about the components of the product, the adverse effects of the product, a life-cycle analysis of the product describing its use and manner of disposal, and an explanation of why the product would not cause any undue risk. The government would define as precisely as possible what data are required and what constitutes undue risk, and it could require additional information for particular categories of products.
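
As a rough illustration only, the SP described above can be thought of as a small structured record. The sketch below expresses those contents as a simple data structure; the field names and the sample product are hypothetical and are not drawn from any actual regulatory proposal.

    # Minimal sketch of the information a sustainability plan (SP) might carry,
    # per the description above. Field names and sample values are hypothetical.
    from dataclasses import dataclass, field

    @dataclass
    class SustainabilityPlan:
        product_name: str
        components: list                  # known information about the product's components
        known_adverse_effects: list       # summary of known adverse effects
        life_cycle_analysis: str          # how the product is used and disposed of
        no_undue_risk_rationale: str      # why the product would not cause undue risk
        supplemental_data: dict = field(default_factory=dict)  # category-specific extras

    # A hypothetical SP for a nanotube-reinforced body panel.
    plan = SustainabilityPlan(
        product_name="nanotube-reinforced body panel",
        components=["carbon nanotubes", "polymer matrix"],
        known_adverse_effects=["possible inhalation hazard during machining"],
        life_cycle_analysis="molded into car bodies; shredded and landfilled at end of life",
        no_undue_risk_rationale="nanotubes bound in matrix; negligible release expected during use",
    )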

It seems reasonable to require every manufacturer to know this information before selling its product. (Small businesses should not be exempted, though special efforts will be needed to inform them about the requirements and to provide them with technical assistance to help them meet the requirements.) Moreover, manufacturers would have to update an SP after a product is approved and marketed if they became aware of new information that affected the product’s risk. A number of firms have voluntarily produced statements similar to an SP. For example, DuPont, in cooperation with Environmental Defense, developed a framework for analyzing the risks of nanomaterials and now applies the framework to all of its new nanoproducts.

Because every product, except for a few that would be exempted, would have to have an SP, manufacturers would be able to know the potential risks of components they use by requiring their suppliers to provide them with the SPs for the components. This would be a major benefit to manufacturers of complex products such as automobiles. At present, manufacturers may be legally liable for problems caused by components they use, but they may have no practical way to find out what the risks of the components are.

How such SPs would be used and what additional information, if any, might be required would depend on the harm the product might cause. A possible typology is as follows:

Category 1: This category would cover products that have a low probability of having adverse effects. There would be no oversight; the manufacturer would simply retain the SP, or, if there were clearly no significant risks, the manufacturer might be exempted from the SP requirement altogether. Most products are likely to fall in this category. There is always the possibility, however, that new evidence will move a category 1 product to a different category.

Category 2: This category would cover products for which risk-communication measures should be sufficient to avoid adverse effects. The manufacturer would be required to use the SP as the basis for producing a product safety data sheet to be given to users or for adding labels to products sold to consumers.

Category 3: This category would cover category 1 or 2 products that have been marketed but later become suspected of causing adverse effects. The government would be empowered to halt manufacture or distribution of the product pending a review of its safety.

Category 4: This category would cover products that have some probability of causing adverse health or environmental effects and would therefore be subject to premarket review. Products in this category might include pesticides, fuel additives, and products containing designated types of materials such as persistent organic pollutants.

The government would define the categories and decide which products belong in which categories. To the extent possible, the government would assign broad classes of products to particular categories. If a manufacturer wanted to produce a product that was not included in one of the classes, it would have to submit a request to the government to designate which category the product belonged in. For categories 3 and 4, the burden of proof would be on the manufacturer to demonstrate that the data in the SP were valid and adequate, and that they supported the conclusion that the product would not or did not pose undue risk. The government might have to show some cause for categorizing a product as category 3.
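
Read schematically, the typology pairs each category with an oversight consequence, roughly as in the sketch below; the wording of each consequence is ours, condensed from the descriptions above.

    # Schematic pairing of the four product categories with oversight consequences,
    # condensed from the typology above. Wording is illustrative only.
    OVERSIGHT_BY_CATEGORY = {
        1: "manufacturer retains the SP (or is exempted); government spot checks only",
        2: "SP retained plus risk communication: product safety data sheet or consumer label",
        3: "marketed product under suspicion: manufacture or distribution may be halted pending review",
        4: "premarket review; burden on the manufacturer to show the product poses no undue risk",
    }

    def required_oversight(category):
        """Look up the oversight consequence for a product's assigned category."""
        return OVERSIGHT_BY_CATEGORY[category]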

The major challenge in regulating products is the enormous number of products on the market at any given time. For example, the Consumer Product Safety Commission oversees 15,000 types of products, and each type contains numerous individual products. Inevitably, the number of products placed in each category would, to some extent, be determined by the resources available to the government oversight agency. The first two categories would require only spot checking by government, and category 3 probably would apply to only a relatively small number of products. Category 4 would require intensive use of government resources.

To help pay for this system, the government should consider charging manufacturers a fee for seeking product approval, as the Food and Drug Administration now does for drug registration (although steps would need to be taken to avoid some of the problems with that system). Consideration also should be given to making public on a regular and timely basis whatever gap may exist between resources and oversight requirements. This could be done by requiring the agency to regularly publish the number of products that should be reviewed but for which resources were not available to do the review.

Pollution control. Control of nanoparticles released during manufacture must be based on preventing the releases from occurring. Trying to deal with the problem by separately regulating releases to the air, water, or land, as current law does, will not work. Instead, the goal should be to foster the adoption of integrated pollution prevention and control methods that capture or recycle pollutants. In Europe, such technology is facilitated by having a single environmental permit for each facility. In the United States, however, various federal (and often state and local) programs regulate different kinds of pollution and require their own permits. The system not only results in bureaucratic duplication and confusion but also makes the permitting process opaque to the public. Moreover, because of the fragmentation, the system often fails to control a significant portion of a facility’s environmental impact.

Monitoring. Monitoring provides the link between government actions and the real world. Various components of the proposed agency would do two types of monitoring: environmental and human. A number of federal agencies now conduct a range of environmental monitoring activities. Recently, a group of science policy experts proposed combining the two largest monitoring agencies, the USGS and NOAA, into a single independent Earth Systems Science Agency. This idea would be incorporated in the proposed agency, though the new component would be granted a semi-independent status to ensure maximum flexibility. The agency also would take over the EPA’s monitoring functions and would add a new Bureau of Environmental Statistics, analogous to the Bureau of Labor Statistics.

Monitoring human health will be important to spot any products that turn out to cause adverse health effects that were not identified before the products were marketed. Given the uncertainties of risk assessment for new technologies, this situation is almost certain to occur. Such consequences will not be identified unless there is an extensive surveillance system that spots abnormal health phenomena, such as an excess number of cases of a given disease or a spike in emergency room admissions. The system should be coordinated with other domestic and international health reporting systems and it should be as unobtrusive as possible.

Technology assessment. With only a few exceptions such as nuclear power, technology as such is not and should not be regulated in the same sense that products and wastes should be regulated. But it remains critical to fully understand the potential consequences of any technology, in order to guide its application along beneficial lines. The proposed agency can play an important role in conducting and disseminating thorough and balanced assessments that are intended to inform, not promote, and to engage a broad swath of the population.

The need for such assessments may be especially great given the possible social and moral implications of nanotechnology. For example, if it proves possible that nanotechnology can be used to improve the functioning of the human brain, should it be used that way? And if so, for whose brains? If nanoscale materials are incorporated in foods to improve nutrition, taste, or shelf life, should the food have to be labeled to show that nanotechnology has been used? If synthetic biology, using nanotechniques, can create new life forms, should it be allowed to do so?

Focusing on particular applications may miss the overall effects of a technology, and by the time the implications of the applications become clear, it may be too late to effectively influence the direction the technology takes. What is needed is a capability to consider the overall effects of major new technologies and to do so while there is still time to deal with them. This requires a forecasting capability as well as an assessment capability. The techniques for doing forecasting and assessment have not received the attention they need. Not coincidentally, the institutions for making forecasts and conducting assessments are weak or nonexistent.

Moving to a radically different oversight system will require new approaches in a number of other areas as well. These areas include:

Risk assessment. Forecasting the risk of any technology involves basic scientific information about the technology, test data on specific products, and risk assessment. Each of these components has a different source and different characteristics. Basic scientific information comes primarily from university and government laboratories. The motives for developing the information include scientific curiosity, the possibility of obtaining grants and contracts, and the possibility of making money through patents or startup companies. Meeting societal needs, such as identifying the risks of new technologies, is often not a major consideration in setting the basic science agenda. This is one reason why it is important for government oversight agencies to have their own scientific resources.

Testing of specific products is done primarily by their manufacturers, either in house or through contract laboratories. It is beyond the resources of government agencies to test the multitude of products and, in any case, the manufacturer will be most knowledgeable about the products it is making. Testing for new kinds of products can be problematic. For example, it is often not known what end points (cancer, asthma, fish mortality, and so on) to look for when testing nanomaterials, nor is it understood which characteristics of the material are associated with adverse effects. In the absence of testing, conclusions about the safety of a product or material are often based on analogous items that have been tested. However, by definition new types of products do not have exact analogs that have been tested. When technologies are evolutionary, as many nanotechnologies are, analogs may help predict behavior, but they are still generally not an alternative to testing.

The technology of testing is itself changing, and there has been progress in developing tests that are much faster and cheaper than current tests that rely on laboratory animals. The type of risk assessment usually done by the government has evolved into a highly sophisticated set of procedures that are intended to help decisionmakers make rational decisions. But risk assessments typically are not scientific products; they are a way of organizing and analyzing data about a particular substance or product. They are not scientific, because only in unusual cases can they be empirically verified. The typical risk assessment may result in a finding that substance X will produce Y number of additional cancer cases per million people exposed. However, whether Y is 0 or 1,000 in reality will never be known because there are too many other causes of cancer. Regulatory decisions almost always must be made based on the weight of the available evidence. Conclusive scientific proof is usually not to be had, although the better the available science, the easier it is to do a risk assessment and the more accurate the assessment is likely to be.

Because decisions typically must be based on balancing the available evidence, the default assumption about who has the burden of proof is critically important. In the United States, the Toxic Substances Control Act puts the burden on the government to prove the risk. In Europe, however, the burden falls on manufacturers. Industry occasionally argues that the burden should be on the government because it is not possible to prove safety, but this is a fallacious argument. It is not possible to conclusively prove the safety of a product, just as it is usually impossible to conclusively prove the risk. Risk and safety are both operationally defined by required tests, and it is equally difficult to prove either one.

Enforcement. Enforcement has two related dimensions: incentives and compliance. The stronger the incentives, the better the compliance, but the two dimensions involve different considerations. The increasingly rapid pace of technological innovation and the diversity of the innovations have made it difficult to apply many of the older enforcement approaches. Newer approaches have emphasized economic incentives and flexibility. Liability has been used as the major incentive in one U.S. waste law (the Comprehensive Environmental Response, Compensation, and Liability Act of 1980), and it might be possible, for example, to make manufacturers legally liable for failure to develop an SP or for any adverse consequences that could reasonably have been foreseen but were not included in the plan.

A downside to using liability and litigation in implementing regulatory oversight is that government employees might have to spend large amounts of time giving testimony in court, making depositions, and participating in litigation in other ways. This might seriously affect their ability to perform their primary duties. Cap-and-trade programs, such as the one the federal government uses to regulate sulfur dioxide emissions from power plants, have been proposed as a substitute for much of the existing pollution control structure. Effluent fees and charges also have been used in a few situations and have been suggested as an approach that could be used more widely. Whether these kinds of approaches could be used for oversight of useful products (as contrasted with wastes) is not clear, and at the least, caution must be exercised when proposing that incentives developed for curbing wastes be applied to useful products.

Insurance is another incentive that can be important. It can be used either negatively or positively. Negatively, one insurance company has already refused to insure for any damage connected with nanotechnology, citing the lack of adequate risk information. If other companies follow suit, this could be a major incentive for more research and more testing of products by private firms. Insurers could deny insurance to manufacturers that do not have an SP. On the positive side, insurance could be given to manufacturers against tort suits if the manufacturer had an adequate SP and had implemented that plan, and the tort suit covered a subject that was included in the plan.

With respect to compliance, the key question probably is the extent to which one can rely on voluntary compliance. The answer depends on the cultural context. In the United States, oversight in many contexts has shown voluntary compliance to be undependable. Legally enforceable requirements, vigorously implemented, are necessary to deal with the usually small, but important, percentage of firms that are not good corporate citizens.

International cooperation. The combination of a worldwide economy and near-instantaneous global communication has made technology oversight an international issue. Every oversight function, from research to enforcement, now has important international dimensions. The challenge is how to embody the international dimensions in effective institutions. The European Union oversees many issues related to technology within its member nations. The Organization for Economic Cooperation and Development (OECD), which includes most of the industrialized nations, has taken a variety of initiatives related to new technology. It has agreed to test 14 generic nanomaterials for health and environmental effects, and it has established a database for sharing research information on potential adverse effects of manufactured nanomaterials. The United Nations (UN) also has several components relevant to oversight, including the World Health Organization (WHO), the UN Environment Programme (UNEP), and the International Labor Organization. In addition, many nongovernmental international organizations, including international trade associations and mixed public/private organizations, such as the International Organization for Standardization, play a part in oversight efforts.

In the long run, an international regime for product oversight may develop to match the international trade in products. At the least, U.S. and European regulatory approaches should be made consistent. In the interim, the emphasis should be on information sharing. At least three types of information should be made available internationally: research results on adverse effects of a technology; standards, regulations, and other oversight policies, as well as decisions applied to a product or technology; and reports of any adverse health or environmental effects that occur and could be attributed to a product. The OECD has made a start on the first two. The third is an important function that needs to be supported, perhaps by a joint effort of WHO and UNEP. An international system for reporting adverse effects would have to draw heavily on existing surveillance systems. The current worldwide economic crisis and the collapse of the Doha Round of international trade talks have made the future of all international efforts uncertain. One outcome of the current crisis could be a stronger set of international institutions, even perhaps including the basis for an internationalized system for dealing with new technologies and products.

Public involvement. Transparency should be the hallmark of oversight activities. Without it, the public interest tends to be submerged beneath the interests of bureaucrats, politicians, and special interests. Transparency becomes even more important in the context of new technologies, because if the public senses that secrets are being kept and motives are being hidden, it may reject a new technology regardless of its benefits. As the International Risk Governance Council has noted, new technologies will require more public involvement because their “social, economic and political consequences are expected to be more transformative.” The challenge, as expressed by the UK’s Royal Commission on Environmental Pollution, “is to find the means through which civil society can engage with the social, political, and ethical dimensions of science-based technologies, and democratize their ‘license to operate’… a challenge of moving beyond the governance of risk to the governance of innovation.”

In the United States, the 21st Century Nanotechnology Research and Development Act, the law governing domestic nanotechnology research, requires the National Nanotechnology Coordination Office to provide “for public input and outreach to be integrated into the Program by the convening of regular and ongoing public discussions, through mechanisms such as citizens’ panels, consensus conferences, and educational events, as appropriate.” The National Science Foundation has experimented with some of these techniques, but overall, little effort has gone into implementing this part of the law.

In the context of new technology oversight, the public can be thought of as three groups: the insiders (such as industry representatives, nongovernmental organizations, academic experts, and labor union representatives), the somewhat informed general public, and the bystanders. The majority of the population falls in the category of bystanders. They do not know about or understand most new technologies, and they do not follow what the government does or says about them. However, even the bystanders may influence oversight through their role as consumers, and the products they buy may be influenced by the opinions of the insiders. A goal of U.S. public policy has been to move people from the bystander category to the informed category. This is consistent with a Jeffersonian view of democracy and is an important way of reducing the chances that the public will react against a technology based on propaganda or misinformation. How successful efforts to inform the public can be, what methods can be used, and how to draw the line between information efforts and propaganda remain important areas for study.

Government impact. How the government should influence the direction of new technology is a knotty question. The government exerts a major influence now through financial support for private R&D, appropriations for defense and other science-intensive government programs, and regulations (or the absence of regulations) on various activities. All of these actions are usually taken piecemeal, without any coherent strategy for the overall technological future of the world or even for the future of any particular technology. Consideration should be given to using “social impact statements” analogous to the environmental impact statements required of government projects. The statements would provide a vehicle for the public to learn about new technologies and for both the public and the government to consider what steps, if any, should be taken to maximize the beneficial impact of the technology and to minimize its adverse effects. Who would prepare the statements, when would they be prepared, what would be their scope and level of detail, and how they would be disseminated are all questions that need to be answered.

In addition, individual government agencies need to become more aware of their impact on technological development and of the impact of technologies on society. The foremost example is the military, which has pioneered the development of a large number of significant technologies, ranging from the pesticide DDT to the Internet. The Department of Defense should establish a Defense Technology Review Board to weigh the civilian as well as the military consequences of new military technology. Board members would have to be privy to all aspects of defense R&D. The board would provide advice to the military departments and to the president’s science advisor.

A remodeled oversight program would improve the government’s ability to handle almost all major environmental and consumer programs. For example, it would allow climate change research and modeling to be consolidated in a single agency under the research and monitoring functions. The same agency would be responsible for controlling greenhouse gases under the oversight function, and its head could formulate overall climate policy with the benefit of advice from the agency’s scientific and regulatory components.

It is also clear, however, that moving to such a program will not be easy. The political system operates incrementally except when faced with a crisis, and it is to be fervently hoped that no crisis arises with respect to nano- or any other technology. But over the long run, the political system also responds to models of what could or should exist. Goals and ideals, even if a sharp departure from the status quo, can influence the thinking of policymakers and the public. Many of the needed changes will take a decade or more to accomplish, but there is an urgent need to start thinking about them now.

Why Is This So Hard?

If everyone from T. Boone Pickens to Vinod Khosla to Steven Chu agrees that the world needs to develop affordable, low-carbon, efficient, and sustainable energy technologies, why do we have to spend so much time dithering about the design of research and development, demonstration, diffusion, and adoption programs? Why are governments and the private sector investing so little in energy research? Why are reliable experts such as Daniel Yergin writing that fossil fuels will dominate the energy economy for at least another two decades?

The articles in this issue provide some of the answers. First, the energy system is large, expensive, durable, and deeply embedded in the economy and the physical infrastructure. This very large ship will not turn on a dime, or even a few billion dimes. The inertia of sunk investment in the oil, gas, and coal industries; in gas stations and power plants; in buildings, lighting fixtures, and appliances; and in millions of jobs will not be overcome by even the most dazzling laboratory breakthroughs. Although broad macro-level support exists for energy innovation, vehement micro-level opposition stands ready to block or slow specific changes in the energy system.

Second, innovation isn’t easy. Although it is easy to call for a Manhattan Project to develop genetically engineered liquid fuels, flexible photovoltaic roofing materials, and passively safe nuclear power plants with proliferation-proof fuel recycling, complete success cannot be guaranteed. What is clean might not be affordable, what is affordable might not be sustainable, what is sustainable might not be scalable. Besides, the availability of materials, the distribution of wind and solar energy, the preferences of communities, not to mention the crotchety laws of thermodynamics can derail even the most promising dreams.

So don’t expect to find all the answers here. What you will find is thoughtful discussion of the many dimensions of the innovation process that must be taken into account when designing a research and development program, informed by people with extensive experience in the U.S. Congress, the World Bank, Harvard’s School of Engineering and Applied Science, Sandia National Lab, Bell Labs, the United Nations, the AFL-CIO, and industrial and financial services companies. This rich mix of experience is exactly what is needed to create an effective energy innovation program.

But the authors in this issue are a tiny piece of an impressive national brainstorming effort directed at unlocking the secrets of innovation and applying the lessons to government and private sector efforts. Although many share a desperate sense of urgency to push new technologies into the market, using our financial and intellectual resources efficiently is as important in designing energy innovation programs as it is in using energy. We should learn from the history of the last great energy groundswell, which followed the energy crisis of the 1970s.

I worked for renewable energy advocacy and industry groups during that period and was a cheerleader for many of the Carter administration’s initiatives. I learned that sound energy policy requires more than good intentions. My own ignorance of energy markets, the technology development process, the minefields of scaling up, material costs, manufacturing realities, and the financial ingenuity that enables people to turn incentives into scams made me a recruit in an army of true believers who did not give birth to a solar revolution. Instead, that period in energy policy is often remembered as the age of the ill-begotten Synthetic Fuels Corporation, which is ritualistically invoked by market purists as a reason why government should be kept at a safe distance from the energy business.

Unfortunately, in spite of compelling economic, environmental, and national security motivation to do something about energy, the private sector investment in energy innovation is anemic. And in recent decades, the federal government has not done much better. That seems to be changing with the Obama administration, so the next priority is to spend that money wisely. A wealth of high-quality analysis is being directed at this goal. Indeed, presidential science advisor John Holdren and energy secretary Steven Chu were leaders in this effort before they entered the government. Now they must work to see that the best of this analysis informs congressional action.

A survey of some of the group efforts that deserve to be heeded could begin at the National Academies, where the America’s Energy Future project, chaired by former Princeton University president Harold T. Shapiro, is a massive effort with dozens of expert committee members looking at the overall technology challenge as well as focused efforts on specific energy sources. Another influential effort has been organized by the Bipartisan Policy Center, a think tank founded by former senators Howard Baker, Tom Daschle, Bob Dole, and George Mitchell. Its National Commission on Energy Policy, launched in 2002 and originally co-chaired by John Holdren, has sponsored research by other organizations, such as the work that is the source of the article by Joel S. Yudken and Andrea M. Bassi in this issue and a review of energy innovation systems by the Consortium for Science, Policy and Outcomes at Arizona State University.

Innumerable other efforts are also at work. The Brookings Institution developed a proposal for a group of Energy Discovery-Innovation Institutes. The National Research Council’s Board on Science, Technology, and Economic Policy is providing guidance on the development of the solar electricity industry. The Information Technology and Innovation Foundation is exploring how lessons from other high-tech sectors can be applied to the renewable energy industry. The Council on Competitiveness is holding a National Energy Summit & International Dialogue.

Numerous universities have ambitious multidisciplinary efforts devoted to energy and innovation. A list of programs that deserve attention should include Harvard’s Energy Technology Innovation Policy group, the MIT Energy Initiative, Berkeley’s Energy and Resources Group, a number of programs in Carnegie Mellon’s Engineering and Public Policy program, Princeton’s Energy Group, and Stanford’s Global Climate and Energy Project. And of course the Department of Energy’s national labs have research programs in every conceivable energy source and technology.

An understandable concern about this onslaught of policy analysis is that we’ll end up with Hamlet directing our innovation efforts and that no practical progress will be made while we second-guess ourselves at every turn. But that is not a real danger, because the R&D is moving ahead at the same time. While some people at the universities and the national labs are pondering questions of institutional design and program priorities, many more of their colleagues are doing the essential science and engineering work that will build the foundation for new energy technologies. This work should not stop while we explore how to build the connections throughout the complex network of research, development, demonstration, and deployment, and then on into marketing, acceptance, and effective use.

It’s an enormously complex and critically important endeavor. Why would we expect it to be easy?

Forum – Fall 2009

Too few fish in the sea

Carl Safina’s review of how traditional fisheries management strategies have failed both the fish and the fishermen is right on target, as are his recommendations for new management strategies (“A Future for U.S. Fisheries,” Issues, Summer 2009). But I encourage all of us who think about the ocean to expand our thinking: Fishing isn’t the only activity that affects fish, and the National Marine Fisheries Service (NMFS) isn’t the only agency with jurisdiction over the ocean.

Traditional fisheries management has focused explicitly on fish, with plans addressing single species or groups of similar species. Traditional governance of resources has similarly focused on individual resources, ceding management of different resources to different agencies: fish to NMFS, oil to the Minerals Management Service, navigation to the Coast Guard. But increasingly, science is showing us that these seemingly disparate resources are delicately interconnected.

Discussion abounds in the scientific and popular media about dead zones and harmful algal blooms in the ocean and the nutrients from runoff that create them. Debate rages about the future of oil drilling and offshore renewable power sources in America’s energy future. Tensions and tempers continue to run high as the Navy develops sonar technology that could harm marine mammals. Congress is working up climate change legislation to reduce greenhouse gas emissions.

All of these activities affect the ocean and its inhabitants, and all of these activities affect one another. As we begin to move toward more integrated systems for managing fisheries, we must recognize the broader ocean and global ecosystem—including humans—and integrate it as well.

Toward that end, I have been working for nearly a decade in Congress on a piece of legislation: Oceans-21. It establishes a national ocean policy for the United States. It creates a governance system based on the notion that the multiagency approach we currently employ must be streamlined so we can continue to enjoy the many benefits the ocean provides us.

For too long we have reaped those benefits. We’ve harvested fish and extracted oil. We’ve hidden away our waste and pollution, pumping or dumping it offshore and out of sight. But the ocean’s bounty isn’t infinite, nor is its capacity to absorb our refuse. The time has finally come for us to recognize these truths and take action: We must learn to use the ocean in a sustainable way.

The ocean is the single largest natural resource on Earth and is critical to the habitability of our planet. It’s not just our fisheries that are at risk, but our very future.

REP. SAM FARR

Democrat of California

Co-chair of the House Oceans Caucus


Food for all

In “Abolishing Hunger” (Issues, Summer 2009), Ismail Serageldin presents a cohesive argument for action to abolish hunger while ensuring sustainable management of natural resources. I agree with the action he proposes, but I am concerned that such action will not be prioritized by those in charge. It is a long-standing lesson from historical developments and research that growth and development in agriculture are the key to successful economic growth and poverty alleviation in virtually every low-income developing country. That is so because most poor people are in rural areas and because agriculture usually is the best driver of general economic growth, generating $2 to $3 of general economic growth for every $1 of agricultural growth. Yet most African countries have failed to prioritize agricultural growth and development. Low-income developing countries with stagnant agriculture are almost certain to experience a stagnant economy and high levels of hunger and poverty.

The recent global food crisis is a warning of what can happen when the food system is ignored by policymakers. National governments must now make the necessary investments in public goods for agricultural development, such as rural infrastructure, market development, agricultural research, and appropriate technology. These public goods are essential for farmers and other private-sector agents to do what it takes to generate economic growth and reduce poverty and hunger within and outside the rural areas. Farmers and traders in areas without such public goods cannot expand production, increase productivity and incomes, reduce unit costs of production and marketing, and contribute to the eradication of hunger. When food prices increased during 2007 and the first half of 2008, farmers responded by increasing production. But virtually all the increase came from countries and regions with good infrastructure and access to modern technology. A large share of the world’s poor farmers could not respond. In fact, the majority of African farmers cannot even produce enough to feed their own families. They are net buyers of food and as such were negatively affected by the food price increase.

This has to change if hunger is to be abolished. The knowledge is available to do so. What is missing is the political will among most, but not all, national governments in both developing and developed countries. Large food price decreases from the mid-1970s to 2000 created a false complacency among policymakers and low incomes among farmers in developing countries. Both caused very low private and public investments in agriculture and rural public goods. The recent global food crisis changed all that; or did it? What we have seen is a great deal of conference activity, talk, and hand-wringing but very little action, except irresponsible trade policies and short-term policy interventions to protect urban consumers, who may be a threat to existing governments (remember food riots). The majority of the world’s poor, who reside in rural areas, are still being ignored, and in some cases exploited, and little investment is being made to produce more food, improve productivity, and reduce unit costs on small farms.

At each of several high-level international conferences dedicated to the food and hunger situation, promises were made for large amounts of money to be made available for long-term solutions. Most recently, the G8 meeting in Italy promised up to $20 billion for that purpose. “Up to” are key words. So far, only very small amounts of the money promised at the earlier conferences have in fact come forth, and much of what did materialize was merely transferred from other development activities instead of being additional funds. Will the $20 billion materialize and will governments in developing countries now begin to prioritize agricultural and rural development? If the answer to the second question is no, the global food crisis of 2007–2008 is going to look like child’s play compared to what is to come as climate change increases production fluctuations and makes large previously productive areas unproductive because of drought or floods, and as countries lose their trust in the international food market and pursue self-serving policies at the expense of neighboring countries, something that has already begun. The planet is perfectly capable of producing the food needed in the foreseeable future and eradicating hunger without damaging natural resources, but only with enlightened policies. The time to act is now.

PER PINSTRUP-ANDERSEN

H. E. Babcock Professor of Food, Nutrition and Public Policy

Cornell University

Ithaca, New York


Better science education

Reading Bruce Alberts’ article was nothing if not painful (“Restoring Science to Science Education,” Issues, Summer 2009). More than 40 years ago, I was deeply involved in the work of PSSC Physics, the Elementary Science Study, and the Introductory Physical Sciences program. The work of some of the nation’s most gifted scientists, mathematicians, and teachers, these instructional programs were brilliant responses to the very challenges that Alberts correctly describes. Those programs are now gone, replaced by precisely the kind of standards, materials, and tests they were themselves designed to replace.

But Alberts is right. We should be discouraged. There is a developing consensus that the instructional system is a shambles. Growing numbers of people understand that the standards we have been using—even those of states like California that have been widely admired—suffer from all the crippling defects Alberts catalogues; that American-style multiple-choice, computer-scored tests will never adequately capture the qualities of greatest interest in student performance; and that the materials we give our students stamp out wonder and competence and produce instead an aversion to science. But there is a chance that we might get it right this time.

We have been benchmarking the performance of the countries with the most successful education systems for more than two decades. Unlike the United States, many use what we have come to call board examination systems. At the high-school level, these systems consist of a core curriculum, each course in which is defined by a well-conceived syllabus emphasizing the conceptual foundations of the discipline, deep understanding of the content, and the ability to apply those concepts and use that deep content mastery to solve difficult and unfamiliar problems. Each course ends with an examination that calls on the student to go far beyond recall of facts and procedures to demonstrate a real analytical understanding of the material and the ability to use it well. The grades are also based on substantial student projects undertaken during the year, involving work that could not possibly be done in a timed examination. Most of the questions on the exams are essay questions, and most of the scoring is done by human beings.

My guess is that if Alberts looked carefully at the curriculum and exams produced by the best of the world’s board examination organizations, he would agree that this country would be well served by simply asking our high schools to offer them to their students and then training their teachers to teach those courses well. This is a solution that is available today and, if implemented, could vault the science performance of our high-school students from the bottom of the pack to the top, especially if we were to prepare our students in elementary and secondary school for the board examination programs they should be taking in high school.

MARC TUCKER

President

National Center on Education and the Economy

Washington, D.C.


Nurturing sustainability

Fully half of the “five-year plan” articles in the Summer 2009 edition of Issues called for research to advance various aspects of sustainable development. Unasked was the question, “Who is going to do the work?”

The present system for training and nurturing young scientists is almost certainly not up to the job. To foster “The Sustainability Transition” outlined by Pamela Matson, we need researchers capable not only of (i) doing very demanding cutting-edge science, but also of (ii) working with decisionmakers and other knowledge users to define and refine relevant questions; (iii) integrating the perspectives and methods of multiple disciplines in their quest for answers; and (iv) promoting their inevitably incomplete findings in the rough-and-tumble world of competing sound bites and conflicting political agendas. But such skills are neither taught in most contemporary science curricula nor rewarded in the climb up most academic ladders.

The challenge of doing the work to harness science for sustainable development is compounded by the fact that there is so much work to be done. Elinor Ostrom has argued forcefully that the dependence of human/environment interactions on local contexts means that we must move beyond panaceas in our quest for effective sustainability solutions. This also means, however, that we need lots of scientists and engineers working close to the ground in order to design interventions appropriate to particular places and sectors.

To be sure, a number of initiatives have been launched during the past several years that are beginning to address the human capital needs of harnessing science for a sustainability transition. As documented on the virtual Forum on Science and Innovation for Sustainable Development, an increasing flow of use-inspired fundamental research on sustainability is finding its way not only into top journals but also into effective practice at scales from the local to the global. Elements of the training needed to help a new generation of scholars contribute to this flow are increasingly available from the undergraduate to the postdoctoral level, backed by a growing number of novel fellowships. New programs, centers, and even schools of sustainability science and its relatives have been springing up around the world. But as encouraging as these signs of progress surely are, they almost certainly remain wholly inadequate to the challenges before us. An essential complement to the “five-year plans” sketched in Issues’ summer edition must therefore be a systematic and comprehensive effort to characterize, and then to build, the workforce we will need to carry them through. The President’s Office of Science and Technology Policy, together with the National Academies, ought to give high priority to providing U.S. leadership for such an initiative in building the science and technology workforce for a sustainability transition.

WILLIAM C. CLARK

Harvey Brooks Professor of International Science, Public Policy and Human Development

Kennedy School of Government

Harvard University


Pamela Matson’s article provides a compelling vision of the connection among science, nature, and the changes brought by a globalizing economy. This is a connection often described in terms of crises (climate change) or unbounded opportunity (the Internet). Matson takes a measured, sensible view, drawing attention to long-term transitions already in progress—in population, the economy, and the rising human imprint on nature. We do not yet know whether these transitions will lead toward sustainability: a durable, dynamic relationship with the natural world, in which the activities of one generation enhance the opportunities of future generations and conserve the life support systems on which they depend.

Examples from philanthropy show how science is essential to the search for sustainability. The Packard Foundation is probing consumption by supporting and evaluating the certification of products such as responsibly harvested fish. Does such an approach work to curb irresponsible harvest? Will organic food or carbon offsets scale up to dominate their markets? These are questions of sustainability science.

Together with the Moore Foundation, Packard has supported large-scale, science-driven monitoring in the California Current. The Partnership for Interdisciplinary Studies of Coastal Oceans has provided, in turn, much of the scientific basis for declaring marine reserves in California waters. Reserves are now under study in Oregon as well. Ecosystem-scale sustainability science has been developing the essential methods for integrating oceanography with nearshore biology. Ahead lies the exciting challenge of demonstrating, in partnership with government, the value of these approaches as the oceans acidify and warm, currents and upwellings change, and sea level rises. Without surveillance, humans will fail to see the warning signs of the thresholds that we and other living things are now crossing and must learn to navigate.

For more than a decade, the Aldo Leopold Leadership Program has identified leading young environmental scientists; provided them training in science communications; and strengthened their voice in regional, national, and international policy debates. This too is sustainability science, in service of improved governance.

These three initiatives all aim at forging durable links between knowledge and action: science guided by the needs of users, relevant and timely, legitimate in a mistrustful world, and credible as a guide to difficult, consequential social choices.

Private donors’ means are meager in comparison to the needs of a sustainability transition. We and our grantees in the nonprofit sector need to collaborate far more and more effectively with business and government. As Matson urges, there are institutions to be engaged—not least the National Academies and the global resources of science. It is not that there is no time. But there may not be enough time, unless we seize this day.

KAI N. LEE

Conservation and Science Program Officer

David and Lucile Packard Foundation

Los Altos, California


Energy nuts and bolts

Vaclav Smil’s “U.S. Energy Policy: The Need for Radical Departures” (Issues, Summer 2009) is filled with wise insights and judgments. However, the use of vague language detracts from the article’s value for public policy (for example, “vigorous quest,” “relentless enhancement,” “fundamental reshaping,” “responsible residential zoning,” and “serious commitment”). These kinds of broad, unbounded adjectives appear in numerous reports on energy policy nowadays. They give the illusion of meaningful policy prescription, when in fact nothing useful can follow without the hard work of drawing boundaries on the scope, cost, and timing of programs—work that presumably is thought to be so trivial that it can be left to legislative bodies, nongovernmental organizations, and corporations to fill in the details.

Admittedly, there is only so much an analyst can do in one brief article. And laying out an abstract goal for Congress and other leaders, as Smil does so well, is in itself useful, particularly when the goal has a realistic time frame. However, we do not live in a world where wise people sit down together, think about what would be best, and join hands to put wise proposals into effect. We live in a messy, contentious world, where different stakeholders with different ideological biases, philosophies, and personal interests argue with each other, often to the point of paralysis. Programs are put into place by one party, only to be replaced by the other party after subsequent elections. A major challenge is to develop and negotiate energy transition policies that the great majority of stakeholders can accept, policies that are likely to survive party transitions. This is not just the job of legislators and lobbyists. Academics and other outside observers knowledgeable about energy issues could usefully spend time developing policies that are optimal from the perspective of negotiated conflict resolution, rather than from an individual’s view of an optimal world.

JAN BEYEA

Core Associate

Consulting in the Public Interest

Lambertville, New Jersey


Science politicization

Daniel Sarewitz’s “The Rightful Place of Science” (Issues, Summer 2009) is naive. Can he actually believe that U.S. presidents rely on science to drive policy, rather than using science to promote political ends? The White House Office of Science and Technology Policy, for example, exists less for providing advice and guidance than as a means to support and promote the policy views of the president.

By acknowledging President Obama’s support for increased spending on scientific research as the price to pay for congressional passage of his economic stimulus package, Sarewitz does show some recognition of the link between science policy and politics. He makes reference to the attitude toward science of several presidential administrations since the Eisenhower years, but inexplicably fails to mention President Bill Clinton or his science and technology czar, Al Gore. One wonders whether Sarewitz is amnesic about that period, or as a loyal Democrat and science policy wonk is just embarrassed by it. In spite of the blunders and clumsy manipulation by George W. Bush and his minions, arguably the blatant and heavy-handed politicization of science by Vice President Gore was the most egregious of all.

In government, the choice of personnel is often tantamount to the formulation of policy, and as vice-president, Gore surrounded himself with yesmen and anti-science, anti-technology ideologues: presidential science adviser Jack Gibbons; Environmental Protection Agency (EPA) chief and Gore acolyte Carol Browner, whose agency’s policies were consistently dictated by environmental extremists and were repeatedly condemned by the scientific community and admonished by the courts; Food and Drug Administration (FDA) Commissioner Jane Henney, selected for the position as a political payoff for politicizing the agency’s regulation while she was its deputy head; State Department Undersecretary Tim Wirth, who worked tirelessly to circumvent Congress’s explicit refusal to ratify radical, wrong-headed, anti-science treaties signed by the Clinton administration; and U.S. Department of Agriculture Undersecretary Ellen Haas, former director of an anti-technology advocacy group, who deconstructed science thusly, “You can have ‘your’ science or ‘my’ science or ‘somebody else’s’ science. By nature, there is going to be a difference.”

Many Obama appointees who will be in a position to influence science- and technology-related issues are overtly hostile to modern technology and the industries that use it: Kathleen Merrigan, the deputy secretary of agriculture; Joshua Sharfstein, deputy FDA commissioner; Lisa Jackson, EPA administrator; and Carol Browner (she’s baaack!), coordinator of environmental policy throughout the executive branch. None of them has shown any understanding of or appreciation of science. Browner was responsible for gratuitous EPA regulations that have slowed the application of biotechnology to agriculture and to environmental problems; Jackson worked in the EPA’s notorious Superfund program for many years; and Merrigan relentlessly promoted the organic food industry, in spite of the facts that organic foods’ high costs make them unaffordable for many Americans, thereby discouraging the consumption of fresh fruits and vegetables, and that their low yields waste farmland and water. While a staffer for the Senate Agriculture Committee, Merrigan was completely uneducable about the importance of genetically improved plant varieties to advances in agriculture.

As Sarewitz says, the “rightful place” of science is hard to find. The Obama administration’s minions are not likely to take us there, and to pretend otherwise is disingenuous.

HENRY I. MILLER

The Hoover Institution

Stanford University

Stanford, California


Paying for pavement

Martin Wachs’ “After the Motor Fuel Tax: Reshaping Transportation Financing” (Issues, Summer 2009) paints a clear picture of what ails the current U.S. system of revenue-raising that uses motor fuel taxes and other indirect user fees to fund roads and mass transit needs. The article also points to a new direction: pricing travel more fairly using a flexible approach that is based on vehicle-miles traveled (VMT).

The past two to three decades have dramatically exposed the weaknesses of the motor fuel tax, as substantial gains in fuel economy, the development of hybrid and other alternative-fuel vehicles, and a concerted effort to reduce the United States’ overreliance on fossil fuels have undercut the revenue it can raise. We now find ourselves in the questionable position of relying on taxing the very fuels whose consumption we are trying to curtail or eliminate. When we add the growth in truck VMT and overall travel and the eroding effects of inflation on a federal fuel tax that has not been raised in more than 20 years, it is not surprising that the Federal Highway Trust Fund has experienced deficits and required periodic infusions from the General Fund, and will continue to do so.

The Federal Highway Trust Fund, which is based on the motor fuel tax, was set up more than 50 years ago to ensure a dependable source of financing for the National System of Interstate and Defense Highways and the Federal Aid Highway Program. Given the shortcomings described above, we can conclude that the motor fuel tax is no longer a “dependable source of financing” for the transportation system.

What is the alternative? Recent commission reports and studies point to distance-based charges or VMT fees as the most promising mid- to long-term solution to replace the fuel tax. The use of direct VMT fees can overcome most, if not all, of the shortcomings of the fuel tax. Furthermore, because VMT fees relate directly to the amount of travel, rates can be made to vary so as to provide incentives to achieve policy objectives, including greater fuel economy and the use of alternative-fuel vehicles, which the current fuel tax encourages. Rates could also vary to reflect factors such as vehicle weight, emissions levels, and time of day. For example, Germany’s national Toll Collect distance- and global positioning system (GPS)–based system for trucks varies the basic toll rate as a function of the number of axles and the truck’s emissions level.

Non-GPS technology is currently available that makes it possible to conduct a large-scale implementation of distance-based charges within two to five years. This approach would use a connection to the vehicle’s onboard diagnostic port, installed in all vehicles sold since 1996, to obtain speed and time. An onboard device would use these inputs to calculate distance traveled and the appropriate charge, which would be the only information sent by the onboard unit to an office for billing purposes. This approach goes a long way toward addressing public concern about a potential invasion of privacy. (There is a widespread perception that a GPS-based VMT charging system “tracks” where a driver is. This is an unfortunate misconception.)
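
To make the onboard arithmetic concrete, here is a minimal sketch in Python of the kind of calculation such a device could perform: distance is accumulated from periodic speed readings, and the charge is the product of miles traveled, a base per-mile rate, and optional policy multipliers for weight, emissions, or time of day. The rates and readings are hypothetical, and this is an illustration of the general approach rather than a description of any deployed system.

# Hypothetical sketch of an onboard VMT-fee calculation; all rates and
# readings below are invented for illustration.

def miles_traveled(speed_samples_mph, interval_seconds):
    # Approximate distance by summing speed x elapsed time for each sample.
    hours_per_sample = interval_seconds / 3600.0
    return sum(speed * hours_per_sample for speed in speed_samples_mph)

def vmt_charge(miles, base_rate_per_mile, weight_factor=1.0,
               emissions_factor=1.0, time_of_day_factor=1.0):
    # Distance-based charge with optional policy multipliers.
    return miles * base_rate_per_mile * weight_factor * emissions_factor * time_of_day_factor

samples = [30, 35, 40, 45, 45, 40]                    # one minute of speed readings, every 10 seconds
miles = miles_traveled(samples, interval_seconds=10)
charge = vmt_charge(miles, base_rate_per_mile=0.02,   # 2 cents per mile (hypothetical)
                    emissions_factor=1.2,             # surcharge for a higher-emission vehicle
                    time_of_day_factor=1.5)           # peak-period multiplier
print(f"Miles: {miles:.3f}, charge: ${charge:.4f}")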

Given the crisis that road and transit funding is facing, we strongly endorse Wachs’ “hope that Congress will accept the opportunity and begin specifying the architecture of a national system of direct user charges.”

LEE MUNNICH

Director, State and Local Policy Program

FERROL ROBINSON

Research Fellow

Hubert H. Humphrey Institute of Public Affairs

University of Minnesota

Minneapolis, Minnesota


Goal-oriented science

Lewis Branscomb’s “A Focused Approach to Society’s Grand Challenges” (Issues, Summer 2009) offers a good measure of astute analysis and worldliness to research and innovation policymaking in the Obama administration. Yet there is an irony underlying the “Jeffersonian science” approach that, if not dealt with head-on, may derail even the most intelligent and wise of prescriptions.

In my reading of the ongoing attempts to re-label (rather than actually reorient) publicly sponsored knowledge and innovation activities, Jeffersonian science has much in common with Donald Stokes’ earlier “Pasteur’s Quadrant,” as well as with the older, more prosaic, and more institutionally grounded “mission-oriented basic research.” Whatever you call it, this kind of research is not the pure, curiosity-motivated stuff that the National Science Foundation (NSF) was supposed to fund in the Endless Frontier model, but rather the stuff that the Departments of Defense, Commerce, Health and Human Services, and, eventually, Energy were supposed to fund.

Nevertheless, it seems—and here is the irony, as well as the way in which both Jeffersonian science and Pasteur’s Quadrant conceal it—that NSF has found better ways of eliciting and sponsoring this kind of research than the mission agencies have. NSF has done this through several mechanisms, including the increasing importance it has placed on centers, such as the Nanoscale Science and Engineering Centers that are the centerpiece of its involvement in the National Nanotechnology Initiative (full disclosure: I direct one such center); its commitment, however partially realized, to evaluating all peer-reviewed proposals according to a “broader impacts” criterion as well as an “intellectual merit” criterion; and its support (again, contrary to its initial Bushian conception) of a robust set of research programs in the social sciences, in particular in the ethics, values, and social and policy studies of science and technology.

This suite of mechanisms has meant, in my experience, a much more receptive atmosphere at NSF than at the mission agencies for engaging in the kind of transdisciplinary collaborations among social scientists, natural scientists, and engineers that are necessary to understand and manage, as Branscomb delineates, policies to address the “institutions and organizations that finance and perform R&D; … attend to the unanticipated consequences of new technologies; and manage matters related to such issues as intellectual property, taxes, monetary functions, and trade.”

I am thus skeptical that the “four-step analytical policy framework” that Branscomb advances from Charles Weiss and William Bonvillian will have much success in the mission agencies unless it is wed to a new understanding in those agencies; for example, that the “weaknesses and problems [emerging energy technologies] might face in a world market” have to do with many more dimensions than scientific novelty, engineering virtuosity, and economic efficiency, or that the “barriers to commercialization” that must be “overcome” in part involve how innovation happens and who does it and for and to whom, as well as what attitudes toward innovation people and institutions have.

A Jeffersonian science program that fulfills Branscomb’s ambitions would need much more than “greater depth scientifically.” It would need robust incentives for understanding the multiple disciplinary and societal dimensions of innovation and a merit review process, including written solicitations, program officers, and peer reviewers, that embraced the Jeffersonian ideal and was not beholden to outmoded ideas of research and development.

DAVID H. GUSTON

Professor, School of Politics and Global Studies

Co-Director, Consortium for Science, Policy and Outcomes

Director, NSEC/Center for Nanotechnology in Society

Arizona State University

Tempe, Arizona


China’s Future: Have Talent, Will Thrive

When China’s leaders surveyed their development prospects at the onset of the 21st century, they reached an increasingly obvious conclusion: Their current economic development strategy, heavily dependent on natural resources, fossil fuel, exports based on cheap labor, and extensive capital investment, was no longer viable or attractive. For a range of pressing competitiveness, national security, and sustainability reasons, they decided to shift gears and as a result have embarked on an effort to move their country in the direction of building a “knowledge-based economy” in which innovation and talent are positioned as the primary drivers of enhanced economic performance. Their actions are driven by a rather pervasive sense of urgency about the need for China to catch up more quickly with the rest of the world, especially in terms of science and technology (S&T) capabilities. In fact, the top echelon of Chinese leaders, foremost among them both President Hu Jintao and Premier Wen Jiabao, has recognized that solving the country’s talent issue is crucial to China’s ability to cope with an increasingly competitive international environment; build a comprehensively well-off and harmonious society; and, more important, consolidate and fortify the ruling base of the Chinese Communist Party (CCP). China’s leaders further understand that the successful creation and growth of a knowledge-based economy requires a greatly enhanced talent pool composed of high-quality scientists, engineers, and other professionals.

Indeed, in fulfilling the policies of “revitalizing the nation with science, technology, and education” (kejiao xingguo) and “empowering the nation through talent” (rencai qiangguo), China has turned out millions of college students since 1999, especially in science and engineering, and more recently in management, to meet the country’s new innovation imperatives. Government officials have tried to upgrade the existing Chinese S&T workforce by dispatching many talented individuals overseas for advanced training and research experience to expose them to international standards of world-class science and know-how. Today, it almost has become common practice, even for a large number of Chinese undergraduates, along with their counterparts at the graduate level, to obtain foreign study experience. Exposure to the outside world, particularly Western education and modern technology, appears to have stimulated entrepreneurial activities among many returning Chinese S&T personnel as they seek to harness their newly acquired know-how and convert it into new, commercially viable products and services. China also has encouraged multinational corporations (MNCs) to move up the value chain in their China operations, upgrading their manufacturing activities and adding a substantial R&D capability to their local presence in China. Meanwhile, MNCs are on the lookout for Chinese brainpower, thus creating the context for the possible emergence of talent wars in China.

This new, very positive orientation toward talent and high-end knowledge seems to have begun to pay off, as China in the early 21st century is significantly different from the China that existed when the reform and open-door policy started in the late 1970s or even Chinese society in the early 1990s when China tried to step out of the shadows of the Tiananmen Square crackdown. In fact, it is increasingly clear that China is now not only better positioned but also steadily more confident about becoming a true economic and technological power on both regional and global levels. Moreover, it also is obvious to Chinese political and scientific leaders that the country’s recent progress and future potential can be directly attributed to the increasing productivity and performance of the nation’s emerging talent assets. The rise of China as a global technological power will have important consequences for the country internally as well as externally.

The new China

Numerous indicators suggest that China is well on its way to becoming an established global technological power. China has significantly increased its gross expenditure on R&D (GERD) to U.S. $49 billion in 2007, still far below the United States’ level of U.S. $368 billion. China’s GERD was 1.49% of its gross domestic product in 2007, and the official goal is to raise this to the developed-country level of 2.5% in about a decade—though this may be an overly ambitious target. Investment in R&D has been growing almost twice as fast as the overall economy. In addition, China’s share of publications in the international S&T literature has been increasing. These and other similar indicators reveal a clear picture that China no longer sits on the sidelines of world S&T affairs. The ability to sustain this effort, however, as well as the ability to leapfrog ahead in a number of key areas, are heavily dependent on the ability of Chinese leaders to spur on the sustained development, deployment, and mobilization of a high-quality, high-performance cohort of scientists, engineers, and R&D professionals who can position China at the cutting edge of global innovation and scientific advance. China today possesses the largest human resources in S&T in the world (3.13 million scientists and engineers at the end of 2007) and the second-largest “army” in terms of scientists and engineers who are devoted to R&D activities (1.74 million full-time equivalents as of the end of 2007), and the country remains the largest producer of S&T students at the undergraduate as well as at the doctoral level. Even taking into account the unevenness of quality across the spectrum of Chinese higher education, it is clear that China’s top institutions have the ability to produce graduates that are as well-prepared and technically competent as their Western peers coming out of similar top-tier institutions in the United States and Europe.
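
As a rough sense of what the intensity target implies, the arithmetic below (a sketch that assumes a hypothetical GDP growth rate, since none is cited) shows how much faster than the overall economy R&D spending would have to grow for GERD to rise from 1.49% to 2.5% of GDP over roughly a decade; the result is broadly consistent with the observation that R&D investment has been growing almost twice as fast as the economy.

# Back-of-the-envelope check on the GERD intensity target; the assumed
# GDP growth rate is hypothetical, chosen only for illustration.
years = 10
intensity_now, intensity_goal = 0.0149, 0.025
assumed_gdp_growth = 0.08   # hypothetical annual GDP growth

required_rd_growth = (intensity_goal / intensity_now) ** (1 / years) * (1 + assumed_gdp_growth) - 1
print(f"R&D spending would need to grow roughly {required_rd_growth:.1%} per year, "
      f"versus an assumed {assumed_gdp_growth:.0%} for the economy as a whole.")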

To say that China will soon emerge as one of the world’s leaders in global S&T affairs, however, does not necessarily indicate that a growing confrontation between China and the incumbents is on the horizon. Instead, when viewed from the perspective of talent, the most probable and desirable scenarios for the future are characterized by a much more positive-sum set of possibilities and outcomes. First, although China has yet to become a full-fledged epicenter of technological innovation, its domestic scientists have become actively involved in the production of new scientific knowledge on a global scale. One of the key indicators that underlie this orientation is the rather impressive Chinese S&T publication record. In 2007, Chinese papers accounted for 7.5% of the total in journals catalogued by the Science Citation Index (SCI), and Chinese scientists and engineers contributed more than 9.8% of the world’s S&T literature. In addition, with Chinese science steadily moving toward the international frontiers of scientific research, more and more foreign scientists have sought collaborative opportunities with their Chinese colleagues. Between 1996 and 2005, for example, the number of China’s international collaborative papers doubled every 3.81 years, slightly faster than the total number of Chinese papers catalogued by the SCI, which doubled every 3.97 years. Between 2001 and 2005, China’s leading S&T collaborators, measured by the number of coauthored papers catalogued by the SCI, include the United States, Japan, Germany, the United Kingdom, Australia, Canada, France, and South Korea, all technologically advanced nations. China’s investment in S&T infrastructure in the past decade or so has been extensive, and many new big-science state-of-the-art facilities are being constructed. These facilities have made China an attractive hub for research, not only providing a foundation for expanded international cooperation but also offering opportunities to produce first-rate achievements that will attract international attention. In other words, China’s scientific community has become steadily more prominent in international networks of new knowledge creation.
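
For readers who want to translate those doubling times into growth rates, the conversion is simple arithmetic; the snippet below merely restates the two figures cited above as implied annual growth rates.

# Convert a doubling time T (in years) into an implied annual growth rate.
for label, t in [("international collaborative papers", 3.81), ("all Chinese SCI papers", 3.97)]:
    rate = 2 ** (1 / t) - 1
    print(f"{label}: doubling every {t} years is about {rate:.1%} per year")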

For countries such as the United States, collaboration with China offers possibilities for redefining the R&D world of the 21st century. Finding solutions to pressing global problems of scientific significance, such as energy and climate change, avian flu, and HIV/AIDS, will be impossible without China’s participation. Meanwhile, participation in these globally oriented, cross-functional, and cross-cultural knowledge networks represents an important learning platform for China’s scientific community. As the world has moved steadily toward a more collaborative model of research and transborder R&D cooperation has become more commonplace, China’s growing involvement has served as a valuable mechanism for a new form of technology transfer. By bringing its extensive pool of brainpower into these vibrant networks of new knowledge creation, China has been able to move up the R&D value chain to become a more critical participant and partner. As these knowledge networks increasingly populate the world of R&D and innovation, China’s broader and deeper involvement will serve as a strategic mechanism for bringing its own S&T capabilities ever closer to the global S&T frontier.

Despite the problems China will face in implementing the comprehensive 15-year S&T development plan announced in early 2006, there is little doubt that the country is becoming an important player in world S&T innovation. Both countries and corporations will be affected by China’s S&T development trajectory and face the challenge of devising competitive and collaborative strategies to prepare for and take advantage of these changes in China. Not surprisingly, many governments throughout the world are entering into, expanding, or deepening S&T relations with China. Moreover, among the growing number of MNCs focused on tapping into the global talent pool, more and more are establishing R&D centers in China. In addition, there is a rush among universities and research centers in Asia, Europe, and North America to build relationships with Chinese educational and research institutions. The changing complexion of China’s S&T relations and cooperative linkages reflects the deepening of the country’s talent pool and its strengthening S&T capabilities. To ignore or underestimate the potential of these developments is to miss one of the most important strategic transformations and structural changes in the global economy of the 21st century.

Second, foreign direct investment (FDI) in China has been moving steadily up the value chain over the past three decades. Since China joined the World Trade Organization in late 2001, many MNCs have decided to relocate to China more and more of their high-end business operations, including the establishment of R&D laboratories. According to the Chinese Ministry of Commerce, the country is now host to more than 1,200 foreign R&D centers, including those run by IBM, Cisco, GE, Procter & Gamble, Nokia, AstraZeneca, Panasonic, and Samsung. The purpose of these R&D laboratories is not simply to adapt products for local markets but also to assemble local talent to carry out genuine R&D activities that can benefit the MNC’s global as well as regional competitive standing. For example, Microsoft Research Asia, the successor to Microsoft Research China, established in 1998, has expanded into an organization that employs more than 350 researchers and engineers, most hired locally; has published 3,000 papers in top international journals and conferences; and has achieved many important, albeit in most cases incremental, technological breakthroughs. Technologies from Microsoft Research Asia have not only made their way into Microsoft’s global products but have also influenced the broader information technology (IT) community. MNCs now see enhanced prospects for migrating more of their outsourcing operations to China. Foreign firms also have expanded their collaboration with Chinese institutions of learning on various research fronts.

Third, China’s S&T talent has the ability to help support scientific enterprises in other countries. The quality of Chinese graduates, especially those at the top universities, is well known internationally. Chinese graduates, including many of the best and brightest, are working all over the world as part of a globally networked S&T diaspora. In the past, these were the people who were least likely to return home immediately after finishing their foreign stint. Some 62,500 Chinese-origin Ph.D. holders were in the U.S. workforce in 2003, and it is certain that other highly qualified Chinese-origin scientists and engineers are now staffing universities, research laboratories, and enterprises in many technologically advanced countries.

Fourth, as Chinese companies become more internationally active and expand their operations beyond the physical borders of the People’s Republic of China (PRC), they will extend their overseas technological reach, not only deploying their skilled research personnel but also looking for new opportunities to hire local scientists and engineers. The acquisition of IBM’s personal computer (PC) business by Lenovo represents the type of effort by which a Chinese-based firm, albeit a globally oriented one, secures access to critical knowledge, high-end talent, and advanced management know-how. Huawei, China’s leading telecommunications firm, is another example of a PRC company that has begun to globalize, establishing R&D hubs and listening posts around the world to ensure that the company stays abreast of the most cutting-edge developments in hardware and software. With U.S. $2 trillion in foreign exchange reserves, China is likely to become much more active in global mergers and acquisitions and thus much more prominent in international S&T affairs.

China’s talent challenge

Although China enjoys the benefits of a large and increasingly capable talent pool, four factors are still contributing to a serious talent challenge for the country. First, the pain and aftereffects of the Cultural Revolution, especially its damage to higher education, continue to be felt more than three decades later. The country faces a serious shortage of seasoned specialists and professional R&D managers. Second, the numerous overseas study and research initiatives that have occurred in conjunction with China’s open-door policy have resulted in some critical brain drain. Although there continue to be positive links between those who have remained abroad and China’s domestic S&T community, the loss of some extremely talented people has hurt China’s S&T enterprise. Third, as Chinese society ages, the changing demographic composition of the scientific community has begun to constrain the potential for future progress, especially as the limited ranks of more experienced scientists and engineers enter retirement. And finally, the majority of Chinese S&T graduates still are not up to the international quality standards required to meet the steadily increasing skill demands of the overall economy. For example, almost half of the more than one million students who graduated in engineering in 2008 completed programs only three years in length, and their skills are not on par with those of graduates of four- or five-year engineering programs at home and abroad.

Numerous analysts have cautioned against taking the publicly available official Chinese statistics on talent supply at face value. The rapid expansion of higher education since 1999 and the emphasis on the number of graduates have raised serious questions about quality. One problem is that the dramatic increase in student admissions in China in recent years has not been matched by a corresponding increase in government investment in higher education. As a result, China’s overall investment in education still ranks among the lowest in the world, further complicating the task of growing its S&T workforce and improving its quality.

The government estimates that at least one-third of 2007, 2008, and 2009 graduates have had serious problems finding employment matched to their field of study, and the questionable quality of their skills is one reason. The actual situation may be even worse, and unmet expectations are increasingly common among recent graduates. Whereas the Japanese developed the concept of “just in time” in manufacturing, in the field of talent, China’s emphasis has been on “just in case.” There seems to have been an unrealistic expectation among some Chinese officials that a quantitative leap in enrollments would yield substantial qualitative improvements across the available talent pool and an accompanying scientific or technological leap forward. It will clearly take more time than the current leadership anticipated for the desired kind of sustained, concerted advance in S&T and innovation to occur, a prospect that leaves Beijing increasingly uneasy in a world where innovation and technological advance have become the hallmarks of global leadership in economic, military, and even political terms.

Given the size of its population, China still lags behind most developed countries in terms of the number of researchers per capita. Indeed, using this metric, China has a long way to go before catching up with the talent situation in Korea, Russia, and Singapore. Of China’s 758 million individuals aged between 25 and 64 years in the labor force in 2005, only 6.8% had attained a tertiary level of education; across the Organization for Economic Cooperation and Development (OECD) countries, the average percentage is 26%.
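
Translating those shares into absolute numbers (my own illustrative arithmetic, not figures from the article) shows why China can lag badly on a per capita basis while still fielding an enormous workforce:

\[
0.068 \times 758\ \text{million} \approx 52\ \text{million tertiary-educated workers},
\qquad
0.26 \times 758\ \text{million} \approx 197\ \text{million at the OECD average rate}.
\]

Closing even part of that roughly 145-million-person gap is what makes the expansion and improvement of higher education so central to China’s talent strategy.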

At this point, it also is important to consider the demographic shifts that in years to come will influence both China’s high-end S&T talent pool and those of its counterparts around the world. China has started down the path toward becoming an aging society: by around 2017, the share of its workforce, including scientists and engineers, reaching retirement age is expected to surpass the share of the population in its college-bound years, and some expect that Chinese universities will begin to face an admissions challenge as early as 2010. As an aging society, China will move into a comparatively disadvantageous position vis-à-vis its chief competitor, India, which has a younger population. Accordingly, China must act quickly to expand and improve the ranks of the educated to create a qualified, highly competent talent pool capable of meeting the country’s goals. Otherwise, the goal of becoming “an innovative nation” by 2020 will not be realized easily.

The road ahead

With these challenges in mind, China could take several actions to better ensure that its talent pool evolves and grows. First, China will need to overcome the structural obstacles that hinder the effective training and use of the S&T workforce: closing the gap between classroom education and quickly evolving job requirements, attracting qualified high-end talent to assume leadership positions, providing continuous on-the-job training to upgrade the knowledge of the existing workforce, and so on. One of China’s distinctive attributes is that its second-tier provinces and cities are not necessarily inferior to the coastal regions in S&T development and education. Therefore, more attention should be given to Anhui, Hebei, and Shandong, along with Liaoning, Shaanxi, and Sichuan, in developing local talent as well as local S&T infrastructure. Shandong is geographically close to the Yangtze River Delta, and Hebei is close to Beijing and Tianjin, China’s newest development hot spot. Hefei city in Anhui province is home to the famous University of Science and Technology of China, one of the strongest S&T talent training centers in the country. Another city on the rise is Dalian, in northeast China, which aspires to become the “Bangalore of China,” a software and outsourcing base. These provinces and cities are busy creating a new array of incentives to become serious hubs of innovation and sources of opportunity for talented professionals. The three existing technological hubs of Beijing, Shanghai, and Shenzhen simply are not enough to fulfill China’s ambitions.

Second, China will need to further increase and deepen its investment in R&D, not only to absorb more scientists and engineers but also to raise the sophistication and efficacy of their activities. In view of the inefficiencies and frustrations associated with the use of talent by domestic institutions and enterprises, an increasing number of skilled young scientists and engineers and other professionals choose to work for foreign-owned businesses and joint ventures in China as they seek out their best career opportunities. Although unintentional, a new, perhaps no less problematic, internal brain drain to foreign-invested enterprises has started to occur, at least in the short term. This could slow the development of an indigenous innovation capability. Meanwhile, the more general underuse of the Chinese S&T workforce also has further opened the door for MNCs to outsource more of their operations to China, because those in the domestic talent pool are looking for better, more prestigious career opportunities and work experiences. Retention and effective use of the current talent pool are both extremely important tasks, because the group currently in place represents the reservoir of needed scientific, engineering, and managerial human capital that is indispensable for China’s future S&T development.

Third, China will need to foster and instill, more deeply and more broadly, a culture of creativity in its students, its S&T workforce, and its overall education and R&D environment. Rote learning has been blamed for the lack of creativity among many Chinese. Students are seldom encouraged to think outside the box, take risks, or tolerate failure as a means to progress. In keeping with the tradition of respecting age and experience, Chinese defer sincerely to their seniors; consequently, many students feel uneasy about challenging their teachers and other forms of authority. Therefore, Chinese education needs not only to reconfigure its curriculum to accommodate the evolving social, economic, legal, and political environment, but also to introduce a new pedagogical approach that stimulates more inventive and innovative thinking. This task must involve not only the formal education system but also the entire social system in China, a rather daunting undertaking in view of the cultural and sociopolitical complexities that must be addressed in the process.

In short, China’s top priority regarding the growth of its talent pool will be to lift the professional standards of its S&T workforce so as to transform the nation from simply the world’s largest manufacturer into a leading nation in innovation. Although this is, of course, a high-level strategic goal requiring long-term commitment and investment, Chinese leaders have to take concrete steps now to enhance the chances of substantially realizing China’s innovation goals in the future. One mechanism for facilitating such a transition is the increasing emphasis placed on securing the return of top scientists and engineers who continue to live and work abroad. No less a figure than the head of the Organization Department of the CCP Central Committee, Li Yuanchao, recently announced a series of concerted efforts to attract back a substantial number of senior Chinese experts from abroad in the hope that they can become a vanguard across the country to drive China to the next level in its S&T development. Missions are being sent from various provinces and cities across China to recruit persons of Chinese ethnic origin to leave their university positions and corporate jobs overseas and take on attractive, often well-compensated positions back in China. The “Thousand Talents Program” has brought back renowned scientists such as Xiaodong Wang, a Howard Hughes Medical Institute investigator at the University of Texas Southwestern Medical Center who was elected to the U.S. National Academy of Sciences in 2004 at age 41, and Yigong Shi, a chaired professor of structural biology at Princeton University.

Talent and China’s political development

Any analysis of China’s talent must take the political environment into account. The plight of intellectuals throughout the history of the communist regime has been inextricably linked with the ebb and flow of Chinese politics, most noticeably and decisively during the decade of the Cultural Revolution (1966–1976). The links run historically deep as well. For example, it was the intellectuals in China who advocated the Enlightenment values of science and democracy during the May Fourth Movement of 1919. This same type of advocacy became embedded in the pro-democracy movement and demonstrations that occurred in Tiananmen Square 70 years later. It was the intellectuals who introduced Marxism into China and eventually helped establish it as the guiding ideology of the CCP. The ranks of the leadership of the PRC over the past 20 years have been dominated by technocrats, most of them educated in science and engineering; since 2007, there has been an inflow of those trained in economics and law into the top ranks of the Chinese leadership.

Politics has at times been a barrier to the development of talent, and not just during the Cultural Revolution. The June 1989 Tiananmen Square incident discouraged a significant number of Chinese students and scholars studying abroad under government sponsorship from returning home, thus exacerbating the current shortage of Chinese talent at the high end. Within the institutional climate established under President Hu Jintao and Premier Wen Jiabao, there is every reason to believe that the leaders in power today recognize the enormous costs and damage the country suffers when politics intervenes unchecked in science, technology, and education. Accordingly, barring any sudden reversal in the prevailing political climate in China, it is expected that knowledge, expertise, and talent will be valued in their own right as well as for their contribution to improving the innovation system and sustaining the country’s economic growth trajectory.

The regime has granted more authority to scientists and engineers in areas related to their profession because they are no longer viewed as posing a direct challenge to state power. Furthermore, because intellectuals were one of the primary sociopolitical forces that intensively pushed the CCP to undertake more progressive and substantive political reforms to make China more democratic during much of the 1980s, the post-1989 leadership has changed its tactics and strategy toward managing intellectuals and private entrepreneurs by co-opting them and giving them new elite status. Professionals have been recruited into the CCP and appointed to powerful administrative positions in universities and research institutions and even in the central government. In the spirit of support and encouragement, the government also parcels out professional recognition, along with material perks, to a select group of intellectuals and scientists as rewards for their achievements and contributions.

Although talented Chinese professionals are neither organized nor primed to be mobilized as a democratizing force, they have political interests. One day, they may feel a need to assert themselves politically, if not for larger democratic values, then for their own self-interest. Along with other members of the middle class, in all likelihood they will demand an increase in political choices commensurate with their increased economic independence and perhaps agitate for greater political openness and participation. In short, they may leverage their expertise to advocate more fundamental change in the political system. Therefore, it remains to be seen how long Chinese professionals will maintain their cozy relationship with China’s political leadership, which soon may need to develop some new tactics to manage the potential political rise of China’s new and perhaps increasingly cohesive talent pool.

In spite of the multiple challenges and issues that Chinese leaders must address in cultivating and nurturing an effective S&T human resource pool, it has become increasingly clear that China’s science, technology, and managerial base does constitute an emerging source of competitive advantage in economic and technological terms. However raw or immature the Chinese talent pool may be at this time, there are good reasons to believe that the present shortcomings, which have frequently made talent issues a serious liability, are now being addressed in a concerted, coherent fashion. The question is not whether talent will become a source of competitive advantage, but when and under what conditions.

The critical missing piece of China’s innovation puzzle seems to lie in better deployment and use of the evolving contingent of S&T talent. The associated hurdles that must be overcome are as much about the workplace environment and performance expectations as they are about macrostrategic issues such as setting S&T priorities and funding levels. Some observers have suggested that there are deep cultural inhibitors standing in the way of the establishment of a more creative, innovative atmosphere in China. The persistence of the guanxi, or relationship, culture, for example, is one of the most important elements that continue to influence the nature of cooperation and economic exchange in China. In addition, petty jealousies and personal rivalries continue to be ever-present features of the research environment. That said, the more significant underlying shortcomings derive from the still transitional nature of the reforms and the incomplete structural changes taking place in the S&T system. In essence, China has yet to fully establish an achievement-oriented set of norms and values that define the framework of performance, compensation, rewards, and incentives. Another way of saying this is that there still is too much socialism left in the Chinese research system. Nor have many Chinese organizations fully assimilated into their own operating environments the critically needed notions of personal responsibility and accountability that sociologists such as Talcott Parsons have long associated with sociopolitical and economic advance in the West. Although these limitations will not stop China’s S&T ambitions, they create enough friction to reduce the efficacy of several of the new policy initiatives and financial investments coming out of Beijing.

Nevertheless, the attractiveness of the Chinese talent pool will continue to raise its profile in global S&T activities and knowledge networks. In some cases, key segments of this pool will become the engine behind the formation of a series of emerging pockets of excellence in Chinese S&T, which will be characterized by a somewhat different set of operating features and principles than the bulk of the country’s S&T system. Evidence of Chinese capabilities already is apparent in the life sciences, nanotechnology, high-speed computing, and fuel cell technology. Members of this talent community, large numbers of whom will have spent time abroad for work and/or education, will represent the cutting edge of an emerging wave of Chinese technological entrepreneurs who will help to redefine the operating environment for knowledge creation and innovation in China by introducing successful Western practices. Most important, however, this critical mass of high-end talent will be the mechanism through which more and more foreign collaboration and cooperation occurs. It is incumbent on interested observers of the Chinese S&T scene, therefore, to be able to identify the unique features of these emerging pockets as well as to understand where and how they will emerge. Competition for the brainpower that resides in these pockets will become one of the key defining features of the West’s interactions with the PRC over the coming decades and could become the primary vehicle for helping to ensure China’s role as a stakeholder in global S&T affairs.

Mobilizing Science to Revitalize Early Childhood Policy

President Barack Obama has called for greater investment in the healthy development of the nation’s youngest children. But policymakers are facing difficult decisions about the allocation of limited funds among a range of competing alternatives, including home visiting services beginning in pregnancy, child care from infancy to school entry, and various early education options, among others. Advocates argue that more money is needed, yet there is no consensus about which programs should be priorities for increased support. Current early childhood programs should be viewed as a promising starting point for innovation, not a final destination that simply requires increased funding. Remarkable progress in neuroscience, molecular biology, and genomics has provided rock-solid knowledge that underscores the role of positive early experiences in strengthening brain architecture, along with compelling evidence that “toxic” stress can disrupt brain circuits, undermine achievement, and compromise physical and mental health. Evaluation science also is providing the means to differentiate effective early childhood programs that should be scaled up from underperforming efforts that need to be either strengthened or discontinued. It is time for policymakers to strengthen efforts to equalize opportunities for all young children by leveraging the science of child development and its underlying neurobiology to create the framework for a new era of innovation in early childhood policy and practice.

Seeking new strategies

Exciting new discoveries at the intersection of the biological, behavioral, and social sciences can now explain how healthy development happens, how it is derailed, and what society can do to keep it on track. It is well established, for example, that the interaction of genetics and early experience builds a foundation for all subsequent learning, behavior, and health. That is to say, genes provide the blueprint for building brain architecture, but early experiences determine how the circuitry actually gets wired, and together they influence whether that foundation is strong or weak. Families and communities clearly play the central role (and bear most of the costs) in providing the supportive relationships and positive experiences that all children need, yet public policies that promote healthier environments for children can also have significant positive effects.

In today’s political world, policy discussions about improving the environments in which children live typically begin and end with questions about the quality of the nation’s air, water, and food supplies. For example, there have been heated battles over the regulation of coal-burning power plants, which emit mercury that contaminates rivers and streams and can lead to elevated levels of methyl mercury in the food supply. This presents a serious problem for embryos, fetuses, and young children, whose brains are vulnerable to damage from mercury at levels that appear to be relatively harmless to adults. Beyond the compelling need to protect children from neurotoxic chemicals, however, science also has a lot to say about how a young child’s environment of relationships, including his or her family, nonfamily caregivers, and community, can also be strengthened to produce better outcomes, not only for children themselves but for all of society.

Ensuring the provision of a healthy and supportive environment for all young children requires responsible management of the nation’s physical and human resources. Thus, it is essential that we act on existing knowledge and close the gap between what is known and what is done. Equally important, however, is the need to seek new and more effective strategies to support families and expand opportunities for children, especially for those who are unable to excel because of significant adversity that is built into the environments in which they are being raised. The science of early childhood and early brain development offers a useful framework for productive public discussion and policy deliberation on this critical issue. The most constructive way to begin is to focus the nation’s collective attention on core concepts that are well grounded in the cumulative findings of decades of rigorous research. These concepts are:

  • Brain architecture is constructed through a process that begins before birth and continues into adulthood. As this architecture emerges, it establishes either a sturdy or a fragile foundation for all the capabilities and behavior that follow.
  • Skill begets skill as brains are built in a hierarchical fashion, from the bottom up, and increasingly complex circuits and skills build on simpler circuits and skills over time.
  • The interaction of genes and experience shapes the circuitry of the developing brain. Young children offer frequent invitations to engage with adults, who are either responsive or unresponsive to their needs. This “serve and return” process—what developmental researchers call contingent reciprocity—is fundamental to the wiring of the brain, especially in the early years.
  • Cognitive, emotional, and social capacities are inextricably intertwined, and learning, behavior, and physical and mental health are highly interrelated over the life course. It is impossible to address one domain without affecting the others.
  • Although manageable levels of stress are normative and growth-promoting, toxic stress in the early years—for example, from severe poverty; serious parental mental health impairment, such as maternal depression; child maltreatment; and family violence—can damage developing brain architecture and lead to problems in learning and behavior as well as to increased susceptibility to physical and mental illness. As with other environmental hazards, treating the consequences of toxic stress is less effective than addressing the conditions that cause it.
  • Brain plasticity and the ability to change behavior decrease over time. Consequently, getting it right early is less costly to society and to individuals than trying to fix it later.

These core concepts constitute a rich return on decades of public investment in scientific research. This evolving knowledge base has informed the development, implementation, and evaluation of a multitude of intervention models aimed at improving early childhood development during the past 40 years. The theory of change that currently drives most early interventions for children living in adverse circumstances, which typically involve poverty, emphasizes the provision of enriched learning opportunities for the children and a combination of parenting education and support services for their families, focused mostly on mothers. This model has been implemented successfully in a number of flagship demonstration projects, such as the Perry Preschool and Abecedarian projects, each of which has confirmed that effective intervention can produce positive effects on a range of outcomes. Documented benefits include higher rates of high-school graduation and increased adult incomes, as well as lower rates of special education referral, welfare dependence, and incarceration.

Persistent challenges, however, lie in the magnitude of the effects that have been achieved, which typically falls within the mild to moderate range, as well as in the marked variability in measured outcomes, which is associated largely with inconsistent quality in program implementation. In order to address these challenges effectively, early childhood policy must be driven by two fundamental directives. The first will help to close the gap between what we know and what we can do right now to promote better developmental outcomes. The second calls for new ideas. Both are essential. These directives are:

Decades of program evaluation indicate that the quality of early childhood investments will determine their rate of return. Programs that incorporate evidence-based effectiveness factors that distinguish good services from bad will produce positive outcomes. Programs with inadequately trained personnel, excessive child/adult ratios, and limited or developmentally inappropriate learning opportunities are unlikely to have significant effects, particularly for the most disadvantaged children. The strongest data on positive effects come from a few model programs that are based on a clear theory of change that matches the nature of the intervention to explicit child and family needs. The dilemma facing policymakers is the debate about the relative effectiveness of current programs that vary markedly in the skills of their staff and the quality of their implementation. Overcoming that variability is the most immediate challenge. Continuing to invest in programs that lack sufficient quality is unwise and unproductive.

The most effective early childhood programs clearly make a difference, but there is considerable room for improvement and a compelling need for innovation. For example, 40 years of follow-up data from the most frequently cited preschool program, the Perry Preschool Project, reveal increased rates of high-school graduation (to 66% from 45%) and lowered rates of arrest for violent crime (to 32% from 48%) that represent impressive results with large benefit/cost ratios. But it is impossible to look at this intervention model, which results in only two of three participants completing high school and one-third being arrested for violent crimes, and conclude that the remaining challenge is simply a matter of expanded funding for replication. The data clearly demonstrate that improved interventions are needed. Meeting this need will require the nation to build on current best practices and draw on strong science to develop innovative interventions that get a bigger bang for the buck.
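
The percentage-point changes behind those fractions (simple arithmetic on the figures cited above, not new data) are:

\[
66\% - 45\% = 21\ \text{points gained in high-school graduation},
\qquad
48\% - 32\% = 16\ \text{points cut from violent-crime arrests},
\]

which are large effects by the standards of social programs, yet they still leave roughly one participant in three without a diploma and roughly one in three arrested for a violent crime.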

Promising new directions

With this challenge in mind, two areas of scientific inquiry are particularly ripe for development. First, achieving a deeper understanding of the biology of adversity and the evidence base regarding effective interventions would help in fostering innovative policies and programs for children and families whose life opportunities are undermined by toxic stress. Toxic stress differs markedly from other types of stress, called positive or tolerable, in terms of the distinctive physiological disruptions it triggers in the face of adversity that is not buffered by protective relationships.

Positive stress is characterized by moderate, short-lived increases in heart rate, blood pressure, and levels of stress mediators, such as cortisol and inflammatory cytokines, in response to everyday challenges such as dealing with frustration, meeting new people, and getting an immunization. The essential characteristic of positive stress in young children is that it is an important aspect of healthy development that is experienced in the context of stable, supportive relationships that facilitate positive adaptation.

Tolerable stress is a physiological state that could potentially disrupt brain architecture through, for example, cortisol-induced disruption of neural circuits or neuronal death in the hippocampus. Causes of such stress include the death or serious illness of a parent, family discord, homelessness, a natural disaster, or an act of terrorism. The defining characteristic of tolerable stress is that protective relationships help to facilitate adaptive coping that brings the body’s stress-response systems back to baseline, thereby protecting the brain from potentially damaging effects, such as those associated with post-traumatic stress disorder.

Toxic stress comprises recurrent or prolonged activation (or both) of the body’s stress-response systems in the absence of the buffering protection of stable adult support. Major risk factors in early childhood include deep poverty, recurrent maltreatment, chronic neglect, severe maternal depression, parental substance abuse, and family violence. The defining characteristic of toxic stress is that it disrupts brain architecture and affects multiple organ systems. It also leads to relatively lower thresholds for physiological responsiveness to threat that persist throughout life, thereby increasing the risk for stress-related chronic disease and cognitive impairment.

This simple taxonomy, proposed by the National Scientific Council on the Developing Child, differentiates normative life challenges that are growth-promoting from significant adversities that threaten long-term health and development and therefore call for preventive intervention before physiological disruptions occur. Within this taxonomy, programs that serve children whose well-being is compromised by the generic stresses of poverty have demonstrated greater effectiveness than have programs for children whose development is threatened further by additional risk factors, such as child maltreatment, maternal depression, parental substance abuse, family violence, or other complex problems that few contemporary early care and education programs have the specialized expertise needed to address effectively. This gap can be seen when highly dedicated yet modestly trained paraprofessionals are sent to visit the homes of deeply troubled families with young children whose problems overwhelm the visitors’ limited skills. Of perhaps greater concern, children and parents from highly disorganized families struggling with mental illness and substance abuse are less likely to participate in any formal early childhood program and are more likely to drop out if they are enrolled. Thus, significant numbers of the most disadvantaged young children who are at greatest risk for school failure, economic dependence, criminal behavior, and a lifetime of poor health are neither reached nor significantly helped by current programs.

One promising route that innovation in early childhood policy might pursue is illustrated by the efforts of scientists in the Division of Violence Prevention at the federal Centers for Disease Control and Prevention, who are reconsidering child abuse and neglect as a public health issue rather than as a social services concern. This shift in perspective incorporates new research about the extent to which early maltreatment gets built into the body and leads not only to impairments in learning but also to higher rates of diabetes, heart disease, hypertension, substance abuse, depression, stroke, cancer, and many other adult diseases that drive escalating health care costs. The high prevalence of child abuse and neglect alone, estimated to affect 7.5% of children aged 2 to 5, is arguably one of the most compelling threats to healthy child development and certainly the most challenging frontier in early childhood policy.

Since their establishment more than a century ago, child welfare services have addressed the needs of abused and neglected children by focusing on physical safety, reduction of repeated injury, and child custody. But advances in neuroscience now indicate that evaluations of maltreated children that rely exclusively on physical examination and x-rays are woefully insufficient. They must be augmented by comprehensive developmental assessments of the children and sophisticated evaluations of the parent-child relationship by skilled examiners. Moreover, when foster care arrangements are deemed necessary, the need for additional intervention for the child and specialized support for the foster parent is often not recognized. Consequently, the current gap between what is known and what is done for children who have been maltreated may well be the greatest shortcoming in the nation’s health and human services system. Because incremental improvements in child welfare systems have been difficult to achieve, dramatic breakthroughs will require creative thinking, scientific justification, and strong leadership committed to bold change.

The second promising area for scientific inquiry—creatively applying new knowledge from the growing science of early learning, beginning in infancy and extending to school entry—would enhance the effects of early care and education programs for all children, and programs for disadvantaged youngsters would be especially strengthened. To achieve this goal, it will be necessary to think beyond the emphasis on language stimulation and early literacy that informs current practice—efforts that certainly should be continued—and to develop innovative teaching strategies that target other domains of development that are essential for success in school, at work, and in the community. This will mean focusing on the early emergence of competencies in areas known as executive functioning, such as working memory, attention, and self-regulation, that contribute to the ability to plan, use information creatively, and work productively with others.

Additional efforts also will be needed to integrate programs that target the emotional and social needs of young children into the broader early care and education environment. Indeed, failure to acknowledge the interrelatedness of cognitive, language, emotional, and social capabilities, in skill development and in their underlying brain architecture, undermines the full promise of what evidence-based investments in early learning might achieve. The conventional approach to this challenge focuses on treating behavioral problems and emotional difficulties as they become apparent. Yet advances in evidence-based preventive interventions offer much promise in the early childhood years, particularly when combined with the skills and commitment required to address the mental health needs of parents as well. Promising areas for creative intervention include adopting preventive approaches that do not require the assignment of clinical diagnoses to young children and providing health professionals who typically work outside of the mental health field with the skills they need to address, or at least identify, the mental health needs of their patients.

Key policy opportunities

Within the evolving context of current early childhood policy, the creative mobilization of scientific knowledge offers an opportunity to close the gap and create the future in three important areas.

First, the nation would benefit from a more enlightened view of public expenditures for high-quality early care and education programs in the first five years of life as an investment in building a strong foundation for later academic achievement, economic productivity, and responsible citizenship, and not as a burdensome subsidy for places to watch over children of working parents at the lowest possible cost. The evidence is clear that positive early learning experiences are beneficial for children at all income levels, and strategic investments in youngsters from disadvantaged families yield the largest financial returns to society. The coordination of effective developmental programs with primary health care and interventions that enhance economic security can further increase the odds of more favorable outcomes for children living in poverty. A regular source of health care, for example, increases the likelihood that a young child’s developmental progress can be monitored, concerns can be identified early, and effective interventions can be provided when needed. Linking innovative services that bolster parent employment, income, and assets presents another promising strategy for strengthening family resources, both human and material, that are associated with more favorable child outcomes.

Second, specialized interventions as early as possible, at or before birth, should be focused on improving life outcomes for children whose learning capacity and health are compromised by significant adversity above and beyond the burdens of poverty alone. As described above, the physiological effects of excessive or chronic activation of the stress-response system can disrupt the developing architecture of the immature brain. This can be particularly problematic during sensitive periods in the formation of neural circuits affecting memory in the hippocampus and executive functioning in the prefrontal cortex. In a parallel fashion, the wear and tear of cumulative stress over time can result in damage to the cardiovascular and immune systems that may help explain the association between adverse childhood experiences and greater prevalence of chronic disease in adulthood.

Third, significant social and economic benefits to society could be realized from greater availability of effective prevention and treatment services for young children with emotional or behavioral problems, along with increased assistance for parents and nonrelated caregivers whose own difficulties with depression adversely affect a young child’s environment of relationships. This area of unmet need has been underscored in recent years by media reports and empirical evidence of young children being removed from child care centers and preschool programs that are ill-equipped to deal with behavior problems that undermine learning. Several studies by the National Academies offer a wealth of knowledge to address this challenge. The Institute of Medicine’s 1994 report Reducing Risks for Mental Disorders emphasizes the difference between preventing and treating mental health problems and highlights the promise of prevention. A 2009 report from the National Research Council and Institute of Medicine, Preventing Mental, Emotional, and Behavioral Disorders Among Young People, summarizes the extensive progress that has been made in the development and evaluation of a broad array of evidence-based preventive services during the past decade and notes that the data on benefit/cost analyses have been most positive for interventions in the early childhood years.

Taken together, these priority areas reinforce the importance of early childhood policies that focus on both children and parents, but they also highlight the extent to which current interventions are often too limited. For example, families who must deal with the daily stresses of poverty and maternal depression need more help than is typically provided by a parent education program that teaches them the importance of reading to their children. Youngsters who are struggling with anxiety and fear associated with exposure to violence need more than good learning experiences during the hours they spend in a preschool program. Families burdened by significant adversity need help to achieve greater economic security, coupled with access to structured programs, beyond current informal efforts, that focus on the mental health needs of adults and children. Such two-generational models of intervention must be implemented by personnel with sufficient expertise to deal with the problems they are asked to address.

Finally, continuing debate in the world of early childhood policymaking raises important questions about the definition of “early.” Neuroscience tells us that infants and toddlers who experience toxic stress are at considerable risk for disrupted neural circuitry during early sensitive periods of brain development that cannot be rewired later. This would suggest that later remediation for children who are burdened early on by the physiological effects of toxic stress will be less effective than preventive intervention at an earlier age. Other observers point toward the positive effects of preschool education beginning at age four and argue that missed learning opportunities during the infant and toddler period can be remediated by enrichment in the later preschool years, thereby saving earlier program expenses. Whether broad-based investments are made earlier or later, the long-term societal costs associated with significant early adversity underscore the potential benefits of beginning interventions as early as possible for the most vulnerable young children.

Reasons for optimism

Although serious challenges remain, public understanding of the importance of the early years has grown considerably during the past decade. This increasing awareness is grounded in a greater appreciation of the extent to which early experience influences brain architecture and constructs a foundation for all the learning and health that follow.

Science has been quite effective in answering the question of why public funds should be invested in the healthy development of young children. In contrast, however, science has been less conclusive in its answers to the how questions, which are now primed to be addressed at a much more focused and rigorous level. The challenge is straightforward and clear: to move beyond the simple call for investing in the earliest years and to seek greater guidance in targeting resource allocation to increase the magnitude of return. A 2000 report from the Institute of Medicine and the National Research Council, From Neurons to Neighborhoods: The Science of Early Childhood Development, articulated this challenge: “Finally, there is a compelling need for more constructive dialogue between those who support massive public investments in early childhood services and those who question their cost and ask whether they really make a difference. Both perspectives have merit. Advocates of earlier and more intervention have an obligation to measure their impacts and costs. Skeptics, in turn, must acknowledge the massive scientific evidence that early childhood development is influenced by the environments in which children live.”

Pretending that the early years have little impact on later life outcomes is no longer a credible position. Contending that full funding of existing early childhood programs will completely eliminate later school failure and criminal behavior is similarly indefensible. The concept of early intervention as a strategy for improving life outcomes for young children is well grounded in the biological and social sciences, but the translation of that concept into highly effective programs that generate strong returns on investment needs more work. For those who insist that the United States can do better, current practice provides a good place to start.

If the nation is ready to support a true learning environment that makes it safe for policymakers, practitioners, researchers, and families to ask tough questions, experiment with new ideas, learn from failure, and solve problems together, then the benefits of a more prosperous, cohesive, and just society surely lie ahead.

A Vision for U.S.-Russian Cooperation on Nuclear Security

The United States and Russia have reached a new stage in their relationship, and the time is right to consider how the world’s two leading nuclear powers can work together to enhance global security. Cold War polarization ended more than 15 years ago. During the 1990s, attention shifted to the threat posed by the nuclear ambitions of Iran and North Korea. After September 11, 2001, the possibility of nuclear terrorism became the focus of concern. The long cooperative effort between the two countries to improve the security of nuclear materials in Russia will soon have achieved its major goals. Now Russia and the United States are in an ideal position to forge a true partnership to pursue enhanced global nuclear security with an emphasis on other countries.

The potential for nuclear proliferation, the danger of nuclear terrorism, and the challenges of the coming renaissance in nuclear energy all combine to make the nuclear security landscape of 2015 a complicated one. As United Nations (UN) Security Council members, technologically advanced nuclear weapon states, and states with deep involvement in nuclear energy, Russia and the United States are ideally positioned to provide global leadership during this crucial period. Their influence and effectiveness will be enhanced to the degree that they are able to act in concert. President Obama’s call to “reset” the relationship with Russia opens the way for the two sides to take the first steps toward building this partnership. One way to start is by outlining a vision of the world of 2015 that might be achieved if U.S. and Russian leaders commit themselves to the task. Optimists projecting themselves into 2015 will see the following.

In the ideal relationship of 2015, the two sides understand each other’s perceptions of nuclear threats (although they may not completely agree with each other’s threat perceptions), including the degree to which each feels threatened by the actions of the other. They have reached agreement on measures to prevent misunderstanding. These include improved sharing of ballistic missile warning information through the Joint Data Exchange Center and some mechanism to integrate (or at least accommodate) the U.S. ballistic missile defense system now being deployed in Europe.

Because of the extensive dialogue that has taken place since 2009, Russia and the United States view each other’s strategic forces with reduced concern. The two countries have agreed to replace the Strategic Arms Reduction Treaty and the Treaty of Moscow with new treaties that reduce arms and ensure transparency and predictability of strategic offensive and strategic defensive forces. This mechanism has been designed to meet the political and security concerns of both sides. The two countries maintain rough parity in their nuclear forces and continue to work together to reduce their nuclear stockpiles. Because of these elements of predictability and parity, neither side is concerned with asymmetries in internal force composition, leaving each free to shape its forces as it sees fit.

Although the two countries do not completely share a common nuclear threat perception, extensive discussions have brought their views closer to one another on the threats from states such as Iran and North Korea and the existence of other potential proliferator states. In addition, working through mechanisms such as the U.S.–Russian Counter Terrorism working group, the two sides have deepened their mutual understanding of the risk of nuclear terrorism and the threat from improvised nuclear devices.

Nonproliferation. In this ideal future, the Treaty on the Non-Proliferation of Nuclear Weapons (NPT) remains in effect in its current form. The United States and Russia have a common view of the importance of its implementation, including the necessity for universal adherence to the Additional Protocol, which gives the International Atomic Energy Agency (IAEA) additional authority to ensure safeguards against diversion of nuclear material from peaceful programs. They also consistently stress the requirements of UN Security Council Resolution 1540, which requires states to adopt domestic legislation to prevent proliferation. While preserving the concept of sovereignty in treaty-making, the two countries have taken the lead within the international community to make it difficult for a state to withdraw from the NPT and to preclude withdrawing states from retaining benefits, such as technical assistance or the provision of nuclear fuel, received through nuclear cooperation under NPT Article IV. The two countries also actively develop innovative approaches toward countries not party to the NPT in order to limit proliferation and to move non-parties toward the implementation of NPT norms.

All plutonium and spent fuel in the Democratic People’s Republic of Korea (DPRK) has been removed to Russia for reprocessing, with the cost borne equitably by all states whose security is enhanced by a nuclear weapons-free DPRK. The United States and Russia have worked jointly to play a leading role in verification of the elimination of the existing North Korean weapons program.

Iran has abandoned its plans for nuclear weapons because of consistent international pressure under joint U.S.–Russian leadership. It has implemented the Additional Protocol and developed commercial nuclear power under strict IAEA safeguards, using a fuel-leasing approach in which fresh fuel is supplied by Russia and spent fuel is returned to Russia.

The United States and Russia have improved their diplomatic coordination and normally take coordinated, coherent, and effective positions in international forums designed to inhibit proliferation. They consistently work together to strengthen export control mechanisms and other elements of the international regime to counter proliferation and nuclear terrorism. They have cooperated to ensure negotiation and implementation of an effective and verifiable Fissile Material Cutoff Treaty with widespread (ideally universal) application.

In 2015, the United States and Russia jointly take the lead to strengthen adherence to treaty commitments and international norms relating to nuclear security. Where states fail to comply with international non-proliferation and counterterrorism regimes, the United States and Russia work jointly in the Security Council and elsewhere to ensure adequate sanctions. They cooperate closely within the Proliferation Security Initiative and look for other innovative approaches to counter proliferation.

In this ideal future, the United States and Russia both agree that the political conditions to permit the complete abolition of nuclear weapons are unlikely to exist for the immediate future. They also recognize that the technical ability to verify such abolition does not now exist, although scientists in both countries continue to work both independently and together to improve verification techniques. The two countries (and, if possible, the other NPT nuclear weapon states) have cooperated in disseminating honest analyses that demonstrate these facts, while continuing to work to overcome the impediments to abolition. This openness, coupled with continued reductions in the total arsenals of Russia and the United States, and increased transparency concerning the size and composition of those arsenals, has significantly mitigated (although not eliminated) the pressure from non–nuclear weapon states for the nuclear weapon states to take additional action in response to Article VI of the NPT.

Nuclear power. The world of 2015 is undergoing a renaissance in nuclear power generation. This renaissance is driven in part by the recognition that nuclear energy is indispensable if the world is to meet its growing energy requirements without the unacceptable contributions to global climate change resulting from increased fossil fuel emissions. To ensure that this renaissance does not create proliferation problems, the United States and Russia support a common vision of discouraging the spread of sensitive technology associated with the fuel cycle, based on a harmonization of the current U.S., Russian, and IAEA proposals. This common vision does not enhance a sense of discrimination among the non–nuclear weapons states because it does not ask them to abandon their legal rights. Instead, it offers incentives that make it financially, technically, and politically attractive for states to take advantage of fuel supply and take-back services offered by several states in commercial competition with one another. The two countries complement this effort by working together to create an international nuclear waste management regime.

Both countries recognize that a nuclear reactor accident anywhere in the world will bring this renaissance to a halt. Because they understand that a strong regulatory regime is a prerequisite for nuclear reactor safety, they work together to assist new reactor states in establishing such regimes. They also work with existing channels such as the IAEA, the World Nuclear Association, and the World Association of Nuclear Operators to help share nuclear safety best practices throughout the world, giving special attention to states with limited experience in operating reactors.

Preventing terrorism. In 2015, both the United States and Russia have confidence that the nuclear weapons and materials in the other country are secure against theft from either terrorist attack or insider diversion. They routinely exchange best practices concerning nuclear weapons and nuclear material security and have found a mechanism to share information on security that builds confidence while not revealing specific information that would cause either state concern. Both countries make the consistent investments needed to ensure long-term maintenance of weapons and material security. Through appropriate and well–designed transparency measures, they demonstrate to the international community that their weapons remain safe and secure, thus providing leadership by example to other nuclear weapon–possessing states.

The United States and Russia actively engage other states to encourage them to ensure that the security of nuclear materials and, where appropriate, nuclear weapons in these countries matches the strong security in Russia and the United States. As part of this effort they work together to offer technical security improvements and the sharing of best practices to all states, working through the IAEA where feasible. They also work together to assist states in the effective implementation of both UNSCR 1540 and the Additional Protocol.

As part of this effort, the United States and Russia have worked—and continue to work—to eliminate the non-military use of highly enriched uranium (HEU), especially in research reactors, to complete the return of all U.S.- and Russian-origin HEU from research reactors in third countries, and to eliminate stocks of such material in all non–nuclear weapons states. To set an example for the world, Russia and the United States convert all of their own research reactors to use only low-enriched uranium.

As one element in their broad technical collaboration on security, Russia and the United States take the lead in creating an international system of nuclear attribution based on a technical nuclear forensics capability. While recognizing the practical limits of nuclear forensics, they expect this system to help identify the origin of nuclear material seized from smugglers or terrorists as well as the origin of any device actually detonated. Both Russia and the United States make it clear that if a state assists terrorists in obtaining a nuclear weapon or the materials to construct an improvised nuclear device and terrorists subsequently detonate such a device, both the United States and Russia will have a high probability of knowing where the material originated. Both states make it clear that terrorist use of nuclear weapons or improvised nuclear devices anywhere in the world will inspire universal condemnation. They also each make it clear that they will regard nuclear terrorism within their respective states as justifying a response against the supplier of the weapon or material in accordance with the inherent right of self-defense cited in Article 51 of the UN charter.

This nuclear forensics and attribution effort is part of a continuing effort in organizing and leading the global community under the auspices of the Global Initiative to Combat Nuclear Terrorism. This joint U.S.–Russian initiative has continued to grow and by 2015 is a leading vehicle for preventing nuclear and radiological terrorism.

Scientific cooperation. In 2015, the United States and Russia have expanded and deepened their science and technology coordination in order to provide new technical tools for counter-terrorism; for the verification of reductions in nuclear weapons and nuclear materials; for safeguards; for improving the detection of nuclear weapons and materials; for materials protection, control, and accounting; for reactor technology (including safety); and for spent fuel management. In the last two areas, they have built on the plan for nuclear energy cooperation they established jointly in 2006. They work together to make these new tools available to other states and urge their widespread adoption. By doing so, the two countries seek to create an international strategy of continuous improvement in nuclear safety and security.

The expanded scientific cooperation in support of nuclear security is part of a broad overall program of scientific cooperation, built around strong relationships between the various U.S. and Russian national laboratories. Russia and the United States both recognize the scientific benefits available from more extensive collaboration. As a result, while carefully protecting access to national security information, they have worked to expand overall scientific and technical cooperation, including joint projects and exchanges of personnel.

Both countries are committed to facilitating these scientific exchanges through the timely review and issuance of visas. They have explored the potential of a special visa regime for key scientists whose expertise may be needed in the event of a nuclear crisis.

Bumps in the road

Relations in the area of nuclear security will inevitably reflect the overall political relationship between the two states. Both Russia and the United States have consistently expressed a desire for close, collegial working relations based on partnership and mutual respect. Both seek to maintain and deepen their ties. Leaders of both Russia and the United States have repeatedly stated that if their two countries are not yet allies, both are determined to avoid once again becoming adversaries.

Yet it would be unrealistic to ignore the probability that significant political strains will remain in 2015. Although both countries will work to reduce current tensions, they may not be completely successful. Political conditions could improve, but they may remain the same or even deteriorate. It is possible, and perhaps likely, that in 2015 the United States will be concerned, as it is today, with an apparent Russian drift toward authoritarianism and away from pluralism. If so, Russia will regard, as it does today, U.S. pressure as an inappropriate interference in Russian internal affairs based on a failure to appreciate the special character of the Russian political system and the difficulties of Russia’s post-Soviet transition. Similarly, in 2015, Americans will continue to regard the continuation and expansion of NATO as a way to draw all European states into a 21st century international regime and will assert that Russia should not find this threatening. Russians will continue to ask who such a military alliance is aimed at and will have difficulty accepting that many European states formerly allied with (or part of) the Soviet Union seek military ties to the United States and links to its extended nuclear deterrent because they fear a future return of an expansionist Russia. Americans will continue to seek ballistic missile defenses aimed at Iran and North Korea, while Russians will fear such defenses could (and may be intended to) weaken the Russian nuclear deterrent. In 2015, Americans will continue to look askance at periodic apparent Russian nostalgia for a Soviet-era past that Americans see as marked by despotism and aggression. Russians will continue to recall the international respect they gained as one of the two superpowers more clearly than they recall the accompanying problems of that bygone era. And no amount of desire for partnership can alter the fact that two major powers with global interests will sometimes find that their national interests are in conflict.

Sound analysis and wise policy demand that the two sides not ignore these enduring tensions. Nor should they fail to recognize that political developments within Russia might make cooperation more difficult in the coming decade. But it would be a serious error of both analysis and policy to believe that either internal political developments or the existence of such tensions preclude strengthened cooperation in the area of nuclear security. Even at the height of the Cold War, when military planners on both sides thought that nuclear war was a real possibility, the United States and the then–Soviet Union cooperated to help create the international non-proliferation regime that, despite the challenges it faces today, has served humanity well. The challenge for today’s policy makers and analysts is to find those areas where cooperation is possible and build on them to strengthen the overall relationship.


Linton Brooks, an independent security consultant based in Washington, D.C., has five decades of national security experience, including as administrator of the Department of Energy’s National Nuclear Security Administration, assistant director of the Arms Control and Disarmament Agency, chief U.S. negotiator for the Strategic Arms Reduction Treaty, and director of arms control on the National Security Council staff. This article is an updated version of a presentation made at a November 2007 international nuclear security workshop sponsored by the U.S. National Academies and the Russian Academy of Sciences.

Transforming Energy Innovation

If it is to meet the urgent challenges facing the energy system, of which climate change and energy security are the most pressing, the United States must change the way it produces and uses energy by shifting away from its dependence on imported oil and coal-fired electricity and by increasing the efficiency with which energy is extracted, captured, converted, and used. This will require the improvement of current technologies and the development of new transformative ones, particularly if the transition to a new energy system is going to be timely and cost-effective.

The Obama administration and the secretary of energy understand that the status quo will not deliver the results that are needed; they have significantly increased funding for energy research, development, demonstration, and deployment and are putting in place new institutional structures. But policymakers must also pay much more attention to improving the management and coordination of the public energy-technology innovation enterprise. For too long, a focus on the design and management elements necessary to allow government-funded innovation institutions to work effectively has largely been absent from policy debates—although not from many analysts’ minds and reports.

The nation is, in fact, at a historical point where the energy innovation system is being examined, significantly expanded, and reshaped. This provides not only a rare opportunity, but indeed a responsibility, to improve the efficiency and effectiveness of the system to make sure that this investment yields the maximum payoff. The importance of improving and better aligning the management and structure of existing and new energy innovation institutions to enhance the coordination, integration, and overall performance of the federal energy-technology innovation effort (from basic research to deployment) cannot be overemphasized. The technology-led transformation of the U.S. energy system that the administration is seeking is unlikely to succeed without a transformation of energy innovation institutions and of the way in which policymakers think about their design.

Drawing lessons from successful efforts by some large U.S. private-sector research institutions and by the national laboratories, we highlight five particular elements that we believe are key to effective management of the energy innovation institutions: mission, leadership, culture, structure and management, and funding. All of these elements must be healthy and in balance with one another to ensure the robustness of the innovation ecosystem.

Furthermore, the external political and policy environment within which these institutions are embedded and the evolving nature of innovation also determine their effectiveness. Therefore, managing outside intervention, coordinating with broader policies and regulations, and adapting to the dynamic nature of innovation should be seen as an integral part of the challenge.

Apparently aware of the inadequacy of the structure and management of the current federal energy-technology innovation effort, the administration has promoted several initiatives, such as the Energy Frontier Research Centers to support larger research teams working in coordination and the Energy Innovation Hubs to better integrate the innovation steps from research to commercialization, as was done by Bell Labs. It has also supported the creation of ARPA-E to try to replicate the success of the Defense Advanced Research Projects Agency (DARPA). Other experts have proposed additional strategies. In Issues, Ogden, Podesta, and Deutch proposed the creation of an Energy Technology Corporation to select and execute large-scale demonstration projects, and a panel assembled by the Brookings Institution recommended the creation of a national network of a few dozen Energy Discovery–Innovation Institutes to address the failure that “most federal energy research is conducted within ‘siloed’ labs that are too far removed from the marketplace and too focused on their existing portfolios to support ‘transformational’ or ‘use-inspired’ research targeted at new energy technologies and processes.”

The S&T community welcomes decisive government action, but the management and coordination of new and old initiatives and institutions are indeed very difficult and complex tasks and are not receiving enough attention from policymakers. The Energy Innovation Hubs, for example, could—if they adhere to the principles we describe—fill an important gap in the U.S. innovation system by better integrating basic and applied science in a mission-driven approach. The design of ARPA-E has incorporated some of these principles from its inception, particularly the independence of the leadership. Its success will be determined in part by the extent to which the other principles are applied in practice.

The technology innovation process is complex and nonlinear; complex because it involves a range of actors and factors and nonlinear because technology innovation occurs through multiple dynamic feedbacks between the stages of the process. Furthermore, the technology innovation system is made up of many institutions, including universities, large firms, start-ups, the federal government, states, and other international and extranational institutions, and the relationships among them.

The complexity of the innovation process is especially great for energy technologies for a number of reasons:

  • There are limited and uncertain market signals for energy research, development, and demonstration (RD&D) and for deployment. The externalities of greenhouse gas emissions and energy security, for instance, are not appropriately represented in the market, and thus there is widespread belief that the RD&D and deployment that take place are not commensurate with the challenges facing the energy sector and the technical opportunities that are available.
  • The large scale of many energy technologies such as clean coal processes and nuclear power and the long time frames over which their development takes place further hinder the participation of the private sector in the development of such technologies.
  • Energy technologies are very heterogeneous within each stage—RD&D, early deployment, and widespread diffusion—offering different challenges, with early deployment and subsequent expansion issues being particularly important.
  • Energy technologies ultimately have to compete in the marketplace with powerful incumbent technologies and integrate into a larger technological system, where network and infrastructure effects may lead to technology “lock-in.”

The public-goods nature of energy technologies thus necessitates a multifarious role for the government: ensuring the availability of future technology options, reducing risk, developing more appropriate market signals, and often even helping create markets. As a result, the federal government is not only a major funder and performer of energy RD&D but also a major player in facilitating the introduction of technologies into the market. Given the particularly large scale and scope of the energy area, the design and management of energy innovation institutions and their interactions with the rest of the energy system are extraordinarily complex.

Working out the specific details for each of the energy innovation activities will require further effort, but the literature on the design and management of innovation institutions and the personal experiences of managers do offer some key interconnected principles for the success of the new, and reshaping of old, public institutions and initiatives for enhancing U.S. energy technology innovation.

Guiding principles

The public-sector effort must be pluralistic, reflecting the range of activities and needs relating to the energy sector. For example, in technology areas where the venture capital community or other private-sector entities are active in pursuing the commercialization of technologies and the barriers to demonstration and deployment are lower, it may make more sense to emphasize the university research/spinoff model. In technology areas where infrastructure and networks are essential, public/private partnerships may play a key role. And in many cases, centralized R&D facilities, modeled on large corporate laboratories that focus on long-term research but are informed by real-world needs, may be the most appropriate.

The concept of “open innovation” illustrates the dynamic nature of the innovation system. Pioneered by Berkeley’s Henry William Chesbrough, this model of innovation assumes that useful knowledge is widely distributed and that “even the most capable R&D organizations must identify, connect to, and leverage external knowledge sources.” Chesbrough and his colleagues argue that ideas that were once generated only in large corporate laboratories now grow in settings ranging from the individual inventor, to start-ups, to academic institutions, to spinoffs, and they claim that at least in some high-tech industries, the open-innovation model has “achieved a certain degree of face validity.”

Although we recognize the heterogeneity of energy systems and the ever-changing nature of the relevant innovation enterprise, on the basis of our personal experience and the history of the corporate and national labs, we maintain that it is still possible to identify five elements that are essential to create and run a successful technology innovation institution. These elements are: having a clearly defined mission; attracting visionary and technically excellent leaders; cultivating an entrepreneurial and competitive culture; setting up a structure and management system that balances independence and accountability; and ensuring stable, predictable funding. We believe that they provide a good framework for restructuring current institutions and designing and evaluating proposals, once a functional gap in the system has been identified.

A clearly defined mission that is informed by, and linked to, a larger systems perspective. As is clear from the examples of some large corporate labs (such as Bell Labs, Xerox PARC, and IBM), mission, leadership, culture, and funding are important to create a productive innovative environment. Several examples of successful large U.S. government–driven efforts, such as the Manhattan Project and the Apollo Project, highlight particularly well the importance of having a clearly defined mission.

A well-defined mission facilitates attracting top employees, enables reaching an institution’s objectives more effectively, and adds significantly to the overall integration and coordination of the energy innovation system. The inspiration provided by an exciting mission has often been the most forceful magnet for attracting the best people and for research at the frontier. And clarity about the mission facilitates the design of an appropriate organizational structure for the institution.

The technological underpinnings of an innovation institution should be adaptive and flexible enough to reflect the needs, information, and context of the times. The telecommunications sector, for instance, provides an excellent example of research efforts that are able to adapt quickly to incorporate new technologies, satisfy new demands, and comply with new regulations.

Finally, the mission of the publicly funded innovation institutions should typically focus on providing a public good and/or tackling a market failure. Whenever possible, the licensing or technology-transfer practices of these entities should aim to create an industry rather than a product.

Leadership that has proven scientific and managerial excellence, has a vision of the role of the institution or enterprise in the overall energy system, and is capable of acting as an integrator of processes. An institution or initiative charged with an innovation mandate must have a visionary leader with excellent managerial as well as scientific/technical credentials. The additional complexities of the energy system require a leader also to have an exceptional understanding of the role of his or her institution in the overall system and an ability to integrate the different activities within and outside the institution.

As with the mission, leadership should not only have a systems perspective but also be adaptive and cognizant of the dynamic nature of markets and the innovation system. Leaders must recognize change and manage and structure their organization accordingly.

A leader with strong people-management skills can create an environment where people know the boundaries but are able to flourish. In other words, people must not feel actively managed; they must have freedom but without losing focus. As we will discuss shortly, leaders can also promote this “insulate but not isolate” management approach through different levels of the organization by fostering the right culture and putting in place the right structure.

Fortunately, President Obama has chosen Department of Energy (DOE) leaders whose experience prepares them for the holistic thinking required to manage energy innovation institutions, and this needs to be propagated throughout the energy innovation institutions.

Entrepreneurial culture that promotes competition but also collaboration and interaction among researchers. Cultivating the right culture is often forgotten in discussions about the management of government enterprises. But one cannot overstate the importance of having a culture of excellence that pervades the entire organization from the top to the bottom. A culture of technical excellence helps drive a virtuous circle in which the best will be attracted to the important research in energy-technology innovation partly because that is what the best are working on.

Researchers and managers should operate in an entrepreneurial environment that encourages personal initiative and creativity and the search for new ways of solving problems. This environment should be characterized by openness and healthy competition and a culture for attacking the most pressing and challenging problems. The principle of insulate but not isolate is also useful in describing the balance to which these entities should aspire. Researchers must have independence to test and experiment, and at the same time be accountable for their efforts. In other words, they must have freedom to explore new paths while keeping in mind and being motivated by the long-term questions.

Exploration of multiple approaches should be welcome and encouraged, especially during the early stages of research. Using several approaches, in particular for high-priority projects, increases the chances of overcoming problems and finding a better solution and may reduce product development times. However, the value of pursuing multiple approaches will be realized only with the right culture of collaboration as well as entrepreneurship.

Management procedures and organizational structures that promote independence, and yet give primacy to performance and accountability. Structurally, it is important that the management do all it can to break down the all-too-common separation between basic and applied research and among disciplines. The greater the integration between those doing basic and applied research, the faster technologies will incorporate relevant new fundamental knowledge about processes and materials. As former IBM executive Lewis Branscomb has discussed, corporate laboratories such as the T.J. Watson Research Laboratories of IBM do not attempt to distinguish between basic and applied research; their work is simply referred to as “research.” Energy innovation institutions must also eliminate this divide.

The independence of directors and managers at different levels of the organization and the creation of a critical mass of researchers for each undertaking will also be essential to make progress at the speed required by the energy-related challenges. Research directors and managers must have a high level of independence to adjust their actions at their discretion by allocating resources and people according to new information as it becomes available, while focusing on the long-term mission of the organization. At the same time, the director of the innovation institution needs to have regular access to the Secretary of Energy for strategic reasons and to ensure the nimbleness of the organization to circumvent bureaucratic barriers.

A critical mass of researchers is essential to combine sufficient expertise and points of view to arrive quickly at the best possible outcomes. Intellectual competition with exploration of multiple pathways is critical for success; thus, the structure and the culture of the organization are tightly interrelated.

Recruitment of the best talent is a key to success in innovation and requires the attention of top management. This sends a signal of the importance of recruiting and allows those with deep technical knowledge and breadth of vision to bring on board the appropriate kind of talent for the long-term direction of the organization.

The management of energy-technology innovation institutions should also expend significant effort in developing a cadre of technically skilled managers who can nurture creative scientists, inventors, and problem-solvers and have the capacity and knowledge to carry out meaningful performance reviews of personnel and programs. This ensures that those entering the institution are acculturated appropriately, that there will be a pipeline of trained managers for the continuity of the culture of excellence, and that researchers and managers will continue to bring value to the organization by avoiding knowledge stagnation.

Stable and predictable funding that allows a thorough and sustained exploration of technical opportunities and system-integration questions. Stable, sufficient, and predictable funding is another important requirement for a successful energy innovation institution. R&D projects tend to need several years (although the specific time scale may vary from area to area) before a decision can be made about whether to move forward. R&D managers need to have certainty that funds will be available to meet their goals and to produce enough information to enable managers or directors to make decisions about the future of each project. Block funding at each appropriate management level is important.

The annual appropriation process, at least in the form it has taken in recent years, is not suitable to support research in energy innovation. The year-to-year percentage change in funding over time for fossil energy, energy efficiency, and renewable energy at the DOE serves to illustrate the point about stability. Figure 1 shows the year-to-year DOE funding variation for six fossil energy and energy-efficiency research areas from 1978 until 2009. The average standard deviation of the variation across these six programs was 27%. Because roughly a third of the values in an approximately normal distribution fall more than one standard deviation from the mean, this means that on average every year there was a one-in-three chance that these programs would receive a funding change (increase or decrease) greater than 27%. The gas program had the largest standard deviation of the year-to-year funding variation (36%), whereas the vehicle technologies program had the lowest (18%). Figure 2 illustrates the large year-to-year variation for five renewable energy areas from 1992 to 2008.
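
The volatility measure used here is straightforward to reproduce. The short Python sketch below computes year-to-year percentage changes for a single program’s budget, their standard deviation, and the share of years whose swing exceeds one standard deviation; the funding figures are invented placeholders for illustration only, not DOE data.

```python
# Illustrative sketch of the volatility metric discussed above.
# The appropriation figures are hypothetical, not actual DOE budget data.

illustrative_funding = {  # fiscal year -> appropriation (millions of dollars, made up)
    2001: 420, 2002: 610, 2003: 455, 2004: 470, 2005: 690,
    2006: 520, 2007: 515, 2008: 700, 2009: 540,
}

years = sorted(illustrative_funding)

# Year-to-year percentage change in funding.
changes = [
    100.0 * (illustrative_funding[curr] - illustrative_funding[prev]) / illustrative_funding[prev]
    for prev, curr in zip(years, years[1:])
]

# Standard deviation of those changes (population form, for simplicity).
mean_change = sum(changes) / len(changes)
std_dev = (sum((c - mean_change) ** 2 for c in changes) / len(changes)) ** 0.5

# Share of years whose swing, up or down, exceeded one standard deviation;
# for roughly normal variation this is about one year in three.
large_swings = sum(abs(c) > std_dev for c in changes) / len(changes)

print(f"year-to-year changes (%): {[round(c, 1) for c in changes]}")
print(f"standard deviation of changes: {std_dev:.0f}%")
print(f"share of years with |change| > 1 standard deviation: {large_swings:.0%}")
```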

Although some changes in overall funding amounts from year to year are expected, the high volatility of the DOE budget reflects department-wide rapid change in priorities and directions. Any good strategy should be flexible enough to be adjusted and improved with time as more information becomes available, but the changes in the DOE’s funding reflect the difficulty of securing block amounts of funding for specific purposes over time. Funding should have second-level, as opposed to first-level, fluctuation constants, by which we mean that after setting a general direction, which may be revised when new analysis and information are available, funding changes should not disrupt a program’s main operations, although minor adjustments may be allowed. Block funding enables research managers to make rapid adjustments within their programs over a couple of years without having to go back to Congress. As with research performance, decisions about funding should be subject to review. The needs for performance reviews and funding stability are intricately linked, because without one, the other one will not work.

Overall, we think it is useful to recognize that certain features of the management of innovation institutions are important at every scale—from the individual researcher, to a research group, to divisions, to the overall institution. They can be thought of as fractal characteristics that are present at different scales or levels. The most important principles to keep in mind at several levels within an innovation institution are the need to insulate but not isolate, to allow independence but with review, to provide funding at appropriate scales, and to focus on people.

The outside world

In addition to the internal factors essential for an institution to successfully perform or fund energy innovation, there are other factors external to the institutions that policymakers should heed.

The first is the need for independence from outside interference, which in the case of public agencies includes the political process. Bureaucratic interference should be minimized because it often collides with the five elements discussed above. The notion of insulate but not isolate is as true for institutions as it is for research groups and individual researchers.

The coupling of research and funding institutions to the external environment has to be delicate. Although their priorities and programs obviously have to be governed by and respond to societal needs, they should not be shaped by political exigencies or issues that are the “flavor of the month.” Like researchers and R&D managers, the innovation institutions themselves need independence, tempered by accountability.

The second main external factor that innovation institutions should manage is the evolving nature of innovation. As Chesbrough and his colleagues have argued, networks and linkages are becoming increasingly important, because in some sectors knowledge is widely dispersed across a variety of settings from garages to national labs. Institutions must be designed to function within the overall system and to evolve with it. They should incorporate an understanding of the role of the private sector, regulators, and consumers, as well as an openness to a certain degree of coupling with these other actors.

To conclude, the much-needed technology-led transformation of the U.S. energy system is unlikely without a transformation of its public energy innovation institutions. To be effective, the internal design of these institutions has to be given careful thought, with particular attention to the five elements mentioned above to nurture research and couple it to application. But the best-functioning institution can become ineffective in the absence of an appropriate external political and bureaucratic environment and appropriate linkages with other actors. Therefore, just as an energy innovation institution is itself a delicate ecosystem, it should also be seen as part of a larger ecosystem, with the understanding that the effectiveness of the overall innovation system depends on the functioning of the components of the system, the relationships among them, and a connection to the broader environment in which it is embedded.

From the Hill – Fall 2009

House approves climate and energy bill, Senate begins work

In a 219-212 vote on June 26, the House passed the American Clean Energy and Security Act (H.R. 2454), a bill to cap greenhouse gas emissions and transform the nation’s energy supply. The bill establishes a cap-and-trade program to reduce emissions 17% below 2005 levels by 2020 and 83% by 2050. It requires electric utilities to meet 20% of their electricity demand through renewable energy sources and energy efficiency by 2020; mandates new energy-saving standards for buildings, appliances, and industry; and authorizes billions of dollars in investments for clean energy technology, including the creation of eight clean energy innovation centers, which are partnerships of universities, nongovernmental organizations, and state institutions with a focus on energy research and commercialization.

Meanwhile, the Senate continued its own deliberations on climate change, examining familiar sticking points such as the impact of the legislation on the U.S. economy, how permits to emit greenhouse gases under a cap-and-trade system would be distributed, and how the proceeds from the auction of permits would be used.

Members of the Obama administration lent their support to efforts to enact legislation at a July 7 hearing of the Senate Committee on Environment and Public Works (EPW) and a July 22 Committee on Agriculture, Nutrition, and Forestry hearing. Officials urged the Senate to enact a bill that would reduce dependence on foreign energy, create green jobs, and protect future generations of Americans from the effects of climate change.

Members of the EPW and Agriculture Committees examined the effect of climate mitigation policy on the agricultural sector, a subject that was much discussed during the House debate. At a July 14 EPW hearing, Republican members claimed that the House bill would increase costs for farmers, but most witnesses disagreed. Bill Hohenstein of the U.S. Department of Agriculture and Bill Krupp of Environmental Defense testified that farmers, through an offset program, stand to profit from climate change legislation. Witnesses at both the July 14 EPW and the July 22 Agriculture Committee hearings said that the agriculture and forestry sectors, with proper incentives, could sequester 20% of all U.S. greenhouse gas emissions.

At a July 8 hearing, the Senate Committee on Finance examined the implications of a climate change bill for U.S. economic competitiveness. Witnesses recommended short-term options, such as free allocation of carbon permits to vulnerable industries, to address competitiveness concerns. However, witnesses emphasized that the best solution to reduce carbon emissions and protect U.S. industries would come through an international multilateral agreement. Members of the committee and witnesses expressed serious concern about measures such as border tax adjustments, which could trigger retaliation and trade sanctions from other countries. President Obama has cautioned against “protectionist” measures included in H.R. 2454, which would tax imports from countries that do not take steps to reduce carbon emissions. However, a border tax appealed to 10 Senate Democrats, who sent a letter to President Obama in early August urging the inclusion of measures to protect domestic manufacturing, including a border adjustment mechanism.

At a July 16 hearing, the EPW Committee examined competitiveness from a different angle: whether climate mitigation efforts could lead to new business ventures. John Doerr, a partner at the venture capital firm Kleiner Perkins Caufield & Byers, testified that a cap on emissions is needed for firms to invest in new energy technologies and spur domestic production of these technologies. The current lack of a policy that would encourage these investments, he said, is “stifling American competitiveness.” Harry C. Alford, president and CEO of the National Black Chamber of Commerce, disagreed, citing a recent study that estimated that climate legislation would cost 2.5 million jobs.

At an August 4 hearing, the Finance Committee examined how to distribute allowances and use the proceeds. Members examined which scenarios, ranging from auctioning all the permits to giving them away, would best protect consumers from price increases while ensuring that carbon reduction targets are met. Members also examined whether a “price collar”—a minimum and maximum allowance price—would provide greater cost certainty and therefore lower costs to businesses by reducing volatility.
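
A price collar of the kind discussed at the hearing is, mechanically, just a floor and a ceiling on the allowance price. The brief Python sketch below shows the idea with invented numbers; the floor and ceiling values are hypothetical and not drawn from any bill.

```python
# Hypothetical price collar: the allowance price floats with the market
# but is clamped between a floor and a ceiling, limiting volatility.
PRICE_FLOOR = 10.0    # illustrative minimum allowance price, $ per ton CO2
PRICE_CEILING = 28.0  # illustrative maximum allowance price, $ per ton CO2

def collared_price(market_price: float) -> float:
    """Return the allowance price after the collar is applied."""
    return min(max(market_price, PRICE_FLOOR), PRICE_CEILING)

# A volatile market price series and the corresponding collared prices.
market_prices = [7.5, 14.0, 33.0, 22.5, 41.0]
print([collared_price(p) for p in market_prices])  # [10.0, 14.0, 28.0, 22.5, 28.0]
```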

Senate Majority Leader Harry Reid (D-NV) has asked committees with jurisdiction to report out climate bills by late September, signaling a flurry of activity to come in the fall.

NIH finalizes stem cell guidelines

On July 6, the National Institutes of Health (NIH) released its final guidelines on human stem cell research, opening the door to expanded federal funding of research involving embryonic stem cells. During the April-May public comment period, NIH received nearly 50,000 opinions on the draft guidelines that it had released in response to President Obama’s executive order lifting restrictions on stem cell funding.

Perhaps the most important change from the draft guidelines was NIH’s decision to make previously derived stem cell lines that follow the spirit of the new ethical guidelines, if not the exact documentation requirements, eligible for funding. Various scientific groups had expressed concern that because informed consent standards have changed over time, the NIH draft might have excluded stem cell lines that had been ethically derived according to prevailing research standards in place at the time.

Now, according to the final guidelines, an NIH advisory panel will evaluate older stem cell lines, and the NIH director will make the ultimate determination on whether they will qualify. The guidelines also state that NIH will develop a registry of eligible stem cell lines.

House finishes appropriations, Senate making progress

The House passed all 12 of the fiscal year (FY) 2010 appropriation bills before leaving for its August recess; the Senate has completed just four. The Energy and Water, Homeland Security, Legislative Branch, and Agriculture bills have passed both chambers, paving the way for the conference process to begin in September, the last month of FY 2009.

The Senate version of the Agriculture bill includes $1.23 billion for the Agricultural Research Service and $1.306 billion for the National Institute of Food and Agriculture, whereas the House version allocates $1.19 billion and $1.253 billion, respectively.

On the Energy bill, both chambers allocated the Department of Energy’s Office of Science $4.9 billion, up 2.6% or $126 million from FY 2009 and more than the 1% increase proposed by the president. The House would provide the Advanced Research Projects Agency-Energy (ARPA-E), the budding program intended to fund transformative energy research, with $15 million, but the Senate version makes no mention of ARPA-E.

The Senate version of the Homeland Security bill includes $994.9 million for science and technology, a 6.7% or $62.3 million increase; the House version includes a smaller increase to $968 million.

The House-passed Labor, Health and Human Services, and Education appropriations bill includes $31.3 billion for NIH, a 3.1% or $942 million increase and 1.6% or $500 million more than the president’s request. An amendment by Rep. Darrell Issa (R-CA) to stop funding three peer-reviewed NIH grants related to HIV/AIDS prevention was approved by voice vote, a move strongly opposed by scientific and medical organizations. The bill would also eliminate funding for grants to public and private organizations to encourage teens to abstain from premarital sex; Democrats have argued that there is little scientific evidence that these programs work. The bill renews prior restrictions on the use of funds for research that creates or destroys human embryos. The Senate Appropriations Committee approved its version of the bill, with a 1.5% increase to $30.8 billion for NIH.

National science education standards supported

The National Governors Association (NGA) and the Council of Chief State School Officers (CCSSO) in July announced a project to establish a set of common core education standards in English and math. Meanwhile, Sen. Chris Dodd (D-CT) and Rep. Vernon Ehlers (R-MI) reintroduced legislation to promote a voluntary effort by states to adopt standards in science, technology, engineering, and math (STEM) education.

The NGA and CCSSO’s Common Core State Standards effort involves 49 states and territories. The groups plan to develop and implement “research and evidenced-based math and English internationally benchmarked standards” to better prepare students academically. With U.S. students underperforming in international assessments, the NGA and CCSSO hope the core standards will bring U.S. math and language skills up to par with those of the countries the United States lags behind, as well as address the regional disparities that exist because of the patchwork of existing state standards.

The Dodd-Ehlers Standards to Provide Educational Achievement for Kids (SPEAK) Act would create national standards for STEM fields, similar to the standards that are to be created by the NGA and the CCSSO. The legislation would amend the National Assessment of Educational Progress Authorization Act, which currently only includes math and English, to include science as part of the assessments. The bill would also direct the NGA board to develop course content for grades K-12 and would provide grants to states if they adopt the voluntary standards and alter teacher certification criteria in order to meet the standards.

Meanwhile, other bills to bolster STEM education have also been introduced. The STEM Coordination Act (H.R. 1709 and S. 1210), sponsored by House Science and Technology Committee Chairman Bart Gordon (D-TN) and Sen. Edward Kaufman (D-DE), would establish a committee within the White House Office of Science and Technology Policy’s National Science and Technology Council (NSTC) to synchronize STEM education efforts across federal agencies. The legislation, which passed the House and is now being considered by the Senate Committee on Commerce, Science, and Transportation, calls for a new NSTC STEM Committee to evaluate all federal STEM education programs every five years. The evaluation would establish long- and short-term goals, create metrics to assess the new goals, and review past efforts to determine program effectiveness. All federally sponsored STEM education activities would be inventoried.

The Enhancing Science, Technology, Engineering, and Mathematics Education Act of 2009, sponsored by Rep. Mike Honda (D-CA), would, in addition to creating an NSTC STEM Education Committee similar to the one in Gordon’s bill, direct the committee to promote efforts to improve federal and state collaboration. It would also create an Office of STEM Education within the Department of Education to coordinate and evaluate STEM education efforts within the department.

To further examine methods of collaboration and coordination, the House Research and Science Education Subcommittee held a hearing that showcased partnerships in Chicago that have improved science education. Michael Lach of the Chicago public school system said that success depended on the assistance and partnership of local community groups, colleges and universities, museums and laboratories, and the federal government. Examples of partnerships included a National Science Foundation grant to create a program at local universities to improve teacher education courses in science and mathematics; a series of course curriculum materials developed by the University of Chicago for local schools to use to enhance STEM education; and a series of after-school clubs involving local science museums and botanical gardens to enrich and retain students’ interest in key scientific disciplines.

Although Lach said that much more needs to be done, he emphasized that the success of the partnerships to date depended on a coherent strategy and centralization of the system. He stated that “systems that foster innovation and entrepreneurship push decisions and resources closest to schools and classrooms, and when they are coupled with strong accountability systems, local communities can easily gauge success.”

Biologic drug regulation examined

As part of the overall debate on health care reform, Congress is examining the regulation of generic biologic drugs. Congress wants to lower the cost of biologic drugs for patients, but without stifling innovation by drugmakers.

Biologic drugs are the fastest-growing and most expensive element of prescription drug costs in the United States. Brand-name biologics can cost between $48,000 and $120,000 a year. Biologic drugs are made from living organisms or in living cells and are much more structurally complex and environmentally sensitive than small-molecule pharmaceuticals. A biologic drug is often unique to the specific processes used to produce it, which makes it extremely difficult to produce an exact copy of a biologic drug. Consequently, companies have begun developing “biosimilar” drugs or follow-on biologics.

At issue in the biologics debate is the length of a data-exclusivity period for developers of a drug. Data exclusivity, which is separate from patent protection, is the protection of clinical test data that is provided to the Food and Drug Administration (FDA) for the approval process. Under the 1984 Hatch-Waxman Act, generic drug applicants are not required to duplicate the clinical testing of drugs already approved by the FDA. Applicants need to show only that the generic drug is chemically the same as the original drug and can use the clinical test data from the original drug for this process once its protections expire.

Rep. Anna Eshoo (D-CA) has introduced the Pathway for Biosimilars Act (H.R. 1548), which provides 12 years of data exclusivity to brand-name biologic drug companies. The bill has 142 cosponsors. At a July 14 House Judiciary Committee hearing, representatives from the Biotechnology Industry Organization (BIO), the National Venture Capital Association, and the American Intellectual Property Law Association backed Eshoo’s bill. Witnesses testified that complexity and loopholes in the patent system require a longer data-exclusivity period as a “necessary backstop” to ensure that novel drugs are protected from competition and that innovation is encouraged. In addition, because minute differences in the structure of a biologic can sometimes cause major differences in the efficacy of the drug, BIO and other groups believe that biosimilars should not be approved without additional clinical testing.

Also present at the hearing were opponents of a long data-exclusivity period. Bruce Leicher of Momenta Pharmaceuticals testified that a longer exclusivity period will slow innovation and provide a disincentive to companies to create true generic replicas instead of mere biosimilars. Patient advocacy groups stressed that a long data-exclusivity period will hinder the creation of the generic biologic market for the creation of cheaper alternatives to name-brand drugs.

Rep. Henry Waxman (D-CA) has introduced the Promoting Innovation and Access to Life-Saving Medicine Act (H.R. 1427), which calls for five years of exclusivity and is supported by the Generic Pharmaceutical Association, Consumers Union, the American Association of Retired Persons, and the AFL-CIO. Waxman’s bill is modeled on the current generic pathway for small-molecule drugs, which have a five-year data-exclusivity period.

In the Senate, a long data-exclusivity period has won approval from the Health, Education, Labor and Pensions Committee. The Committee voted 16-7 to approve an amendment by Sens. Orrin Hatch (R-UT) and Mike Enzi (R-WY) that would grant 12 years of exclusivity to brand-name biologics.

The executive branch is examining the issue as well. The Federal Trade Commission released a report concluding that a data-exclusivity period is not necessary at all. The Obama administration has announced its support for a compromise period of seven years.


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science (www.aaas.org/spp) in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

Cloning DARPA Successfully

Confronted with the growing threat of global climate change, today’s policymakers face the challenge of how to create an energy system that emits less carbon and is more efficient, affordable, and secure. Although technologies exist whose immediate adoption would help reduce carbon emissions in the short to medium term, innovations in new technologies will eventually be necessary to stabilize the climate. A carbon price or carbon portfolio standard will clearly be necessary to ensure market demand for such new technologies; however, it alone will not be sufficient to create the needed market. Further, creating a market for carbon reduction solves only half the problem. New technologies are needed to meet that market demand.

The United States has a long, if mixed, history of government efforts to support technology development. In addition to traditional measures such as facilitating technology investment through tax policy, subsidies, and funding for basic research, the government has experimented with a broad suite of technology policy options. Although departures from the traditional policy repertoire are attacked as misguided efforts by high-level government bureaucrats to choose technology winners, a closer look at history shows that choosing winners is not the only technology policy option. One particularly successful and durable innovation policy alternative, which I call bottom-up governance, has been used for over 50 years by the military in the form of the Defense Advanced Research Projects Agency (DARPA).

Born as ARPA in 1958 as part of the response to the Soviet launching of Sputnik, DARPA was given the mission of preventing future technological surprises. Many blamed the U.S. failure to launch a satellite before the Soviets on the rivalry between the military services, and ARPA was designed to bypass that rivalry. ARPA’s first priority was to oversee space activities until the National Aeronautics and Space Administration (NASA) was established. By 1960, all of ARPA’s space programs had been transferred to NASA and the individual military services. Done with space, ARPA focused its energies on ballistic missile defense, nuclear test detection, propellants, and materials. As observed by historian Alex Roland, it was during this period that ARPA took on the role of nurturing ideas that other segments of the nation would not or could not develop and carrying them to the proof-of-concept stage. Roland notes that it was also in the 1960s that ARPA established its critical organizational infrastructure and management style. Specifically, ARPA decided against doing its own research. Instead, it empowered its program managers—scientists and engineers on loan from academia or industry for three to five years—to fund technology developments within the wider research community. Within this environment, there was little to no hierarchy. To fund a project, program managers needed to convince only two people: their office director and the ARPA director.

Today, DARPA’s success is legendary. Staffed with roughly 100 program managers and an annual budget of about $3 billion, DARPA has been credited with founding everything from the Internet to the personal computer to the laser. In recent years, many federal agencies have begun programs intended to replicate DARPA. In 1998, the intelligence community formed the Advanced Research and Development Activity (ARDA). In 2002, the Department of Homeland Security formed HS-ARPA. In 2006, the intelligence community replaced ARDA with I-ARPA. Then in 2007, Congress directed the Department of Energy (DOE) to create ARPA-E. However, it is not clear how much these DARPA wannabes have incorporated from their avowed model besides its initials. Particularly questionable is whether they have copied the critical factors that are responsible for DARPA’s apparent success.

The program manager as key

Many past government and industry observers have written about the DARPA model and its transferability to other contexts. Some have argued that there is no single DARPA model. Others have focused on DARPA’s organizational culture and structure. Richard Van Atta has identified the following key characteristics of the DARPA model: It is independent from service R&D organizations; it is a lean, agile organization with a risk-taking culture; and it is idea-driven and outcome-oriented. These themes are echoed in DARPA’s self-described 12 organizing elements, along with two additional themes: a focus on hiring an eclectic, world-class technical staff and the importance of DARPA’s role in connecting collaborators. Those who argue against the transferability of DARPA to other contexts have focused on what is unique about DARPA’s role for the military. Berkeley’s David Mowery and others underscore the significance of the military’s long-term needs, both in direction-setting and in ensuring researchers an early niche market for their technologies. But more persistent over the decades than DARPA’s organizational structure or promise of immediate markets are the mechanisms used by DARPA’s program managers for governing technology directions in the United States.

DARPA is often portrayed in the popular press as the military’s radical risk-taking venture capitalist. But this image of hit-and-run investment couldn’t be further from the truth. Rather, the little-studied key to DARPA’s success lies with its program managers. Each program manager, who is temporarily on leave from a permanent position in the academic or industrial research community, is given tremendous autonomy to identify and fund technologies in his or her own field that are relevant to specific military purposes. To carry out their roles, program managers must execute four interrelated tasks: learn about current or forthcoming military challenges; identify emerging technologies that have the potential to address those challenges; grow the community of researchers working on these emerging technologies; and be sure, as this community evolves, to transfer responsibility for the further development and eventual commercialization of these technologies either to the military services or the commercial sector. These four tasks are no small undertaking, but the DARPA program managers don’t go it alone. Instead, they set their directions as much from listening to the voices of active researchers, and nudging them in common directions, as they do from listening to their military customers. The result is a symphony of new research activities that can change the technological direction of the nation.

A host of mechanisms exist by which DARPA program managers and the researchers they fund learn about the challenges facing the military and brainstorm about emerging technologies that may address these challenges. Program managers are in regular contact with senior military officers. In addition, since 1968, DARPA has used the DARPA-Materials Research Council, which was later renamed the DARPA-Defense Science Research Council (DSRC), to bring together top researchers. The DARPA-DSRC assembles 20 to 30 of the country’s leading scientists and engineers, along with 20 or so program managers from the DARPA technical offices, for nearly a month each July. The goal of this meeting is “to apply their combined talents in studying and reviewing future research areas in the defense sciences.” The technical direction of the summer conference is chosen by a steering committee composed of seven representative members of the DSRC, who work with DARPA management to select the relevant technical topics. In addition to the technical meetings, DSRC members engage in activities to better understand military challenges, such as visiting base camps, observing training exercises, and engaging in wargame exercises. After the summer conference, smaller task forces are established on specific topics throughout the year. This conference and subsequent task forces serve many purposes. They act as settings in which the nation’s leading technical researchers become aware of military challenges and are able to jointly brainstorm about new technologies for meeting these challenges, and thus about potential new directions in a field. At the same time, these meetings serve as one of many settings in which DARPA program managers hear and identify directions for future funding solicitations.

DARPA program managers do not, however, end their emerging technology identification activities at a series of brainstorming sessions with elite scientists. Program managers continually travel around the country to meet with individual members of the research community and learn about their emerging projects and capabilities. During these activities, they not only identify new research directions but also encourage research in those directions by seed-funding researchers working on common themes. Here, the program managers explicitly do not pick technology winners. Instead, they bet on people. In some cases, they will fund disconnected researchers working on the same technologies. At other times, they fund researchers working on competing technologies aimed at solving the same problem. Once they receive funding from DARPA, researchers are then required to share their work with each other in workshops. These mandatory presentations increase the flow of knowledge among disconnected researchers. As a consequence, these researchers begin to develop common ideas, and new research communities evolve around these directions. In making funding decisions, program managers seldom fund individual technologies. Instead, they envision the suite or pyramid of technologies necessary to meet a particular military goal. In the case of computing, this suite included the materials, processing tools, individual chip designs, software, and system architectures, all necessary for significant advances. Finally, Federally Funded Research and Development Centers (FFRDCs) are not allowed to apply for DARPA solicitations. Instead, DARPA issues contracts to R&D laboratories such as Lincoln Laboratory to help prepare the solicitations, including mapping long-term technology needs common to the military and commercial sectors.

DARPA as a model for ARPA-E

Although DARPA is a defense-sector institution, there is no reason why its mechanisms for bottom-up technology governance can’t become a model for innovation policy in other, civilian-centered sectors. Take, for example, the energy and automotive industries, both of which need innovation. In an attempt to bolster innovation in energy, Congress approved ARPA-E in August 2007. The new organization was funded in the February 2009 American Recovery and Reinvestment Act, but because those funds are equal to the funds for only a single office at DARPA, ARPA-E so far has only a single program manager and no director. Most important, there are no signs that ARPA-E will necessarily adopt the bottom-up technology identification and community-building processes used by DARPA.

So what would a bottom-up governance approach to the energy and automotive crises look like, if led by an institution like an ARPA-E? First, the director of ARPA-E and the respective office directors would be on short-term rotations from DOE government labs, industry, or academia. The offices and their respective directors would be organized around technical areas relevant to the energy problem, and each office would be filled by a suite of program managers who are experts in the respective pieces of that area. Program managers would be leading technical specialists in their fields, recruited to work for three to five years. For an office within ARPA-E focused on the automotive crisis, these program managers could, for example, be technical leaders from the research labs at General Motors and Ford, as well as professors from the nation’s top mechanical, electrical, and materials engineering departments.

Second, ARPA-E program managers would neither act as venture capitalists nor attempt to pick technology winners. Instead, they would engage in the technology identification and technology community-building activities typical of program managers within DARPA. As done at DARPA, they would need to be careful to fund the full suite of technologies necessary to achieve particular goals. For example, if new power train technologies were being funded, they would need to fund emerging common and competing solutions in materials, fuels, metrology, components, and so on so that all the technology advancements required to revolutionize the power train system would be covered.

REDUNDANCY WITHIN THE EXISTING NONCENTRALIZED TECHNOLOGY FUNDING MODEL IS A STRENGTH, NOT A WEAKNESS.

Third, ARPA-E should use DARPA-DSRC as a model for an Energy Science Research Council (ESRC), which would bring together 20 to 30 of the country’s leading scientists and engineers and 10 to 20 ARPA-E program managers for several weeks to brainstorm on future research areas in energy sciences. As with the DSRC, the technical direction of the ESRC conference would be chosen by a steering committee composed of several representative members of the council, who would work with ARPA-E management to select the relevant technical topics. In planning the initial conference, it will be particularly critical to bring in the very best technical minds in the country in order to help establish the prestige associated with participating. During the first conference, technical challenges should be identified that require additional, smaller task force meetings. One obvious early task force would focus on personal transportation and the automotive industry. Lessons for ARPA-E’s efforts in the automotive industry could be taken from DARPA efforts in the 1980s to revitalize the semiconductor industry against the perceived threat of Japan. Among other actions, DARPA took the controversial step of channeling the initial matching funds to SEMATECH, the nonprofit industry-founded research consortium in semiconductor manufacturing.

Fourth, ARPA-E should not fund the DOE government labs but instead, as DARPA does with Lincoln Laboratory, leverage the labs’ expertise to improve solicitations and identify common technology needs across the different energy-related sectors.

The need for a market

Those who question the feasibility of extending the DARPA model beyond military contexts suggest that without the immediate high-paying market for new technologies provided by the military, the DARPA model cannot work. However, during the course of DARPA’s history, the military has not always been the primary market or even the primary beneficiary for DARPA-funded technologies. The computing industry, which benefited enormously from DARPA funding and had the government as a primary market in its early years, offers an example. Although the government was the primary source of demand for the infant computing industry, the balance of demand in the industry has in recent decades changed dramatically. In 1960, when DARPA began funding computing, 1,790 mainframes were sold, and the majority of computers were owned by the government. In contrast, by the 1990s, innovation in commercial information technology was outstripping advances being developed for military applications, and Secretary of Defense William Perry encouraged the military to figure out how to adapt existing commercial products. Despite the fact that government was not the dominant source of demand during this period, DARPA still managed to provide seed funding for new technologies. The military-related work it funded in computer processing led to the development of microprocessors that were used widely in personal computers.

The energy sector does, however, face a critical challenge in instituting an ARPA-like organization. Knowledge of the crisis is much less centralized. In the military, DARPA program managers talk to officers from the Army, Navy, and Air Force and visit military installations to experience military needs in real time. In the energy crisis, there is no single clear customer. To understand national energy concerns, program managers would need to talk to government and academic experts in energy and the environment, national security, and industrial and economic growth to even begin to identify the nature of the country’s need. For the more specific case of the automotive industry, program managers would need to talk to consumers, auto manufacturers, auto suppliers, and environmental leaders to identify existing and emerging challenges in personal transportation. Although an ESRC and an associated ESRC Automotive Task Force could identify technologies to meet pre-identified national needs, an additional separate Energy Crisis Council (with annual meetings and rotating members) may be necessary to identify and benchmark the evolving nature of the crisis itself. Whereas the ESRC would consist of the nation’s leading technologists, the Energy Crisis Council would need to consist of academic, government, and industry leaders intimately familiar with the nature of the problem itself.

Although I do not believe that the military is required as an early customer for ARPA-like organizations, some market is required for new technologies to be commercialized. The military may provide an early market for some energy technologies, as could other government agencies through the procurement of vehicles, heating and cooling equipment, lighting, and other energy-related products. However, for larger industrial and consumer markets to develop, the government will need to find ways, such as carbon prices or carbon portfolio standards, to incorporate the external costs of carbon emissions into the price of carbon-based fuels. Unless this is done, little demand may exist for new carbon-reducing technologies. It will be important for ARPA-E staff to work with the Energy Crisis Council to stay closely attuned to regulatory trends to ensure that they are responding to national needs for which regulatory action has also created markets. Although it would be politically dangerous for ARPA-E to be involved in regulatory debates, the Energy Crisis Council may be able to act as a go-between to regulators.

Should we be using the DARPA model of bottom-up governance to solve all national challenges requiring technological innovation? Of course not. There are many arms of government currently involved in innovation, each of which serves its own unique role in the overall system. DARPA’s mechanisms for bottom-up governance simply add to the suite of options.

Recently, there has been a renewed call for centralized coordination of the U.S. government’s technology-funding activities. Such centralization is the wrong road. Redundancy within the existing noncentralized technology funding model is a strength, not a weakness. The beauty of the existing system is that if a new alternative power train technology isn’t funded by DARPA, it may be funded by the DOE, the Office of Naval Research (ONR), or any of the other funding institutions. And if ONR funds a technology today, DARPA just may fund it tomorrow, and DOE in the future. This seeming redundancy across funding organizations has always been and will continue to be part of the nation’s strength, as innovation across common technical challenges happens where it should: at the level of the individual researcher.

Climate Change and U.S. Competitiveness

The Obama administration and Congress have been grappling with how to craft legislation that addresses the looming threat of global warming while reducing U.S. dependence on foreign energy sources. In June 2009, the House passed a climate change bill that would establish an economy-wide cap-and-trade system, aimed at reducing the amount of greenhouse gas emissions produced by economic activity in the United States. The Senate is now working on its own bill.

Naysayers about the threat of climate change aside, the House legislation faced opposition from a number of Democrats, not just Republicans, worried about its potentially harmful economic effects on U.S. businesses, workers, and communities. Similar concerns are being expressed in the Senate debate. A climate bill that places a charge on carbon-based greenhouse gas emissions would drive up fossil fuel energy prices, with most of the increase being passed along to energy users. In theory, this would encourage a shift to lower carbon-based energy generation and greater energy efficiency throughout the economy. But many in Congress fear that these higher energy costs could impose new economic burdens on their constituents and hurt the competitiveness of U.S. businesses, resulting in the loss of jobs in their states and districts. These concerns particularly resonate in the wake of the current financial crisis and recession.

On the other hand, climate policy supporters cite studies that project very modest effects on the overall economy from the enactment of cap-and-trade legislation: perhaps a couple of percent slower economic growth and smaller industrial output and employment levels as compared to a no-policy, business-as-usual baseline. More significantly, according to supporters, there could be substantial economic gains that ultimately accrue from making the transition to a highly energy-efficient, low-carbon industrial base, not to mention the benefits of reducing U.S. dependence on foreign energy sources and mitigating global warming.

Environmentalists and other “green” economy proponents have long touted new business opportunities and jobs that could be created by investing in renewable energy and energy-efficiency technologies. By sending a strong price signal that would significantly ramp up demand for these technologies—and diminish the economic advantages of fossil fuels—a climate policy would hasten the growth of these emerging industries. According to the American Council for an Energy-Efficient Economy, investments in energy productivity gains would yield both the largest and most cost-effective mitigation of greenhouse gas emissions and strengthen the competitiveness of the U.S. economy.

Recognizing the opportunity such investments could create to revitalize the U.S. manufacturing base and strengthen the middle class, the Obama administration made stimulating growth in clean energy development and infrastructure repair and modernization a key goal of the American Recovery and Reinvestment Act (ARRA), enacted in February 2009. The White House estimated that the act would generate 541,000 jobs. The Apollo Alliance—a coalition of labor, environmental, and community leaders—is more optimistic, estimating that the act could create or retain more than one million green-collar jobs.

Nevertheless, there remain unresolved questions about the pending climate change legislation, especially the full extent of the effects on the U.S. industrial base and whether measures in the bill can sufficiently mitigate these effects or foster the gains hoped for by climate policy proponents. A key question is how a climate policy would affect the competitiveness of critical manufacturing industries and what policy and technology options might be needed to mitigate these effects. A recent study that we conducted shows that U.S. trade-sensitive, energy-intensive manufacturers could experience substantial cost pressures from energy prices driven upward by a climate policy, ultimately threatening their mid- to long-term economic viability. The study finds that there are options that could mitigate these effects but that additional policy measures will also be needed to preserve the competitiveness of primary manufacturing industries in the United States, while also reducing greenhouse emissions.

Climate and competitiveness

The potential effect of climate policy on U.S. manufacturing has become an important concern in the climate debate. The discussion is taking place against the backdrop of a long-term decline in manufacturing capacity and jobs, accompanied by a ballooning trade deficit, a situation made worse by the recession and financial meltdown. Since 1998, the manufacturing sector has shed well over 5 million jobs, or one-quarter of its workforce. Foreign competition has been a major factor in the shrinking and restructuring of U.S. manufacturing during the past few decades. U.S. firms across the manufacturing spectrum have lost significant market shares to cheaper foreign imports. As a result, the United States has for many years experienced a substantial and growing trade deficit, rising to more than $700 billion in 2007.

According to the U.S. Energy Information Administration (EIA), the industrial sector (all of the materials-processing and goods-processing industries, inclusive of manufacturing) consumes about one-third of the total delivered energy in the U.S. economy and produces about 28% of total carbon dioxide (CO2) emissions. Manufacturing accounts for an estimated 90% of the energy consumed and 80% of the emissions generated in the industrial sector. Among the nation’s 21 major manufacturing sectors, five (petroleum and coal products, chemicals, paper, primary metals, and non-metallic mineral products) use most of the energy consumed by U.S. manufacturing. These five sectors contain almost all of the most energy-intensive industries in the economy.

A CLIMATE POLICY THAT PUTS A PRICE ON CARBON-BASED GREENHOUSE GASES IN THE ECONOMY COULD SUBSTANTIALLY AFFECT THE COMPETITIVENESS OF THE U.S. ENERGY-INTENSIVE INDUSTRIES DURING THE NEXT TWO DECADES.

Although manufacturing is a major consumer of energy and emitter of greenhouse gases, most analyses have shown that climate policies would have only modest effects on manufacturing costs, profits, and outputs. This reflects the sector’s low energy intensity on aggregate: Only about 3% of operating expenditures are for energy. But among the energy-intensive industries (steel, aluminum, chemicals, paper, and cement), the percentage is much higher: for example, 9% for iron and steel and 27% for primary aluminum.
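To see roughly how these shares translate a carbon-driven energy price increase into higher production costs, consider a simple first-order approximation (the 30% energy price increase used below is purely illustrative and is not a figure from the study):

\[
\frac{\Delta C}{C} \;\approx\; s \cdot \frac{\Delta p_E}{p_E},
\]

where \(s\) is energy’s share of operating costs and \(\Delta p_E / p_E\) is the proportional rise in energy prices, holding energy use fixed. With \(s = 0.03\), as for manufacturing in the aggregate, a 30% energy price increase adds roughly 1% to production costs; with \(s = 0.27\), as for primary aluminum, the same increase adds roughly 8%.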

Consequently, the concern of many business, labor, and political leaders that climate policies may contribute to the continued erosion of U.S. manufacturing competitiveness is particularly acute for the energy-intensive, basic materials manufacturing industries, which are especially vulnerable to rising energy prices and highly sensitive to global competition. U.S. energy-intensive manufacturers have struggled for many years to remain competitive in domestic and international markets against foreign producers advantaged by low-cost labor and poor labor standards; lax environmental regulations; government subsidies; and, some would argue, unfair trade practices. These include manufacturers from major developing nations, especially the so-called BRIC nations (Brazil, Russia, India, and China). Although the European Union and Japan remain major producers and trade partners with the United States, the BRIC nations have been rapidly building up their capacities in these industries, both to support their own industrial development and to serve as export platforms.

The main fear is that the added energy costs from a climate policy would drive many firms out of business or offshore to lower-cost locations. Consequently, as climate legislation has been drafted, industrial and labor groups have lobbied Congress hard for policies to mitigate the cost effects of increased energy prices from a cap-and-trade program and to level the playing field for U.S. producers competing against foreign manufacturers in these sectors that are currently unburdened by comparable measures to limit carbon emissions in their countries. Their concern is to preserve and strengthen the capacity of these critical industries and maintain their competitiveness in global markets.

Despite waves of restructuring, consolidations, plant closures, offshore movements, and large-scale job losses during the past three decades, the United States remains one of the world’s largest producers in all these industries. Energy-intensive manufacturing forms the cornerstone of the nation’s manufacturing base. It is the beginning of the supply chain for all other manufacturing industries, supplying the primary materials used in tens of thousands of intermediate industrial goods and end-use consumer products throughout the economy. It therefore would be sadly ironic if, even as the climate bill lived up to its potential of fostering the creation of new domestic manufacturing jobs producing renewable energy products, windmills made in Ohio or other heartland states, for example, were made from steel produced in China!

Some environmentalists also acknowledge the importance of keeping these industries healthy and onshore to prevent “carbon leakage.” Because the energy-intensive sector accounts for the bulk of energy consumption and greenhouse emissions in the U.S. industrial sector, if U.S. climate policies encourage energy-intensive plants to move offshore, they would succeed only in shifting the generation of greenhouse gases to other locations in the world that may have few if any carbon-constraining regulations. The United States therefore might still record gains in cutting emissions, but emissions levels for the world as a whole—where it really matters—would not improve, and U.S. manufacturing would suffer further losses.

The climate-manufacturing study

In response to concerns about manufacturing competitiveness and also carbon leakage, legislators have attempted to incorporate measures in cap-and-trade bills that aim to reduce greenhouse gas emissions while also mitigating economic effects on vulnerable industries. These include cost containment features such as “safety valve” prices (caps on emission allowance prices), carbon offsets (allowance credits for investments in domestic or international emissions reduction projects by regulated entities), and allowance banking (covered entities holding onto unused allowances for future sales at higher prices); and economic mitigation measures such as free allowance allocations, border adjustment mechanisms, and international compliance provisions (see below).

Because of the strong interest of labor and industry stakeholders, the Washington, DC–based National Commission on Energy Policy of the Bipartisan Policy Center sponsored a two-year study to examine these issues. The study was conducted by High Road Strategies (HRS) and the Millennium Institute (MI), both in Arlington, Virginia, culminating in the report, Climate Policy and Energy-Intensive Manufacturing: Impacts and Options. National labor organizations and several large industrial associations (from the iron and steel, aluminum, chemicals, and paper sectors) participated throughout the study.

The HRS-MI study set out to address two broad questions: What are the effects of a U.S. climate policy on the economic competitiveness of domestic energy-intensive manufacturing industries? What are the best policy options for maintaining manufacturing competitiveness and retaining jobs in these industries, while also cutting greenhouse gas emissions?

The HRS-MI team especially wanted to investigate the policy options that would best mitigate the cost effects of a climate policy, while also enabling and encouraging industry investments in new energy-saving technologies. The ultimate objective was to inform policies that will help U.S. manufacturers move to the next generation of production technologies and to achieve the twin goals of low carbon intensity and global competitiveness. (See box for details on what the study covered and how it was carried out.)

The results of the study suggest that a climate policy that puts a price on carbon-based greenhouse gases in the economy could substantially affect the competitiveness of the U.S. energy-intensive industries during the next two decades. Specifically, enacting a climate policy that imposes a modest to high cost on carbon-based energy sources could increase most of the energy-intensive industries’ production costs, reduce operating surpluses and margins, and shrink domestic market shares. These results assume that no investments or actions are made to mitigate or offset the additional cost effects.

Production costs for all industries would be driven upward, but these effects would vary widely, depending on their energy intensities, the mix of energy sources they rely on, and how energy is used in production activities. The iron and steel industry would see the greatest real production cost increases, partly because of its high use of metallurgical coal and coke in production, rising to more than 11% above the business-as-usual case by 2030, driving up the energy cost share of production costs from 15% in 2006 to more than 20% by 2030. Chlor-alkali and paper and paperboard costs would grow at a comparable rate, although primary aluminum costs would grow somewhat more modestly.

The extent to which policy-driven production cost increases translate into profit declines in these industries would depend on the degree to which manufacturers could pass along these costs to customers. Although passing along the costs might be likely for some industry segments under certain market conditions, the study found that these industries typically are constrained in their ability to pass through their costs, especially if the increases apply only to U.S. producers. These industries tend to measure their competitiveness (and preserve profit margins) by their ability to keep their costs low relative to prevailing global market prices.

Assuming the worst-case scenario in which the industries do not pass along their costs, the study found that every industry would see an operating surplus decline relative to the reference business-as-usual case. Not surprisingly, the industries with the greatest production cost increases would also suffer the largest operating surplus and margin declines. These include iron and steel, paper and paperboard, and chlor-alkali, followed by primary aluminum. Moreover, the model projected that operating surplus reductions for the industries would grow substantially, ranging from more than 5 to 25% in 2020 and from 20 to 40% by 2030, cutting into profits and even into fixed costs. As a result, some manufacturers probably would feel pressure to take actions to reduce their costs and prevent their profitability from decreasing to undesired levels. Depending on market and other economic conditions, this could include investing in energy-saving processes, although some manufacturers could decide to cut production or, more troubling, move production offshore.

Technology investment and policy options

Despite the potentially troubling economic effects of a climate policy on energy-intensive industries, the study found that technology investments and policy options exist that could mitigate these effects, improve energy efficiency, and ultimately enhance economic performance. The adoption of both readily available and longer-term, cutting-edge technology could offset increased costs and generate additional profits. Over the short run, however, energy-saving technology options might be limited, because many of the industries have invested over the years in substantial energy-efficiency gains. On the other hand, relatively low-cost incremental improvements remain available in the near to medium term that could offset some of the added costs from a climate policy. These include combined heat and power; relined boilers; enhanced heat recovery; improved sensors and process controls; more efficient motors, pumps, and compressed air systems; and improved recycling, among other measures.

However, the study’s estimates of energy-efficiency gains required to offset these costs over the longer term suggest that much larger gains could be necessary over time for most of the industries, requiring investments in advanced low- or no-carbon production processes. The industries have been supporting R&D on advanced production and process technologies that could result in significant energy savings. But several barriers to their commercialization and deployment remain. It may be many years before some of them are technically and commercially feasible and cost effective, even with the incentive of higher energy costs. These technologies mostly involve the installation of large, expensive pieces of equipment, requiring substantial infusions of new capital investments by industries that chronically complain about a lack of capital. Finally, the vintage of existing equipment and facilities in these industries will dictate when manufacturers would be able to replace aging production capacity with new, more energy-efficient technologies—perhaps as much as a decade or more.

The HRS-MI study also showed, though, that allocating allowances to offset higher energy prices under a climate policy would substantially mitigate the economic effects on energy-intensive industries, at least through 2025. In its simulation of such a measure, production cost increases and operating surplus reductions would be cut by 90% in 2012, falling gradually to a 54% reduction in these effects by 2030, as compared to not offering the allocation offset. This would buy time for the industries to make adjustments and energy-saving technology investments required for maintaining their domestic production capacity and competitiveness. However, if the industries do not invest early enough, making use of the time window provided by the allowance allocation, they could face even harder adjustments after 2025.

Policy implications

From the point of view of the energy-intensive industries, the main policy concerns associated with a climate bill are cost containment and mitigation measures that help manufacturers adopt more energy-efficient technologies. The HRS-MI study showed that the most serious cost effects of climate policy on these industries could be substantially mitigated by a free allocation allowance that companies can treat as a financial asset, which offsets most of these effects. Of course, these allocation offsets must be designed to diminish over time, so that manufacturers have increasing incentives to invest in energy-saving technologies and practices and replace older, less efficient equipment.

A version of the allowance allocation measure was incorporated into the House-passed climate bill. This measure would compensate eligible trade-exposed energy-intensive facilities with allowance rebates proportional to the costs of their direct and indirect greenhouse emissions. It was designed to preserve incentives to make performance improvements, giving companies more time to prepare for longer-term investments in more advanced, low-carbon, energy-efficient process technologies, even as they introduce incremental improvements, while avoiding undue windfalls. The bill would distribute up to 15% of the total amount of allowance permits to the energy-intensive sector shortly after the cap-and-trade system goes into effect, diminishing steadily every year thereafter.

THE STUDY SHOWED THAT ALLOCATING ALLOWANCES TO OFFSET HIGHER ENERGY PRICES UNDER A CLIMATE POLICY WOULD SUBSTANTIALLY MITIGATE THE ECONOMIC EFFECTS ON ENERGY-INTENSIVE INDUSTRIES.

Despite differences between the free allocation allowance policy to offset higher energy prices evaluated in the HRS-MI study and the House bill’s output-based allocation provision (the former is directly tied to energy price increases; the latter is keyed to emissions allowance costs), the end results would be roughly the same: The economic effects on energy-intensive firms would be mitigated, at least in the short to medium term, buying more time for affected manufacturers to adopt low-carbon heat and power and process technologies. More study, though, would be required to examine the relative effectiveness of output-based allowance rebates for the energy-intensive sector under the House bill as compared to other allocation options.

Although the HRS-MI study demonstrated the importance of cost mitigation features such as this, it also concluded that additional policies will probably be needed to support timely investment in energy efficiency and the retrofitting of less advanced production facilities. These measures might include:

  • Tax incentives and credits for installing new equipment
  • Accelerated capital stock recovery for encouraging the retirement of older, less efficient equipment
  • Support for research, development, and demonstrations of cutting-edge energy-savings process innovations
  • Financial support for the adoption of new equipment
  • Technical assistance, especially to help medium and smaller manufacturers (under 500 employees) adopt cleaner, leaner energy technologies and practices, such as through the Manufacturing Extension Partnership (MEP).

Some of these measures are being considered and a few have been incorporated in the current legislation. For example, Senator Brown’s Investments for Manufacturing Progress and Clean Technology (IMPACT) Act of 2009, which was merged into the House bill, would establish a $30 billion Manufacturing Revolving Loan Fund to help small- and medium-sized manufacturers retool, expand, or establish domestic clean energy manufacturing operations. The fund would also expand and focus the MEP’s programs to help manufacturers access clean energy markets and adopt innovative, energy-efficient manufacturing technologies.

Another more controversial proposal is the climate change border adjustment mechanism. Border adjustment mechanisms would require fees to be added to foreign-made imports from nations that have not adopted a carbon emissions mitigation policy. The border adjustment assessment would be based on the emissions associated with a good produced overseas. The purpose is to level the playing field for U.S. producers who are burdened by higher energy prices imposed by a climate policy and compete in global markets against foreign firms not subject to a comparable emissions regulatory system.

The HRS-MI study did not evaluate this type of policy. However, extrapolating from the study’s findings, border mechanisms applied to energy-intensive manufactured goods might enable U.S. producers to more easily pass through additional climate policy-driven energy costs, because a border adjustment provision would require foreign importers to raise their prices to a comparable extent within the U.S. market. A border mechanism provision—the International Reserve Allowance Program—has been incorporated into the House bill. It would kick in some time after 2018, subject to a presidential determination about the effectiveness of the emissions allowance rebate policy in mitigating cost effects for primary energy-intensive producers and the extent of foreign competitors’ compliance with comparable greenhouse gas-limiting policies. It may be necessary, however, to provide rebates of these costs to U.S. exporters of these goods to prevent lower-cost overseas producers from gaining an unfair advantage because of higher U.S. production costs resulting from climate policy.

The low-carbon conversion challenge

The HRS-MI study sought to assess the extent to which an economy-wide cap-and-trade system aimed at reducing greenhouse gas emissions would impose economic costs on critical, trade-sensitive, energy-intensive manufacturing industries. Although it found that such a policy could over time threaten the competitiveness of manufacturing firms in these sectors, creating pressures for some to cut production or move to cheaper offshore locations, it also found that providing free allocation allowances could substantially mitigate these effects.

But this alone will not necessarily be sufficient to make the “business case” for energy-dependent manufacturers to make long-term investments in costly, advanced low-carbon production equipment, which will be required to keep their energy costs low as fossil fuel prices rise. Therefore, other policies incorporated by or supplemental to cap-and-trade climate legislation will be needed to encourage and enable the conversion of energy-reliant companies to next-generation, energy-efficient production technologies.

To achieve this goal, a comprehensive set of policies will be required, including mitigation and various technology policies and perhaps level-the-playing-field measures such as border adjustment. However, attention needs to be paid at the top levels of government to enable this conversion. For example, the Apollo Alliance has called for a Presidential Task Force on Clean Energy Manufacturing to bring together a range of federal agencies to make the manufacturing of clean energy systems and components a national priority.

The resurrection of Flambeau River Papers, a paper mill located in the heart of a northern Wisconsin forest, exemplifies the potential for meeting this challenge. In 2006, the town of Park Falls, with 3,000 residents, was in trouble. Its major employer, a paper and pulp mill located along the Flambeau River, had closed, costing 300 jobs. The plant, originally built in 1896, had antiquated equipment and used an expensive and outmoded process to make pulp. In recent years, higher energy prices combined with rising international competition and stagnant demand had forced the mill’s owners into bankruptcy.

Two years later, with the help of state loans and private investors, the mill reopened, its restart enabled by investments in new biomass-energy boilers, making it the first fossil fuel–free, energy-independent, integrated pulp and paper mill in North America. It also reemployed almost all of the workers originally laid off, at their previous pay and benefits. Moreover, the Flambeau River mill, with help from a U.S. Department of Energy grant, is moving toward becoming the first modern U.S.-based pulp mill biorefinery to produce cellulosic ethanol. Not only would the new biorefinery have a positive carbon effect of about 140,000 tons per year, but it would also create an additional 100 jobs in the Park Falls area.

As its name coincidentally implies, Flambeau River Papers is a beacon pointing us in the right direction for making the transition to a prosperous low-carbon, energy-efficient and globally competitive industrial base. It also suggests the critical role government must play in this transition—ideally in partnership with industry, over and above the market signals a climate policy would create. But this conversion will not be easy or cost-free. It will require strong policies that provide necessary supports and incentives for energy-intensive manufacturers to shed their reliance on carbon fuels, while retaining their competitiveness. But in the end, the resulting benefits to our economy and the environment would be very great.

High-Performance Computing for All

The United States faces a global competitive landscape undergoing radical change, transformed by the digital revolution, globalization, the entry of emerging economies into global commerce, and the growth of global businesses. Many emerging economies seek to follow the path of the world’s innovators. They are adopting innovation-based growth strategies, boosting government R&D, developing research parks and regional centers of innovation, and ramping up the production of scientists and engineers.

As scientific and technical capabilities grow around the world, the United States cannot match the traditional advantages of emerging economies. It cannot compete on low wages, commodity products, standard services, and routine or incremental technology development. Knowledge and technology are increasingly commodities, so rewards do not necessarily go to those who have a great deal of these things. Instead, rewards go to those who know what to do with knowledge and technology once they get it, and who have the infrastructure to move quickly.

These game-changing trends have created an “innovation imperative” for the United States. Its success in large measure will be built not on making small improvements in products and services but on transforming industries; reshaping markets and creating new ones; exploiting the leading edge of technology creation; and fusing diverse knowledge, information, and technology to totally transform products and services.

The future holds unprecedented opportunities for innovation. At least three profound technological revolutions are unfolding. The digital revolution has created disruptive effects and altered every industrial sector, and now biotechnology and nanotechnology promise to do the same. Advances in these fields will increase technological possibilities exponentially, unleashing a flood of innovation and creating new platforms for industries, companies, and markets.

In addition, there is a great and growing need for innovation to solve grand global challenges such as food and water shortages, pandemics, security threats, climate change, and the global need for cheap, clean energy. For example, the energy and environmental challenges have created a perfect storm for energy innovation. We can move to a new era of technological advances, market opportunity, and industrial transformation. Energy production and energy efficiency innovations are needed in transportation, appliances, green buildings, materials, fuels, power generation, and industrial processes. There are tremendous opportunities in renewable energy production, from utility-scale systems and distributed power to biofuels and appropriate energy solutions for the developing world.

Force multiplier for innovation

Modeling and simulation with high-performance computing (HPC) can be a force multiplier for innovation as we seek to answer these challenges and opportunities. A simple example illustrates this power. Twenty years ago, when Ford Motor Company wanted safety data on its vehicles, it spent $60,000 to slam a vehicle into a wall. Today, many of those frontal crash tests are performed virtually on high-performance computers, at a cost of around $10.

Imagine putting the power and productivity of HPC into the hands of all U.S. producers, innovators, and entrepreneurs as they pursue innovations in the game-changing field of nanotechnology. The potential exists to revolutionize the production of virtually every human-made object, from vehicles to electronics to medical technology, with low-volume manufacturing that could custom-fit products for every conceivable use. Imagine the world’s scientists, engineers, and designers seeking solutions to global challenges with modeling, simulation, and visualization tools that can speed the exploration of radical new ways to understand and enhance the natural and built world.

These force-multiplying tools are innovation accelerators that offer an extraordinary opportunity for the United States to design products and services faster, minimize the time to create and test prototypes, streamline production processes, lower the cost of innovation, and develop high-value innovations that would otherwise be impossible.

Supercomputers are transforming the very nature of biomedical research and innovation, from a science that relies primarily on observation to a science that relies on HPC to achieve previously impossible quantitative results. For example, nearly every mental disease, including Alzheimer’s, schizophrenia, and manic-depressive disorders, in one way or another involves chemical imbalances at the synapses that cause disorders in synaptic transmission. Researchers at the Salk Institute are using supercomputers to investigate how synapses work (see http://www.compete.org/publications/detail/503/breakthroughs-in-brain-research-with-high-performance-computing/). These scientists have tools that can run a computer model through a million different simulations, producing an extremely accurate picture of how the brain works at the molecular level. Their work may open up pathways for new drug treatments.

Farmers around the world need plant varieties that can withstand drought, floods, diseases, and insects, and many farmers are shifting to crops tailored for biofuels production. To help meet these needs, researchers at DuPont’s Pioneer Hi-Bred are conducting leading-edge research into plant genetics to create improved seeds (see http://www.compete.org/publications/detail/683/pioneer-is-seeding-the-future-with-high-performance-computing/). But conducting experiments to determine how new hybrid seeds perform can often take years of study and thousands of experiments conducted under different farm management conditions. Using HPC, Pioneer Hi-Bred researchers can work with astronomical numbers of gene combinations and manage and analyze massive amounts of molecular, plant, environmental, and farm management data. HPC cuts the time needed to answer research problems from days and weeks to a matter of hours. HPC has enabled Pioneer Hi-Bred to operate a breeding program that is 10 to 50 times bigger than what would be possible without HPC, helping the company better meet some of the world’s most pressing needs for food, feed, fuel, and materials.

Medrad, a provider of drug delivery systems, magnetic resonance imaging accessories, and catheters, purchased patents for a promising interventional catheter device to mechanically remove blood clots associated with a stroke (see http://www.compete.org/publications/detail/497/high-performance-computing-helps-create-new-treatment-for-stroke-victims/). But before starting expensive product development activities, they needed to determine whether this new technology was even feasible. In the past, they might have made bench-top models, testing each one in trial conditions, and then moved to animal and human testing. But this approach would not efficiently capture the complicated interaction between blood cells, vessel walls, the clot, and the device. Using HPC, Medrad simulated the process of the catheter destroying the clots, adjusting parameters again and again to ensure that the phenomenon was repeatable, thus validating that the device worked. They were able to look at multiple iterations of different design parameters without building physical prototypes. HPC saved 8 to 10 months in the R&D process.

Designing a new golf club at PING (a manufacturer of high-end golf equipment) was a cumbersome trial-and-error process (see http://www.compete.org/publications/detail/684/ping-scores-a-hole-in-one-with-high-performance-computing/). An idea would be made into a physical prototype, which could take four to five weeks and cost tens of thousands of dollars. Testing might take another two to three weeks and, if a prototype failed to pass muster, testing was repeated with a new design to the tune of another $20,000 to $30,000 and six more weeks. In 2005, PING was using desktop workstations to simulate some prototypes. But one simulation took 10 hours; testing seven variations took 70 hours. PING discovered that a state-of-the-art supercomputer with advanced physics simulation software could run one simulation in 20 minutes. With HPC, PING can simulate what happens to the club and the golf ball when the two collide and what happens if different materials are used in the club. PING can even simulate materials that don’t currently exist. Tests that previously took months are now completed in under a week. Thanks to HPC, PING has accelerated its time to market for new products by an order of magnitude, an important benefit for a company that derives 85% of its income from new offerings. Design cycle times have been cut from 18 to 24 months to 8 to 9 months, and the company can produce five times more products for the market, with the same staff, factory, and equipment.

At Goodyear, optimizing the design of an all-season tire is a complex process. The tire has to perform on dry, wet, icy, or snowy surfaces, and perform well in terms of tread wear, noise, and handling (see http://www.compete.org/publications/detail/685/goodyear-puts-the-rubber-to-the-road-with-high-performance-computing/). Traditionally, the company would build physical prototypes and then subject them to extensive environmental testing. Some tests, such as tread wear, can take four to six months to get representative results. With HPC, Goodyear reduced key product design time from three years to less than one. Spending on tire building and testing dropped from 40% of the company’s research, design, engineering, and quality budget to 15%.

Imagine what we could do if we could achieve these kinds of results throughout our research, service, and industrial enterprise. Unfortunately, we have only scratched the surface in harnessing HPC, modeling, and simulation, which remain largely the tools of big companies and researchers. Although we have world-class government and university-based HPC users, there are relatively few experienced HPC users in U.S. industry, and many businesses don’t use it at all. We need to drive HPC, modeling, and simulation throughout the supply chain and put these powerful tools into the hands of companies of all sizes, entrepreneurs, and inventors, to transform what they do.

Competing with computing

The United States can take steps to advance the development and deployment of HPC, modeling, and simulation. First, there must be sustained federal funding for HPC, modeling, and simulation research and its application across science, technology, and industrial fields. At the same time, the government must coordinate agency efforts and work toward a more technologically balanced program across Department of Energy labs, National Science Foundation–funded supercomputing centers, the Department of Defense, and universities.

Second, the nation needs to develop and use HPC, modeling, and simulation in visionary large-scale multidisciplinary activities. Traditionally, much federal R&D funding goes to individual researchers or small single-discipline groups. However, many of today’s research and innovation challenges are complex and cut across disciplinary fields. For example, the Salk Institute’s research on synapses brought together anatomical, physiological, and biochemical data, and drew conclusions that would not be readily apparent if these and other related disciplines were studied on their own. No matter how excellent they may be, small single-discipline R&D projects lack the scale and scope needed for many of today’s research challenges and opportunities for innovation.

Increasing multidisciplinary research within the academic community will require overcoming a host of barriers such as single-discipline organizational structures; dysfunctional reward systems; a dearth of academic researchers collaborating with disciplines other than their own; the relatively small size of most grants; and traditional peer review, publication practices, and career paths within academia. Federal policy and funding practices can be used as levers to increase multidisciplinary research in the development and application of HPC, modeling, and simulation.

Third, the difficulty of using HPC, modeling, and simulation tools inhibits the number of users in academia, industry, and government. And since the user base is currently small, there is little incentive for the private sector to create simpler tools that could be used more widely. The HPC, modeling, and simulation community, including federal agencies that support HPC development, should work to create better software tools. Advances in visualization also would help users make better use of scientific and other valuable data. As challenges and the technologies to solve them become more complex, there is greater need for better ways to visualize, understand, manage, monitor, and evaluate this complexity.

Fourth, getting better tools is only half the challenge; these tools have to be put into the hands of U.S. innovators. The federal government should establish and support an HPC center or program dedicated solely to assisting U.S. industry partners in addressing their research and innovation needs that could be met with modeling, simulation, and advanced computation. The United States should establish advanced computing service centers to serve each of the 50 states to assist researchers and innovators with HPC adoption.

In addition, the nation’s chief executives in manufacturing firms of all sizes need information to help them better understand the benefits of HPC. A first step would be to convene a summit of chief executive officers and chief technical officers from the nation’s manufacturing base, along with U.S. experts in HPC hardware and software, to better frame and address the issues surrounding the development and widespread deployment of HPC for industrial innovation and next-generation manufacturing.

If it takes these steps, the United States will be far better positioned to exploit the scientific and technological breakthroughs of the future and to fuel an age of innovation that will bring enormous economic and social benefits.