Time to Act on Health Care Costs

Popular discussions of the long-term fiscal challenges confronting the United States usually misdiagnose the problem. They typically focus on the government expenses related to the aging of the baby boomers, with lower fertility rates and longer life expectancy causing most of the long-term budget problem. In fact, most of the long-term problem will be driven by excess health care cost growth: the rate at which health care costs per beneficiary grow relative to income per capita. In other words, it is the rising cost per beneficiary rather than the number of beneficiaries that explains the bulk of the nation’s long-term fiscal problem.

One can see this phenomenon manifesting itself even in the next decade: Figure 1 shows the Congressional Budget Office’s (CBO’s) projections for spending on Social Security, Medicare, and Medicaid through 2017. As Figure 1 shows, Social Security rises by about 0.6% of gross domestic product (GDP), from 4.2 to 4.8%, over that period. Spending on Medicare and the federal share of Medicaid rises from 4.6 to 5.9% of GDP—an increase of 1.3%, or roughly twice as much as that for Social Security.

If one looks further into the future, the basic point is accentuated. Figure 2 portrays a simple extrapolation in which Medicare and Medicaid costs continue to grow at the same rate over the next four decades as they did over the past four decades. (Fortunately, even with no change in federal policy, there are reasons to believe that this simple extrapolation overstates future cost growth in Medicare and Medicaid. The CBO has recently released a long-term health outlook that presents a more sophisticated approach to projecting Medicare and Medicaid costs under current law, but this simple extrapolation is adequate to illustrate the key point.) Under this scenario, Medicare and Medicaid would rise from 4.6% of the economy today to 20% of the economy by 2050. To appreciate the scale of this increase, consider that all of the activities of the federal government today make up about 20% of the economy.

FIGURE 1
Spending on Medicare and Medicaid and on Social Security as a percentage of GDP, 2007 and 2017

The most interesting part of Figure 2 is the bottom line, which isolates the pure effect of demographics on those two programs. The only reason that the bottom line is rising is that the population is getting older and there are more beneficiaries of the two public programs. The increase between today and 2050 in that bottom dotted line shows that aging does indeed affect the federal government’s fiscal position. But that increase is much smaller than the difference in 2050 between the bottom line and the top line. In other words, the rate at which health care costs grow—whether they continue to grow 2.5% per year faster than income per capita, or 1%, or 0.5%—is to a first approximation the central long-term fiscal challenge facing the United States.

Conventional wisdom tells us that the sooner we act, the better off we are, and the conventional wisdom certainly has it right in this case. Figure 3 shows that if we slow health care costs’ excess growth from 2.5 to 1% per year starting in 2015 (which would be extremely difficult if not impossible to do, but is helpful as an illustration), the result in 2050 would be that federal Medicare and Medicaid expenditures would account for 10% rather than 20% of GDP.
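To see how much the assumed excess growth rate matters, here is a deliberately simplified back-of-the-envelope sketch. It compounds only the assumed excess growth rate on today’s 4.6% spending share and ignores the demographic component that Figures 2 and 3 also include, so the levels come out lower than in the figures; the point is the gap between the two scenarios, not the exact levels.

```python
# Back-of-the-envelope sketch of compounding excess cost growth.
# Simplifying assumption: the Medicare/Medicaid share of GDP grows each year
# by the excess growth rate alone; the demographic component in Figures 2 and 3
# is ignored, so the levels here are lower than in the figures.

def project_share(start_share, start_year, end_year, excess_rate_for_year):
    share = start_share
    for year in range(start_year, end_year):
        share *= 1 + excess_rate_for_year(year)
    return share

START_SHARE = 4.6  # percent of GDP in 2007 (from the text)

unchecked = project_share(START_SHARE, 2007, 2050, lambda year: 0.025)
slowed = project_share(START_SHARE, 2007, 2050,
                       lambda year: 0.025 if year < 2015 else 0.01)

print(f"2.5% excess growth throughout: ~{unchecked:.0f}% of GDP by 2050")
print(f"Slowed to 1% starting in 2015: ~{slowed:.0f}% of GDP by 2050")
```

Even before adding the demographic component, the difference between the two growth assumptions dwarfs everything else in the projection.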

On its face, this challenge looks pretty daunting. And it is further complicated by the fact that it is implausible that we will slow Medicare and Medicaid growth unless overall health care spending also slows. The reason is that if all one did was, say, reduce payment rates under Medicare and Medicaid, and then tried to perpetuate those cuts over time without a slowing of overall health care cost growth, the likely result would be that fewer doctors would accept Medicare and Medicaid patients. That would create an access problem inconsistent with the underlying premise and public understanding of these programs. One therefore needs to think about changes in Medicare and Medicaid in terms of the impact that they can have on the overall health care system.

FIGURE 2
Total federal spending for Medicare and Medicaid under assumptions about the health cost growth differential

FIGURE 3
Effects of slowing the growth of spending for Medicare and Medicaid

From that perspective, this long-term fiscal challenge appears to present a very substantial opportunity: the possibility of taking costs out of the system without harming health. Perhaps the most compelling evidence underscoring this opportunity is the significant variation in health care spending across different parts of the United States that does not translate into differences in health quality or health outcomes, as explained by Elliott Fisher in the accompanying article.

The question then becomes, why is this happening? To me, it appears to be a combination of two things. The first is a lack of information about what works and what doesn’t. The second is a payment system that gives neither providers nor consumers an effective incentive to eliminate low-value or negative-value care.

On the consumer side, and despite media portrayals to the contrary, the share of health care expenditures paid out of pocket (the relevant measure of how much cost sharing consumers actually face) has plummeted over the past few decades, from about 33% in 1975 to 15% today. All available evidence suggests that lower cost sharing increases overall health care spending, and the result is that collectively we all bear a higher burden, although the evidence is somewhat mixed on the precise magnitude of the effect.

This observation leads some analysts to argue that the way forward is more cost sharing and a health savings account approach, and that can indeed help to reduce costs. But two things need to be kept in mind in evaluating this approach. The first is that existing plans already involve a significant amount of cost sharing. Moving to universal health savings accounts would thus not entail as much of an increase in cost sharing, and therefore as much of a reduction in spending, as one might think. Second, there is an inherent limit to what we should expect from increased consumer cost sharing, because health care costs are so concentrated among the very sick. For example, the most expensive 25% of Medicare beneficiaries account for 85% of total costs, and this concentration of costs among a small share of the population is replicated in Medicaid and in the private health care system. To the extent that we in the United States want to provide insurance, and insurance is supposed to provide coverage against catastrophic costs, the fact that those catastrophic costs account for such a large share of overall costs imposes an inherent limit on the traction that one can obtain from increased consumer cost sharing. In sum, increased cost sharing on the consumer side can help to reduce costs, but it seems very unlikely to capture the full potential to reduce costs without impairing health quality.
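A stylized calculation makes the limit concrete. The 25%/85% concentration figure comes from the paragraph above; the assumed behavioral responses are purely hypothetical round numbers chosen only to show the mechanics.

```python
# Stylized illustration of why cost concentration limits savings from
# increased consumer cost sharing. The 85%/25% split is from the text;
# the assumed spending reductions are hypothetical.

SHARE_OF_COSTS_SICKEST_25 = 0.85          # costliest 25% of beneficiaries
SHARE_OF_COSTS_OTHER_75 = 1 - SHARE_OF_COSTS_SICKEST_25

# Hypothetical assumption: more cost sharing trims spending by 20% among the
# healthier 75%, but only 2% among the sickest 25%, who quickly pass any
# catastrophic out-of-pocket cap and face little price at the margin.
REDUCTION_OTHER_75 = 0.20
REDUCTION_SICKEST_25 = 0.02

total_reduction = (SHARE_OF_COSTS_OTHER_75 * REDUCTION_OTHER_75
                   + SHARE_OF_COSTS_SICKEST_25 * REDUCTION_SICKEST_25)
print(f"Overall spending reduction: about {total_reduction:.0%}")  # roughly 5%
```

Under these illustrative assumptions, even a strong behavioral response among the relatively healthy moves total spending by only about 5%, because most of the money is spent where cost sharing has little bite.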

That leads us to the provider side, where the accumulation of additional information and changes in incentives would be beneficial. There is growing interest in comparative effectiveness research, and the original House version of the State Children’s Health Insurance Program legislation had some additional funding for comparative effectiveness research. Policymaker interest in expanding comparative effectiveness research is encouraging, but we need to ask some hard questions about what we mean by comparative effectiveness research and how it would be implemented.

The first issue is what kind of research is undertaken and what standard of evidence is used. As Mark McClellan, the former administrator of the Centers for Medicare and Medicaid Services, has noted, comparative effectiveness research will very probably have to rely on nonrandomized evidence. The reason is that it seems implausible that we could build out the evidence base across a whole variety of clinical interventions and practice norms using randomized controlled trials, especially if we want to study subpopulations. On the other hand, economists have long been aware of the limitations of panel data econometrics, in which one attempts to control for every factor that could influence the results and typically falls far short of perfection. There is thus a tension between using statistical techniques on panel data sets (of electronic health records, insurance claims, and other medical data), which seems to be the only cost-effective and feasible way to significantly expand the evidence base, and the inherent difficulty of separating correlation from causation in such an approach.
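The correlation-versus-causation problem can be made concrete with a small simulation using entirely made-up data (a sketch of confounding by indication, not an analysis of any real registry or claims file): when sicker patients are steered toward one treatment, a naive comparison of outcomes in observational records makes that treatment look worse even though, by construction, the two treatments are equally effective.

```python
# Hypothetical simulation of confounding by indication in observational data.
# By construction, the two treatments have identical effects; severity drives
# both treatment choice and outcomes, so a naive comparison misleads.
import random

random.seed(0)
records = []
for _ in range(100_000):
    severity = random.random()                     # 0 = healthy, 1 = very sick
    gets_treatment_a = random.random() < severity  # sicker patients get A more often
    # Recovery depends only on severity, not on which treatment was given.
    recovered = random.random() < (0.9 - 0.6 * severity)
    records.append((gets_treatment_a, severity, recovered))

def recovery_rate(rows):
    return sum(recovered for _, _, recovered in rows) / len(rows)

a_rows = [r for r in records if r[0]]
b_rows = [r for r in records if not r[0]]
print(f"Naive recovery rate, treatment A: {recovery_rate(a_rows):.2f}")  # ~0.50
print(f"Naive recovery rate, treatment B: {recovery_rate(b_rows):.2f}")  # ~0.70

# Stratifying on severity (a crude statistical control) narrows the gap but
# does not eliminate it, because severity still varies within the stratum.
sick_a = [r for r in a_rows if r[1] > 0.5]
sick_b = [r for r in b_rows if r[1] > 0.5]
print(f"Sicker patients only, A: {recovery_rate(sick_a):.2f}, "
      f"B: {recovery_rate(sick_b):.2f}")
```

Better data and better methods can shrink this kind of bias, but rarely eliminate it entirely, which is precisely the tension described above.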

In terms of the budgetary effects of comparative effectiveness research, much depends on both what is done and how it is implemented. If the effort involves only releasing the results of literature surveys, the effects would probably be relatively modest. If new research using registries or analysis of electronic health records is involved, the effects may be somewhat larger. The real traction, though, will come from building the results of that research into financial incentives for providers. In other words, the effects would be maximized if we moved from a “fee-for-service” to a “fee-for-value” system, in which higher-value care is rewarded with stronger financial incentives and low- or negative-value care faces weaker incentives or even penalties. Such a system would be complicated to design and difficult to implement, but that is where the largest long-term budgetary savings could come.
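One way to picture the basic mechanics of such a payment system (a hypothetical sketch, not a design proposed in the text) is a multiplier that scales each fee according to an evidence-based value score.

```python
# Hypothetical sketch of a "fee-for-value" adjustment: each payment is scaled
# by a value score that would, in practice, come from comparative effectiveness
# evidence. All scores, bounds, and services below are invented for illustration.

def fee_for_value(base_fee, value_score, max_bonus=0.15, max_penalty=0.30):
    """Scale a fee up for high-value care and down for low-value care.

    value_score runs from 0.0 (no or negative value) to 1.0 (high value),
    with 0.5 treated as neutral (no adjustment).
    """
    if value_score >= 0.5:
        adjustment = (value_score - 0.5) / 0.5 * max_bonus
    else:
        adjustment = -((0.5 - value_score) / 0.5) * max_penalty
    return base_fee * (1 + adjustment)

for service, fee, score in [("high-value screening", 200, 0.9),
                            ("neutral procedure", 1_000, 0.5),
                            ("low-value imaging", 800, 0.2)]:
    print(f"{service}: ${fee} -> ${fee_for_value(fee, score):,.0f}")
```

The arithmetic of the adjustment is trivial; the hard part, as the text notes, is producing value scores that providers and patients trust.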

My conclusion is that the combination of some increased cost sharing on the consumer side with a substantially expanded comparative effectiveness effort linked to changes in the incentive system for providers offers the nation the most auspicious approach to capturing the apparent opportunity to reduce health care costs with minimal or no adverse consequences for health outcomes.

Ideas for a New Industrial Policy

William B. Bonvillian’s essay, “Encompassing the Innovation Panoply” (Issues, Winter 2022), is itself a panoply of ideas about how the United States can pursue a new industrial policy. These ideas are nested in the essay’s review of the intellectual history of the nation’s innovation policy, its stops and starts, and the deficiencies in the current US innovation system. Bonvillian offers pragmatic solutions empirically grounded in this history.

One of the author’s proposed innovations is to recenter the intellectual history of innovation policy to focus on John R. Steelman, an Alabama college professor who led the US Conciliation Service in the Department of Labor and later became special assistant to President Truman, rather than on Vannevar Bush as the key policy entrepreneur. Bush is already known, albeit only to cognoscenti, for his work on science policy and the report Science, the Endless Frontier. Steelman, as Bonvillian notes, directed a much more comprehensive study of the federal government’s role in research and development, which called for more public funds to be devoted to R&D and for various federal agencies to play key roles in allocating funding. In this way, Bonvillian writes, “Steelman more than Bush may thus be the true architect of American science policy.”

Steelman directed a much more comprehensive study of the federal government’s role in research and development, which called for more public funds to be devoted to R&D and for various federal agencies to play key roles in allocating funding.

A second, related innovation of the essay is to look at innovation as a dynamic process, unlike Bush’s linear model in which innovation occurs in a series of sequential stages. Bonvillian’s proposed solutions build upon this insight to call for a “systems of innovation approach” that recognizes innovation is a “multidirectional system, not a one-way street.”

Overall, the ideas that Bonvillian presents, and the historic resistance they have faced, bring to mind another system: “the antiseptic system” devised by the British surgeon Joseph Lister. Noticing how many surgical patients were dying of sepsis or gangrene, Lister challenged the idea that infection came from “bad air,” arguing instead that it came from germs introduced during surgery. The medical establishment of the day was highly skeptical of this theory and approach. Lister introduced what he called “antiseptics” into surgery, and infection plummeted. Though it didn’t happen immediately, surgical practice eventually changed.

Bonvillian’s insights and evidence in this largely historical essay are similarly irrefutable. But will implementation of his policy solutions necessarily follow? I believe the answer is yes—but I am much less certain that the United States will be the country where this takes place. There are other countries such as China, Japan, Taiwan, and South Korea that are more open to evidence, unlike much of the US economic establishment, which resembles nothing so much as the pre-Listerian surgical establishment in England.

Contributing Editor

American Affairs

Mathemalchemy

"Mathemalchemy," 2021, mixed media

What happens when a fiber artist meets a world-renowned mathematician? In a word: mathemalchemy.

In 2019, the mathematician Ingrid Daubechies, whom the New York Times dubbed the “godmother of the digital image” because of her work with wavelets and the role it played in the advancement of image compression technology, visited an art exhibit entitled Time to Break Free. The installation, a quilted, steampunk-inspired sculpture full of fantastical, transformative imagery, was the work of fiber artist Dominique Ehrmann. Seeing the installation made Ingrid wonder whether art could similarly bring the beauty and creativity of mathematics to life. She contacted Dominique, and after much discussion, a collaborative project was born. Over several months, many workshops, and the challenges of a pandemic, the collaborative grew to include 24 core “mathemalchemists” representing a diverse spectrum of expertise. The result is a sensory-rich installation full of fantasy, mathematical history, theorems, illuminating stories of complexity, and even a chipmunk or two.

"Mathemalchemy," 2021, mixed media

What happens when a fiber artist meets a world-renowned mathematician?

The artists and mathematicians work in fabric, yarn and string, metal, glass, paper, ceramic, wood, printed plastic, and light; they depict or employ mathematical concepts such as symmetry, topology, optimization, tessellations, fractals, the hyperbolic plane, and stereographic projection. Playful constructs include a flurry of Koch snowflakes, Riemann basalt cliffs, and Lebesgue terraces, all named after mathematicians. Additionally, the exhibition pays homage to mathematicians and mathematical ideas from many different origins and backgrounds, ranging from amateur mathematician Marjorie Rice to Fields Medalist and National Academy of Sciences member Maryam Mirzakhani.

"Mathemalchemy," 2021, mixed media

Mathemalchemy is on display at the National Academy of Sciences in Washington, DC, from January 24 through June 13, 2022. More information about the exhibit and the collaboration can be found at mathemalchemy.org.

Focusing on Connectivity

Maureen Kearney’s article, “Astonishingly Hyperconnected” (Issues, Winter 2022), is first and foremost about connections: between the global climate and biodiversity crises, between organisms, and between humanity’s future and that of the rest of the living world. Its focus resonates strongly with the fabric of life on earth, emphasizing humans’ ancient and deep entanglement with all other living organisms, locally and remotely. In Kearney’s words, “Because life is astonishingly hyperconnected on scales much larger than we thought … the fate of any species in the face of environmental change is intertwined with the fate of many others.” Here I add a few reflections as to what this hyperconnectivity entails for science and policy.

Connectivity among disciplines. First, I agree on the need for convergent research among the natural sciences. I would add to this the need for more convergence between the biological and physical sciences on one hand, and the social sciences and humanities on the other. This is indispensable because the present environmental crises are manifested in the atmosphere and the biosphere but their roots and therefore their potential solutions are deeply social, economic, and political. The biophysical sciences are clearly not enough to deal with them. For researchers, this certainly entails extra layers of difficulty in bridging vastly different methods, categories, and epistemologies. For funding agencies, it involves a mindset shift concerning risk aversion, budget allocations, timetables, and researcher evaluation criteria. For example, what the literature calls “boundary work,” which binds together the different disciplines in an integrated project, needs more time and money. Relatedly, the judging of curricula vitae needs to consider that most researchers are venturing into uncharted waters, and to avoid penalizing the lack of an extensive trajectory in the new subject area.

The present environmental crises are manifested in the atmosphere and the biosphere but their roots and therefore their potential solutions are deeply social, economic, and political.

Connectivity among policymakers and researchers. I also agree on the need for policymakers and researchers to work in a more integrated way in addressing global climate and biodiversity challenges. In this, it is important to abandon the complexity-averse attitude that has predominated so far. For example, in the preparation of the new post-2020 Global Biodiversity Framework of the United Nations Convention on Biological Diversity, the emphasis has been on particular aspects of biodiversity, often the easiest to communicate and monitor, yet not necessarily the most effective ones. The fact that the fabric of all life is interwoven and complex does not mean it is intractable, but it does defy “easy” targets such as setting aside X hectares under legal protection or promoting Y species from threatened to nonthreatened. The new goals for nature need to be clearer and bolder than ever before, but at the same time they must focus on connections and be themselves interconnected in a safety net.

Connectivity among institutions. One key point, probably the most difficult, is that indeed we are astonishingly hyperconnected in many ways, yet astonishingly disconnected in others. The institutions that deal with different knots in the fabric of life are often disconnected or misaligned, each setting its rules, incentives, monitoring indicators, and standards in isolation from, or in contradiction with, the others. This happens, for example, in the regulation of water and wild animal populations across municipal, regional, and international borders. It is also rife among bodies acting at the same spatial scale but on different sectors, such as road maintenance and nature restoration, or urban planning, food sovereignty, and biodiversity protection.

The transformative change being called for by all the recent international assessments has to be not only bigger and deeper than ever before; it also has to shift the focus toward much more connectivity. We have been trying to handle an astonishingly hyperconnected earth with an astonishingly disconnected set of narratives, mindsets, and institutions. We can afford this no longer.

Professor of Ecology

Córdoba National University

Senior Member of the National Research Council of Argentina

Maureen Kearney extends an important conversation about climate change and biodiversity. As context, I have a stack of books on my desk that tackle this fraught topic, most of them dealing with loss of diversity, but some addressing the possibility of recovering species through de-extinction. A sample includes: Second Nature by Nathaniel Rich; Thor Hanson’s Hurricane Lizards and Plastic Squid; Strange Natures by Kent Redford and William Adams; Elizabeth Kolbert’s Under a White Sky; and the most recent arrival, Ben Rawlence’s The Treeline. All add in various ways to the increasingly clear conclusion that climate change is negatively affecting earth’s biodiversity and that we need to think hard about how to mitigate such an outcome.

Kearney agrees with that conclusion. In an important way, however, she goes further, exploring the thesis that biodiversity and climate change are not just connected, but “hyperconnected,” meaning they are inextricably intertwined. Her message is that we cannot solve the problem of declining biodiversity without solving the challenge of our changing climate, which is itself a complex function of earth’s biodiversity. Each influences the other in deep and important ways.

Others have gone down this road, but Kearney makes a strong case for the intersection of these areas, which makes a lot of sense. She first reviews how biodiversity contributes to an array of ecosystem services that benefit humans, making it clear how we rely on the organisms that surround us. And in a review of biotechnology approaches to managing environmental change, she appropriately urges that this possible set of solutions must be approached cautiously in light of possible unintended consequences.

Climate change is negatively affecting earth’s biodiversity, and we need to think hard about how to mitigate such an outcome.

Still, there are some people who flatly reject biotechnology approaches to mitigating climate change, or manipulating the environment in general. It would have added to Kearney’s perspective to comment on this view, and how she feels it can or cannot be incorporated into her call for integrating climate change and biodiversity.

The subtitles of the books mentioned earlier echo Kearney’s arguments. Rich uses Scenes from a World Remade to suggest how humans are altering ecosystems and the responsibilities that follow. Hanson uses The Fraught and Fascinating Biology of Climate Change to emphasize that organisms adapt to climate change and do not just suffer its effects. Redford and Adams use Conservation in the Era of Synthetic Biology to highlight their analysis of how the tools of gene editing will shape a future world along with the responsibilities that accompany such use. For Kolbert, The Nature of the Future invokes the ways in which humans have altered earth’s systems, raising questions along the way about what “nature” will look like in the future. Rawlence’s The Last Forest and the Future of Life on Earth uses the iconic boreal forest as a study system for analyzing the intersection of biodiversity and climate change.

Kearney joins these authors to highlight a topic—the intersection of biodiversity and climate change—that needs our increased understanding. In a discussion of “planetary futures” she emphasizes that “natural systems are climate solutions on par with greenhouse gas reductions and other objectives.” That sounds right. Her call for research on biological complexity to understand much better than we do the reciprocal interaction of climate change and biodiversity is also the right one in an era of global change that will call for adaptation as well as mitigation. It is a call we should join her in pursuing.

Virginia M. Ullman Professor of Natural History and the Environment

School of Life Sciences

Arizona State University

Rethinking Benefit-Cost Analysis

In “New Ways to Get to High Ground” (Issues, Winter 2022), Jennifer Helgeson and Jia Li make a persuasive case for augmenting benefit-cost analysis (BCA) with multidisciplinary approaches when conducting resilience planning under climate change. Among other correctives, they suggest the need for nonmonetary data and the use of narratives as a way for community members to articulate key values. These are appropriate suggestions, but we can go further.

Climate risks are imposed on an already complex social landscape, populated by groups with distinct strengths and vulnerabilities, including differences in their sensitivity to environmental stressors and their access to information and resources. The basis for such differences may track the familiar cleavages of income and ethnicity, but many more specific factors may be at work in shaping vulnerability and resilience, which can best be identified through collaborative inquiry.

Disaster planning in New Orleans before Hurricane Katrina offers a case in point. Few provisions were made for evacuating residents without cars. As David Eisenman and colleagues have documented, lack of transportation prevented many poorer residents from leaving. Other residents with cars felt unable to leave because of economic constraints (insufficient money to pay for meals and lodging, fear of job loss), social constraints (responsibility for extended kin networks), or countervailing risks (health problems, fear of looting if they abandoned their homes).

Many more specific factors may be at work in shaping vulnerability and resilience, which can best be identified through collaborative inquiry.

Awareness of such risk factors, drawing on both quantitative and qualitative data, could have improved the response to Hurricane Katrina. Here an ethnographic approach is particularly valuable. It brings together a cultural account—capturing people’s own interpretations of their circumstances—with a delineation of local social systems, including the relations based on affinity, place, and power that shape people’s lives. Such multidimensional accounts provide a better basis for constructing risk pathways, showing how climate change stressors interact with local socioenvironmental conditions to affect individuals and communities.

Helgeson and Li note some of the familiar weaknesses of BCA, including its indifference to both equity and the complex character of community resilience. Combining a collaborative approach with ethnographic inquiry can compensate at least partially for these weaknesses. Since BCA presumes that all benefits and costs can be monetized, losses of the rich will almost always outweigh losses of the poor. In contrast, an ethnographic account is well suited to a capabilities approach, in which losses of both rich and poor can be assessed on a common scale: the capacity to maintain a good life (the proper measure of resilience). The flood-driven loss of a modest car may be far more devastating for a poor family than the loss of a luxury car for a rich one. The poor family may lack the capacity to replace the car, and as a result lose access to employment, lose income, and fail to meet kin obligations. The rich family faces an inconvenience, which assets and insurance will soon make good.
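One simple way to make that asymmetry visible within a BCA framework is to apply distributional (equity) weights, so that a dollar of loss counts for more when it falls on a household with fewer resources. This is not the capabilities approach itself, only a crude stand-in for the same intuition, and every number below is hypothetical.

```python
# Hypothetical illustration of equity (distributional) weighting in a BCA.
# All incomes and losses are invented. Unweighted monetization ranks the rich
# household's larger dollar loss as the bigger harm; a simple income-based
# weight reverses that ranking.

households = [
    {"name": "poor family", "income": 25_000, "flood_loss": 8_000},    # modest car
    {"name": "rich family", "income": 250_000, "flood_loss": 60_000},  # luxury car
]

REFERENCE_INCOME = 75_000  # hypothetical community median used to set weights

for h in households:
    weight = REFERENCE_INCOME / h["income"]   # larger weight for lower income
    weighted_loss = weight * h["flood_loss"]
    print(f"{h['name']}: unweighted loss ${h['flood_loss']:,}, "
          f"equity-weighted loss ${weighted_loss:,.0f}")
```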

Research Professor of Anthropology

University of Maryland, College Park

As our world changes, socially and climatologically, we need tools at hand that better support decisionmaking reflecting those changes. Jennifer Helgeson and Jia Li do an excellent job laying out the need for benefit-cost analysis (BCA) to evolve.

As a resilience specialist, I appreciated the explicit mention of needing capacity for considering co-benefits and multiple objectives within BCAs. Climate change adaptation can provide an opportunity for communities to proactively think about what they want, from amenities to industries to social justice. To support innovative and transformative adaptation, we need tools just as flexible and multifaceted. Relatedly, Helgeson and Li’s point that equity and justice issues are either ignored or exacerbated by BCAs is critical. Reliance on “standard tools” that emphasize previously made investments is a clear example of systematically maintained inequities and injustices.

I also appreciated their acknowledgement of narratives and storytelling in decision-making. Researchers have been showing data on climate change for over a century, but the reality is that storytelling and emotional connection are how we see and process the world around us. Recognizing the need for BCAs to both accept inputs from these stories and enhance the ability to tell stories is a powerful reminder.

Reliance on “standard tools” that emphasize previously made investments is a clear example of systematically maintained inequities and injustices.

However, I think Helgeson and Li missed an opportunity to provide some important context: the practical aspects of operationalizing BCAs, particularly at the local level. They highlight the need for more data collection—overall and at the front end of conducting a BCA. But they do not specify who would do this work, and I think it should be acknowledged that BCAs in their current form are already complex, challenging, and out of reach for most average to small municipalities. I have been working with communities to see if they could use the “plug-and-play” BCA tools that exist and have found that the input data requirements are still an almost insurmountable barrier. Without the time my team dedicated to helping them, it seems unlikely that those BCAs would have occurred. The TurboTax analogy the authors cite was well taken, but the difference is that with TurboTax the input data arrive in the mail, with labeled boxes that can be referenced to identify what data need to be entered and when.

Given this gap, I think an essential next step is building capacity so that more versatile BCAs do not become another source of inequity, where only communities with means can pursue them. The climate and weather enterprise has been building workforce capacity around resilience, focusing both on stand-alone resilience positions and on integrating resilience into the skill sets of other professionals. I think BCAs require similar efforts. This includes helping groups in the private sector that support the public sector (e.g., engineering firms), as well as the boundary spanners the authors reference, understand and develop practical skills for enhanced BCAs. The tools and supporting materials around enhanced BCAs also need to improve so they can be integrated into municipal and state officials’ tool kits without requiring an inordinate amount of time or money.

Coastal Climate Resilience Specialist

Mississippi State University Extension Service and Sea Grant

The Time for Strategic Industrial Policy Is Now

The strong thread that runs through technological leadership, economic success, and national security has never been more evident than it is today, so I read with great interest the article by Bruce R. Guile, Stephen Johnson, David Teece, and Laura D. Tyson, titled “Democracies Must Coordinate Industrial Policies to Rebuild Economic Security” (Issues, April 14, 2022).

The paradigm of free-market capitalism that has driven the global economy forward for the past 200 years is being challenged by a new vision for a managed economy operating at a global scale. Traditional models of international trade are breaking down, and the perils of using trade as a diplomatic tool have been cruelly exposed as nations now rush to rid themselves of dependencies on Russian products. A new approach to global collaboration in trade and commerce—one that preserves and advances the interests and values of liberal democracies—is sorely needed.

Guile and his coauthors are right to focus on industrial policy. The liberal democracies do collaborate well on publicly funded science. Perhaps not as much as they should, but the recently launched James Webb Space Telescope is a great example of a (literally) far-reaching joint endeavor from my own domain of expertise, space.

A new approach to global collaboration in trade and commerce—one that preserves and advances the interests and values of liberal democracies—is sorely needed.

Similarly, the private sector is very adept at working across international borders in the commercial domain. Large global enterprises with complex international supply chains are now the norm at the head of all major industry sectors.

However, for the area in between, where the outputs of science are still being matured into promising new technologies for the future, the situation is quite different. Here the split of responsibilities between public and private is more ambiguous, and nations vary in their approaches. These differing views on how to use public resources to support industry advancement in new technologies—that is, industrial policy—are a source of friction in international trade relations, rather than harmony.

This makes it very hard to collaborate internationally in the maturation of new technologies, except in areas such as the European Union where common models of state aid and subsidy control are in place. Yet this is precisely the area where such collaboration is most needed. The technologies emerging now—beyond 5G communications, autonomous transport, and the commercialisation of space, to name just a few—will go on to define the twenty-first century. And those nations that bring the solutions, and define the standards that the world adopts, will reap the economic and geopolitical rewards.

For all its advantages, the free market is opportunistic and not strategic. The new threat, however, is highly strategic, operating at similar or larger scale, increasingly competent, and underpinned by the state at every stage. If the liberal democracies are to continue with their leadership in technological and economic advancement, then the time for coordinated and strategic industrial policy is now.

Chief Executive Officer

Satellite Applications Catapult

Enabling Economic Growth Through Energy

In “Fixing the Disconnect Around Energy Access” (Issues, Winter 2022), Michael Dioha, Norbert Edomah, and Ken Caldeira contrast the tale of two communities in Nigeria to highlight the daunting challenge of bringing universal energy access to low-income countries in a financially sustainable way. Although the article focuses on two communities in Nigeria, it speaks to a broader issue across the African continent.

In a recent World Bank book that I coauthored, Electricity Access in Sub-Saharan Africa: Uptake, Reliability, and Complementary Factors for Economic Impact, we addressed this very issue and laid out ways to think about electrification in sub-Saharan Africa. We reported an example similar to that of Kigbe, one of the authors’ case studies. In this case, the community of Gabbar, Senegal, implemented an off-grid solar energy system to help produce onions for export to cities across the country. Elsewhere, we have also seen financially strained communities trying to get off a $7 per month installment contract they signed to acquire a solar home system—only to realize that they cannot afford the cost a few months down the road.

We also argued, as do Dioha and coauthors, that all electrification efforts should start by viewing electrification as a means to a greater end rather than an end in itself. This perspective is even more important in poorer countries that may lack the means to plan, fund, and execute rapid electrification. It also requires understanding that although energy is crucial to most modern productive economic activities, it is still an input that needs complementary investments to turn access into impact.

Although energy is crucial to most modern productive economic activities, it is still an input that needs complementary investments to turn access into impact.

The question is, why is this seemingly straightforward logic broken? Dioha and coauthors provide an excellent diagnostic of the problem, but they do not address the why. Understanding the main reasons this is happening can help pave the way to better global development policies in areas beyond energy. In the mid-1970s, the British economist Charles Goodhart formulated what is now known as Goodhart’s law, stipulating that “When a measure becomes a target, it ceases to be a good measure.” The United Nations Sustainable Development Goals (SDGs), and in this particular case Goal 7, which calls for ensuring access to affordable, reliable, sustainable, and modern energy for all, have fallen prey to Goodhart’s law. Counting the number of households that gained some form of access to modern energy from one year to the next has become an end in itself.

How can this challenge be addressed at the global level? The successor to the SDGs, if any, should focus on fewer targets centered on prosperity and let local contexts determine how to get there. Alternatively, the SDGs should be much more ambitious. The Modern Energy Minimum produced at the Energy for Growth Hub, listed as a recommended reading by Dioha and coauthors, is an excellent example of rethinking Goal 7. This kind of effort should extend beyond energy to rethink more broadly a new approach to setting global targets for development.

Senior Fellow, Munk School of Global Affairs and Public Policy

University of Toronto

Senior Fellow, Clean Air Task Force

Fellow, Energy for Growth Hub

Michael O. Dioha, Norbert Edomah, and Ken Caldeira highlight that electricity access programs too often fail to deliver “much-needed outcomes in pace, scale, and improvements in quality of life.” Drawing on two Nigerian mini-grid case studies, the authors argue that in order to transform lives, energy access interventions must be paired with economic empowerment. While they focus specifically on community-level interventions, their three core messages also apply to larger, national-scale efforts.

First, Dioha and coauthors argue that community-level energy access programs must focus on more than connecting individual households to electricity; they must be paired with support for broader economic activity. This is equally important at larger scales. The primary international metrics for defining electricity access and success toward eradicating energy poverty focus principally on power consumption at home. These metrics drive much of the global energy development agenda, placing a political premium on achieving universal household access. But globally, 70% of electricity is consumed outside the home, where it powers economic activity and job creation. Energy development efforts, including electrification programs, need to balance connecting households with targeted investments in energy for businesses, manufacturing, and industry. These larger consumers not only power economic activity and job creation, but also serve as anchors for a more diversified and financially sustainable system.

Energy development efforts, including electrification programs, need to balance connecting households with targeted investments in energy for businesses, manufacturing, and industry.

Second, the authors stress the need to consult with affected communities, making the essential point that energy is a social challenge, not just a technical or economic one. At the community level, people gaining access to electricity for the first time need “the opportunity to imagine what they would do with electricity access and how they might use it to change their lives.” This is equally true at the macro-level. Efforts to support large-scale energy systems development—especially those driven by outside funders and partners—need to better account for national development plans and industrialization goals. This means, first of all, listening to what communities, states, and nations want to achieve with energy—and then helping figure out how to power it. The reverse approach, of having a technological solution and then looking for a place to sell it, is unfortunately all too common.

Finally, the authors rightly point out that many energy access programs have focused too heavily on electricity supply, rather than on the broader enabling infrastructure that ensures power can be distributed and consumed. At a macro-scale, investing in modern grid infrastructure is crucial and often overlooked. Solving this bottleneck will become even more relevant as countries work to build flexible, resilient systems with greater shares of variable renewable power.

While we do see progress in each of these areas, there is much work left to do. The authors have done a service in highlighting these important issues and recommending a path forward.

Executive Director

Policy Director

Energy for Growth Hub

From Medical Malpractice to Quality Assurance

Every decade or so, the United States is seized with a fervor to reform medical malpractice. Unfortunately, this zest is typically motivated by circumstances that have little to do with the fundamental problems of medical malpractice, and the proposed changes to the system do not address the true flaws. A well-functioning malpractice system should focus not only on how to compensate patients for medical errors but also on how to prevent these errors from occurring in the first place.

The United States has faced a medical malpractice “crisis” three times since 1970. Each of these crises was precipitated by conditions that created a “hard” market: decreased insurer profitability, rising insurance premiums, and reduced availability of insurance. And each time the crisis became a polarized battle between trial lawyers on one side and organized medical groups and insurers on the other. One side links the crisis to “runaway juries” and “greedy lawyers”; the other blames interest rates and possibly insurer pricing practices. If one attributes the crisis to falling interest rates and bad investments in the stock market, the policy implications are markedly different from those that follow if soft-hearted and cognitively limited juries and ambulance-chasing lawyers are blameworthy.

In the end, calm returns, but the situation of patients is not improved. We are left with a system in which most victims of medical error are not compensated for their losses and in which the overall quality of care is not what it might be.

As a first step in tackling the real problems of medical errors and mediocre quality assurance, we need to debunk the popular misconceptions about the problems with the medical malpractice system. Once these ferocious but ultimately pointless conflicts are defused, we can begin to think about fundamentally reconstructing the system with an eye toward improving the quality of care by giving practitioners effective incentives to deliver the services that people need. There are a variety of options for reform; one of them, called enterprise insurance, has the potential to provide the initiative for systemic change.

Pervasive myths

Many myths about medical malpractice dominate the public discourse. These myths reinforce misinformation and are used to justify statutory changes that benefit certain stakeholders but are not in the broader public interest. Five of the most common are: medical care is costly because of malpractice litigation; only “good” doctors are sued; there are too many medical malpractice claims; dispute resolution in medical malpractice is a lottery; and medical malpractice claimants are overcompensated for their losses.

The high cost of personal health services in the United States is frequently attributed to litigation and the high cost of malpractice insurance. This assumes that premiums and outlays for awards have risen appreciably and constitute a major practice expense. The data, however, do not show appreciable increases over long time periods. Between 1970 and 2000, mean medical malpractice premiums went from 5.5 to 7.5% of total practice expenses. This is not the case for damage awards; payment per claim has increased substantially since the mid-1990s. However, the relationships between medical malpractice premiums, claims frequency, mean payment size, and total payments are complex, and assumptions should not be made on the basis of a single indicator.

Some critics of medical malpractice contend that being at the cutting edge technologically makes a physician more vulnerable to being sued. There is no empirical evidence that being sued is an indicator of superior performance. However, there is evidence that physicians with no claims histories were rated by their patients as being, or at least appearing to be, more understanding, more caring, and more available. Overall, it is untrue that only good doctors are sued, but at the same time, being sued is not a marker of being a bad doctor either.

The myth that there are too many malpractice claims is a bit more complex. Two path-breaking studies show that there are both too many and too few malpractice claims. The first of these studies was conducted in California in 1974. The second, the Harvard Medical Practice Study, was conducted in New York in 1984. In both studies, surveys of medical records of hospitalized patients were conducted to ascertain rates of adverse events attributable to the provision of medical care to these patients and rates of adverse events due to provider negligence, termed “negligent adverse events.” The California study revealed that 5% of patients experienced an adverse event while in the hospital, and that 17% of those adverse events were negligent. In New York, the corresponding rates were 4% for adverse events, of which 28% were negligent adverse events. The authors found that “invalid” claims, those not matching the study’s determination of liability, outnumbered valid claims by a ratio of three to one. However, they also found that only 2% of negligent adverse events resulted in medical malpractice claims filed by the injured patients themselves; counting all claims, there were still 7.6 times as many negligent injuries as there were claims. Thus, there were errors in both directions: Individuals filed too many invalid claims and not enough valid claims.

The public’s view of juries leads to the inference that outcomes of litigation are often random. Actual data, however, lead to the opposite conclusion: Outcomes are not random. There is a definite relationship, albeit an imperfect one, between independent assessments of liability and outcomes of legal disputes alleging medical malpractice. One study estimated that payment is made in 19% of malpractice claims when there is little or no evidence of error. In contrast, when the evidence of an error is virtually certain, payment occurs 84% of the time. Using the results of this study, claims not involving errors accounted for 13 to 16% of the system’s total monetary cost. The way one views this percentage (substantial or small) depends on where one draws the line between error and no error. Unfortunately, the New York study conclusions do not stress or even mention that the estimates of error are subject to a very high degree of uncertainty.

Similar to the myth that malpractice claims are decided without regard to evidence of negligence, the myth that most plaintiffs are overcompensated for their injuries is pervasive. However, a comparison between the cost of injuries incurred by claimants and compensation actually received revealed that medical malpractice claimants on average are undercompensated. In one study, compensation exceeded cost by 22% for claimants who received compensation at verdict, whereas 26% received no compensation at all. On average, including those cases for which no compensation was received, compensation amounted to about half of monetary loss. Even including compensation for nonmonetary as well as monetary loss, compensation fell far short of injury cost. Nevertheless, this does not eliminate the possibility that compensation was excessive in selected cases.

Reconstructing the system

In principle, medical malpractice should be a quality-assurance mechanism; in practice, it falls far short of achieving this goal. For one thing, there is no empirical evidence that the threat of medical malpractice makes health care providers more careful. Also, meting out compensation is very expensive. Sadly, medical malpractice “tort reform” has aimed to save medical malpractice premium dollars rather than make it an effective mechanism for assuring quality and efficiently compensating injury victims. For example, a popular but misguided tort reform, caps on damages, has worked to reduce payments by medical malpractice insurers and to keep premiums below what they otherwise would have been, but caps have not altered the incentives, except perhaps to discourage attorneys from representing medical malpractice plaintiffs, even those with valid claims. If there is a benefit to caps, it is in redistributing income from injury victims and their attorneys to health care providers rather than in improving quality of care or markedly reducing rates of unnecessary tests and health care costs more generally. It seems unlikely that any savings in medical malpractice insurance premiums would accrue to patients in their roles as taxpayers and health insurance premium payers. Organized medicine plausibly supports caps primarily as a response to pressures from its constituency for financial relief.

Although the current system has many flaws, there is also a brighter side. First, contingency fees for plaintiffs’ attorneys give patients who are unsatisfied with outcomes a mechanism for addressing their grievances that may not be possible through other channels. The regulatory apparatus, which has a responsibility for safeguarding the quality of personal health services, is sometimes controlled or substantially influenced by health care providers and health care regulators who may be unresponsive to patients’ complaints. Second, the U.S. jury, despite its limitations, gives ordinary citizens a role in the dispute-resolution system. Although jurors are only rarely scientists, physicians, or other health care professionals, they reflect society’s values. Third, even during the crises when substantial increases in malpractice premiums occurred, the premiums remain a tiny component of total health care costs. Viewing long-term secular trends in medical malpractice payments and premiums rather than the short time periods during which there has been substantial growth in premiums reveals that increases in payments and premiums are rather modest, only slightly higher than the changes in prices in general. Finally, the current malpractice system does a good job of identifying some real errors.

However, the current system has serious deficiencies, just not the same as those typically depicted in the media. First, unlike other fields of personal injury tort, there is no empirical evidence that the threat of medical malpractice lawsuits deters injuries. This is a very serious deficiency, particularly because injury deterrence is typically listed as a goal, perhaps the primary goal, of tort liability. Second, tort liability focuses on the mistakes of individual providers, but errors frequently reflect simultaneous omissions or misjudgments on the part of several individuals. Third, most medical errors do not result in malpractice claims. As a result, the signal from tort to health care providers is insufficiently precise or even wrong. Fourth, compensation to injured patients is typically less than what they deserve based on the loss attributable to their injuries. Litigation is an extremely inefficient system for compensating injury victims. Various types of insurance, such as health and disability insurance, are much more efficient in distributing compensation to persons who have incurred a loss from receiving less than appropriate care.

Sadly, medical malpractice “tort reform” has aimed to save medical malpractice premium dollars rather than make it an effective mechanism for assuring quality and efficiently compensating injury victims.

Finally, health care providers in the United States largely reject the view that medical malpractice has a constructive role to play in health care delivery. Providers generally see no link between medical malpractice litigation and the provision of low-quality care. Much commentary in assessments of medical malpractice and patient safety sees medical malpractice as part of the problem rather than part of the solution. This misconception is an important roadblock because malpractice claims often arise from deficiencies in care.

Thus, medical malpractice does badly on injury deterrence, improved patient safety, and compensation of persons with medical injuries. Its strongest features are giving injury victims a day in court and making professionals accountable to ordinary citizens.

Patient safety and medical malpractice are inextricably linked. However, neither market forces nor the threat of tort liability seem to provide sufficient incentives for quality assurance. An important reason that the threat of lawsuits has not improved patient safety is that medical malpractice insurance shields potential defendants from the financial burden of being sued. Such insurance tends to be complete; there are no deductibles or coinsurance, and liability limits of coverage are rarely exceeded. Medical malpractice premiums tend not to be based on a physician’s own history of lawsuits. Thus, a physician with many past lawsuits may pay the same premium as a colleague who has never been sued.

Shortcomings in medical malpractice are not wholly responsible for shortcomings in the quality of health care in the United States. Other sectors maintain quality and safety through different means. U.S. airlines, for example, have implemented very effective safety procedures, and in other sectors market forces provide some guarantee of quality. There is no quality crisis in the hotel market: to the extent that consumers demand high-quality hotel rooms, the market provides them.

However, there are few means for consumers to inquire about the quality of a hospital or doctor, let alone demand high-quality health care. Employers often speak about quality assurance, but with few exceptions, medical care is not their principal business. Given the limitations of market forces in pressuring providers to supply high-quality care, there is indeed a role for government regulation and private regulatory mechanisms, such as peer review and tort liability. These mechanisms are not substitutes for the market but rather complements to it.

Options for reform

Meaningful tort reform should take account of the fact that many medical errors are not simply errors of individuals; they are errors of systems. Also, health care providers must have financial incentives to exercise care and implement meaningful quality-assurance mechanisms.

Overall, what have been called tort reforms have been short-term fixes, which do not improve system performance. In recent years, the reform most favored by physicians, hospitals, and insurers has been caps on damages. Caps have the effect of lowering payments per paid claim and probably discourage some trial lawyers from representing some medical malpractice plaintiffs. But they do not fundamentally change how medicine is practiced.

Scholars, other experts, and some policy analysts have proposed more sweeping reforms of the current system. They include no-fault insurance, health courts, alternative dispute resolution, private contracts, scheduled damages, enterprise liability, and enterprise insurance. Each proposal has advantages and disadvantages, and no one reform provides an exclusive remedy to the problems with the medical malpractice system. Of these options, however, enterprise insurance has, in our view, the potential to initiate systemwide change.

No-fault insurance. No-fault approaches are designed to be substitutes for tort, providing compensation regardless of fault. Currently, no-fault is widely used as a substitute for tort in auto liability and workers’ compensation. Medical no-fault has been implemented in only two states, Florida and Virginia, and for only a few medical procedures. The low administrative expense and faster payment of damages make no-fault insurance an attractive alternative. But in Florida and Virginia the programs were implemented to achieve savings in medical malpractice premiums rather than to distribute compensation to a larger number of medical injury victims. Revenue for these programs comes from physicians and hospitals. But if the system is truly no-fault, why should physicians and hospitals be the only parties taxed to fulfill a broad social obligation to compensate those with misfortunes? It seems more appropriate to tax the public at large, and no U.S. state has agreed to do this.

At least as interesting an alternative, not under active discussion, is private no-fault insurance. A hospital with an effective quality-assurance program could offer no-fault insurance to its patients for a reasonable premium, with an even lower premium for patients who agreed to forego filing a tort claim in the event of an injury. To the extent that the hospital had an effective quality-assurance program, the savings in premiums could be passed through to its patients.

This type of voluntary no-fault program would offer several important advantages to hospitals and their medical staffs. First, it might relieve providers of the threat of tort. Second, offering no-fault benefits would be a signal to consumers that the hospital has an effective patient-safety program and low rates of medical errors. Third, to the extent that injury victims value quick payments with little involvement of attorneys, this too would increase demand for the hospital’s services.

Although hospitals may anticipate some savings, it is essential that such no-fault coverage extend to a large number of conditions. When exclusions from coverage are necessary, they should be broad and easily understood by patients. Very narrow thresholds are difficult for patients to assess in advance of injuries. A few very costly procedures may be excluded from coverage, but it would be important that these be listed and described in understandable terms in advance.

Complete substitution of no-fault for tort is infeasible; however, a system in which patients would contract for no-fault coverage well in advance of receiving care at the hospital is more reasonable. Contracting in advance is essential to avoid situations in which a patient is faced with the option of contracting for no-fault at point of service, which could be interpreted as an adhesion contract, a standard form or boilerplate contract entered into by parties with unequal bargaining power. One way to partially avoid the unequal bargaining power is to allow employees to designate whether or not they wish to substitute no-fault for tort when they are choosing their health insurance plans. Surcharges for no-fault (if surcharges are imposed on patients) could then be built into the premium charged. In the case of voluntary no-fault, because insured patients would agree not to sue under tort, the savings in tort payments would offset at least part of the cost of the no-fault plan. No-fault plans would require prior regulatory approval, depending on the applicable regulatory authority. Regulators would pay attention to the method of enrolling persons into the plan, pricing, and issues bearing on plan solvency.

Health courts. Medical care is a technical subject, and proponents of health courts argue that judges and juries are often not well positioned to deal with the complexities. In addition to providing victims with consistent, fast, and relatively easily obtained compensation when warranted, health courts are also intended to reduce cost by streamlining the process, maintaining consistent medical standards, and capping or scheduling damages.

Full-time judges are a major feature of the health court proposals. The judges would deal only with malpractice cases, and there would be no jury. In one proposal, specialized judges would shape legal standards for medical malpractice, creating a body of science-based common law that health care providers could rely on when making treatment decisions. In theory, a body of science-based common law seems valid and useful, but it raises issues of its own. In the context of health courts, standards for medical practice would develop under state law. Yet, without federal regulation, each state would be free to develop its own standards, allowing for variable legal and medical practice standards among states.

Although concerns about the inadequacies of juries to decide technical matters and the inexperience of judges in medical matters provide the main rationale for health court proposals, what is not acknowledged in the policy debate is that the concerns about lay juries and judges apply to the much larger issue of the use of scientific evidence in the courts. Health courts represent only one of several overlapping alternatives for addressing this issue. Other alternatives include use of court-appointed experts, bifurcated trials, use of special masters, specially convened expert panels, blue-ribbon juries, and alternative dispute resolution.

We agree that health courts have attractive features but are reluctant to give this option our enthusiastic endorsement. Preserving juries in some form, even if they are blue-ribbon juries, would provide a broader representation of perspectives and values than would sole reliance on a narrow group of professionals to make judgments on specific cases. Even a judge with health expertise cannot be expert on the full range of issues health courts are likely to confront. In the end, it is important that plaintiffs as well as defendants view health courts as legitimate. If the court consists entirely of or is dominated by physicians and other health professionals, buy-in by plaintiffs, and society more generally, seems highly improbable.

Alternative dispute resolution. Dispute resolution under the trial-by-jury system is extremely costly. Thus, alternative approaches that streamline the process seem attractive. Alternative dispute resolution encompasses any means of settling disputes outside the courtroom. The two most frequently used forms are arbitration and mediation. Arbitration is a simplified version of litigation (there is no discovery, and the rules of evidence are simpler). Mediation uses an impartial third party to facilitate an agreement in the common interest of all the parties involved.

The main advantage of alternative dispute resolution is that the process tends to be speedier than a trial. The advantages of arbitration include lower cost, private proceedings, more flexibility than in a court, and, when the subject of the dispute is highly technical, the appointment of arbitrators with the appropriate expertise. There are also disadvantages. The parties must pay for the arbitrators, and arbitrators are not strictly bound to apply the law as a court would. With binding arbitration, the decision reached is comparable to a jury verdict and can be overturned only if there is evidence of malfeasance in the process of reaching a decision. Mediation sessions are not decided in favor of one party or another, and the parties are not bound to resolve their dispute and may pursue litigation if dissatisfied with the results of mediation. Although speed in dispute resolution and lower cost are advantages, there is some empirical evidence that lowering the barriers to bringing claims actually leads to more lawsuits.


Private contracts. The rationale for private contracts between health care providers and patients, as a substitute for tort, is that tort liability determines compensation based on standards of care that may differ from those that patients might prefer. Private contracts might set out the specific circumstances in which providers would be responsible for compensating injury victims, schedule damages, and specify alternative dispute resolution mechanisms for use when disputes arise.

The strength of private contracts is that they can reflect preferences of individuals. Individuals with higher willingness to pay for safety pay more for such care. However, individual choice opens the door to adverse selection. That is, persons who are prone to suffer an injury because their health is more fragile may be more willing to pay for contracts offering extra precautions.

Opponents of private contracts point out that the relationship between the patient and provider is not one of equal power. A hospitalized patient or even an outpatient may not be well positioned to negotiate with a physician. Courts have overturned contracts reached at the point of service for this reason. But this is not when contracting would occur. Rather, as with a voluntary no-fault plan, contracts could be options offered to persons at the time they enroll in a health plan. Agreement to a lower standard of care or a less generous schedule of damages would result in a lower premium.

Scheduled damages. Rather than set a limit on the maximum size of an award, a schedule of damages sets payment criteria for all awards, not only the large ones. Because scheduling affects the whole distribution, it is conceptually superior to flat caps on grounds of equity of payment to claimants with very severe injuries relative to those with less severe injuries. The trial bar opposes this approach because it would limit the ability of plaintiffs to make a case for their special circumstances. Such flexibility, however, must be measured against the vertical inequities of caps; that is, caps limit large awards for the relatively serious injuries but do not directly affect payment for more minor ones.

An anticipated objection to scheduled damages is that they limit jury discretion in awarding damages. However, there is a tradeoff between complete individualization of awards and reducing volatility and increasing predictability of awards. It would be appropriate for states to review the instructions that are provided to juries in order to assess whether guidelines for determination of monetary loss should be developed. In the end, however, even though scheduled damages are preferable to caps, the link to quality assurance is at best indirect.

Enterprise liability. Enterprise liability is a means of aligning the incentives of providers and of accounting for the fact that many errors arise from defects in systems rather than in individual providers. Because many medical injuries occur during receipt of hospital care, it makes sense to start the alignment process with hospitals and the physicians who work there. Under enterprise liability, when the receipt of care is in a hospital setting, the hospital would be named as the defendant in medical malpractice lawsuits. Separate suits against individual physicians would not be filed. If the hospital were the only named defendant, it would have a greater incentive to adopt quality-assurance measures, including for outpatient care.

Left unsaid in general discussions of enterprise liability is how the burden of hospital premiums would be shared. It would be advisable that physicians bear some part of the premium burden to provide some incentive to avoid claims. Hospitals could implement their own systems of surcharging physicians with many medical malpractice claims. Of course, hospitals, or medical staffs operating on their behalf, would retain the option of removing from their staffs physicians with adverse claims experience or those who do not comply with hospital patient-safety regimens. In fact, hospitals would have a greater incentive to monitor physician performance and remove physicians with adverse claims experience.

With hospital enterprise liability, the deterrent would be internalized to the hospital, establishing a clear financial incentive for quality improvement and error reduction. These organizations could impose a combination of financial and nonfinancial incentives for individual physicians to prevent injuries, coupled with increased surveillance measures. Also, the hospital and physicians at the hospital collectively would have an incentive to promote patient safety, because the enterprise’s premiums would depend on future anticipated losses from medical malpractice claims.

There are several possible objections to enterprise liability. First, plaintiffs might view hospitals as rich and faceless institutions with deep pockets, thus increasing plaintiffs’ demands for compensation. Of course, under the current system, insurers presumably could be said to have deep pockets as well.

Second, enterprise liability may restrict patient choice of provider. Physicians may have to limit their admissions to the one hospital at which they receive medical malpractice coverage. Physicians frequently have privileges at more than one hospital. This potential concern can be largely remedied by limiting physician coverage to care delivered within the walls of the facility under the hospital’s policy. Thus, if a physician practiced at three hospitals, he or she would be covered under three hospital policies. In addition, the physician would need to obtain medical malpractice insurance for care delivered in the office, but such coverage would be at a greatly reduced premium.

Third, physicians already complain about their growing loss of autonomy, and enterprise liability would probably exacerbate this trend. Physician autonomy is important because it allows providers to use their professional skill and judgment in particular situations. Outsiders, such as hospitals, may not be well positioned to know all the details and considerations of a physician-patient interaction.

Fourth, inpatient care is shrinking as a share of the total personal health care dollar. Because more care is being delivered outside the hospital, using the hospital as the locus of liability may not be ideal. But this concern disregards another trend. Hospital-provided ambulatory care is growing, and hospital enterprise liability would encompass care at all sites at which the hospital organization or system provides care.

Nevertheless, enterprise liability addresses many current deficiencies, especially the insufficient incentives providers have to invest in patient safety. A major barrier to implementation is the lack of a political constituency at the federal and state levels. Health care consumers are not well organized, and providers appear to be concerned with the “deep pockets argument,” as well as the loss of professional autonomy that may accompany enterprise liability.

Enterprise insurance. Another approach, enterprise insurance, does not change the cause of action against physicians and hospitals nor does it change the named defendants. Rather, physicians who render services to patients in hospitals would obtain their malpractice insurance through the hospital. Large organizations could self-insure for medical malpractice. Because all members of the pool would stand to lose from the provision of substandard care, there would be organizational incentives to monitor quality and implement quality-improving systems of care. For example, a hospital whose obstetric staff is sued repeatedly would have a direct financial incentive to take actions to deal with the causes of the lawsuits.

This approach seems very promising, but it too faces obstacles. In particular, in contrast to the situation in other high-income countries and to professionals in other U.S. industries, such as airline pilots, hospital medical staffs in the United States have been largely independent of hospitals. Physicians have resisted being under the control of hospitals, for financial reasons and out of concern for loss of professional autonomy. Any proposal from the “outside” that would cede control of medical decisionmaking to hospitals is likely to be resisted by many physicians. The key will be to have active physician involvement in hospital-based enterprise insurance. Smaller hospitals would face special challenges because they might be too small to operate a medical malpractice insurance plan on their own. Such hospitals might join regional compacts.

Finally, accountability incentives alone are not likely to provide sufficient motivation for hospitals to create systems for managing medical injuries. Hospitals and physicians have many non-liability objectives and concerns. Implementation of enterprise insurance alone may not lead to optimal levels of patient safety in hospitals. Still, enterprise insurance is an attractive solution because it provides those in the best position to improve care with an incentive to introduce patient-safety measures.

Enterprise insurance creates efficiency by combining patient-safety measures and insurance, including premium setting. Because the insurer, in this case the hospital, is better able to “poke inside” the clinical organization and understand the source of errors, it may be less likely to raise premiums dramatically. As with enterprise liability, hospitals would have added incentives to be selective about the quality of physicians they admit to and retain on their medical staffs. In turn, medical staffs have a much more direct incentive to support adoption of patient-safety measures in order to reduce medical malpractice losses at the hospital, especially if the medical staff is placed at some risk for losses above a threshold value.

Enterprise insurance has its limitations, but it also has the potential to provide the initiative for systemic change. By combining the function of preventing injuries with that of insuring against losses if and when injuries do occur, it pairs the means to prevent injuries with the incentive to do so. Nevertheless, the medical malpractice apparatus, with or without enterprise insurance, should be seen as only part of the quality-assurance process. It cannot do the job on its own.

Animal Migration: An Endangered Phenomenon?

Animal migrations are among the world’s most visible and inspiring natural phenomena. Whether it’s a farmer in Nebraska who stops his tractor on a cold March morning to watch a flock of sandhill cranes passing overhead or a Maasai pastoralist who climbs a hill in southern Kenya and gazes down on a quarter million wildebeest marching across the savanna, migration touches the lives of most people in one form or another. Although animal migration may be a ubiquitous phenomenon, it is also an increasingly endangered one. In virtually every corner of the globe, migratory animals face a growing array of threats, including habitat destruction, overexploitation, disease, and global climate change. Saving the great migrations will be one of the most difficult conservation challenges of the 21st century. But if we fail to do so, we will pay a heavy price—aesthetically, ecologically, and even economically.

The decline of migratory species is by no means a new problem. North America’s two greatest migratory phenomena—the flocks of passenger pigeons that literally darkened the skies during their spring and fall journeys in the East and the herds of bison that once stretched from horizon to horizon on the Great Plains—were snuffed out well over a century ago. (The passenger pigeon vanished completely in 1914; bison held on only because of last-minute conservation efforts.) Even as far back as the American Revolution, colonial leaders were alarmed enough about declines in Atlantic salmon to push legislation banning the practice of placing nets across the complete span of a river in order to catch every salmon heading upstream to spawn.

Yet the rate at which migratory species are declining seems to have accelerated in recent years. Ornithologists using radar to monitor the spring migration of songbirds across the Gulf of Mexico report that the number of nightly flights dropped by nearly 50% between 1963 and 1989. University of Montana ecologist Joel Berger has estimated that 58% of the elk migratory routes and 78% of the pronghorn routes in the Greater Yellowstone Ecosystem have been lost due to development. The American Fisheries Society has tallied more than 100 stocks of salmon in the Pacific Northwest that have been driven to extinction because of dam construction, logging, water diversion, and other human activities. Meanwhile, in Michoacán, Mexico, illegal loggers are destroying the high-elevation fir forests where virtually all of eastern North America’s monarch butterflies spend the winter. These diminishing forests serve as a blanket for the overwintering monarchs, protecting them from cold weather, rain, and even snow.

North America is hardly the only place where migratory animals are in trouble. European scientists are deeply concerned that overgrazing and desertification in Africa’s Sahel are harming populations of songbirds that breed in Europe and winter in northern Africa. (These same birds are also shot and trapped by the tens of millions as they pass through the Mediterranean region during their spring and fall migrations.) In East Africa, the spread of agriculture is severing the migratory routes of many populations of zebra, wildebeest, elephants, and other large mammals. In Finland, wild Atlantic salmon have disappeared from more than 90% of the rivers where they spawned historically; in France, they have vanished from nearly a third of their historic spawning rivers and are endangered in the remaining two-thirds.

To be fair, most of these species are in little danger of disappearing altogether. Few if any scientists are predicting the extinction of the wildebeest, Atlantic salmon, or monarch butterfly. But what is at stake is the continued abundance of these animals as they make their long-distance journeys through an increasingly human-dominated landscape.

Special vulnerabilities

The threats facing migratory species are not qualitatively different from those confronting nonmigratory species. But migratory animals seem especially vulnerable by virtue of the long distances they travel. Their populations can be harmed not only by the loss of breeding habitat but also by changes in their wintering grounds and stopover sites. The cerulean warbler, for example, nests in deciduous forests across a wide swath of eastern North America, from southern New England and southern Ontario west to Minnesota and south to Arkansas and Mississippi. It winters primarily in forests in the foothills of the eastern slope of the Andes, from Venezuela to Peru. By some estimates, the breeding population of cerulean warblers in North America has declined by as much as 80% during the past 40 years. This decline, evident to birdwatchers in the United States and Canada, probably reflects habitat destruction at both ends of the warbler’s migratory route. Mountaintop-removal mining, an extraordinarily destructive practice in which the tops of mountains are scraped away to expose coal seams, has already destroyed hundreds of thousands of acres of breeding habitat in the Appalachians. Meanwhile, much of the warbler’s wintering habitat has been converted to cattle pastures, coffee and coca plantations, and other agricultural uses.

Moreover, many migratory animals aggregate at key places during certain times of the year, a habit that makes them vulnerable to overexploitation. Gray whales in the eastern Pacific largely escaped persecution until the mid-1800s, when whalers stumbled on the shallow lagoons in Baja California where most of the animals gather in the winter to mate and give birth. Within two decades, whaling operations had driven the gray whale close to extinction, although they subsequently rebounded because of protection. All of the world’s sea turtles are imperiled in part because adult females return year after year to the same beaches to lay their eggs; the slow-moving and defenseless turtles and their eggs are easily harvested at their nesting beaches.

Climate change, too, has the potential to disrupt the migratory patterns of a wide range of animals. Rising sea levels could submerge the nesting beaches of sea turtles and shorebirds. Songbirds breeding in the temperate forests of Eurasia and North America depend on a summer flush of insects, particularly caterpillars, to feed themselves and their offspring. In some places, these caterpillars are emerging earlier and earlier in response to rising temperatures. In theory, the songbirds could simply push up their departure from their winter quarters in Central America, the Caribbean, or Africa to catch the earlier flush of insect prey. If, however, the birds are relying on a fixed cue such as increasing day length to decide when to head north, they may be unable to adjust the timing of their migration. Precisely this disruption in the timing of bird migration relative to the emergence of insect prey has been identified as the cause of a decline of 90% in populations of pied flycatchers in the Netherlands. In East Africa, where the movements of wildebeest, zebras, and other grazers are timed to the seasonal rains, any change in rainfall patterns due to global warming will probably produce concurrent changes in migratory routes. As land outside Africa’s existing game reserves is converted to villages and farm fields, it may be difficult or impossible for the mammals to adjust their migratory routes in response to the changes in rainfall. It’s possible, of course, that warblers and wildebeest will find ways to cope with the twin dangers of habitat destruction and climate change. But the opposite could also be true, with declines occurring even faster and deeper than we anticipate.

The decline of the world’s great animal migrations is clearly a major aesthetic loss. But it is also a major environmental and economic problem, given the important ecosystem services these species provide. Consider the case of salmon in the Pacific Northwest. They head for the ocean when they are young and small, taking advantage of the productivity of the seas to grow to full size. They then return to their natal streams, where they spawn, die, and decompose. They are, in essence, self-propelled bags of fertilizer, gathering important nutrients such as nitrogen and phosphorus from the ocean and delivering them to the streams, where these same nutrients can then be taken up by other aquatic species or carried onshore by scavenging eagles, bears, and other animals. As salmon runs across the Northwest have declined because of dams, overfishing, and habitat degradation, so too has the free delivery of nutrients. In the Columbia River, for example, annual salmon runs have dropped from an estimated 9.6 million to 16.3 million fish before the arrival of white settlers to about 0.5 million today. According to one estimate, the weight of carcasses in the Columbia has dropped from nearly 50,000 tons per year to 3,700 tons. Perhaps some of the nutrient deficits caused by the lack of salmon have been erased by fertilizer runoff or other human-created sources. But even if our overuse of fertilizers (with its attendant runoff) has somehow lessened the impact of the salmon shortfall, it has not helped the Northwest’s beleaguered fishing industry, which has lost jobs as a result of the drop in salmon populations. From 1990 to 2005, unemployment rates in British Columbia’s commercial fisheries averaged 17.2%, twice the rate for the province’s economy as a whole.

Migratory songbirds perform their own important ecosystem service by consuming vast numbers of caterpillars that would otherwise eat the foliage of trees and shrubs. As numbers of songbirds drop, one might predict an increase in insect damage to forests or, alternatively, an increase in pesticide use to counteract any increase in defoliation.

Twin challenges

Given the strong aesthetic, environmental, and economic reasons for protecting animal migrations, the question naturally arises: Why have we been so unsuccessful at conserving them? The answer may lie in the fact that conserving migratory animals poses two unique challenges. First, it demands coordinated planning across borders and boundaries that mean a great deal to us but nothing to the animals. A single Swainson’s thrush winging its way from Canada to Brazil may pass through 10 or more countries. Each of these nations must provide safe nesting, wintering, or refueling stops in order for the thrush to complete its journey. Bison in Yellowstone National Park face harassment or even death if they cross an invisible line separating the park from adjacent land managed by the U.S. Forest Service and the state of Montana. The bison need access to lower-elevation rangelands outside the park during harsh winters, when the snowpack prevents them from finding sufficient forage inside the park. However, ranchers in Montana fear that the bison will spread brucellosis, a disease that causes some cattle to abort their fetuses, to their livestock, and they have used their political leverage to force the federal government and the state to curtail the bison migration.

The second key challenge associated with conserving migrations is convincing agencies, institutions, and individuals to agree to protect these animals while they are still abundant. The United States and many other countries have a long tradition of protecting endangered species, usually when the plant or animal in question is teetering on the brink of extinction. But for the reasons cited above, this type of 11th-hour intervention is wholly unsuited to the task of saving migrations, where the goal should be to protect the species while they are still plentiful.

Fortunately, there are a number of examples of successful efforts to conserve migratory animals, and we can look to them for guidance on addressing these problems. By the early 1940s, commercial whaling operations had dramatically reduced populations of the great whales, many of which undertake lengthy migrations through international waters where no one nation has sovereignty. In response to these declines, the major whaling nations signed the International Convention for the Regulation of Whaling in 1946. This treaty created a scientific and administrative body, the International Whaling Commission (IWC), with the power to curtail commercial whaling operations. After many years of stalling, the IWC finally halted commercial whaling in 1982, resulting in increased whale populations. (Japan, Norway, and Iceland continue to hunt several species of whales by exploiting loopholes in the treaty, but at levels well below what prevailed in the heyday of whaling).

The success of the International Convention for the Regulation of Whaling is due in large part to the fact that it created an administrative body with regulatory teeth. In contrast, an even more ambitious treaty, the 1979 Convention on the Conservation of Migratory Species (also known as the Bonn Convention) was designed to protect migratory animals of all kinds, but it lacks a powerful administrative body. Instead, it creates a mechanism whereby groups of nations can come together to address problems facing particular migratory species. The treaty does not specify what conservation measures must be taken, leaving that task to the nations involved in the agreements. Because the Bonn Convention lacks a strong administrative body, it has had relatively few successes thus far.

In a promising new development within the United States, the Western Governors’ Association approved a policy resolution in February 2007 aimed at protecting “wildlife migration corridors.” Alarmed by losses of migratory routes for elk, deer, bighorn sheep, and other animals caused by energy development and sprawl, the governors of the western states have pledged to identify and protect migratory routes in a more aggressive, coordinated manner. They recognize that the administrative barriers among states or among agencies within a state can undermine conservation programs for migratory species.

To address the second big challenge associated with conserving migratory species—protecting these species while they are still common—the institutions charged with managing natural resources will need to embrace the idea that migration is fundamentally a phenomenon of abundance and must be protected as such. To that end, it would be useful to have a standardized early-warning system to identify migrations at risk. One approach would be to develop a threat-ranking scheme for migrations akin to the one now used by the World Conservation Union for endangered species. Under that approach, species are listed as critically endangered, endangered, or vulnerable based on quantitative criteria related to factors such as population size, amount of habitat, trends in population size, and trends in habitat. Similar criteria, emphasizing trends in numbers, could be developed for discrete populations of migratory species, such as runs of salmon, populations of pronghorn, and monarchs wintering in Michoacán, Mexico. A migration that declined by more than a certain percentage over a fixed period of time could be classified as endangered; a slightly lower rate of decline might place it in the less serious category of threatened. Even if the designation did not carry any immediate legal consequences in terms of habitat protection, restrictions on harvest, and so forth, it would nonetheless bring welcome attention to the issue. To some degree, consumers can also play a useful role in protecting migrations by virtue of what they buy or don’t buy. Places where coffee is grown under a canopy of native tropical trees, typically marketed as shade-grown coffee, provide suitable winter habitat for a variety of North American songbirds; places where sun-tolerant coffee is grown in sterile monocultures do not.

Finally, for any conservation program to succeed, it must be adequately funded. The North American Waterfowl Management Plan, a joint agreement between Canada, the United States, and Mexico to regulate hunting and protect the habitats of ducks, geese, and swans, has protected or restored millions of acres of wetlands. This accomplishment was made possible by a stable, secure funding source: a tax on the sale of guns and ammunition in the United States, plus mandatory purchase of an annual permit to hunt waterfowl. Hunters want waterfowl to remain abundant in order to enjoy longer hunting seasons and bigger bag limits, and they are willing to pay for that goal via the tax and permit. One would hope that the nation’s birdwatchers could be inspired to support a similar tax on binoculars, birdseed, and other tools of their trade, with the revenues going to support habitat protection and restoration programs. However, an attempt to enact such a tax in the 1990s floundered in the face of strong resistance from the affected industries, an antitax (and anticonservation) sentiment in Congress, and too little support from birdwatchers.

If we are successful at saving the world’s great animal migrations, we will have protected natural phenomena that provide us with inspiration, sustenance, recreation, and numerous ecosystem benefits. We also will have learned to take timely, cooperative action to solve a complex environmental problem. It is even possible that efforts to protect migratory animals will inform our efforts to address other environmental and social ills that similarly transcend artificial borders and boundaries. At the very least, we will have ensured that future generations can enjoy some of the same flocks of birds, schools of fish, and herds of mammals that have inspired and sustained us for thousands of years.

Forum – Winter 2008

Post-scientific society

In “The Post-Scientific Society” (Issues, Fall 2007), Christopher T. Hill correctly observes that science-based commercial innovations must increasingly satisfy users’ functional needs. This trend has increased with the power of software and services to customize product performance to those needs. Globalization and the Internet make this possible and competitively necessary. This is not news to entrepreneurs who triumphed in the cyber-bubble of the late 1990s. There was little new science in eBay, Amazon, Yahoo, or Google, but a lot of highly creative business model innovation.

To refer to these trends as a “post-science” innovation system may serve a useful purpose in smoking out the last of the science policy troglodytes and conservative politicians who might think that markets will automatically convert published research into new businesses without help from government. I do believe that the National Academies’ report Rising Above the Gathering Storm might have inadvertently reinforced those misconceptions, since the report claims to say how to make the U.S. economy more innovative, but it addresses only the vital necessity of improving public education, encouraging U.S. students to study science, and strengthening our research leadership. In this hugely influential report, the linkage between the generation of new ideas and the process of innovation is dealt with by an attention-grabbing list of ways in which our economy might fail to be internationally competitive. It leaves the role of enhancing the power to innovate to the private sector, except for one role for government: to revisit the world of intellectual property law.

The trend to which Hill calls our attention can be illustrated by the strategy that helped IBM recover from a serious profitability problem in the early 1990s. The productivity growth in electronic components, in large part due to Japanese and Korean engineers (using U.S. science), rendered these components commodities. Large manufacturers such as IBM were left to assemble their hardware from these parts, with diminished ways to create competitive, unique-value hardware. Where could IBM go from there? To services, leveraging its extensive knowledge of the needs of its traditional large customers, plus its best software, architecture, and assembled hardware. On top of that trend toward services across U.S. industry came globalization and the rise of networked firms operating collaboratively across national lines. This new global business structure opens up new avenues to market adaptation through services, software, and local market knowledge.

Hill says “I am not arguing for a reduction in the role of science” although I will be surprised if he has not received a lot of hostile mail. But I agree with the second half of that sentence, when he says “I am arguing that we must find new ways to make scientific and technological literacy a part of the education of all students who wish to play significant roles in the post-scientific society.” Indeed, I would go a lot further. I think all students—especially engineers but also scientists—should be steeped in the realities of how the global system for creating, exploiting, and rewarding innovations actually works. Second, we must have some new leadership in the executive branch that recognizes that a broad range of government policy directly affects the nation’s power to innovate: not only government’s role in U.S. science and education, but in transitional technology investments, economic policy, trade strategy, government procurement, standards policy, attention to hard and soft infrastructure, and cultivating the creativity culture in U.S. society. What we need is research and innovation policy.

LEWIS M. BRANSCOMB

Adjunct Professor, School of International Relations and Pacific Studies

Research Associate, Scripps Institution of Oceanography

University of California, San Diego

La Jolla, California


Ethanol food fight

I am responding to “Ethanol: Train Wreck Ahead?” by Robbin S. Johnson and C. Ford Runge (Issues, Fall 2007). Although the article is replete with misinterpretations and misrepresentations, I will touch on just a few of its unsubstantiated myths.

First, there is absolutely no basis for the allegation that “the current policy bias toward corn-based ethanol has driven a run-up in the prices of staple foods in the United States and around the world.” Corn prices have spiked higher, but that is because supply and demand conditions for corn and nearly all other major crops have tightened throughout the world, due in large part to inclement weather over the past five to six years. Add to that the fact that the U.S. dollar has depreciated nearly 25% over the past five years. These factors account for the vast majority of the current high prices.

Another myth is that U.S. corn prices have driven up the price of tortilla flour in Mexico. Tortilla flour is made from white corn, which is totally different and separate from the yellow corn that constitutes over 99% of U.S. corn production and is used to make ethanol. White corn prices are determined by the supply and demand conditions for white corn, mostly within Mexico.

The article’s authors also use a false argument in suggesting that “Filling the 25-gallon tank of a sport utility vehicle with pure ethanol would require more than 450 pounds of corn, enough calories to feed one poor person for a year.” The fact is that this year, ethanol makes up less than 5% of the gasoline consumed. Consequently, taking the article’s example, a more appropriate amount would have been about 20 pounds of corn (roughly 5% of 450 pounds). Further, outside of Mexican white corn, very little corn is consumed directly in the human diet. The majority of yellow corn is used as feed in the meat sector, for which demand has increased significantly due to increased world economic growth, particularly in countries such as China and India, which also have contributed to these higher prices.

Finally, I will touch on the energy efficiency argument. According to a study by the U.S. Department of Agriculture on energy output compared to energy input, ethanol adds 40% to the energy balance, a figure that continues to grow each year with improved plant efficiencies. Moreover, ethanol converts nontransport energy such as natural gas and coal to a higher-value product that can easily be used by motor vehicles.

Although we disagree on the issue, I think it is beneficial that Johnson and Runge have weighed in on the merits of ethanol. Such open dialogue is important in helping consumers uncover what is fact versus what is myth.

BOB STALLMAN

President

American Farm Bureau Federation

Washington, DC


On grounds of economics, energy balance, land use, environmental impacts, and crop price consequences, Robbin S. Johnson and C. Ford Runge’s demolition derby leaves no doubt that corn ethanol is a loser—its support an embarrassment to rational policymaking. (Loath to cede the ethanol boom entirely to corn, domestic sugar producers are about to earn a federally guaranteed share of the market.) The wider reflection this insightful article prompts is how the heart-warming mantra of “renewables,” or any resource, can prove dangerously deceptive unless analyzed within a broad benefit/cost framework. Examples: Appalachian mountaintop removal rewards the coal industry with a free ride, given the burden inflicted on the environment. U.S. drivers spared payment of a congestion fee have little incentive to change driving habits that reduce their community’s mobility. Abroad, Indonesia’s breakneck palm-oil production pace is believed to seriously threaten that country’s forests.

Pervasive resort to energy subsidies especially undermines America’s pursuit of socially defensible outcomes. “Subsidies” here include not only direct government payments, tax relief, loan guarantees, and import protection but also the failure to charge for spillover effects for which a given activity should be held responsible. (Just the narrower definition translates into an estimated value of federal energy subsidies of at least $20 billion annually.)

Accounting for subsidies should thus be an important part of a comprehensive and comparative fuel cycle analysis. (Ethanol’s real cost becomes much more transparent once its domestic tax credit is highlighted.) Still, that more complete analysis, pointing up the comparative pros and cons of different energy systems, is more easily demanded than delivered. Valuing externalities can be particularly vexing. Notwithstanding progress in regulating some key air pollutants, numerous harmful environmental impacts (notably, from greenhouse gases) remain unaddressed, partly because there is controversy about the appropriate benefit/cost calculus.

In fairness, we might shed a tear for ethanol’s advocates, who understandably resent being singled out for their reliance on government support when practically every major energy constituency enjoys its own market-distorting largesse. Indeed, the “level playing field” refrain (conveniently invoked, rarely pursued) fundamentally complicates sound energy policymaking.

But ponder an utter perversity: One product’s real costs are obscured by a subsidy; a competing product demands equal treatment; both products are “overconsumed.” Some way to spur conservation!

There are innovative and promising energy paths deserving some measure of public support. (Some cite the “infant industry” analogy: government helping to jump-start manufacturing early in our history.) The Johnson and Runge idea of an oil price floor, providing hesitant entrepreneurs with investment incentives (and, in the event of an oil price collapse, government with revenues to finance basic R&D on things like cellulosic ethanol), seems worth debating. At the same time, we need to rethink a subsidy-dependent culture that’s so greatly out of synch with the nation’s genuine energy dilemmas. How about an updated, thorough assessment of how far we are from the level playing field ideal? We can never get there all the way. But Johnson and Runge’s eye-opening contribution is a useful prod for getting started.

JOEL DARMSTADTER

Senior Fellow

Resources for the Future

Washington, DC


Water worries

In his article on dam removal, James G. Workman draws much-needed attention to a U.S. conundrum: We have a lot of infrastructure, and a lot of it is very old (“How To Fix Our Dam Problems,” Issues, Fall 2007). Workman lucidly argues for a cap-and-trade system for dams. We have seen success in using this approach in other areas of environmental management, so why not apply it here?

Let’s take a slightly broader look at the problem. First, aging dams are only the beginning of what is to come. The U.S. population grew rapidly in the 20th century, but the rate of infrastructure expansion was far greater. Bridges, sewer pipes, airports, and dams were all primarily built in the mid-20th century, and we are entering the decades when these geriatric infrastructure systems will need to be reexamined. Although the public often ignores infrastructure, news of the past two years has been surprisingly dominated by it: failure of levees in New Orleans, bridge collapse in Minneapolis, steam pipe explosion in New York City. Recently it seems that our infrastructure, the sinews of society, is collapsing around us.

What now? I argue that a necessary first step is to assign clear ownership and associated liability and responsibilities for infrastructure. In addition to or instead of Workman’s cap-and-trade proposal, I suggest emulating a successful program for a completely different type of infrastructure: offshore oil and gas platforms. Under federal law, an offshore platform can be constructed and remain in place so long as the platform is producing oil and gas. If the lease becomes inactive, the platform must be removed within one year. Property rights for the platform are clearly assigned, and the federal law makes abandoning a platform clearly illegal. The effect of this law has been the construction of over 6,000 offshore platforms since 1947, but also the removal of over 2,000. This policy has encouraged the continued presence of platforms only at productive offshore leases, with a side effect being a market in platform removal. This is in stark contrast to the ubiquity of abandoned dams.

Dams should be dealt with in a similar manner. As part of inspection or licensing of a dam, agencies should require clear exit strategies and associated financial bonds for dam decommissioning and removal at the end of the proposed license term. If the dam remains in place beyond the license term, the owner should be fined and penalized. Through this policy, a market for dam removal would undoubtedly emerge, although created through an alternative mechanism than what Workman envisions. Dams that remain productive can be relicensed and continue to operate, but the bond for financing removal and/or future repairs also remains in place.

At a minimum, current dam problems should guide current policies. Specifically, new dams, such as those proposed by Governor Schwarzenegger in California, should require exit strategies that include clearly set-aside financing for removal.

MARTIN DOYLE

Department of Geography

University of North Carolina

Chapel Hill, North Carolina


James G. Workman’s article featured California Governor Arnold Schwarzenegger’s apparent inconsistency on the subject of large dams.

On the one hand, the governor, building off a Pulitzer Prize–winning editorial series in the Sacramento Bee and an Environmental Defense Fund report, Paradise Regained, unexpectedly commissioned a state study in 2005 to investigate the feasibility of removing the long-controversial O’Shaughnessy Dam in Yosemite National Park’s Hetch Hetchy Valley. That study even more remarkably motivated a sympathetic response from President George W. Bush’s administration and worldwide interest ranging from a feature story in a German men’s magazine, accompanied by beautiful pictures, to an Australian’s interest in a financial analysis of the Hetch Hetchy system, informed by his distaste for a large dam on a wild river in Tasmania.


On the other hand, the governor (and Senator Dianne Feinstein, O’Shaughnessy’s most visible defender) is leading a concerted effort to access California taxpayers’ largesse to build three new large dams, prompted by a very dry 2006-2007 water year and by fears that future global warming will tax the ability of the thousands of dams Californians have already built to meet even current expectations of their performance, much less the needs of an expanded population.

Workman sees a way to reconcile this apparent tension and a broader international phenomenon that simultaneously includes huge commitments to new dam construction and to old dam removal. He proposes a cap-and-trade policy for dams. The general rubric he suggests is by now familiar to all who have even a passing academic interest in modern U.S. and international natural resource policy. Recognizing of course that many, if not most, dams generate positive economic values, he suggests that governments set caps on the negative effects of dams so that dam owners would be encouraged to innovate and to trade in figuring out ways to diminish all dams’ overall negative effects over time.

Although Workman perhaps underestimates some of the political and logistical impediments to implementation of his proposals, his basic idea is sound. Some dams (new and old) have more benefits than costs. Other dams (new and old) have more costs than benefits. All dams probably can be operated in ways that are more ecologically and socially friendly. Thus, and only thus, can one contemplate an economically rational set of government policies that would encourage the construction of some new dams while simultaneously decommissioning old ones.

Workman thinks that dam builders and owners themselves, who know the most about their own situations, if given the proper incentives, negative and positive, could best sort out which dams make sense, which don’t, and which would benefit from new operational regimes. That’s not such a farfetched assumption on which to begin to build a new dams policy for the 21st century.

THOMAS J. GRAFF

California Regional Director

Environmental Defense Fund

Oakland, California


The outcomes of removing all dams from the Baraboo River’s main channel in Wisconsin reinforce some points made by James G. Workman. However, additional experimentation and adaptive management are warranted before promulgating overarching policies such as cap and trade.

The Baraboo partnership worked from a clear set of priorities: meet safety standards, keep dam owners financially whole, integrate economic redevelopment with river rehabilitation, and conduct scientific study. There has since been a nearly complete recovery of the river’s fish species, and the Baraboo River was removed from the state’s list of impaired waters.

Sand County Foundation takes pride in having played a coordinating role within that partnership. But Aldo Leopold’s sage counsel is relevant as environmental policy is being developed.

Leopold urged the use of experiment and trials to make better wildlife management policy. In 1933 he wrote in Game Management, “The detail of any policy is an evanescent thing, quickly outdated by events, but the experimental approach to policy questions is a permanent thing, adaptable to new conditions as they arise.”

Concerning dams, it is important to build the field with research and case studies before policy is mandated and constrains future options.

Several issues raised by Workman ought to encourage river conservationists to adopt the Leopold perspective. Consider sediment. Competing scientific explanations for the movement of sediments after dam removal have to be worked out. Engineering means for responsible management of toxic materials in the sediments need to be demonstrated, tested, and accepted.

Advancing the cause of river rehabilitation will be set back by a surge of contaminants into the public’s waters. Even a single incident would become permanent propaganda for anti-dam-removal forces.


Beyond sediment issues, there are essential matters that are best handled case by case, such as compensation, ownership, insurance, and exotic species invasion.

For broad policy improvement at this time, safety is the appropriate emphasis, not wholesale cap and trade. Most unsafe dams will not be worthy of repair, but a proponent for a new dam could construct a package deal too good for political interests to reject by picking up the tab to remove several other dams.

River rehabilitation proponents could insert themselves effectively into particular transactions. They could help deals get done without enduring delays such as those in the Water Resources Development Act of 2007.

The Workman proposal for cap and trade may have a certain appeal in California. But the dealings necessary to bring that kind of policy to life must contend with a bald fact: in California, the experience and policy needed to support the voluntary water transactions that would accommodate demand and supply are not yet in place. With whose water would Governor Schwarzenegger fill $9 billion worth of new reservoirs?

BRENT M. HAGLUND

President

Sand County Foundation

Monona, Wisconsin


Mexican cha-cha

The title “Mexico’s Innovation Cha-Cha” (Issues, Fall 2007) is an apt heading for the analysis by Claudia González-Brambila, Jose Lever, and Francisco Veloso, three well-known authorities on science and technology policy in Mexico. It depicts clearly what is further disclosed in the body of the paper: the timid, incipient, and never sufficiently ambitious Mexican programs to foster science and technology. Something very special must be happening in this natural resources–rich country, which refuses to align its national policy with international trends. It is not so much that Mexico refuses to get richer or to do its own things better, whatever they may be. I believe there is a growing concern among Mexican decisionmakers that globalization has resulted in very large economic asymmetries, given the economic gradient that Mexico is subjected to, with the richest country on Earth at its northern border and some of the poorest due south.

Indeed, the Mexican northern border states, from Baja California to Tamaulipas, have all suffered enormous transformations in their development process since the North American Free Trade Agreement was signed 15 years ago. This has resulted in changes in government actions and in their political and power structures, given the enormous pressure on their populations to devise appropriate action plans to accelerate the integration into a North American economy. The attraction of North American ways of life is heaviest in the northern belt of the country. However, it is not the case in the central and southern Mexican regions, where life and culture have evolved very differently during many centuries. This evolution has left Mexicans poorer in the south than up north, which has pushed many young and promising boys and girls to seek fortune by emigrating as far north as they can.

This emigration, given the growing difficulty of entering the United States in large numbers, has resulted in Mexicans and many Central Americans settling in the more developed northern Mexican states. Hence, a growing number of young workers, who take up the slack in the growing labor market, come from the south. They now receive much more money than they used to, and therefore they feel free to indulge in vices and practices that are not endemic in the northern regions. They naturally become more ambitious in material terms. Violence has grown, together with drug smuggling and prostitution of all sorts. Therefore, many Mexicans view these features, attributed to the accelerated rate of “development,” with contempt and disgust. Mexicans would very much like to preserve a precious past, calm and joyful. Why would we want, then, a stronger national effort in education, science, technology, and innovation?

JOSÉ LUIS FERNÁNDEZ-ZAYAS

General Coordinator of the Consultative Forum on Science and Technology, Mexico

Mexico City, Mexico


Improving Indian innovation

R. Chidambaram’s “Indian Innovation: Action on Many Fronts” (Issues, Fall 2007) provides an excellent illustration of the variety and customization of current programs that support innovation in India. The article correctly concludes that a challenge for India is to achieve a coherent synergy among diverse programs. I would add that some initiatives would benefit from rationalization and others from scaling up, with more third-party monitoring and international benchmarking.

From my vantage point at the World Bank, I would like to amplify some of the messages of the article, based on our just-released book Unleashing India’s Innovation: Toward Sustainable and Inclusive Growth. Broadly defined innovation, including both creating and commercializing new knowledge and diffusing and absorbing existing knowledge, is a key driver of growth and poverty reduction. This is a critical area in which the World Bank Group can work with India. Our current engagement with the Ministry of Science and Technology in the preparation of a new National Innovation Project signals the bank’s reentry into supporting innovation and productivity at the enterprise level in India, after a successful Industrial Technology Development Project in the early 1990s.

Let me emphasize three areas where India can do more to reach its full innovation potential. First, encouraging stronger competition among enterprises is particularly important. Since the Indian economy opened up in 1991, the vast majority of private-sector investments in R&D have been in the sectors most open to competition. India needs to move further in removing nonessential regulations in product, land, labor, capital, and infrastructure markets. India also must make it easier for enterprises to take risks and reallocate resources when new ventures don’t turn out as planned. Reforming exit policy through more efficient bankruptcy rules and procedures would help reduce the stigma of failure and contribute to increased experimentation and risk-taking.

Second, India needs to do more to help enterprises create and absorb knowledge. A key challenge in the first area is to strengthen incentives for enterprises to more systematically convert innovative ideas to commercial use. The three main civilian research agency networks (CSIR, ICAR, and ICMR) would benefit from a strategic assessment, including an independent evaluation and restructuring to take advantage of cross-institution synergies and increase their focus on commercialization, along with a system-wide action plan to consolidate and transfer some R&D labs to the private sector so that their work programs are fully market-driven. The second area, making better use of existing knowledge, is arguably even more important. A recent survey of roughly 2,300 manufacturing enterprises in 16 Indian states suggests that the output of the economy could increase more than fivefold if all enterprises achieved the national best practices based on knowledge already in use in India. Although India is implementing programs in both areas, they would benefit from greater leveraging of the strengths of the private sector in program design and management.

Third, India’s innovation system must better support inclusive innovation; that is, creation and absorption efforts that are most relevant for the needs of the poor. India needs to promote more formal R&D efforts for poor people and more creative grass-roots efforts by them, as well as improve the ability of informal enterprises to better use existing knowledge. The challenge here is to help extend the power of innovation to the common citizen in rural India.

PRAFUL PATEL

Vice President, South Asia Region

The World Bank

Washington, DC

Dealing with Disability

Between 40 million and 50 million people in the United States—at least one in seven residents—currently report having some kind of disability that limits their daily activities or restricts their participation in work or social life. Given current trends, this number probably will grow significantly in the next 30 years as the Baby Boom generation enters late life, when the risk of disability is the highest. But disability is not destiny for either individuals or the communities in which they live; it is not an unavoidable result of injury and chronic disease. Rather, disability results, in part, from choices society makes about health care, working conditions, housing, transportation, and other aspects of the overall environment. Positive choices made today not only can prevent the onset of many potentially disabling conditions but also mitigate their effects and help create more supportive physical and social environments that promote increased independence and integration for people with disabilities.

However, the nation has a less than stellar record in making such positive choices. The enactment in 1990 of the Americans with Disabilities Act (ADA), a landmark law, has contributed to a significant increase in the understanding of disability, its causes, and strategies that can prevent its onset and progression. Nonetheless, implementation and enforcement of the ADA have often been disappointing. Many barriers remain, in health care facilities, workplaces, public spaces, transportation, and elsewhere, that limit the extent to which people can live independently and be involved in their communities. Important federal programs, including Medicare and Medicaid, and many private health plans continue to employ outdated policies for covering technologies and services that can benefit people with disabilities. Research spending on disability remains inadequate. The result is a diminished quality of life for people with disabilities, increased stress for them and their families, and lost productivity. It is essential for the nation to take action, sooner rather than later, to avoid a future of harm and inequity and instead to improve the lives of people with disabilities.

OVER THEIR LIFE SPANS, THE MAJORITY OF U.S. RESIDENTS WILL EXPERIENCE DISABILITIES OR WILL HAVE A FAMILY MEMBER WHO DOES.

In April 2007, the Institute of Medicine (IOM) issued The Future of Disability in America, which lays out an array of recommendations that can inform the nation’s collective actions. The recommendations center on four general topics: disability monitoring, access to health care and other support services, public and professional education, and disability research.

What is a disability?

One step in developing and applying policies and programs that prevent or limit the impact of disability is to develop a definition of disability that helps in evaluating the extent of disability in the United States, monitoring change, and identifying continuing problems. In the past half century, the understanding of disability and the language used to describe it have changed dramatically. Certain language (for example, “handicapped worker”) has largely disappeared. And disability is increasingly being understood as an interaction between the individual and the environment. The ADA and various other public policies reflect this understanding, with varying degrees of success, as they seek to reduce or eliminate environmental and other barriers to independence and community integration. Still, the absence of universally accepted and understood terms and concepts with which to describe and discuss disability continues to be a major obstacle to consolidating scientific knowledge about the circumstances that contribute to disability and the interventions that can prevent, mitigate, or reverse it.

As the basis for a common language of disability, federal agencies involved in monitoring disability, including the National Center for Health Statistics, the Census Bureau, and the Bureau of Labor Statistics, should adopt the World Health Organization’s International Classification of Functioning, Disability and Health (ICF) as their conceptual framework. Developed by multiple stakeholders, including people with disabilities, the ICF attempts to provide a comprehensive view of health-related states from biological, personal, and social perspectives. Greatly simplified, the ICF starts with the concept of a “health condition,” a general term for a disease, disorder, injury, trauma, congenital anomaly, or genetic characteristic, or even aging. This condition forms the basis for the possible development of a disability in the form of an “impairment, activity limitation, or participation restriction.” The ICF then takes into account the dynamic interplay of the health condition with a richly interlaced set of environmental and personal factors; it is such interactions that help determine whether or not an impairment, limitation, or restriction occurs.

The ICF is by no means perfect, and the IOM report spells out various directions for improvement. For example, the framework should incorporate “quality of life” considerations, more fully develop the personal and environmental factors that influence the outcome of potentially disabling health conditions, and better depict functioning and disability as a dynamic process. The Interagency Subcommittee on Disability Statistics of the Interagency Committee on Disability Research should coordinate federal efforts to develop, test, validate, and implement new measures of disability that correspond to the components of the ICF. But even as a work in progress, the ICF would enable the various agencies to standardize how they describe and measure different aspects of disability, which would help improve the clarity and comparability of research findings and strengthen the base of scientific knowledge that guides public policies and health practices.

On a broader scale, the government should develop a national disability monitoring (surveillance) program. Today, disability statistics must be patched together from multiple, often inconsistent, surveys in order to cover people of all ages and in all living situations; and even then, gaps often remain. Lack of a comprehensive monitoring program, first pointed out by the IOM in a 1991 report and again in a 1997 report, remains a serious shortcoming in the nation’s health statistics system. The National Center for Health Statistics, which is part of the federal Centers for Disease Control and Prevention (CDC), should spearhead development efforts. As one priority, the agency should develop a new panel survey of disability as a supplement to the current yearly National Health Interview Survey. Panel surveys monitor the same individuals over several years or even decades, making them particularly useful in understanding the dynamic nature and natural course of disability, including risk factors for the onset of disability and factors that influence recovery from the disability. The survey should include people living at home and in institutional settings, and should oversample children and younger adults in order to help fill the current major gaps in knowledge about disability among these age groups.

Among its benefits, improved monitoring would provide researchers and policymakers with better insight into trends in disability. Trend data can provide a barometer of the nation’s achievements in terms of disability prevention, such as the elimination of environmental barriers to participation in various daily activities. Also, when trend data include measures of social, medical, and environmental risk factors, they can point policymakers to effective strategies for future interventions that will prevent or limit disabilities.

As another monitoring priority, better data are needed on the employment status of and economic opportunities for people with various kinds of limitations. Employment and economic security are central issues for the independence and community integration of people with disabilities. Thus, it is important for the Bureau of Labor Statistics to include core disability measures in its Current Population Survey and for the Census Bureau to include such measures in its re-engineered survey in the Dynamics of Economic Well-Being System that is slated to replace the Survey of Income and Program Participation.

Improving access

In recent decades, a variety of technological innovations and advances in biomedicine, coupled with shifts in attitudes about disability and legislative and regulatory changes, have helped to reduce or mitigate some of the environmental barriers that can hinder a person’s opportunity to participate in everyday community life and that thus create disability. For example, getting around the community and traveling beyond it are becoming easier for many people with disabilities because of the ADA’s barrier removal and accessibility requirements and other policies. Technological advances and reforms of telecommunications regulations have made it easier for people with vision, hearing, and other impairments to communicate electronically with clients, coworkers, friends, family, and others. Various other types of assistive technologies are making it easier for people to maintain or increase their functional capabilities. More attention is also being paid to strategies for designing universal and accessible mainstream technologies that aim to create, from the outset, physical environments and products that are easily used by and accessible to as wide a range of potential users as practicable.

Despite progress, however, substantial environmental barriers remain, and their persistence will only become more serious as the number of people at the highest risk of disability grows substantially in coming decades. Ironically, many barriers still persist in hospitals and physicians’ offices, which too often lack equipment and services suitable for people with physical mobility, sensory, or other impairments. Thus, it is crucial to improve the accessibility of health care facilities and strengthen implementation of the ADA related to such facilities. Both public and private groups have a role to play. The Department of Justice should strengthen enforcement of the ADA and publicize effective settlements, and it should offer health care providers better guidance about their responsibilities under the act. Similarly, the Joint Commission and other organizations that accredit health care facilities should consider a facility’s level of compliance with accessibility standards and guidelines in their accreditation decisions. In addition, Congress should direct the Architectural and Transportation Barriers Compliance Board (known as the Access Board) to develop standards for accessible medical equipment and to see that the appropriate federal agencies disseminate information about the standards and enforce their implementation.

The government should also act to reduce barriers to health insurance for people with disabilities and to make needed assistive technologies and services more available. Although people with disabilities are slightly more likely than others to have health insurance, often through public programs, access to insurance is not universal, especially among working-age individuals. As one step, for working-age people who are newly qualified for Social Security Disability Insurance (SSDI), Congress should reduce or eliminate the current 24-month waiting period before they can start receiving Medicare benefits. Many such individuals have no insurance during this period and face financial or medical ruin. In addition, Congress and administrative agencies should continue to test modifications in SSDI and Supplemental Security Income rules that would encourage people who are able to return to work to do so without losing Medicare or Medicaid coverage.

To foster increased use of assistive technologies and services, federal policymakers should eliminate or modify the current Medicare requirement that durable medical equipment must be “appropriate for use in the home” in order to be covered. This provision keeps many people from obtaining wheelchairs or scooters that would enable them to navigate reliably and safely outside the home. Criteria for covering such technologies should consider their effects on an individual’s independence and participation in the community, including employment. Policymakers also should rethink narrow and outdated definitions and regulations that authorize Medicare coverage only for “medically necessary” items and services. These definitions and rules have often proved troublesome for people with disabilities seeking coverage of assistive technologies and personal care services. For example, they may be invoked to deny payment for nonmedical services, such as assistance with bathing, or for products, such as bathroom grab bars, that help people manage daily life efficiently and safely. Denials of claims for assistive technologies and services based on the failure to meet medical necessity criteria are disheartening and confusing and reduce people’s ability to function at home and in the community.

Gaining access to health services often proves especially difficult for young people with disabilities as they move from pediatric or adolescent to adult health care. This transition can be a complex process that is influenced by the characteristics of the young person, his or her family, and, in particular, the larger environment of policies and organizational arrangements that affect the availability and coordination of health care services, the sharing of health care information, and the support provided by schools and social services available in the community. To smooth the transition, policymakers, professional societies, public and private payers, and educators should work to align and strengthen incentives in public and private health care programs to support coordinated care and transition planning and to expand the use of integrated electronic medical records for chronic disease management. One particular challenge that young people face is their loss of eligibility for public or private health coverage at age 18, in many cases, or sometimes at age 21. Congress should extend Medicaid and State Children’s Health Insurance Program (SCHIP) coverage through age 21 for young people with disabilities and specify that such benefits cover their transition assessment, coordination, and management services. Among other actions, Congress also should fund the Maternal and Child Health Bureau so that it can expand its work to develop and implement medical home and other services for young people with special health care needs who are over 21 and who need continued transition support.

Promoting education

Although people with disabilities often receive care from rehabilitation specialists, they also depend on other health care professionals for primary care and various other services. But these professionals are not necessarily well informed about proper primary care for people with disabilities, the problems these people face as they age, the barriers that the current health care system creates, and the ways in which assistive technologies can enhance people’s independence and productivity. Schools of medicine, nursing, and allied health should respond to this deficit by providing their students with better education about disability and care for patients with disabilities. Again, a well-educated health care workforce will become ever more critical with the expected growth in the numbers of people aging with disability or aging into disability. The building of a knowledge base should begin early in a clinician’s education and training and be reinforced through direct clinical experience with people with disabilities. Even health professionals who do not plan to routinely care for patients with disabilities still need a basic foundation of knowledge and skills. They also need support systems to guide them in providing timely and accessible preventive care services, help them in preventing secondary medical problems that arise from a patient’s primary health condition, and encourage them to properly refer patients with disabilities to experts with more specific knowledge when appropriate.

It is clear, too, that physicians and other health professionals would welcome evidence-based reviews and well-crafted, evidence-based guidelines for practice that will help them maintain and update their knowledge and skills for managing patients with disabilities, especially as their patients age or develop secondary health conditions. Toward this end, educators and health care professionals, working with people with disabilities and their families, should develop education modules and other curriculum tools to help educate professionals throughout their careers to care for people with disabilities. These groups should also develop consensus competency standards that accreditation and licensing boards can use to evaluate health professionals’ skill levels.

FEDERAL AGENCIES SHOULD INVEST MORE IN DEVELOPING, TESTING, AND DISSEMINATING PROMISING INTERVENTIONS THAT WILL HELP PEOPLE MAINTAIN THEIR INDEPENDENCE AND ABILITY TO FUNCTION IN COMMUNITY LIFE.

Educational efforts are also needed to inform health care professionals, as well as consumers, about the range of assistive technologies and accessible mainstream technologies now available, and the benefits they offer. Health care professionals do not have to become experts in the technologies; rather, they need to know, in general, what exists that might help their patients or clients and what basic features of a technology are important for a given patient. Given the current knowledge gap, the CDC, working with the National Institute on Disability and Rehabilitation Research, should launch a major public health campaign to increase professional and consumer awareness and acceptance of assistive technologies and accessible mainstream technologies that can benefit people with different kinds of disabilities.

The consumer component of the campaign not only would impart knowledge about the various technologies, but also would help people assess whether they have developed functional deficits for which helpful products exist. It would also provide guidance on finding sources of financial assistance for purchasing products. In some cases, evidence suggests that although people may be aware of certain products, they consider the products unattractive or stigmatizing, which can be a major barrier to their use. A large-scale, long-term public media campaign may help publicize more appealing technologies and convey that it is normal to use “smart” technologies to make life better. Promotions might show celebrities using technologies and natural-looking aids. Another strategy might be to persuade the producers of popular television programs to show the unobtrusive routine use of assistive technologies. The idea would be to help people feel more comfortable using technologies that may enable them to live independently longer or to stay with their families longer by reducing the amount of informal caregiving needed. If the campaign identifies product design as a continuing problem, then that knowledge also can guide contacts with designers and manufacturers about how to modify the products to reduce this barrier.

Expanding research

Despite the personal and societal impact of disability, the federal government’s budget for research in this area remains modest at best, falling far short of needs. Funding for the National Institute on Disability and Rehabilitation Research, the National Center for Medical Rehabilitation Research, and the Veterans Health Administration Rehabilitation Research and Development Service has barely increased over the past decade and is a minuscule portion of the federal research budget. As an overarching goal, the government should commit to funding a program of clinical, health services, social, behavioral, and other disability research that is commensurate with the need.

The government also should strengthen the management and raise the profile of disability research within federal research agencies. To do so, the government should consider elevating the National Center for Medical Rehabilitation Research to the status of a full institute or freestanding center within the National Institutes of Health, with its own budget. Similarly, establishing an Office of Disability and Health in the director’s office of the CDC would help to more fully integrate disability issues into the CDC’s programs. In addition, the government should take stronger steps to ensure that the various agencies involved in disability research coordinate their activities in order to reduce wasteful duplication of effort and better identify neglected research issues.

Among the priority areas for expanded research efforts, federal agencies should invest more in developing, testing, and disseminating promising interventions that will help people maintain their independence and ability to function in community life. For example, investigators have identified a number of risk factors related to the onset of disability at birth and throughout the life span, as well as promising interventions to overcome these factors. In childhood, risk factors include living in socioeconomically disadvantaged families and in households with exposures to environmental toxins. In late life, risk factors include low frequency of social contacts, a low level of physical activity, smoking, and vision impairment. Still, many gaps remain in the knowledge base for practices and programs to reduce environmental barriers that contribute to disability, and translating the findings of social and behavioral research into practice remains a formidable challenge.

THE GOVERNMENT SHOULD ACT TO REDUCE BARRIERS TO HEALTH INSURANCE FOR PEOPLE WITH DISABILITIES AND TO MAKE NEEDED ASSISTIVE TECHNOLOGIES AND SERVICES MORE AVAILABLE TO THEM.

Government agencies also should support additional research to identify better strategies for developing and bringing to market new or improved assistive technologies and accessible mainstream technologies. Research needs to focus not only on high-tech devices but also on more common low-tech equipment such as improved walkers. In formulating this research, government should involve consumers, manufacturers, and medical and technical experts, among other interested parties. Additionally, research is needed on the role of legislation, including existing policies such as the ADA and the Rehabilitation Act, in providing incentives to industry by enlarging the market for accessible technologies.

Making choices

Given current demographic, societal, and disability trends, how will the nation make the choices that will help define the future of disability? In the coming decades, as the number of people living with disabilities continues to increase, costs for health care and related services will increase across the board. Concurrently, society will face pressure from other sources of increasing health care costs, which have consistently grown faster than the gross domestic product. Individuals and families will bear a significant share of the increasing costs and of the noneconomic costs as well. Rising costs will also stress federal and state governments, which are responsible for Medicare, Medicaid, SCHIP, and other critical programs.

Projections of future spending increases for government programs raise the prospect of difficult tradeoffs, such as reduced funding for other purposes, or higher taxes, or both. Reducing inefficiency and the inappropriate use of health care services will help, but such reductions are unlikely to eliminate the need for the difficult choices that policymakers recognize need to be made but are, in large measure, delaying.

How the nation ultimately makes choices about future spending will reflect collective fundamental values about the balance between community and individual responsibility. Still, both society and individual citizens should recognize that health, social, and other policies that assist people with disabilities do not represent only current transfers of resources from those without disabilities to those with disabilities or from mostly younger people to mostly older people. Over their life spans, the majority of U.S. residents will experience disabilities or will have a family member who does. People may not realize it, but the support they give today for policies that affect future funding for disability-related programs is a statement about the level of support they can expect to receive at later stages in their own lives. In such a light, providing adequate funding for these programs seems only a prudent investment.

Sharing the Catch, Conserving the Fish

The mid-1990s were tough times to be a Pacific rockfish fisherman on the West Coast of the United States or a groundfish fisherman in Canada’s British Columbia. Fish populations in both regions were on the decline. Fishermen were working harder for smaller catches and smaller paydays, and talk of even stricter catch limits and fewer days at sea haunted the docks. Environmentalists and the public were almost as distressed. Today, British Columbia’s groundfish stocks are at healthy levels and fishermen enjoy profitable businesses and fish throughout the year, whereas U.S. stakeholders continue to battle over how to restore still-depleted rockfish populations and fishing seasons remain limited to a few weeks or months a year. Why the difference?

Veterans of fish fights in both countries legitimately point to complicated factors, but the key reason for these disparate outcomes is policy. In 1997, Canada’s Department of Fisheries and Oceans changed complex rules constraining how fishing was to be practiced (rules, the agency had hoped, that would indirectly achieve conservation goals) and instead held fishermen directly and individually accountable for meeting a vital conservation target: ensuring that fish catches stay within scientifically determined levels. That is, fishermen were given a “share” of the total allowable catch and given the flexibility and the accountability for meeting it. As a result, groundfish stocks rebounded and so did the fishermen. Meanwhile, fishery managers in the western United States have yet to make this key transition. Consequently, comparatively little rockfish recovery has happened off of Washington, Oregon, and California, and the fishermen have been left with declining profits.

Many other ocean fisheries in the United States continue to operate in the same way as the Pacific rockfish fishery. But federal and state fishery managers can end the inherent incentive to overfish that is created by exclusive reliance on indirect measures such as limiting how and when fishermen can work. Better systems of management that change fishermen’s behavior by giving them a share in, and responsibility for, the fishery’s take—called “catch share” programs—are the best way to end the urgent problem of overfishing in the United States.

There is an increasing interest in catch shares across the country. In 2002, the legal moratorium on some types of catch share programs was lifted. In 2006, Congress took the further step of enacting new rules to guide implementation of catch shares, and now six of the eight federal regions are working to develop catch share programs. The Bush administration has also taken some actions in support of the programs. These steps are positive, but more needs to be done. We believe that all U.S. fishery management plans must examine whether catch share programs can end overfishing faster and with less collateral damage to the environment and to fishermen than the management plans in place today.

The effects of overfishing

Even environmental experts are often surprised to learn the extent of the damage that overfishing already has caused in the oceans. Overfishing is defined as fishing that unsustainably depletes fish stocks and nonfished species or that damages the ocean environment. The term encompasses overexploitation of a target species, killing of nontargeted species (bycatch), and habitat destruction in which important physical features of the ocean environment are damaged. Globally, 90% of large fish are already gone. During the past 40 years, as stocks have disappeared, bigger boats have gone farther and deeper to find new fish. This unsustainable fishing effort has extended to the furthest reaches of the globe and down the food chain. The effects are being felt as people have less access to this important source of protein and as fish-consuming species such as seabirds and whales lose out in an intense competition with humans.

The United States manages one of the largest ocean areas of any country, and the effects of overfishing in these waters are dramatic. Of 230 fish stocks (individual species or groups of related species) under federal management, 94 are known to be unsustainably exploited. For example, cod, long the staple of many diets and a main driver of North American exploration, are severely depleted. Atlantic halibut have been hunted to commercial extinction. Bocaccio, one of several highly depleted Pacific rockfish species, have been reduced to less than 10% of their historical population size in West Coast waters. Large predatory fish, including tuna, sharks, marlin, and swordfish, are largely gone. In the Gulf of Mexico, whitetip shark populations are at 1% of what they were in the 1950s. Most of the several species of abalone in California have been harvested to near extinction. This mismanagement of fishery resources has resulted in boom-and-bust cycles in individual fisheries and economic dislocation as catches collapse and regulations are tightened to protect stocks.

Additionally, overfishing has broad ecosystem impacts. For example, bottom trawling, in which boats drag gear and nets along the seafloor, can damage deepwater corals, sponges, and other features important for commercial and noncommercial species. Some types of fishing gear also cause very high bycatch, including juvenile fish and threatened or endangered animals such as whales, sea turtles, and seabirds. Large-scale biomass removals by fishermen can have unpredictable effects on ocean food chains. Ecological research suggests that kelp forest food chains have been totally changed by fishing.

Two major blue ribbon commissions, the U.S. Commission on Ocean Policy and the Pew Oceans Commission, concluded that the United States faces an ocean crisis. And although climate change is a serious threat to future ocean productivity, overfishing has had a bigger impact. The United Nations–mandated Millennium Ecosystem Assessment, the most thorough look at Earth’s ecosystems ever, concluded that overfishing is “having the most widespread and the dominant direct impact on food provisioning services, which will affect future generations.”

Few regulatory rewards

Since 1976, the beginning of modern U.S. fisheries regulations, the government has attempted to control the total capture of target and nontarget fish in individual fisheries by controlling one or more of several factors at the whole-fishery level. These factors include the amount of time that fishermen can spend fishing and the type and effectiveness of the fishing gear used. The idea is that by controlling how fishermen fish, conservation can be achieved. These indirect approaches are also coupled with actions such as closing a fishery when regulators conclude that the amount of fish caught by all boats exceeds a “total allowable catch” or too many nontarget animals have been killed.

Thirty years of overfishing later, experience shows that success in applying this approach is elusive. Even when fish population goals are met, there can be high costs for fishermen as well as marine ecosystems more broadly. The reason is simple: When regulators control catch at the level of the fishery, fishermen figure out how to maximize their individual shares of the total take. Each boat catches as fast as it can, or it will be left with too little product to sell. As the saying goes, haste makes waste, and the consequences are overharvest, bycatch, habitat destruction, bad economics for fishing businesses, unsafe working conditions for fishermen, and fishing industry resistance to conservation measures.

One of the most dramatic examples of the failure of traditional management was the Alaska Pacific halibut fishery in the 1980s and 1990s. The federal North Pacific Fishery Management Council, one of eight regional councils established by Congress, aimed to achieve a total allowable catch target by repeatedly shortening the fishing season. In 1980, regulators limited the fishing season to 65 days. But with each of 333 vessels fishing without individual limits on catch, the total catch was at 115% of the level scientists thought was safe. In response, regulators continually adjusted the season downward to account for the fishermen’s skill in catching fish. In 1990, regulators posted a six-day season. But fishermen responded by putting 100 more vessels to work, and the catch was still 106% of the safe level. By 1991, the season had shrunk to just two frantic days of “derby” fishing. The derby brought dramatic unintended consequences. The value of the catch dropped dramatically as the market was flooded for two days with the entire year’s catch of halibut, and fishing became incredibly dangerous as exhausted fishermen worked overloaded boats in order to grab their take before the close of the season.

In 1995, regulators changed the rules of the game and implemented a catch share program—one of the first in the United States. Under this system, fishermen can catch halibut nine months a year—as long as they catch only their share of the total allowable catch. Compliance with catch limits is now nearly 100% for individual fishermen, and overall the fishery usually winds up erring on the safe side and is under its catch limits. The success of the halibut catch share system has inspired others; there are now nine catch share programs in the United States.

Experience in other publicly owned natural resources areas, such as national forests, the atmosphere, and the electromagnetic spectrum, demonstrates that resource stewardship improves when regulators include some form of incentive-based management that better aligns the economic incentives of the users with the public policy objectives, including conservation, that must go hand in hand with granting private access to that resource. In the case of fisheries, catch limits, bycatch controls, and habitat protections—the traditional management tools—must continue. But these controls work better to promote sustainable fisheries when fishermen are accountable for catching only a dedicated percentage of the catch. Further, the value of the catch goes up when a fisherman can take his boat out when the price is right, rather than when every other boat goes out.

Worldwide, programs that incorporate a right to a share of the catch have been implemented in various guises for more than 30 years, under names such as individual fishing quotas, individual transferable quotas, and territorial use rights in fishing. Their key feature is that fishermen (individually or in cooperatives) are assigned either a percentage share of the total allowable catch or exclusive fishing rights in a given bay, bank, reef, or other ocean area (the latter being the territorial use rights approach). Where such programs are used—in most fisheries in Iceland and New Zealand and in some fisheries in Australia, Canada, Mexico, Chile, and the United States—compliance with key limits improves and fishermen have been able to make a better living. In places where catch shares have been used to address overfishing, stocks have improved.

As with some traditionally managed fisheries, catch share fisheries have a cap—the total allowable catch—that is based on a scientific assessment of the sustainable yield from the fishery. Fishermen are each allocated a specified percentage of the allowable catch, and their take is monitored. The catch shares are usually tradable, with some restrictions. Because the value of the percentage shares increases when stocks improve and managers raise the total allowable catch, catch shares create an incentive for fishermen to steward the resource. Equally important for conservation, especially in the near term, these approaches give fishermen an immediate financial reason to fish “cleaner”; that is, with less impact on the ecosystem. Consider bycatch, for instance. Bycatch saps profits: it is expensive to buy, deploy, and retrieve gear, so the greater the ratio of target to nontarget species, the higher the profits from a given amount of effort. When fishermen are not fishing against the clock, they can, and do, take the time to figure out how to increase that ratio. This is good for their bottom lines and for the marine ecosystem.

Catch share programs can be fairly simple, such as one adopted in 2007 for red snapper in the Gulf of Mexico. Fishermen voted overwhelmingly in favor of the program, aimed at saving the gulf’s most important commercial reef fish, which had been reduced to about 3% of its historic population size. Under the program, commercial fishermen were allocated a percentage share, in pounds per year, based on their historical catch. Fishermen with greater landings received a correspondingly greater share. This means that if a fisherman’s share is 50,000 pounds, he can decide when to fish, based on the price of materials (such as gasoline), the weather, and dockside prices for his fish. When he brings fish to the dock, he must have shares in his quota account to cover the landings. He also can lease out his annual allocation, letting someone else catch his share, or buy more shares from another boat to boost his catch.
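To make the quota arithmetic concrete, here is a minimal sketch, in Python, of the kind of bookkeeping the program implies. Everything in it (the QuotaAccount class, the vessel names, the 5-million-pound cap) is an illustrative assumption rather than the actual Gulf of Mexico accounting system; it encodes only the rules described above: an annual allocation equal to a fixed percentage of the total allowable catch, landings that must be covered by the account balance, and the option to lease out or buy additional pounds.

```python
# Illustrative sketch only: a simplified quota-account ledger for a catch share
# fishery, following the rules described in the text. Names and figures are
# hypothetical, not the actual Gulf of Mexico red snapper program.

class QuotaAccount:
    def __init__(self, holder, share_pct, total_allowable_catch_lbs):
        self.holder = holder
        self.share_pct = share_pct
        # Annual allocation = percentage share of the total allowable catch.
        self.balance_lbs = share_pct * total_allowable_catch_lbs

    def land(self, pounds):
        """Record a landing; it must be covered by quota in the account."""
        if pounds > self.balance_lbs:
            raise ValueError(
                f"{self.holder} lacks quota: {pounds} lbs landed, "
                f"{self.balance_lbs:.0f} lbs available"
            )
        self.balance_lbs -= pounds

    def transfer_to(self, other, pounds):
        """Lease or sell quota to another account (a simplified trade)."""
        if pounds > self.balance_lbs:
            raise ValueError(f"{self.holder} cannot transfer {pounds} lbs")
        self.balance_lbs -= pounds
        other.balance_lbs += pounds


# Example: a 1% shareholder under a hypothetical 5-million-pound cap.
tac = 5_000_000
boat_a = QuotaAccount("Boat A", 0.01, tac)   # 50,000 lbs to start
boat_b = QuotaAccount("Boat B", 0.002, tac)  # 10,000 lbs to start

boat_a.land(30_000)                 # fish when prices are good
boat_a.transfer_to(boat_b, 15_000)  # lease out pounds it will not use
print(boat_a.balance_lbs, boat_b.balance_lbs)  # 5000.0 25000.0
```

In this toy run, Boat A lands 30,000 pounds, leases 15,000 pounds to Boat B, and ends the season with 5,000 pounds of unused quota; any attempted landing beyond the balance is rejected, which is the accountability the program relies on.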

The results from the first year of the red snapper catch share program were dramatic. In just one year, there was at least a 40% drop in the waste from dead or dying snapper tossed overboard. In previous years, regulations had required fishermen to throw away fish caught at the wrong time or that were the wrong size, even though most discarded red snapper did not survive. Also, the price fishermen are getting is up 30%, because they are bringing fish to market when the fish are at their highest value. A quota share now sells for about three times the price of the fish it represents, a sign that fishermen are optimistic that the value of the fishery will keep going up.

As managers get more experience, they can deal with fisheries in which different species are caught together. The British Columbia groundfish catch share program covers 27 different species such as flounder or cod that live on or near the bottom. Fishermen need to have or buy a quota to cover all of the fish they catch. If they are short on quota, professional quota brokers are available to organize real-time trades between vessels. Because the catch share program in their fishery has reduced the race for fish, the captains of these trawlers, which may range from 50 to 100 feet in length, have been able to develop creative new approaches to maximize the value of their landed product, avoid expensive entanglements with deepwater corals and sponges, and increase the selectivity of their fishing practices for the target species.
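As a rough sketch of the multispecies bookkeeping just described, and under the assumption (mine, not the article’s) that quota coverage can be checked species by species after a trip, the short Python example below flags the species for which a vessel’s catch exceeds its holdings; a quota broker would then arrange a trade to cover the shortfall. The species names and quantities are hypothetical.

```python
# Illustrative sketch: checking multispecies quota coverage, in the spirit of
# the British Columbia groundfish program described in the text. All species,
# quantities, and helper names here are hypothetical.

def quota_shortfalls(holdings_lbs, catch_lbs):
    """Return the pounds of each species caught beyond the vessel's quota."""
    return {
        species: caught - holdings_lbs.get(species, 0)
        for species, caught in catch_lbs.items()
        if caught > holdings_lbs.get(species, 0)
    }

# A vessel's quota holdings and its catch for the trip (pounds).
holdings = {"arrowtooth flounder": 12_000, "Pacific cod": 8_000, "rockfish": 500}
catch = {"arrowtooth flounder": 9_500, "Pacific cod": 8_000, "rockfish": 900}

short = quota_shortfalls(holdings, catch)
print(short)  # {'rockfish': 400}: the vessel must buy or lease 400 lbs of
              # rockfish quota (for example, through a broker) to cover its landings.
```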

Looking at what works

Environmental Defense recently conducted a study to examine how well catch share programs have performed in the 10 fisheries in the United States and British Columbia in which they have been implemented. (The Redstone Strategy Group performed the quantitative evaluations of the industry as a whole, as well as each existing catch share program.) The fisheries analyzed were Alaska halibut, Alaska pollock, Alaska sablefish, Alaska king crab, mid-Atlantic surf clam and ocean quahog, South Atlantic wreckfish, Pacific whiting, British Columbia sablefish, British Columbia halibut, and British Columbia groundfish caught by trawling. Our findings support earlier studies showing that catch share programs effectively end the persistent overshooting of scientific targets for safe levels of fishing while cushioning the impacts on fishermen. The study also reveals five key areas in which catch shares improve management outcomes.

Overharvesting. Under traditional management, a fishery may have an official catch limit, but fishermen do not. Because they are not held directly accountable for exceeding a limit, they will fish until told to stop. Real-world time lags in reporting and enforcement lead to overages in collective catch targets. Regulators must then allow less catch the following year if the fish population is to recover. For the group of stocks analyzed, before catch shares were introduced, annual catch targets were exceeded in half of all fishing seasons. Sometimes the excess was small, but for some British Columbia groundfish species the caps were exceeded by as much as 60%. After catch shares were introduced, overages essentially disappeared. In fact, landings were, on average, about 5% below the annual cap.

Bycatch and habitat destruction. The amount of nontarget species caught each year—not just fish, but sea turtles, dolphins, corals, and sponges—is staggering. Worldwide, about one-fourth of the total catch is thrown back, much of it dead or dying. Naturally, fisheries managers and fishermen themselves would like to eliminate this wasteful and costly inefficiency. However, when regulators attack the problem indirectly, the fishermen’s incentive is to obey the letter of the law but still find ways to catch as much as fast as possible. The result is continued bycatch, although sometimes of different kinds of sea life than before. When fishermen can reduce costs by avoiding bycatch or, even better, are held directly accountable for reducing bycatch, they do so.

Bycatch decreased in every catch share fishery examined in the study, by 40% on average, and the programs collectively prevented the waste of enough seafood to feed 16 million people in the United States for a year. The fishermen also deployed 20% less gear to catch the same amount of fish. The incentive to reduce gear use under catch shares exists because success depends on efficiency instead of speed. To fishermen, less use of gear means less capital expenditure and lower input costs in the form of bait, lost gear, and labor. For the environment, less gear means a lower likelihood of harmful interactions with habitat and wildlife.

AS NEW CATCH SHARE PROGRAMS COME INTO BEING, FEDERAL AND STATE FISHERY MANAGERS SHOULD CONTINUE TO GATHER DATA TO DETERMINE WHETHER THE SYSTEMS ARE ACHIEVING ENVIRONMENTAL, SOCIAL, AND ECONOMIC GOALS.

Economics. Typical business conditions in a traditionally managed fishery consist of a boom time during the initial years, followed by economic dislocation as fishing capacity is drawn into the fishery, the stock is overfished, and managers restrict fishing operations and lower total allowable catches. Most U.S. fisheries are overcapitalized: There is too much fish-capturing ability (vessels and gear) to efficiently harvest the level of allowable catch. The fishermen are then caught in a catch-22. A fisherman might be using only, say, half of his fishing capacity and would prefer to downsize his boat or gear, thus saving money on boat financing and operating costs while catching the same amount of fish. But if other fishermen in the fleet do not do the same, their bigger capacity will scoop up the fish faster than he can in the race against the clock. The result is that our fisherman makes the best choice under a tough set of circumstances and stays in the race. To make matters worse, when the entire annual supply of fish comes to market over a short period, buyers can offer low prices. Additionally, the intense competition to land fish quickly makes it impossible for fishermen to pursue quality-oriented instead of quantity-oriented business models.

When the regulatory driver of overcapacity was removed, revenues per boat increased 80% on average in the fisheries studied. Although the total number of hours worked remained steady, there was a move from a part-time labor market to more stable full-time fishing jobs, and fishermen reported greater satisfaction with the quality of employment. Consumers also benefited from greater availability of fresh fish.

Safety. Under traditional management, regulators close the season once the annual target has been caught, creating an intense race for fish. With each boat competing to land its fish first, fishermen need to work quickly, sometimes in dangerous conditions such as bad weather. Catch shares largely eliminate the race for fish, allowing fishermen to avoid the most dangerous conditions. The results reflect those changes: In the Alaska halibut and sablefish fishery, the annual number of search-and-rescue missions decreased by more than 70% after catch shares implementation, and fatalities dropped 15% over five years.

Fisherman/manager cooperation. A study by researchers at the University of British Columbia showed that fully rebuilt fisheries in the United States would generate almost 300% more revenue per year than the current depleted stocks. However, because fishermen are not now guaranteed a share of the eventual gains, they have been loath to agree to the immediate catch reductions, monitoring, and enforcement needed to rebuild stocks. A typical observer’s response is to question why the government does not just “make them do it.” In fact, there are success stories in which regulators have put in tough new conservation measures over fishermen’s opposition. The problem is that the political dynamics of managing fisheries are like those associated with any other major regulated business in this country. Relying exclusively on the hammer of big government is not a sustainable public policy. In fact, it runs deeply counter to the very underpinnings of the nation’s views on the ideal relationship between a people and their government.

In fisheries, the fact that the regulatory tool kit lacks tools that work with, instead of against, fishermen’s incentives has set up epic struggles among fishermen, the government, and conservation. The result has been limited progress toward reducing overfishing.

ALTHOUGH THE IDEA BEHIND CATCH SHARES IS A SIMPLE ONE, DEVELOPING AND IMPLEMENTING SUCH A PROGRAM IN A GIVEN FISHERY INVOLVE COMPLICATED POLICY CHOICES.

Introducing catch share programs appears to lead to a significant improvement in the dynamic between fishermen and fishery managers. Nearly three-quarters of catch share fisheries have some kind of catch-monitoring system, compared with just one-quarter of non–catch share fisheries. One reason for a fisherman’s newfound willingness to accommodate observers is that strong monitoring decreases the chance that his competitors can cheat. Fishermen also are more cooperative in official scientific efforts. Catch shares provide an incentive for fishermen and fisheries managers to collect data on stock sizes, not least because the larger the stock, the greater (usually) the total allowable catch, and so the greater the harvest permitted to each quota holder. As one example of observed improvements, the uncertainty around biomass estimates in these fisheries dropped from an average margin of error of ±43% before the implementation of catch share programs to ±27% after. In British Columbia, a group of forward-looking quota owners actually lobbied the Department of Fisheries and Oceans to close new areas to fishing to protect important habitat for juvenile fish.

Helpful steps

Both theoretically and empirically, catch share programs improve management outcomes in ending overfishing and raising the economic prospects of fishermen. But although the idea behind catch shares is a simple one, developing and implementing such a program in a given fishery involve complicated policy choices. Regulators must tackle issues such as how to cost-effectively monitor individual catches, how to weigh the interests of historically fishing-dependent communities, and how to allocate catch privileges among fishermen. In fact, these issues can be and increasingly have been dealt with effectively, especially in newer catch share programs, during the policy design process at the individual fishery level. Still, outdated criticisms sometimes scare fishermen and regulators away from exploring catch shares in their fisheries. This is especially unfortunate because fishermen in fisheries where catch share programs have been implemented report strongly positive experiences with the shift. Experience with catch share programs reveals some important lessons about how contentious issues can best be addressed in the policy design process.

Initial quota allocation. Among the thorniest issues is how to divide the catch among participants in the fishery. Initial shares can be extremely valuable, worth tens or hundreds of thousands of dollars per fisherman. Some economists argue that the best method for distributing shares is to auction them off. In cases in which fishermen would accept this process, auctions may well be the preferred method. But in most cases to date, managers have instead allocated shares to fishermen who have been fishing that stock locally. Determining who gets these initial rights is understandably critical to participants in a fishery. Most fisheries have based initial allocation on records of historical catches, with some consideration given to a fisherman’s current level of investment. The exact allocation formula varies, but the key is to ensure an open process that accounts for legitimate interests and maintains the conservation incentives in catch shares programs. Fishery managers have the responsibility to identify the key values and characteristics they wish to maintain in a fishery and ensure that the quota allocation rules are consistent with those overall goals.

Catch monitoring. Programs must incorporate cost-effective monitoring of catch and bycatch. Some fisheries with minimal bycatch and habitat effects, such as the mid-Atlantic surf clam and ocean quahog fishery, can be effectively monitored simply by counting product landed at the dock. More complex fisheries, such as the British Columbia groundfish trawl, require onboard observers or new technologies, such as video monitoring systems and satellite tracking, to track bycatch and discards and to prevent fishing in closed areas. However, the need for effective systems is not limited to catch share management: All fisheries management programs need monitoring and enforcement systems. Regulators have had success in increasing monitoring during the transition to catch shares for several reasons. Among other things, the increased revenue per boat means that fishermen have the resources to spend on mandated monitoring improvements, and understand the benefit of doing so, as part of the deal to implement catch shares. Regulators should optimize efficiency while meeting information needs for each fishery.

Socioeconomic transitions. Implementing a new catch share program can create changes in local economies that probably will need to be addressed in advance. Fishing communities built on part-time labor will need help in transitioning toward fewer, more professional (full-time) jobs. Businesses that organized around pulses of fish landings need to restructure for lower but steadier streams of higher-quality fish. There is a range of design choices that can address key socioeconomic issues while meeting a program’s goals. Some catch share fisheries, such as the recently adopted tradable quota program for red snapper in the Gulf of Mexico, have chosen to allow any individual fisherman to own no more than 6% of the total allowable catch in order to prevent excess consolidation of quota ownership.

Other successful change-management initiatives have included setting aside a percentage of quota for indigenous or historically fishing-dependent communities (several fisheries in Alaska and Canada); establishing loan funds to aid purchases of quota by new entrants (Alaska halibut and sablefish); and designating some quotas as “community development quotas” whose earnings must be used to improve communities through investments in education, infrastructure, and fisheries-related industries.

Stakeholder education. When people’s livelihoods are at stake, they need to feel secure that any proposed changes are going to be an improvement. Policymakers proposing a catch share program must make sure that the stakeholders in the fishery have access to information about experiences in other fisheries so they can better envision potential changes to their own livelihood. For many fisheries, existing pressures are already driving fishermen out of business, and fewer and fewer fisheries can expect the status quo simply to continue. Thus, fishermen need access to realistic assessments of alternative futures: with catch shares and without. Education should focus on potential benefits and pitfalls and on strategies that have worked to build successful catch share programs elsewhere.

Environmental improvement. Policy designers must have clearly articulated environmental goals. When establishing catch share programs, regulatory councils need to establish a hard limit on total catch (if such a limit does not already exist) and enforceable limits on bycatch. In addition, the process of designing a catch share program provides an excellent opportunity for all stakeholders to look at which areas within the fishery should be off limits for the long-term benefit of fishermen and the environment.

Streamlining design. In the past, the design process has often taken many years. This challenge can be solved if NOAA and each of the eight federal regional fishery management councils, which are the key decisionmakers, set clear timetables and establish small groups of representative stakeholders who are directed to design catch share systems. In addition, staff support must provide technical analyses quickly to keep the design process moving.

Revenue responsibility. Catch shares release wealth. Fishery managers must carefully consider how best to tap the additional revenue generated by a catch share program to run the system, improve data collection, achieve the social objectives of particular communities, and increase the levels of monitoring, enforcement, and scientific research. At the same time, this additional revenue is a key motivator for fishermen to embrace change. Burdening catch share programs, especially in their early years, with high fees or other revenue recovery mechanisms can therefore be counterproductive.

Adaptive management. As new catch share programs come into being, federal and state fishery managers should continue to gather data to determine whether the systems are achieving environmental, social, and economic goals. Allowing for changes in the management system as new information and best practices emerge will be another important key for successful catch share program management.

Overfishing is a key threat to the world’s oceans, and Congress has taken strong action by mandating an end to overfishing in U.S. fisheries by 2010. Between now and then, NOAA and the regional fishery management councils will need to put in place caps on total allowable catch, along with management plans to ensure that the caps are met. Current management approaches would meet those caps with ever more complex limitations on fishermen, as regulators try to stay one step ahead. But experience has shown that trying to achieve these caps using current regulatory approaches results in a less efficient fishing industry, increased collateral environmental damage, and a decrease in safety in the country’s most dangerous industry.

The nation cannot afford this, and there is a better choice. The data are clear that adding catch shares to fisheries management plans is the best way to meet fishing caps and end overfishing; catch shares create clear and effective accountability built on a foundation of positive incentives for fishermen. That powerful combination of accountability and incentives should be the default option in every fishery management plan.

Living Legos

Benjamin, I have one word for you: syntheticbiology. Of course, there is no need to update The Graduate, and that is really two words, but rewriting that line is a national pastime, and if we can string together small stretches of DNA to create a new organism, why fret over mashing a few words together.

Synthetic biology is not new. Scientists have been piecing together short stretches of DNA for decades. But as a recent report points out, advances in the speed with which DNA can be assembled and the growth of a commercial industry that produces short and not-so-short strands of DNA compel us to confront the implications of this technology becoming available to a much larger number of people. Synthetic Genomics: Options for Governance (www.jcvi.org/research/synthetic-genomics-report/) provides a thoughtful and useful framework for finding a path that will make it possible to tap the spectacular potential for useful applications of this technology while protecting society from its accidental and deliberate misuse.

Although the report was prepared by the J. Craig Venter Institute, the Center for Strategic & International Studies, and MIT, its deeper heritage is the dearly departed Office of Technology Assessment. Several of the authors and core advisory group members are OTA veterans, and the analysis is quite explicit in its intention to follow the OTA practice of presenting options, not recommendations. But borrowing a page from Consumer Reports, the report includes a chart in which the options are evaluated for their likely effectiveness with a range that extends from solid circles for Lexus-like dependability to empty circles for the policy equivalent of your father’s Oldsmobile.

Progress in the field has reached a pace that would make Gordon Moore’s head spin. In the early 1970s, Har Gobind Khorana and a team of 17 colleagues spent years assembling a gene of 207 base pairs. In the 1990s, a large team with plenty of time could assemble a gene of 2,700 base pairs. In 2002, a team led by Eckard Wimmer spent about a year assembling an infectious poliovirus of 7,400 base pairs directly from nucleic acids. A year later, a Venter Institute group constructed a virus of 5,400 base pairs in only two weeks. Today, 24 U.S. firms and an additional 21 across the globe are building and selling segments of DNA as long as 52,000 base pairs.

What once took a team of top scientists years to achieve can now be ordered with a phone call. And the stretches of DNA that are purchased can be cobbled together in a variety of ways using commonly available laboratory equipment. It’s not quite as simple as Legos yet, but one can readily imagine a day when amateurs could assemble genes in their garages. Perhaps the thought of alienated young gene hackers and retired boomers experimenting with new life forms does not lift your spirits. If not, don’t even start to think about how a terrorist might use this capability.

So maybe it’s time to think about how to govern this technology. Is there a way to enjoy the new medicines, materials, and sustainable transportation fuels that might become realities through synthetic biology without living in fear of environmental disasters or planned epidemics?

The work has already begun. The National Academies have issued several reports that touch on this question, and their Committee on Science, Technology, and Law is planning a workshop on the subject in 2008. The Department of Energy’s Biological and Environmental Research Advisory Committee recommended action, and a group at the University of California at Berkeley proposed voluntary steps the research community should take. Participants in the international Synthetic Biology 2.0 conference called for addressing security concerns, and the industry association of firms that produce and sell DNA segments is exploring what the companies should do to screen orders to ensure that they know when they are being asked to synthesize DNA from a dangerous pathogen.

As the Synthetic Genomics report explains, the broad range of concerns that should be addressed include “cultural and ethical concerns about manipulating life, economic implications for developed and developing regions, issues related to ownership and intellectual property, concerns about environmental degradation, and potential military uses.” But since the September 11, 2001, terrorist attacks, the prospect of terrorists developing and releasing a virus or other biological weapon has generated the most intense anxiety.

The report finds that the construction of a virus through synthetic biology is still so difficult that it is not a terrorist threat today. Looking ahead, the report concluded that “Over the next five years constructing an infectious virus will remain more difficult than obtaining it from nature or from laboratory stocks, with a few important exceptions [smallpox, ebola/Marburg, 1918 flu, and foot-and-mouth disease]. In ten years, however, the situation might be reversed.” Now is the time to begin preparing safeguards that will make this possibility less likely.

The report focuses on three areas of concern: enhancing biosecurity to protect against terrorists, fostering laboratory safety to safeguard lab workers and nearby communities, and protecting the environment. This is a good beginning, but as the report acknowledges there will be numerous other issues to address: government-supported R&D activities over which the United States has no control, ethical questions, related general biotechnology topics, and the adequacy of public health systems to deal with an accident or attack, to name a few.

But if you are wondering if it is worth opening this can of worms, consider the even more mind-boggling list of possible benefits. Synthetic genomics will open innumerable pathways for basic genetics research. Vaccine development and production could become much more efficient with the ability to make subtle DNA-level changes. Indeed, synthetic biology could make an enormous difference in all drug development and manufacturing. It could play a vital role in developing a cost-effective means to produce cellulosic ethanol and in manufacturing alternatives to many petroleum-based products. Fittingly, it might make it possible to replace the plastics that Benjamin was originally advised to pursue.

Science and technology policy sometimes seems mired in a Mobius strip. Debates about nuclear weapons, science and math education, evidence-based medicine, the role of government in applied research, and new energy technologies can make one feel condemned to an existential hell of reruns. Synthetic biology, for good and ill, offers something new. Although it raises some profound and long-discussed questions about human hubris, it also opens the door to a new world of potential risks and benefits. Now is the time to roll up our sleeves and get to work.

Open Access to Research for the Developing World

Kofi Annan, then secretary-general of the United Nations, noted in 2002 that “[A] wide consensus has emerged on the potential of information and communications technologies (ICT) to promote economic growth, combat poverty, and facilitate the integration of developing countries into the global economy. Seizing the opportunities of the digital revolution is one of the most pressing challenges we face.”

The intervening five years have seen a rapid expansion in the reach of digital technology to encompass much of the developing world. Top-down efforts such as the One Laptop per Child initiative, now commencing production of its sub-$200 laptop, represent one approach. Arguably more significant, however, is the change that is being driven from within developing countries. In the area of mobile telephony, for example, Africa has generally been neglected as a marketplace by the major international telecom companies, but this has not prevented domestic mobile phone companies from adding subscribers at a spectacular rate. The number of mobile phones in Africa has doubled in the past two years, and there are now more than 200 million mobile phone users on the continent: 10 times the number of landlines. Although cellular modems are not the ideal way to connect to the Internet, this is nevertheless an enormous leap in access.

As a result of these trends, developing countries are now more connected than ever before, and the digital infrastructure that now exists has the potential to transform access to knowledge. The primary obstacles are no longer technological but are related to issues of content licensing, distribution, and access control.

Access to knowledge is clearly a fundamental requirement for development. It is difficult to see how the following United Nations Millennium Development Goals can be effectively achieved without ensuring that developing countries have access to the latest relevant scientific and medical knowledge:

  • Reduce child mortality
  • Improve maternal health
  • Combat HIV/AIDS, malaria, and other diseases
  • Ensure environmental sustainability
  • Develop a global partnership for development

In particular, a global partnership aimed at addressing development issues would be hugely facilitated by a “knowledge commons” to ensure that developing and developed country researchers are not operating in isolation from one another.

Solutions to problems in the developing world depend on full and effective collaboration between those working in the developed and developing worlds. Leaving low-income countries to fend entirely for themselves in the face of problems that can be addressed using current scientific knowledge is not an ethically or morally acceptable choice. But nor is “parachuting in” solutions that have been developed entirely in the developed world without reference to local knowledge. The history of development cooperation is full of examples of attempted solutions imposed by the North on the South that lack the participatory elements crucial for socioeconomic acceptance and uptake.

In combination with appropriate local skills and expertise, online access to the latest research can help low-income countries not only deal with practical priorities in areas such as public health and agriculture but also provide a vital starting point to developing their own research capacity. M. S. Swaminathan, a key participant in India’s Green Revolution and now an active proponent of the role of access to knowledge in development, warns that “Many developing countries remain poor largely because they let the Industrial Revolution pass them by. They can ill afford to miss the information technology revolution.”

The importance of ensuring that developing countries have access to the latest medical research was recognized by the World Health Organization in 2000, and this led to the HINARI initiative, a partnership with research publishers that provides free or low-cost online access to medical journals for researchers working in the poorest countries. The HINARI model has spurred similar initiatives in agricultural research (AGORA) and environmental research (OARE).

Undoubtedly, initiatives such as HINARI have significantly improved access to research in developing countries, but they offer only a partial solution. They represent a discretionary concession by publishers, allowing certain countries limited access to some content for a period of time, while the publisher typically retains exclusive rights over that content and determines how it may be used. For example, research distributed under such schemes typically may not be reprinted without special permission (for instance, to allow distribution to sites without Internet access), nor may derived works (such as educational material) be created and distributed. Such initiatives also fail to address the access problems in countries with large economies, such as Brazil, India, China, and South Africa, which have low per-capita incomes but are nevertheless generally excluded.

In addition, actual access has not met promises. A recent study published in BMC Health Services Research noted that users in African countries reported problems in accessing journal content through HINARI due to the technical requirements for login and authentication. Furthermore, because HINARI focuses on providing access at the institutional level, it does not fully address the access needs of practitioners, journalists, policymakers, and others who may not be affiliated with major institutes.

Research librarians in all countries are familiar with the increasing tendency of users to rely on Google and other Internet search engines for discovering information. Any system that aims to improve access to knowledge for developing countries must take this into account. Thus, accessibility should not depend on articles being accessed via a special portal or proxy server, or via complex authorization schemes. The simplest and most reliable way to ensure that knowledge is available where and when it is needed is to avoid access barriers altogether through a universal open-access model.

In the past decade, there has been a revolution in how we think about licensing digital information. Open licenses, which explicitly allow redistribution, reuse, and the creation of derivative works, have proved to be an extremely effective way to maximize the value to the community of digital resources.

The Linux open-source computer operating system and MIT’s Open CourseWare initiative are notable examples, as is Wikipedia, the free and open encyclopedia. In just a few years, Wikipedia has grown to become an invaluable knowledge resource, including more than 2 million articles in English and 5 million more articles in 200 other languages. Despite being a small nonprofit organization, Wikipedia is now one of the world’s top 10 most-accessed Web sites, demonstrating what can be achieved by bottom-up, openly licensed efforts. Wikipedia’s model of open contribution and distributed editorial control means that it is best seen as a complement to, rather than a replacement for, authoritative information sources. However, the massive use of Wikipedia, and the richness of its content, demonstrate a global thirst for knowledge that has not been addressed by traditional alternatives and provide encouragement for those seeking to create open models for the publication of peer-reviewed, editorially filtered content.

To facilitate the exchange of openly licensed resources, the organization Creative Commons has created a set of standard licenses, designed to be human- and computer-readable, that are now used by publishers and content creators to flag open content that is available for reuse. An increasing number of open-access scientific and medical journal publishers, including BioMed Central, Public Library of Science, and Hindawi, are embracing the use of open licenses for research publication. Researchers, after all, want their published research to be as widely read and cited as possible and for others to build on the results of that research. The same is true of the funders who pay the underlying costs of the research.

The Budapest Open Access Initiative declaration, drafted in 2002 by a group of publishers and other members of the research community, noted that open access to published scientific research had the potential to be an “unprecedented public good,” analogous to other open resources such as the human genome sequence and the global positioning system. In all cases, the openness of the resource not only ensures wide usage, but also stimulates innovation by allowing anyone to develop new value-added services that make use of these resources.

Alongside the growth of open-access journals, another important trend in expanding access to published research has been the development of open digital repositories, operated by research institutions and funders, which provide open access to deposited copies of research articles, including those published in traditional subscription-only journals. In response to pressure from funders, most conventional publishers now allow at least the author’s manuscript version of published articles to be made openly available in this way, once an embargo period of 6 or 12 months has elapsed. Although delayed access to a nonauthoritative version is not ideal, it is better than no access at all and represents a stepping stone toward full and immediate open access.

The most well-known open digital repository is the U.S. National Institutes of Health’s (NIH) PubMed Central. This has recently been joined by UK PubMed Central, a mirror repository operated by a consortium of UK biomedical funders that includes the Wellcome Trust and the Medical Research Council. As of October 2007, NIH only requests deposit from authors, whereas the UK funders require their grantees to deposit research. Recent legislative initiatives to strengthen the NIH policy into a requirement have received overwhelming bipartisan support in both houses of Congress, despite strenuous opposition and lobbying from traditional publishers keen to preserve the status quo.

Traditional publishers sometimes suggest that open-access deposit policies such as that proposed by NIH threaten to undermine peer-reviewed research publishing, but the growing success of peer-reviewed journals operating on a fully open-access model provides a powerful counterargument to these claims. Open-access journals generally cover their costs through publication fees (typically paid by the author’s funder or institution) instead of charging for subscriptions. Most open-access journals waive publication charges for authors from developing countries. Many open-access journals based in developing countries receive central institutional support, in which case there may be no need to charge fees to either authors or readers.

An unfortunate consequence of the traditional publishing system has been that in order to seek maximum global visibility and publicity for their work, leading researchers working in developing countries have until recently had no choice but to publish in developed-country journals, even though this has meant that readers in their own country would not have easy access to published results. However, the open-access publishing model is already improving this situation. D. K. Sadu, editor of the Indian Journal of Medical Sciences, published by the Indian open-access publisher MedKnow, refers to the challenge faced by subscription-based journals in developing countries as the “circle of limited accessibility.” These journals typically have limited circulation, which leads to poor visibility and readership. As a result, the journals receive limited recognition and few citations, and this means they attract few authors, few subscriptions, and low circulation—thus closing the circle. The transition of MedKnow’s journals, since 2001, to an open-access model has established a “circle of accessibility.” Manuscript submissions have multiplied severalfold, and the rate of citation of previously published articles from MedKnow’s archive, now easily accessible for the first time, is reported to have increased fivefold. In a short time, open access has thus allowed domestic journals in low-income countries to acquire an international reputation and audience.

Other organizations working on open-access initiatives in developing countries report similar success. Bioline, a joint Brazilian/Canadian project that helps journals from 24 low-income countries make articles freely available online, reports that the annual number of downloads of full text articles from its Web site increased from just 27,000 in 2000 to 2.5 million in 2006. This traffic, which comes from developing and developed countries all over the globe, emphasizes the ability of open access to bring researchers in different countries into a single connected community.

The Directory of Open Access Journals (DOAJ), based at Lund University in Sweden, tracks the rapid increase in the number of journals offering immediate open access to all research articles. Of the 2,700 journals listed in the DOAJ, a substantial fraction come from developing countries, including 222 journals from Brazil and 87 based in India.

The International Network for the Availability of Scientific Publications (INASP) is a nonprofit organization that takes a two-pronged approach to increasing access to scientific research in developing countries. Like HINARI, it negotiates to provide developing countries with low- or zero-cost access to existing journals where possible. In addition, INASP works with domestic journals in developing countries to enhance their online presence and accessibility. INASP assists journals in the setup and use of the Public Knowledge Project’s Open Journal Systems (OJS), which provides an attractive starting point for groups in developing countries wishing to operate open-access journals. For example, the African Journals Online project uses OJS to operate 285 journals.

The appeal of the open-access publishing model in research areas highly relevant to developing countries is strikingly demonstrated by the example of Malaria Journal, an open-access journal launched in 2002 that is already a global leader in its field. Starting with 19 articles in its first year, the journal has grown dramatically to 170 articles in 2007. What led malaria researchers to embrace this new online journal so rapidly? It is clear from author feedback that the journal’s policy of universal access, allowing not only researchers in all countries but also journalists, nongovernmental organizations, public health authorities, and educational institutions to have access, has played a central role. Currently, around 8% of the new articles appearing in PubMed are immediately and freely available online. In the field of malaria, that figure is closer to 20%, demonstrating the level of enthusiasm for open access in this field.

Although malaria kills millions of people each year, it did not attract the interest of traditional publishers because most of the deaths are in the developing world where the market for scholarly journals is very small. This meant that even as philanthropic foundations and governments were channeling increasing funds toward research of relevance to developing countries and global health issues, the traditional publishing model lacked effective means to communicate the results of that research to those to whom it was most relevant. The success of Malaria Journal indicates that there was significant demand for information about the disease even though there was limited money to pay for it. And the journal has achieved high quality as well as popularity; its 2006 impact factor of 2.75 is the highest of any journal in Thomson Scientific’s Tropical Medicine category. As Pascalina Chanda of Zambia’s Malaria Control Centre has noted,

“Previously . . . it was almost impossible to know the latest in malaria research unless you read an abstract or an institution got some hard copies which always arrived a month or more after publication. [Open access] helps in providing the much-needed information on topical issues and one can learn from diverse methods and geographical settings and be able to participate in the global debate on health issues and also provide quality policy information. It also enables us from the developing world to publish our research findings and share the information with other researchers globally.”

The success of Malaria Journal demonstrates that open access can make high-quality research accessible to a global audience and can provide a platform for collaboration between researchers in the developed and developing worlds. The challenge now is to extend this success to other areas and to fully exploit the potential of open access to research to aid international development efforts.

To do this, several actions are necessary:

  • Subscription-only journals that publish research of relevance to developing countries should eliminate the barriers that still prevent many of those in developing countries from having access to that research.
  • Funders whose focus is global health should ensure that as a condition of funding, grant recipients are required to make the results of their research universally accessible. The policy of the Wellcome Trust, a significant player in research on global health issues, is exemplary in this area.
  • Research institutions should ensure that their authors are not discouraged by structural financial disincentives from making their research openly accessible. Institutions currently support subscription journal publishing through central library budgets. If open-access publishing is to compete on a level playing field, similar central support must be available to cover the cost of open-access publication. Many institutions are now setting up central open-access funds, supported by indirect costs received from funders. This is a promising model as it provides a scalable and sustainable basis for open access to the results of research.
  • Researchers working in fields of relevance to developing countries should investigate the many options for publishing their research in a way that guarantees fully universal access.
  • Last, those involved in international development efforts must consider how best to work with local communities to make effective use of the additional sources of knowledge that are now becoming available. Traditionally, the success of a research journal has been measured in terms of the number of citations that are generated. But in the case of medical research, for example, the goal is not simply to stimulate further research but to generate positive public health outcomes. We need to develop the means to measure and enhance the real impact of medical research in the developing world.

Archives – Winter 2008

KAY JACKSON, Industrial Clouds, Oil with gold and copper leaf on canvas, 34 X 38 inches, 2003.

Industrial Clouds

The paintings of Kay Jackson, an artist based in Washington, DC, address a wide range of environmental concerns including overpopulation, pollution, loss of habitat, and endangered species. In this painting from the National Academy of Sciences’ collection, industrial architecture becomes a symbol for pollution and human interaction with the environment. By using gold and copper leaf under the paint, the artist alludes to the use of the Earth’s elements to convey a message that is both mournful and intriguing.

In 1997, Jackson was commissioned to paint a nocturne of the White House as the official holiday card for President and Mrs. Clinton. Jackson’s work will be featured in a solo exhibition in February 2007 at Addison Ripley Fine Art, Washington, DC.

For current exhibition information and to view other work by Kay Jackson, visit kayjacksonart.com.

The political Einstein

If you’ve thought of Albert Einstein as he’s so often pictured by news media—as that famously tousle-haired, remote genius off in his own abstract world—then Einstein on Politics offers some surprises. A 1946 Time cover image set E = mc² in a mushroom cloud behind “Cosmoclast Einstein,” who stares blankly at the reader. When Time proclaimed Einstein its “Person of the Century” in 2000, it bolstered his stereotype as “the embodiment of pure intellect, the bumbling professor with the German accent, a comic cliché in a thousand films.” True, the newsmag did credit Einstein for having “denounced McCarthyism and pleaded for an end to bigotry and racism,” yet still dismissed him as politically “well-meaning if naïve,” an opinion shared widely today.

Einstein’s scientific genius actually made it hard for us to learn his political views. Intimidated by his brilliant insights into things beyond our ken, we hesitated to seek his political counsel. And Einstein knew his own limitations, admitting in 1930 that “My passionate interest in social justice and social responsibility has always stood in curious contrast to a marked lack of desire for direct association with men and women.”

Yet, from his days as a young academic in Europe to the end of his illustrious life in the United States in 1955 at age 76, Albert Einstein was a committed and often clever advocate for human dignity and the need for creative freedom. He was also a forceful writer and speaker, who pushed for world peace and against fascism and militarism when few other scientists even bothered.

Today, we respect Einstein for his opposition to the spread of nuclear weapons, but he is still best known for one famous political act: In 1939, he signed a letter to President Franklin Roosevelt that warned about German nuclear research and urged a U.S. response. Einstein played no other role in the Manhattan Project that built and deployed the A-bomb, was shocked when it was used, and crusaded against it ceaselessly. In his last political act, a week before he died, Einstein signed with Bertrand Russell a manifesto calling on the world’s scientists to renounce work on weapons of mass destruction. That challenge led to the Pugwash Conferences on Science and World Affairs and their persistent arms control initiatives, which flourished during the Cold War and continue today.

An earlier book, Einstein on Peace, published in 1960, revealed this creative and troubled man’s abiding pacifism along with his often fruitless efforts to create a more peaceful world. Now, with Einstein on Politics, we have a more accessible companion volume that reveals both the man himself and the many ways he tried to bend politics and politicians to achieve his grandly peaceful goals.

In 192 items, we discover Einstein reacting, conspiring, brooding, and proclaiming—often in pointed detail—his need to shape political events. Einstein’s letters to trusted colleagues, to newspapers, and to world leaders reveal intensely personal convictions and insights. His speeches, interviews, book forewords, statements, and manifestos all show us a mind and heart intent on making the world a safer, saner place. Einstein’s moral outrage is especially crisp in his “Manifesto to the Europeans” at the outbreak of World War I. “Not only would it be a disaster for civilization but . . . a disaster for the national survival of individual states . . .” he warned, “in the final analysis, the very cause in the name of which all this barbarity has been unleashed.”

The editors have crafted useful introductions and have identified in Einstein’s life three important political periods. First came imperial Germany’s collapse, from 1919 to 1923, when Einstein’s hopes for world peace spurred his efforts to halt militarism. From 1930 to 1932, Einstein’s second phase of intense political activity, he visited the United States to speak and write about Wilsonian democratic ideals and against U.S. isolationism. This effort ended with his remorse over failure at the 1932 Geneva Disarmament Conference and his acceptance that “militant pacifism” was no match for fascist advances in Europe.

Einstein’s third intense political surge began five months before A-bombs destroyed Hiroshima and Nagasaki, when he wrote, again, to President Franklin Roosevelt, this time warning about postwar consequences posed by the new weapons. Einstein shared with many the hope for a world government movement, first embodied in the new United Nations. He wrote often and spoke widely on radio and at public rallies about how nuclear weapons should impel nations to cooperate—or perish. He headed the Emergency Committee of Atomic Scientists to educate the public about the menace his colleagues had created. And in a poignant letter to fellow Americans in 1949, he denounced his new country’s racism.

Readers concerned with how science affects society should read three Einstein essays in this collection that bear special witness to his original and timeless insights and to science’s problems today.

First, read “The 1932 Disarmament Conference,” which appeared in The Nation in September 1931, in which Einstein wrote that “achievements of the modern age in the hands of our generation are as dangerous as a razor in the hands of a three-year-old child. The possession of wonderful means of production has not brought freedom—only care and hunger.” Warning about “the technical development which produces the means for the destruction of human life . . .” Einstein insisted “it is not the task of the individual who lives in this critical time merely to await and to criticize.”

Second, read “The War Is Won, but the Peace Is Not,” in which Einstein challenged a Nobel Prize anniversary dinner in December 1945 by comparing atomic scientists to Alfred Nobel, who had invented dynamite and later, to atone, instituted awards that promote peace and science. “Today,” Einstein said, “the physicists who participated in forging the most formidable and dangerous weapon of all times are harassed by an equal feeling of responsibility, not to say guilt.” With a possible world government in mind, Einstein told his fellow scientists that “the situation calls for a courageous effort, for a radical change in our whole attitude, in the entire political concept.”

Finally, read Einstein’s biting essay on “The Military Mentality,” which The American Scholar published in 1947. Here Einstein compared post-World War II America to Germany under Kaiser Wilhelm II. “It is characteristic of the military mentality,” he wrote, “that non-human factors (atom bombs, strategic bases, weapons of all sorts, the possession of raw materials, etc.) are held essential, while the human being, his desires and thoughts—in short, the psychological factors—are considered as unimportant and secondary.” Einstein warned that “In our time the military mentality . . . leads, by necessity, to preventive war. The general insecurity that goes hand in hand with this results in the sacrifice of the citizen’s civil rights to the supposed welfare of the state.”

This review can’t even begin to capture the range of Einstein’s political views. But the scope is suggested by the 10 chapter titles: The First World War and Its Impact, 1914-1921; Science Meets Politics: The Relativity Revolution, 1918-1923; Anti-Semitism and Zionism, 1919-1930; Internationalism and European Security, 1922-1932; Articles of Faith, 1930-1933; Hitler’s Germany and the Threat to European Jewry, 1933-1938; The Fate of the Jews, 1939-1949; The Second World War, Nuclear Weapons, and World Peace, 1939-1950; Soviet Russia, Political Economy, and Socialism, 1918-1952; and Political Freedom and the Threat of Nuclear War, 1931-1955.

Admittedly, Einstein sometimes rambled, as he did in letters to Sigmund Freud in the 1930s about the nature of human political aggression. Reading their exchanges here, you may wonder about Einstein’s—and Freud’s—grasp of realpolitik. Yet Einstein also framed and asserted vital realities, as when he warned in 1954 that the U.S. “fear of Communism has led to practices which have become incomprehensible to the rest of civilized mankind and exposed our country to ridicule.”

Einstein held a wry view of his own celebrity and his role with the news media. After facing a throng of reporters when he arrived in New York in 1930, he noted that they “asked particularly inane questions to which I replied with cheap jokes that were received with enthusiasm.” Yet Einstein could also be biting about competing political ideologies, as when he penned a poem on “The Wisdom of Dialectical Materialism, 1952”:

Through sweat and effort beyond compare
To arrive at a small grain of truth?
A fool is he who toils to find
What we simply ordain as the Party line.
And those who dare to express doubt
Will quickly find their skulls bashed in
And thus we educate as never before
Bold spirits to embrace harmony.

Still, as starkly as Einstein saw politics, he also saw hope. That hope shines throughout this volume. Consider turning to it when you’re seeking a surprise, because for all Einstein’s bumbling image and reputation, he is revealed here as a political thinker and activist in tune with, and often ahead of, his times.

The Whys and Hows of Energy Taxes

Current federal energy tax policy is premised in large part on a desire to achieve energy independence by promoting domestic fossil fuel production. This, we argue, is a mistake. The policy also relies heavily on energy subsidies, most of which are socially wasteful, inefficient, and driven by political rather than energy considerations. Finally, the energy taxes that are in place could be more precisely targeted to specific market failures, and these higher taxes themselves would encourage the production of alternatives more efficiently than do current subsidies.

It is widely held that the United States must reduce its reliance on foreign oil. The concern over U.S. vulnerability to the disruption of supply by the Organization of the Petroleum Exporting Countries (OPEC) is understandable, given the fact that the United States imports over 60% of the oil it consumes each year. Of the oil that the United States imports, 40% comes from OPEC countries and nearly half of that from the Persian Gulf region. Many Americans are also concerned that oil monies help countries such as Iran pursue activities that are contrary to U.S. foreign policy.

As a response to these concerns, current tax policy promotes domestic oil and gas production in a variety of ways. The federal government provides a production tax credit for “nonconventional oil” (essentially a subsidy for coalbed methane), generous depreciation allowances for intangible expenses associated with drilling, and generous percentage depletion allowances for oil and gas. In addition, the Bush administration has consistently lobbied to allow additional drilling on the Alaskan North Slope.

This supply response ignores a fundamental fact: Oil is essentially a generic commodity priced on world markets. Even if the United States were to produce all the oil it consumes, it would still be vulnerable to oil price fluctuations. A supply reduction by any major producer would raise the price of domestic oil just as readily as it raises the price of imported oil. In addition, if the United States reduces its demand for oil from countries such as Iran, it has little effect on Iran, because that country can just sell oil to other countries at the prevailing world price. Indeed, this effect has been made abundantly clear by historical experience. The United States has cut its dependence on Iranian oil to zero, buying no oil directly from that nation since 1991. Despite the U.S. import ban, Iran was the world’s fourth-largest net oil exporter in 2005.

A policy of energy independence that depends on boosting domestic oil and gas supplies through subsidies has several defects. First, subsidies reduce production costs and so do nothing to discourage oil consumption. Second, the policy encourages the consumption of high-cost domestic oil in place of low-cost foreign oil. A policy to encourage the United States to use up domestic reserves and thus become increasingly vulnerable in the future to foreign supply dislocations seems especially peculiar to us. Third, it is expensive. The five-year cost simply for the incentives mentioned above totals nearly $10 billion, according to the most recent administration budget submission.

If reliance on oil is unattractive, then a clear sign that policy is headed in the wrong direction is the U.S. economy’s high, and recently increasing, dependence on oil. Petroleum accounted for nearly 48% of primary energy consumption in the United States in 1977. Since this peak, its share fell to a low of 38% in 1995 before inching up to just over 40% in 2005. Even measured from the 1977 peak, the 16% drop in the U.S. oil share through 2005 falls far short of the percentage reductions achieved by other developed countries. The United Kingdom, for example, has reduced its oil share from a peak of 50% to just under 36%. France has reduced its oil share by 48% and Germany by 22%. In Asia, Japan has reduced its oil share by 39%, and even China has reduced its oil share by more than has the United States, with a 26% reduction. Current U.S. policies are leaving the country increasingly vulnerable relative to other major oil-consuming nations.

One might argue that because the United States is such a large producer of petroleum products—the third-largest supplier behind Russia and Saudi Arabia—domestic supply incentives in the United States help reduce the world price of oil. U.S. efforts, however, are but a drop in the bucket. One of us has estimated that the domestic oil production incentives in the U.S. tax code have lowered world oil prices by less than one-half of 1%.

To summarize, energy independence as popularly construed has little economic content. If reliance on oil is a problem, then supply subsidies make little sense, as they just encourage additional reliance on oil.

Misguided subsidies

The single largest energy tax expenditure in the U.S. budget is the tax credit for alcohol fuels, with a five-year revenue cost of $12.7 billion. The 51-cent–per–gallon credit primarily benefits corn-based ethanol. The fundamentally political motivation for the subsidies to corn-based ethanol is apparent when one realizes that the United States levies a 54-cent–per–gallon tariff on imported ethanol. There is also debate in the scientific literature about whether ethanol takes more energy to produce than it contains. Even on an optimistic reading of the literature, corn-based ethanol is expensive and provides little new energy to the economy. A study by Jason Hill and his colleagues at the University of Minnesota indicates that shifting the entire current corn crop to ethanol production would replace just 12% of U.S. gasoline consumption. This shift would reduce greenhouse gas emissions by less than 3%.

In addition to the ethanol subsidy, the federal tax code provides investment tax credits for solar and geothermal power production and advanced coal-burning power plants under section 48 of the tax code. Our recent research shows that the 20% investment tax credit for new integrated gasification–combined-cycle coal plants makes this technology cost-competitive with new pulverized coal plants. The subsidy for solar-generated electricity, however, is not large enough to make solar energy cost-competitive with natural gas or with other shoulder or peaking power plants.

Section 45 of the tax code provides production tax credits for wind power, biomass, and other renewable power sources. The tax credit is currently 1.9 cents per kilowatt hour (kWh). The section 45 and 48 tax credits are the second-largest energy tax expenditure, with a five-year cost of over $4 billion. The production tax credit for wind and biomass makes these two power sources cost-competitive with natural gas. The problem with production tax credits is that they must be financed somehow, either with reduced federal spending elsewhere in the budget or with higher taxes. Presumably, the credits are in place to encourage non–fossil fuel electricity production. The credit, however, distorts choices among non–fossil fuel power sources.

A better approach on both of these counts would be to levy a tax on the power sources that one wishes to discourage. If, for example, the concern is carbon emissions, then a carbon tax is an appropriate response. A tax of $12 per metric ton of carbon dioxide in lieu of production tax credits for wind and biomass would make these renewable sources competitive with natural gas. Unlike the subsidies, however, the tax would raise revenue, which could finance reductions in other distortionary taxes. Additionally, whereas subsidies lower the costs of electricity for consumers, increasing the quantity of energy consumed, taxes lead to decreased consumption. In units perhaps more familiar to most readers, a carbon tax of this magnitude would raise the price of gasoline by 10 cents if it were fully passed on to consumers.
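
As a rough check of that gasoline figure, the back-of-the-envelope sketch below assumes that burning a gallon of gasoline releases roughly 8.9 kilograms of carbon dioxide (an approximate emission factor we supply, not a figure from the text):

```python
# Back-of-the-envelope check of the gasoline price effect of a $12-per-ton
# CO2 tax, assuming roughly 8.9 kg of CO2 per gallon of gasoline burned.
co2_per_gallon_tons = 8.9 / 1000       # metric tons of CO2 per gallon (assumed)
tax_per_ton = 12.0                     # dollars per metric ton of CO2
price_increase = tax_per_ton * co2_per_gallon_tons
print(f"Added cost per gallon: ${price_increase:.2f}")
# Prints about $0.11, consistent with the roughly 10-cent figure cited above.
```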

Other production tax credits in the tax code include a production tax credit for electricity produced at nuclear power plants (section 45J). Qualifying plants are eligible for a 1.8-cent–per–kWh production tax credit, up to an annual limit of $125 million per 1,000 megawatts of installed capacity for eight years. This limit will be binding for a nuclear power plant with a capacity factor of 80% or higher, thereby converting this into a lump-sum subsidy for new nuclear power plant construction.
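
A quick calculation with the statutory figures above shows why the cap binds at roughly an 80% capacity factor (the arithmetic is illustrative only):

```python
# When does the $125 million annual cap bind for a 1,000-MW plant earning
# the 1.8-cent-per-kWh credit?
capacity_mw = 1000
hours_per_year = 8760
capacity_factor = 0.80
credit_per_kwh = 0.018                  # dollars per kWh
generation_kwh = capacity_mw * 1000 * hours_per_year * capacity_factor
uncapped_credit = generation_kwh * credit_per_kwh
print(f"Uncapped credit: ${uncapped_credit / 1e6:.0f} million")   # ~ $126 million
# Because $126 million exceeds the $125 million cap, any plant running at an
# 80% capacity factor or better simply receives the capped, lump-sum amount.
```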

U.S. subsidies discourage conservation and promote the consumption of inefficient sources of energy, a result that is irreconcilable with the goals of any rational energy policy. Alternative energy subsidies that are currently in place play political favorites and would be unnecessary if the types of energy that policymakers view as undesirable were taxed at an efficient rate. With undesirable forms of energy more costly, the market, rather than government officials, would determine which alternatives are best.

Redesigning energy taxes

First, we note that the literature suggests that U.S. energy tax rates may well be too low. Taking into account accident externalities, congestion, and unpriced pollution, one recent paper by Ian Parry and Kenneth Small finds that the optimal gasoline tax in the United States is $1.00 per gallon, more than twice the current combined federal and state motor vehicle fuel tax rate.

Second, the sole tax policy to discourage low-mileage automobiles, the gas guzzler tax, contains a loophole large enough to drive a sport utility vehicle (SUV) through. The gas guzzler tax is levied on automobiles that obtain fewer than 22 miles per gallon and explicitly excludes SUVs, minivans, and pickup trucks. This excluded class of vehicles represented 54% of new vehicle sales in 2004. The light truck category (comprising SUVs, minivans, and pickup trucks) is the fastest-growing segment of the new vehicle market, growing at an annual rate of 5.5% between 1990 and 2004. In contrast, new car sales are falling at an annual rate of 1.6%. Unofficial congressional estimates suggest that phasing out this loophole over four years would raise roughly $700 million annually once the phaseout was complete. Optimal tax policy does not support treating similar assets differently, and current policy introduces a significant distortion that could easily be fixed.

A 21st-century U.S. energy tax policy would include an end to energy supply subsidies, a green tax swap, an end to the gas guzzler tax loophole, the possible use of “feebates,” and conservation incentive programs. Ending subsidies to fossil fuel production would level the playing field among energy sources and shift the country from a policy of promoting fossil fuel supply to encouraging a reduction in fossil fuel consumption. In addition, it would move the United States away from the reliance on inefficient corn-based ethanol.

The United States should also implement a green tax swap. A green tax swap is the implementation of environmentally motivated taxes, with the revenues used to lower other taxes in a revenue-neutral reform. For example, Congress could reduce reliance on oil and other polluting sources of energy through the implementation of a carbon tax. The revenues could be used to finance corporate tax reform or to finance reductions in the payroll tax. Consider a tax of $15 per metric ton of carbon dioxide. Focusing only on carbon and assuming a short-term reduction in carbon emissions of 10% in response to the tax, a $15-per-ton tax rate would collect nearly $80 billion a year, a number that represents 28% of all corporate taxes collected in the United States in 2005. If the carbon tax were fully passed on into consumer prices, it would raise the price of gasoline by 13 cents per gallon, the cost of electricity generated by natural gas by 0.6 cents per kWh, and the cost of electricity generated by coal by 1.4 cents per kWh.
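
The revenue and price figures can be reproduced approximately with a few assumed inputs: an emissions base of about 6 billion metric tons of carbon dioxide per year (our assumption, roughly the mid-2000s U.S. level) and approximate emission factors for gasoline, gas-fired power, and coal-fired power.

```python
# Rough reproduction of the revenue and price figures for a $15-per-ton CO2 tax.
# The emissions base and emission factors below are our assumptions, chosen to
# be roughly consistent with mid-2000s U.S. data.
tax = 15.0                                  # dollars per metric ton of CO2
emissions_base = 6.0e9                      # metric tons of CO2 per year (assumed)
taxed_emissions = emissions_base * 0.90     # 10% short-term reduction
revenue = taxed_emissions * tax
print(f"Annual revenue: ${revenue / 1e9:.0f} billion")      # ~ $81 billion

# Price pass-through using approximate emission factors (assumed):
# ~8.9 kg CO2 per gallon of gasoline, ~0.4 kg/kWh for gas-fired power,
# ~0.95 kg/kWh for coal-fired power.
for fuel, kg_co2 in [("gasoline ($/gallon)", 8.9),
                     ("gas-fired power ($/kWh)", 0.4),
                     ("coal-fired power ($/kWh)", 0.95)]:
    print(f"{fuel}: +${tax * kg_co2 / 1000:.3f}")
# Gasoline comes out near 13 cents per gallon, gas-fired power near 0.6 cents
# per kWh, and coal-fired power near 1.4 cents per kWh.
```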

We note that a carbon tax is preferable to a carbon cap-and-trade system, as is currently implemented in Europe. Although a carbon charge and a cap-and-trade system could be designed to bring about the same reduction in carbon emissions in a world with no uncertainty over marginal abatement costs, the instruments are not equivalent in a world with uncertainty.

Given the uncertainties with respect to the introduction of new technologies to reduce carbon emissions, tax and permit systems can have very different efficiency costs. A number of researchers, including Richard G. Newell of Resources for the Future and Richard Pizer of Duke University, note that quantity restrictions such as cap-and-trade are appropriate only when either atmospheric pollutants are short-lived or the marginal costs of additional pollution above a threshold are extremely steep. Otherwise, price instruments, such as carbon taxes, are likely to be more efficient.

Because global warming depends on the stock of carbon in the atmosphere rather than on emissions in any one year, the expected efficiency costs of a carbon charge policy are likely to be much lower than the costs of a carbon cap-and-trade system. Essentially, the marginal damage from emissions in any given year is roughly constant so long as we are not at or near a threshold. Setting a price through a tax ensures that we avoid the risk of permit prices diverging dramatically from the marginal damages and thereby creating a large economic loss to society.

Moreover, although a cap-and-trade system could be designed in which the carbon permits are sold rather than given away, experience to date suggests that they will be given away. In that case, governments give up substantial revenue from cap-and-trade systems with which they could lower other distortionary taxes and limit the economic harm from the environmental policy. In a related vein, cap-and-trade systems generate substantial rent-seeking behavior, as firms lobby for grandfathering and generous allowances of permits once a program is put in place. Although firms are likely to lobby over the specific carbon charge rate and possibly coverage of the tax, a carbon charge is not conducive to lobbying over allocations, unlike permit systems.

A common criticism of carbon taxes is that they do not provide any certainty of emission reductions. A carbon tax provides certainty over the price of emissions but no certainty over emissions; a cap-and-trade system provides certainty over emissions but no certainty over the marginal cost of those emissions. (Note that this certainty over emissions is lost if a safety valve is incorporated in the cap-and-trade system.)

What we ultimately care about, however, are the economic and ecological consequences of higher concentrations of greenhouse gases in the atmosphere resulting from global emissions. Global climate models are impressively sophisticated, reflecting the enormous complexity of the climate system. Our understanding of the climate system is improving with ongoing research, and one result is that our sense of the emission reductions that will be required to stabilize the planet’s temperature and prevent large economic and ecological losses is also evolving. To give primacy to specific emission reductions regardless of the cost is to suggest a greater certainty in the climate science than currently exists, and implicitly but implausibly makes controlling emissions the top policy priority, trumping all others.

This is not an argument for policy delay. Given the long lags between emissions and climatic response, it would be imprudent to wait for greater precision in the climate science before taking action. But we should not act as if we know the precise level of emissions reductions to undertake. Instead, we should balance reductions against the economic cost of achieving those reductions as represented by the marginal cost of abatement. A tax does this automatically because profit-maximizing firms will operate at the point where marginal abatement costs equal the tax rate. With a clear and unambiguous schedule of carbon tax rates over time, businesses and households can rationally plan to reduce their carbon footprint through their capital purchase decisions as well as through their use of current capital.
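
The logic of that last point can be written out in a line of algebra. As a minimal sketch (the notation is ours), let a firm with baseline emissions e0 choose how much to abate, a, at cost C(a), while paying the tax t on its remaining emissions:

```latex
\[
\min_{a}\; C(a) + t\,(e_0 - a)
\qquad\Longrightarrow\qquad
C'(a^{*}) = t .
\]
```

Each profit-maximizing firm abates up to the point where its marginal abatement cost equals the tax rate, so marginal costs are equalized across firms automatically, whatever those costs turn out to be.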

An additional concern magnifies the advantage of carbon taxes. Carbon emissions are a global problem; the externalities from Chinese emissions hurt the United States just as much as emissions from the United States itself. Curbing carbon emissions requires an international solution. Cap-and-trade policies pose a serious moral hazard problem for governments of the developing world. Can we really expect developing countries to honestly police themselves, especially when quota violations would boost local economies? A carbon tax, on the other hand, provides its own incentive for a government to closely police polluters. Governments, after all, are committed and competent revenue collectors.

Next, Congress should eliminate the gas guzzler tax loophole for light trucks. Congress might also consider strengthening the gas guzzler tax by shifting to a “feebate” approach, whereby low-mileage vehicles are taxed at increasing rates, as under the current gas guzzler tax, and fuel-efficient vehicles receive a tax subsidy. This could be structured to be revenue-neutral if desired.
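
As a rough illustration of how a revenue-neutral feebate could be structured (a sketch with an invented fleet mix and fee rate, not a proposal from the authors), fees and rebates can be scaled to each vehicle's fuel consumption relative to a pivot point chosen so that expected fees offset expected rebates:

    # Hypothetical feebate sketch: vehicles thirstier than a pivot pay a fee,
    # thriftier vehicles receive a rebate; the pivot is set so the schedule is
    # revenue-neutral. Fleet mix and rate are invented for illustration only.
    fleet = [            # (miles per gallon, expected annual sales)
        (16, 300_000),
        (22, 500_000),
        (30, 400_000),
        (45, 150_000),
    ]
    rate = 1_000.0       # dollars per gallon-per-100-miles above or below the pivot

    def fuel_use(mpg):
        return 100.0 / mpg                     # gallons per 100 miles

    def net_revenue(pivot):
        # Positive terms are fees on low-mileage vehicles; negative terms are rebates.
        return sum(sales * rate * (fuel_use(mpg) - pivot) for mpg, sales in fleet)

    # Revenue neutrality: use the sales-weighted average fuel consumption as the pivot.
    total_sales = sum(sales for _, sales in fleet)
    pivot = sum(sales * fuel_use(mpg) for mpg, sales in fleet) / total_sales
    print(f"pivot: {pivot:.2f} gal/100 mi, net revenue: ${net_revenue(pivot):,.2f}")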

Our final energy tax proposal is to increase the conservation investment incentives that were recently introduced in the Energy Policy Act of 2005. In a study of energy conservation incentives contained in the Energy Tax Act of 1978, we found that the tax credit was much more successful at raising investment levels than was a comparable energy price increase. We speculated that the credit program may have publicity effects that spur investment that the energy price increase does not have. In addition, uncertainty over the permanence of future energy price increases makes the certainty of the tax credit at purchase more valuable. A conservation credit that is technologically neutral would be a worthy accompaniment of a higher tax on carbon-based fuels if reducing reliance on these forms of energy is a policy objective.

The policies we advocate shift the United States away from fossil fuels and toward renewable energy. They also reduce the cost to federal taxpayers, while aligning private and social interests. This is the making of a 21st-century energy policy.

Freedom of Speech in Government Science

Since the early 1990s, researchers, scholars, journalists, and professional organizations have published hundreds of articles, books, and reports on the ethical problems related to industry-funded science, addressing such concerns as conflicts of interest, suppression of data and results, ghost authorship, and abuse of intellectual property laws. Although the investigative spotlight has focused on privatized science in the past 15 years, government science has received relatively little attention until recently. Three important publications—the Union of Concerned Scientists’ report Scientific Integrity in Policy Making, Chris Mooney’s book The Republican War on Science, and Seth Shulman’s book Undermining Science—have highlighted some of the ethical problems, such as limitations on free speech, politicization of scientific advisory panels, conflicts of interest, and bias, that can occur in government science.

According to Mooney, President George W. Bush’s administration has attempted to prevent government scientists from expressing their views about global climate change. James E. Hansen, director of the Goddard Institute for Space Studies at the National Aeronautics and Space Administration (NASA), said that public affairs staff members were reviewing his upcoming lectures, papers, media interviews, and Web postings. Hansen accused NASA administrators of trying to censor information that he planned to share with the public. NASA officials denied this accusation, claiming that Hansen’s public statements were not given special scrutiny and that all NASA scientists must have their media interviews reviewed by public affairs staff members to ensure coordination with the administration’s policy statements. Hansen countered that the administration was trying to intimidate him and that it had taken similar actions to prevent other researchers from communicating with the public about global warming.

Other scientists working for the federal government have also encountered problems with freedom of speech under the Bush administration. Former Surgeon General Richard Carmona told congressional investigators that federal officials weakened or suppressed public health reports to support a political agenda. He also said that the administration would not allow him to speak to the public about a number of different health policy issues, including stem cell research, emergency contraception, sex education, and global health. Administration officials have also rewritten Environmental Protection Agency (EPA) reports on global warming for political purposes.

Hansen’s confrontation with Bush administration officials raises important questions about the ethics of government science. Should scientists who work for the government have as much freedom of speech as academic scientists? What restrictions on speech, if any, can be applied to government science? I argue that government scientists should have freedom of speech but that the government may impose some restrictions on speech to ensure that research meets standards of quality and integrity and that policy messages conveyed to the public are consistent. However, any restrictions on speech must be applied carefully and cautiously to avoid undermining government science.

A philosophy of freedom

Freedom of speech is one of science’s most important norms. Nineteenth-century philosopher and economist John Stuart Mill developed an influential account of the importance of freedom of speech in public debate. According to Mill, social and scientific progress occurs through vigorous debate involving opposing points of view. To generate different points of view, people must have freedom of thought and speech. Progress cannot occur if the majority uses its power to suppress minority viewpoints. Many other scholars, such as Karl Popper, Paul Feyerabend, and Philip Kitcher, have built on Mill’s work to develop arguments for freedom of speech specifically for scientific inquiry.

Before discussing restrictions on freedom of speech in science, it will be useful to distinguish between different types of limitations that the government might impose. Restrictions on funding are morally, legally, and politically different from restrictions on publication, which includes public discussion and dissemination. Restrictions on funding are unavoidable in societies where there is not enough money to fund every worthwhile project. Government agencies use peer review committees to decide which research proposals should be funded. Scientists who are denied funding by a federal agency are still free to conduct their research using funds from a different source, such as a private company, university, or foundation. Restrictions on publication pose a more serious threat to freedom of speech. Denying a person the right to present ideas in a public forum constitutes a significant interference with free speech, because publication is vital to science. Scientists require freedom of speech not just to discuss matters privately but also to share ideas with their peers and the public at large. Because restrictions on publication can have a much more substantial impact on research than restrictions on funding, this essay will focus on restrictions on publication.

There are two basic models of freedom of speech in research: the academic model and the corporate/military model. In the United States, scientists working for academic institutions have unsurpassed freedom of speech. In most colleges and universities, scientists may publish articles or papers, express political opinions, discuss controversial ideas in classes, or talk to the media with very little administrative oversight or control. As long as they avoid slander, treason, fraud, sexual harassment, or other illegal forms of speech, scientists working in academic institutions are free to say or write almost anything. Organizations that represent university professors, such as the American Association of University Professors (AAUP), have vigorously defended academic freedom. The AAUP helps professors who have some difficulties exercising their academic freedom, investigates institutions accused of interfering with academic freedom, and censures institutions that it determines have violated academic freedom.

Not all researchers have as much freedom as those who work in academic institutions. Scientists who work for private industry must deal with restrictions on communication. Companies treat all aspects of R&D as confidential business information, which is protected by trade secrecy laws. They usually also require employees and contractors to sign agreements giving the company the right to own intellectual property and control publication of research results. Scientists who perform classified research for the government are not allowed to share their work with the public. U.S. government agencies have the authority to classify information that poses a significant threat to national security. Access to classified information is granted only to people with an appropriate security clearance on a “need-to-know” basis. People who publicly disseminate classified research without permission can face substantial legal penalties.

Government science

Which of these two models should apply to nonclassified government science? To answer this question, it will be useful to examine the role of government science in society. Government scientists, like other state employees, should serve the public interest and should not betray the public’s trust. The Standards of Ethical Conduct for Employees of the Executive Branch of the U.S. government proclaim 14 different ethical duties of federal government employees, including not using public office for private gain, adhering to the laws and regulations of the United States, avoiding conflicts of interest that undermine the performance of government duties, and not giving preferential treatment to organizations or individuals.

There are several specific ways in which scientists serve the public interest. First, government scientists conduct valuable research, often in fields or disciplines that receive very little funding from industry, such as public health, environmental health, and basic research in the physical or biological sciences. Government science contributes to the fundamental understanding of many different disciplines and often yields practical applications in medicine, agriculture, engineering, information technology, aeronautics, and other applied fields. Additionally, government science helps to inform regulatory decisions, legislative proposals, and policy recommendations. Second, government scientists provide expert advice to Congress, federal and state agencies, and the public. The advice given by government scientists is usually more objective and reliable than the advice given by scientists employed by industry or political interest groups, because government scientists are not officially aligned with any particular private interest or political ideology. For example, the Food and Drug Administration (FDA) receives a great deal of information and advice from pharmaceutical companies, professional societies, and patient advocacy groups concerning decisions to approve new drugs. To make fair and impartial decisions, the FDA also needs information from scientists who are not influenced by these economic and political biases. Third, government scientists educate the public about scientific issues with policy implications, such as pollution, infectious diseases, drug abuse, crime prevention, and drug safety. Government scientists educate the public through lectures, informational Web sites, popular books, or interactions with the media. In addition, government scientists help to educate the next generation of scientists by instructing, mentoring, and supervising graduate students and postdoctoral students.

To perform these different functions effectively, government scientists must be able to communicate with other researchers and the public free from fear of censorship, intimidation, or reprisal. There are several arguments for freedom of speech in government science. First, as stated earlier, freedom of speech is essential for conducting research. Placing restrictions on communication significantly alters scientific work and can have a negative impact on the research environment. Even if the government restricts the speech of only well-known scientists, such as James Hansen, these actions can have a chilling effect that alters the behavior of less well-known scientists. Scientists who are aware of the potential consequences of publishing or discussing ideas that contradict administration policies may engage in various forms of self-censorship, such as softening and qualifying the conclusions and recommendations in their publications or forgoing some types of research altogether.

Second, freedom of speech is important for providing expert advice. Restrictions on speech can undermine the formulation and implementation of well-reasoned independent expert opinions about controversial public policy issues. A lack of expertise in government decisionmaking can have adverse consequences for public policy, human health, the economy, and the environment. For example, if the FDA acts on biased or incompetent advice about a drug, it may fail to adequately protect the public from the drug’s harmful effects.

Third, freedom of speech is important for educating and informing the public about scientific issues with policy implications. To develop well-informed and cogent opinions about policy issues, people need to hear different perspectives, not just the perspective favored by the current administration. Restrictions on freedom of speech can limit the perspectives available to the public.

Fourth, granting government scientists freedom of speech helps agencies to recruit and retain highly talented researchers, who may decide to not work for the government if they are concerned about limitations on freedom of expression. Fifth, U.S. government scientists, like other U.S. citizens, have constitutionally protected rights to free speech. They should not have to choose between exercising these rights and working for the government. Although people agree to limit their rights to free speech when working for private companies, they should not have to do so when working for the government. Government work should be compatible with free speech, not detrimental to it.

Justified restrictions

Although these five arguments build a strong case for granting government scientists unfettered freedom of speech, there are reasons for placing some minimal restrictions on government scientists’ communications for specific, clearly defined purposes. First, government scientists often have access to confidential information concerning privately funded research, personnel matters, or research involving human subjects. Government scientists should not have a right to disclose confidential information without permission. For example, if a government agency and a private company enter into a cooperative research and development agreement (CRADA) for the purpose of developing and testing a medical product, government scientists working under the auspices of this CRADA should not disclose confidential information about the company’s products.

Second, because government scientists usually list their institutional affiliations when publishing articles or making presentations, the government’s reputation can be affected. If a government scientist publishes an article with substantial errors of fact, reasoning, or methodology, this will reflect poorly on the government and damage the public’s trust in government science. Thus, some type of quality control, such as internal peer review, is appropriate for government publications or presentations.

Third, when a government scientist communicates with the media, the public (or even journalists) may mistakenly assume that the scientist is speaking for the government, when he or she is expressing only a personal opinion. If the scientist expresses an opinion that goes against official policy, this can create confusion in the public mind. To minimize confusion and to enable an administration to convey consistent policy messages, it is appropriate to allow public relations offices to review a government scientist’s communications with the media. The purpose of such review should not be to stop the scientist from talking to the media but to allow the administration to prepare a response to the scientist’s interview.

These are all good rationales for restricting the speech of government scientists in some situations, but they must be applied carefully and cautiously to avoid the negative consequences of restrictions on speech discussed earlier. For example, internal peer review should not be used to block or hinder manuscripts for nonscientific reasons. Administrators who oversee internal review should protect the process from ideological, political, or other influences external to scientific peer review. Media review should not be used to suppress opinions that diverge from the administration’s positions but only to clarify what is government policy and what is personal opinion.

Most government agencies have policies similar to these in place as well as employee grievance mechanisms such as ombudsmen and Equal Employment Opportunity officials. Problems have arisen, however, because these policies have not always been implemented judiciously by federal agencies. Hansen, for example, alleged that NASA officials had used the media review policies to intimidate him and discourage him from talking to the media, even as a private citizen. Abuses like these have occurred as a result of pressure from the administration on agency officials to control the speech of government scientists.

To ensure that policies that restrict speech are implemented fairly, it may therefore be necessary for an organization that is independent of the government to monitor and review the activities related to the control of speech by government employees. The organization should scrutinize internal review and media relations policies to ensure that they are not too restrictive or burdensome. The organization should also address complaints by government scientists concerning freedom of speech and defend employees who face censorship, intimidation, or reprisal related to their public communications. The organization would play a role similar to the one played by the AAUP in protecting freedom of speech in academic institutions.

What type of organization could fulfill this watchdog role? Congress could pass legislation creating an office to monitor restrictions on government free speech, but there would be political opposition to this legislation from interest groups that prefer the status quo. Also, a government office might not be sufficiently independent from the administration, especially if it is located within the executive branch. Probably the best way to safeguard free speech in government science would be for a scientific organization, such as the American Association for the Advancement of Science (AAAS), to designate a committee or group to focus on these issues. A standing committee of the AAAS, the Committee on Scientific Freedom and Responsibility (CSFR), would be a natural fit for this role. The CSFR is charged with monitoring the actions and policies of governments and private organizations that restrict scientific freedom, collecting information concerning restrictions on freedom, and developing policies that protect scientists from impingements on their freedom. Its effectiveness would be significantly enhanced if it were authorized by Congress to continue its work and report the results to government officials.

Forging a New, Bipartisan Environmental Movement

Although our passion for the living Earth dates to our joyful youth spent outdoors, in Pennsylvania and California respectively, our intellectual commitment to the environment as a political and social issue can be traced to the first Earth Day, an event we witnessed as graduate students. We were enthusiastic participants in many Earth Days during the 1970s and 1980s. As college professors, we mentored undergraduates on the subjects we knew best in environmental studies, public policy, biodiversity, and behavior, and we introduced them to field conservation in ecosystems under siege, including the Okefenokee Swamp and wilderness areas in East Africa. We have seen firsthand the effects of systematic deforestation and the catastrophic loss of habitat and biodiversity. A recent assessment by the World Conservation Union identified more than 16,000 species currently under threat of extinction. Our book, A Contract with the Earth (Johns Hopkins University Press, 2007), aims to rally Americans to address these and other environmental problems, set priorities, and develop solutions to renew the Earth for the sake of our children and grandchildren. Renewal requires a long-term commitment by every citizen and a massive mobilization of the nation’s resources, talent, and technology. We must activate a sustainable, renewable culture that mobilizes people, organizations, industries, and governments to protect the natural world on a daily basis. Such commitments must begin with civil dialogue about issues that have been contentious and divisive.

It is time to forge a new, bipartisan environmental movement and create pathways for every American, indeed every nation, to cooperate and collaborate on achievable solutions to restore, revitalize, and renew the Earth. To accomplish this, we have proposed a new conversation among the diverse constituencies that must be recruited to action. No single political party owns the environmental issue; we need everyone’s help in achieving the goal of a sustainable natural world. A vital, fully functional Earth composed of abundant communities of diverse wildlife; healthy streams, lakes, and oceans; and clean air requires a strong commitment to better environmental practices in homes, communities, and workplaces.

We are gratified by the leadership of multinational corporations that have been recruited to the task by creative, tenacious nongovernmental organizations such as Conservation International and the Nature Conservancy. These organizations teach us that it is in everyone’s best interest to generate principles and policies that contribute to a better and more livable environment. Public/private partnerships have led the way in achieving these aspirational standards, because governments alone cannot solve the complex array of environmental challenges that we must conquer.

We also believe that people work best when they confront problems close at hand, so acting locally, through the structure of metropolitan, regional, and state governments, avoids the entanglements of larger, slower bureaucracies. The states of California and Florida and the city of Portland, Oregon, among others, demonstrate what can be done with strong leadership exercised close to the source of environmental problems and the people affected by them. We admire the dynamic environmental leadership of California Governor Arnold Schwarzenegger and Florida Governor Charlie Crist, who have championed alternative energy development in their states—action that must be widely emulated throughout the nation.

Strong positive leadership needed

The most important priority of our contract with the Earth is decisive environmental leadership. Currently, the federal government is stalemated and reluctant to lead. Consequently, we have no national unity or direction at a time when there is an urgent need for action. Rather than succumb to the disabling ramifications of doomsday prophecy, we believe that our nation responds best when we are led by individuals who vigorously pursue workable solutions with optimism and confidence. Our concerns are underscored by recent research in the United Kingdom in which behavioral scientists demonstrated that positive, informative strategies that help people set specific health and environmental goals are far more effective in changing behavior than are negative messages based on fear, guilt, or regret. Strong positive leadership on the environment will help to stimulate market forces to deliver innovative environmental technology and clean renewable energy to power the economy.

We believe that many if not all environmental challenges can be resolved by developing new and better technology and by generating best practices in environmental stewardship. By leading the world in the production of innovative environmental tools, the United States will produce the renewable technology that will eventually provide clean energy to the rest of the world. Developing nations, especially China and India, need U.S. expertise to help solve their escalating emissions problems. With the Olympic Games approaching, the Chinese government is frantic to deliver clean air to the world’s best athletes and the masses of visiting spectators. It is likely that China’s struggle to control ambient environmental quality will dominate the daily news as the Olympic competition unfolds. Likewise, the United States’ reputation as a global leader depends on decisive leadership on many pressing environmental fronts, including the pursuit of new international agreements that are more realistic and effective than the Kyoto Accords.

The federal government can encourage innovation by issuing financial incentives such as the federal tax credit aimed at encouraging consumers to purchase hybrid gas/electric cars. Toyota has been highly successful in selling hybrids to U.S. consumers, but federal law eliminates the credit after a company has sold 60,000 vehicles. Clearly, Toyota has been penalized for winning in the marketplace. Less successful producers of hybrid cars, Ford, General Motors, Honda, and Nissan, are still able to qualify buyers for tax credits, but they will also eventually lose the incentive when they hit the ceiling.

This program demonstrates the limitations of modest incentives. More powerful and lasting incentives would dramatically stimulate sales of hybrids and other alternative-energy high-efficiency vehicles. Congressional leadership is needed to establish stronger incentives right now. If rebates and tax credits can induce consumers to buy hybrid cars, these products will be built in greater numbers, and the pace of conversion from petroleum to alternative fuels will be quickened.

Other incentives are needed as well, including incentives for manufacturing innovations that extend vehicle fuel economy to 50 or even 100 miles per gallon. Such incentives work better and faster than punitive corporate average fuel economy standards. In addition, tax credits that help consumers build new homes or modify existing homes to be more energy-efficient are still too low and uncommon. In western states such as Arizona, New Mexico, and California, there are some strong incentives for using solar and wind energy, but the rest of the nation lags far behind.

Historically, prizes have been used to stimulate breakthrough technology. Prizes are particularly effective motivators of entrepreneurs, who use investment capital to test their ideas and generally invest four times the value of a prize to win the competition. The X Prize Foundation was recently established to manage such prizes as the $10 million Ansari X Prize for Suborbital Spaceflight, the $10 million Archon X Prize for Genomics, and the $20 million Google X Prize to land and successfully operate an unmanned rover vehicle on the Moon. Such prizes must be big, even huge, to produce meaningful discoveries on a grand scale. Perhaps a prize of $1 billion could be the impetus for a 500-mile-per-gallon car. Robust incentives and prizes might produce a hydrogen-based economy much faster than would conventional R&D.

In addition, those who award established prizes should focus more frequently on significant environmental issues. In 2004 and 2007, the Nobel Peace Prize was awarded to environmental activists for their efforts to combat deforestation and climate change, respectively. It may be time to create a Nobel Prize specifically to honor effective environmental problem-solving. For example, the important work in biodiversity conservation planning performed by Conservation International is certainly worthy of Nobel-level recognition.

Ambitious goals

Presidential leadership applied in the spirit of President Kennedy’s bold goal of a lunar landing in less than a decade is the direction we need to take at this critical moment in the nation’s history. Although current estimates suggest that it will take 50 years to refine and disseminate hydrogen technology, we believe we could do it in 20 years if we elevate the goal to a national priority. We need to combine the entrepreneurial engines of the economy, strategic environmental philanthropy, and the powerful economic incentives of the federal government to achieve this goal. If we can mobilize the nation’s financial and human resources and prioritize clean hydrogen technology, we can lead the world to a profound and fundamental renewal.

Dominated by spin, hyperbole, and belligerent infighting, U.S. politics has reached a point where civil debate is no longer the norm. No wonder so few of us are willing to enter the political domain. We can reverse this trend by aspiring to loftier goals. We need two political parties equally committed to solving difficult environmental problems; two parties willing to engage in a constructive civil debate to reach consensus and implement action. Targeted philanthropic and business investment will activate greater cooperation and serious engagement on these issues. Democrats and Republicans should start by issuing strong environmental planks in their party platforms. The nation needs a mandate for bipartisan team-building on the environment, but we also need a public policy menu so we can make choices among rational alternative environmental strategies. By promulgating and sharing an extensive catalog of effective environmental solutions, the United States will once again be recognized for its global leadership, its unmatched ingenuity, and its commitment to environmental protection.

We anticipate an extraordinary pace of scientific change in the first 25 years of this new century. For example, a revolutionary combustion device developed by Georgia Tech engineers burns fuel with nearly zero emissions of nitrogen oxide and carbon monoxide. The Stagnation Point Reverse Flow Combustor, as it is known, was originally designed for NASA, but the design can be adapted to power devices as different as a large gas turbine and a small home water heater. Innovations such as this occur every day on college campuses and in industrial laboratories.

Innovation will continue if we invest in the development of our best and brightest scholars. To keep the nation among the world’s leaders in science and technology, we need a national commitment to strengthen math, science, and engineering training and a national plan to achieve wider scientific literacy so that a growing number of our citizens will be able to fully comprehend the complex environmental issues that we must face together. Other nations, such as Germany, have recognized a need to upgrade their financial commitment to higher education, suggesting that we will face stronger economic and technical competitors in the years ahead.

If we combine innovations in science and technology with the power of markets to shift resources toward better outcomes and more choices of higher quality at lower cost, our growing list of new solutions will surely lead to significant and widespread prosperity. This is the essence of American entrepreneurial environmentalism, an approach that we believe is superior to bureaucratic, litigious, and unrestrained regulation. In A Contract with the Earth, we have drawn the strong conclusion that enterprise is not the enemy of the environment; instead, it is the engine that will drive new technologies that will help to solve our most challenging environmental problems, including global climate change. Further, we ought to acknowledge that we have already achieved many significant advances in environmental protection, and these achievements should build confidence in a nation that is too often unfairly portrayed as an environmental pariah. Indeed, humanity is depending on the United States to lead the change to a better and more sustainable natural world.

Racial Disparities at Birth: The Puzzle Persists

A baby born to an African-American (black) mother in the United States is twice as likely to die before reaching her first birthday as a baby born to a European-American (white) mother. A range of conditions contribute to infant mortality, but the most powerful predictors are being born too early (before 37 completed weeks of pregnancy) and/or too small (with a birth weight of less than 2,500 grams). Black infants are two to three times as likely as their white counterparts to be born prematurely and/or with low birth weight. Premature or low–birth weight infants who survive beyond infancy are far more likely than other infants to suffer major developmental problems, including cognitive, behavioral, and physical deficits during childhood, with lasting consequences in adulthood. They also have poorer prospects for employment and wages as adults. Prematurity and low birth weight (together referred to as adverse birth outcomes) also predict poor adult health, including diabetes, high blood pressure, and heart disease, all of which raise risks of disability and premature mortality. Caregiving to chronically ill and/or disabled survivors of adverse birth outcomes is a tremendous economic burden on families and society.

A growing body of research has been conducted in recent years into the causes of the racial disparities. The research has examined a wide range of possible factors, including differences in prenatal care, differences in women’s health before they become pregnant, and infections. This research has produced useful insights but has not identified a clear cause for racial disparities. More recently, researchers have hypothesized a role for stress and adverse experiences throughout life, not just during pregnancy, as possible explanations. Much greater research investment is necessary if we are going to solve the puzzle of why racial disparities in birth outcomes persist.

At least in one major area there is now a strong scientific consensus: Differences in prenatal care are unlikely to explain racial disparities in prematurity and low birth weight. Black/white disparities in receipt of prenatal care have narrowed markedly over time, particularly with major expansions of Medicaid maternity care coverage beginning around 1990, without concomitant narrowing of birth-outcome disparities. In addition, a number of studies have failed to link prenatal care, as typically provided in the United States, to improved birth outcomes in general. The literature is inconclusive regarding effects on birth outcomes of prenatal care enhanced with various forms of psychosocial support; few studies have been conducted that meet rigorous criteria.

Given the scientific consensus that standard prenatal care does not hold much hope for reducing racial disparities in birth outcomes, there has been increasing interest in focusing on the health of women before they become pregnant, including ensuring access to medical care for chronic conditions. This shift makes sense: it seems unrealistic to think that medical care given during a nine-month or shorter period could dramatically reverse the adverse effects of a lifetime of experience before conception. At the same time, it also seems unlikely that medical care alone in the period before conception could reverse the effects of a lifetime of social disadvantage.

Well-established causes of being born too small or too early—without consideration of racial disparities—include prenatal exposure to tobacco, excessive alcohol, or illicit drugs; being underweight at the beginning of pregnancy and gaining insufficient weight during pregnancy; very short maternal stature; and chronic diseases. The known causes of low birth weight and/or preterm birth, however, do not explain the black/white disparities; studies taking these factors into consideration have not seen a narrowing of the racial gap in outcomes. For example, black women are less likely to smoke or to binge drink during pregnancy and less likely to be underweight before pregnancy than are white women.

Several factors have been hypothesized to explain birth-outcome differences by race. Among the more widely held hypotheses has been the notion that occult (hidden) infections may explain the racial gap. Rates of infection with bacterial vaginosis, a genital tract infection previously thought to be benign but recently associated with adverse birth outcomes, are higher among African-American women, as are periodontal infections. Although many clinicians have been optimistic that infections would turn out to be an important and relatively easily modifiable missing piece of the puzzle, treating infections has not consistently led to improved birth outcomes. This suggests that rather than infections being a cause of adverse birth outcomes, they may be a marker for some other factor or factors that are associated with both infections and adverse birth outcomes.

There has been a widespread assumption, without evidence, that genetic differences are the key to the black/white disparity in birth outcomes. In part, this assumption has rested on the observation that the black/white birth-outcome disparities have persisted even after taking into account mothers’ educational attainment or family income around the time of pregnancy. However, no one has identified a gene or genes that are clearly linked to either prematurity or low birth weight, and the mechanisms involved appear different for the two outcomes and complex for both. It is likely that if genetic differences are involved in either, they would involve complex arrays or cascades of multiple genetic factors very unlikely to sort themselves out according to race. Although it is possible that genetic factors, particularly gene/environment interactions, could be involved, a primary role for genes is not supported by observed social patterns, which are discussed below.

Furthermore, current income and education reflect only a small part of the socioeconomic experiences of a woman, which could affect her birth outcomes through a range of biological and behavioral pathways. For example, among U.S. blacks and whites overall, the median net worth of whites ($86,573) is almost 4 times that of blacks ($22,914). In the bottom quintile of income, the median net worth of whites ($24,000) is 400 times that of blacks ($57). Wealth is probably more important than income for health because it can buffer the effects of temporarily low income, providing security as well as a higher standard of living. Furthermore, a black woman of a given educational or income level is far more likely than her similar–education-or-income white counterpart to have experienced lower socioeconomic circumstances when growing up. She also is far more likely to live (and to have lived in the past) in a neighborhood with adverse socioeconomic conditions, such as exposure to environmental toxins, crime, lack of sources of healthy foods and safe places to exercise, and/or poor-quality housing. There are many unmeasured socioeconomic differences between blacks and whites even in studies considering income and education; such studies should not conclude, although they unfortunately often do, that observed racial differences must be genetic simply because the analysis has “controlled for socioeconomic status.”

Social patterns may give us valuable clues to the unsolved mystery of black/white disparities in birth outcomes. For example, although birth outcomes consistently improve with higher education or income, the relative disparities are largest among more affluent, better-educated women: nearly a threefold difference in our data from California, a gap that also appears large in national data. The racial disparity is also seen among poor and uneducated women, but it is much smaller, closer to 1.3 to 1 in a recent study. Why would the racial disparity be greater among higher–socioeconomic status (SES) women? It is unlikely that higher-SES black women are genetically more different from their white counterparts than are lower-SES women. (This issue is discussed further below.)

Comparisons among black women according to birthplace may also provide important clues to likely and unlikely causes of the disparities in birth outcomes. Mirroring what has been called the “Hispanic paradox” of good birth outcomes for immigrant Hispanic women (despite poverty) and poor birth outcomes of their U.S.-born daughters (whose income and education levels are generally higher around the time of childbirth than those of their immigrant mothers), black immigrants also have better birth outcomes than U.S.-born black women. In contrast to the unfavorable (compared to whites) birth outcomes of black women born and raised in the United States, birth outcomes among black immigrants from Africa and the Caribbean are relatively favorable, especially after considering their income and education. As with the comparison of racial disparities in different socioeconomic groups noted above, it is very difficult to explain this disparity by maternal birthplace in terms of genetic differences. If the basis for the differences in birth outcomes by maternal birthplace were genetic, one would expect the immigrants (presumably with a heavier “dose” of the adverse genes) to have worse outcomes, not better.

Stress: A key piece of the puzzle?

In the past 15 to 20 years, knowledge has accumulated about the physiologic effects of stress, particularly chronic stress, and about its potential role in explaining racial differences in birth outcomes. Chronic stress could lead to adverse birth outcomes through neuroendocrine pathways. Neuroendocrine and sympathetic nervous system changes caused by stress could result in vascular and/or immune and inflammatory effects that could lead to premature delivery as well as inadequate fetal nutrition. Living in a crime-ridden neighborhood and facing constant pressures to cope with inadequate resources for housing, child care, transportation, and feeding and raising one’s family are stressful, but such factors are rarely measured. Racial disparities in wealth and income are likely to translate into racial disparities in social networks that can provide financial and other material support during times of need. A growing body of literature on the health effects of subjective social status suggests that an awareness that one is in a group considered socially inferior could be a stressor with strong health effects.

Studies of stress as a possible contributor to adverse birth outcomes have not produced consistent findings. They have, however, tended to focus on stress experienced during pregnancy, rather than chronic stress across a woman’s lifetime, despite the fact that current knowledge of the health effects of stress makes chronic (rather than acute) stress far more plausible as a causal factor in racial disparities in health. It could be a key mediator of many of the unmeasured socioeconomic factors that vary by race, including childhood socioeconomic adversity and neighborhood socioeconomic conditions.

It is biologically plausible that experiences associated with a legacy of racial discrimination are another potential source of unmeasured stress that may contribute to black/white disparities in birth outcomes, and some studies have demonstrated this connection. Incidents of overt racism against African-Americans in the United States are still pervasive, although probably becoming less frequent over time. More subtle experiences associated with racism, however, also could be stressful; for example, a constant awareness and state of arousal in anticipation of racist comments, whether subtle or overt, being made in one’s workplace could be stressful. Vicarious experiences related to fears about one’s children or other family members facing discrimination, or a background awareness of the long history of discrimination, including slavery, experienced by blacks in general, are other potential sources of chronic stress that also could exact a health toll, including on birth outcomes, even in the absence of overt incidents. The literature in this area is in the very early stages of development, and the results are not consistent; better measures of experiences of racism are needed to advance knowledge of the potential health effects of discrimination in various forms, not only dramatic overt incidents.

Could experiences of racism account for the counterintuitive finding of a greater racial disparity in birth outcomes among more affluent and educated women? One can only speculate, but unmeasured differences in socioeconomic factors during life appear to be a possibility, along with experiences related to racial discrimination. Unmeasured socioeconomic exposures (for example, in childhood and/or at the neighborhood level) could influence birth outcomes through pathways involving nutritional effects, exposure to toxins, and other adverse exposures related to low socioeconomic status, as well as stress. Paradoxically, a more educated black woman may, on a chronic basis, experience more discrimination and more constant awareness and fears of it, because she is far more likely than her less educated black counterpart to be working, playing, shopping, and traveling in a predominantly white world.

Implications for action

Given the staggering influence of birth outcomes on health across the lifespan, far more investment is needed in understanding the mechanisms that explain prematurity and low birth weight and the racial disparities in them. Far more research is needed on social and psychological influences on birth outcomes, on how they are mediated biologically, and on how to intervene even before we completely understand all of the mechanisms at the molecular level. We have no firm answers now (except perhaps firm indications about some disproven explanations), but we have some very plausible hypotheses that require testing under a range of circumstances.

Among the biologically plausible hypotheses is a major role for stress and adversity experienced throughout life, not only during pregnancy, which would mean that intervening during pregnancy may be too little and too late. Unmeasured experiences in early childhood and across a woman’s life before conception could be important sources of stress that could explain racial disparities. These experiences could include unmeasured socioeconomic factors at the neighborhood and family levels as well as experiences related to racial discrimination and awareness of it, even in the absence of dramatic overt incidents. Gene/environment interactions cannot be ruled out as contributors to racial disparities in birth outcomes. If these interactions were involved, however, they would be very complex; biomedical solutions are not on the horizon at present and in any case would be a long way off, making it important to make vigorous efforts to identify and modify the triggers for the disparity in the social and physical environments.

It makes scientific sense to focus on social advantage and disadvantage—including not only socioeconomic factors but also potentially subtle, chronically stressful experiences related to our legacy of racial discrimination—as plausible contributors to black/white disparities in birth outcomes. Even without definitive proof of their role in birth-outcome disparities, there are compelling ethical and human rights reasons to direct our attention to eliminating the profound and longstanding differences in social conditions that still break down along lines of skin color.

A New Strategy to Spur Energy Innovation

The United States must confront the reality of its energy circumstances. Consumers and industry are facing the prospect of a continued rise in the real price of oil and natural gas as conventional reserves are depleted. The increased reliance of the United States and its partners on imported oil—a large proportion of which comes from the hostile and politically fragile Persian Gulf—is constraining the nation’s pursuit of important foreign policy objectives. At the same time, greenhouse gas emissions, especially carbon dioxide emissions from coal-fired electricity-generation plants, are contributing to dangerous global climate change. In the absence of an aggressive U.S. carbon-emission control policy, there is no possibility of an international agreement on greenhouse gas emissions that includes both developed countries and rapidly emerging ones such as China and India.

There is only one solution to the challenge: The United States must begin the long process of transforming its economy from one that is dependent on petroleum and high-emission coal-fired electricity to one that uses energy much more efficiently, develops alternative fuels, and switches to electricity generation that is low-carbon or carbon-free.

The benefits of such a transformation are indisputable: It would avoid unnecessary cost and disruption to the U.S. economy, protect the environment, and enhance national security. The United States has sought to adopt an effective and coherent energy policy since the first oil crisis of 1973, but it has failed to do so. The challenge for U.S. political leaders is to craft, fund, and diligently sustain a range of policy measures that will make this critical transition as certain, rapid, and cost-effective as possible.

In order to meet this challenge, the United States must undergo an innovation revolution. The rate at which the United States is able to develop and deploy new energy technologies will, to a great extent, determine the ultimate speed and cost of the economic transformation. Large-scale carbon capture and sequestration, advanced batteries, plug-in hybrid vehicle technologies, next-generation biofuels for the transportation sector, and a number of other innovations will be vital to achieving a low-carbon economy, and the United States must not only develop but deploy these technologies. The benefits of such innovation will accrue to other countries as well, for U.S. technical assistance programs and trade will carry these advances abroad.

Over the years, the U.S. government has spent more than $300 billion in direct expenditures on energy research, development, and demonstration (RD&D) that have been combined with a variety of indirect financial incentives such as tax credits, loan guarantees, guaranteed purchase, and even equity investments. In addition, the government has adopted a patchwork quilt of regulations designed to speed the adoption of various energy technologies.

Unfortunately, the resulting pace of innovation generated by this public investment has not been sufficient given the urgency and scale of today’s energy challenge. The various measures that the government has employed (including direct federal support for RD&D, indirect financial incentives, and mandatory regulations) have been developed and implemented individually, with too little regard for technological and economic reality and too much regard for regional and industry special interests. There has not been an integrated approach to energy technology innovation that encompasses priority areas of focus, the responsibilities of various funding agencies, and the mix of financial assistance measures that are available.

If the United States simply continues to pursue energy innovation as it has in the past, then the path to a low-carbon economy will be much longer and costlier than necessary. We propose a new approach for energy RD&D in the United States that will set in motion an innovation revolution by

  • Creating an interagency Energy Innovation Council to develop a multiyear National Energy RD&D strategy for the United States.
  • Increasing the energy RD&D program budget to more than twice its current level.
  • Launching a sustained and integrated energy R&D program in key areas.
  • Establishing an Energy Technology Corporation to manage demonstration projects.
  • Creating an energy technology career path within the civil service.

Songs of experience

Some important lessons can be gleaned from previous federal efforts to promote energy innovation through direct federal support, indirect financial incentives, and regulatory mandates.

Direct federal support. The Department of Energy (DOE) is the agency that provides the most financial support for energy RD&D. Yet many of the demonstration projects undertaken by DOE since the 1970s have not been successful. Prominent examples include the Clinch River Breeder Reactor in the early 1970s; DOE-managed large-scale synthetic fuel projects such as Solvent Refined Coal; surface and in-situ shale projects; the Barstow, California, Central Solar Power Tower; and the Beulah, North Dakota, Great Plains coal gasification project.

There are many reasons why these demonstration projects failed, but three shortcomings stand out. First, the projects were based on overly optimistic engineering estimates of technological readiness and cost. Some of these difficulties could have been averted if more time had been spent gathering data from small-scale engineering development projects and more attention had been paid to modeling and simulation of process performance and economics.

Second, some of the demonstration projects met predicted levels of technical performance, but the cost was so far above the then-prevailing market prices that the projects were market failures. This was a particular problem for synthetic fuel projects. It can be avoided only if there is a clear differentiation between those projects that are intended to demonstrate technical performance, cost, and environmental effects and those that are undertaken to increase production with federal assistance or in response to federal mandates.

Third, DOE business practices differed dramatically from commercial practices, and thus its project results were not credible demonstrations for private industry or investors. Tight DOE budgets caused projects to be funded inefficiently, which led to stretched schedules and increased capital costs. Budget pressure also invited cost-sharing requirements that were motivated by fiscal necessity rather than fair compensation for proprietary information. In addition, federal acquisition regulations, auditing, work rules, and project management contributed to cost overruns.

The underlying difficulty is that DOE and other government agencies are not equipped with the personnel or operational freedom that would permit them to pursue a first-of-a-kind project in a manner that convincingly demonstrates the economic prospects of a new technology. A different approach is needed.

The record of DOE in earlier-stage energy technology development is much stronger. DOE’s work has directly contributed to advancements in technologies ranging from simulation tools for coal-bed methane production to basic materials development for photovoltaics. Nevertheless, there are several areas, such as batteries, where progress has not met expectations despite significant DOE support.

Indirect financial incentives. Indirect financial incentives are measures such as loan guarantees, guaranteed purchase, tax credits, and equity investment that “pull” innovation by providing financial benefit for deploying a new technology. The indirect incentives have the advantage that they do not introduce government procedures into the development and innovation process, thereby allowing it to take place in a more fully commercial manner.

Indirect incentives are appropriate for the demonstration phase, when technology feasibility is established and commercial viability needs to be demonstrated in early deployment. Guaranteed purchase is often proposed as a way of buying down the unit cost of new technology (as, for instance, was the case with photovoltaic arrays). Loan guarantees and tax credits, meanwhile, are popular measures for early demonstration of large-scale clean coal technologies, such as integrated coal gasification combined cycle with carbon capture and sequestration, and of nuclear power plants. The 2005 Energy Policy Act contains significant indirect incentives of this type, but the technology demonstrations should be considerably broadened.

It is important to note that different measures create different incentives. Production tax credits (such as those for wind power) and guaranteed purchases spend government money only on projects that successfully produce a product, whereas loan guarantees are designed to protect the investor even if the project fails.

Of course, all indirect incentives are not equally sensible. For instance, the existing volumetric ethanol excise tax credit of $0.51 per gallon of ethanol is not the most economically efficient way to reduce U.S. dependence on imported oil. A better approach is to provide tax credits for cellulosic ethanol production, because that technology uses a less energy-intensive biomass feedstock than traditional ethanol production to yield the same liquid product.

Regulatory mandates. Regulatory mandates can significantly encourage innovation, by accident or by design, and the purposes and mechanisms involved vary widely. For example, the Environmental Protection Agency (EPA) mandates “best available control technology” and sets emission standards in order to force the adoption of new technology. This approach has proved successful in, among other things, reducing diesel emissions and criteria-pollutant emissions from power plants. Furthermore, in the early 1970s, when domestic oil production was under price controls, DOE and its predecessor agencies gave “entitlement” benefits for domestic production that used enhanced oil recovery techniques. This is an important example of how a regulatory incentive can result in the wide dissemination of an important energy technology.

The adoption of Corporate Average Fuel Economy standards is also widely viewed as a critical regulatory measure, given the political resistance to increased taxes on gasoline. In addition, many believe that government programs designed to encourage greater efficiency in appliances and buildings are effective, although the effects of higher energy prices and the new technology these higher prices encourage should not be overlooked.

Today, there is a particularly strong interest in using mandatory regulation to drive innovation, in part because of the strong political opposition to increased taxes for carbon emissions and gasoline use. For instance, Congress is now considering the use of a renewable portfolio standard for electricity generation and a renewable fuel standard for automobile fuels. Moreover, there have been situations in which such regulation has generated successful solutions to environmental problems, such as the EPA’s market-based cap-and-trade program for SO2 to address the threat of acid rain.

Regulatory mandates, however, lack the transparency and some other advantages of taxes. They must be carefully designed and coordinated at all levels to produce economically efficient results, and there are numerous instances in which poorly designed regulatory action has bred inefficiency. For example, states (and even localities) have found it necessary to adopt CO2 emission restrictions because the federal government has failed to do so, resulting in a flawed patchwork of regional emission controls rather than a more effective and comprehensive national standard. Ultimately, regulation is a tool that can accelerate innovation by serving as either a substitute for or complement to direct federal RD&D support, and policymakers must do far more to ensure that they strike the proper balance between them.

Federal flaws

The United States will not be able to achieve an innovation revolution until it addresses fundamental flaws in its approach to RD&D—flaws that cannot be repaired simply by increasing federal funding.

First, the current federal approach to innovation is based on a linear sequential process: research, exploratory development, engineering, system development, manufacturing, deployment, and logistic support. This model was developed (and has been used successfully) by the Department of Defense (DOD), but it is not well suited to today’s energy innovation challenge. The DOD’s primary RD&D objective is to create new technologies for its own use that meet set performance, schedule, and cost objectives. Although some of this research has applications in the private sector and is widely adopted, the DOD process is not designed specifically for broad commercial application.

Energy innovation, however, requires a market-driven rather than technology-driven approach to RD&D, because new energy technologies are only useful insofar as they are adopted and deployed by private industry. This requires that the government work closely with the private sector and environmental regulators to develop and demonstrate technologies that can be profitable given existing and anticipated market conditions and environmental standards. This also has the important benefit of creating some real assets, such as production facilities and intellectual property, that could enable the government to recoup a portion of its outlay.

Second, the RD&D efforts of the involved federal agencies are not properly designed to meet the interdisciplinary and cross-cutting challenge of energy innovation. Energy innovation requires coordinated and integrated progress on multiple fronts at multiple stages of development in areas ranging from genetic research on plants to the industrial design of refineries. The government’s fragmented approach reflects the prevailing RD&D model in which technology is developed to suit the needs of a single client (such as the agency overseeing it), and thus the related work and needs of other agencies are not adequately considered. Furthermore, there is no single governmental body responsible for harmonizing the disparate energy innovation efforts of DOE, the Department of Agriculture (USDA), Department of Commerce (DOC), National Science Foundation (NSF), the EPA, and others. The government must instead seek to reflect the trend in universities toward greater interaction and coordination among different fields of research. Until this happens, limited resources will continue to be allocated inefficiently, thereby slowing the energy innovation process.

Third, the government relies largely on traditional mechanisms, such as cost reimbursement for contracted work, for support of RD&D. From the Clinch River Breeder Reactor to today’s FutureGen coal power plant project, the federal government does not make adequate use of indirect innovation incentives such as guaranteed purchase, loan guarantees, and tax credits. This is a result of a lack of authority to use indirect financing and a lack of personnel qualified to design and manage these more complex financial assistance mechanisms. By relying on direct cost reimbursement, the federal government increases the risk that it will end up underwriting the development and demonstration of technologies that are not commercially viable, as was the case with the U.S. Synthetic Fuels program.

Fourth, the participation of the private sector in energy innovation is critical, yet the roles of the public and private sectors in joint RD&D projects have not been effectively defined. The most striking contrast with the government’s approach is the recent explosion of venture capital financing for startup energy companies, spurred by the sharp increase in oil and gas prices and by growing commercial interest.

The generation and distribution of energy are primarily private-sector activities in the United States and most other countries. Private energy concerns invest billions of dollars in capital infrastructure of all kinds, from power plants to transmission grids to refineries to pipelines. These private companies also invest large amounts of money in energy RD&D—more, indeed, than DOE itself does. In addition, the energy industry is increasing its efforts in innovation, whereas DOE has reduced its expenditures, in real terms, to less than one-half of the 1978 level. Clearly, if federal and private-sector efforts are complementary, progress will be faster and development costs will be lower.

Over the years, DOE has made many attempts to integrate industry and public RD&D efforts. A variety of mechanisms have been explored, including consortia, such as the Advanced Battery Consortium and the Partnership for a New Generation of Vehicles, and cooperation with industry associations, such as the Electric Power Research Institute and the Gas Research Institute.

The record of these efforts is mixed. Progress has been hampered by bureaucratic rules governing intellectual property, cost sharing, and access to government facilities, as well as by the different objectives of the government and industry in R&D. However, there have been some notable successes, especially when industry and the government jointly pursue efforts to develop basic technology for general use by employing DOE laboratory facilities such as the Sandia combustion facility and synchrotron light and high-flux neutron sources at several DOE labs. Congress can build on these successes and significantly improve government-industry RD&D collaboration by expanding the ability of DOE, NSF, and other federal agencies to make cooperative agreements with industry.

It is particularly important to foster effective government/industry collaboration on demonstration projects because the purpose of such projects is to establish commercial feasibility. Too often, the commercial potential of demonstration projects is obscured by the involvement of federal agencies and their restrictive federal procurement requirements, government-loan repayment procedures, and concerns about intellectual property rights. As a result, the demonstration fails to convince the market, and private industry does not get the information it needs to make investment decisions.

Fifth, although members of Congress have indeed proved willing to provide substantial funding for energy RD&D programs over the past three decades, they also have sought to influence the RD&D selection and development process in order to benefit their home districts. These pressures, in addition to the uncertainties surrounding the annual budget cycle, interfere with the energy RD&D process.

Sixth, successful innovation requires both the creation of new technology and the demonstration of technical performance, economic feasibility, and compliance with environmental regulations. The federal government has had considerable success in researching and developing new technologies; however, its record in the critical demonstration phase, in which the technology needs to prove its commercial value in order to be adopted by the private sector, is far weaker. The root cause of these deficiencies is that energy projects are selected and R&D is undertaken without sufficient consideration or understanding of the goals of the demonstration phase (the widespread adoption of technology by the private sector). Moreover, DOE and other federal agencies lack the requisite financial and policy tools to carry out demonstrations in a manner that is credible to private investors.

Keys to success

A successful energy RD&D program should contain the following elements:

  • There must be ample and sustained support for early-stage research and exploratory development. It is important that these early stages of the RD&D process are not neglected because of the budget demands of later-stage technology demonstrations, for it is here that many entirely new ideas with long-term relevance are generated. The research agenda must also be managed to ensure that it encompasses the full range of energy challenges that the United States faces, from supply to production to distribution to end use.
  • RD&D spans the spectrum from early-stage research that explores new technical opportunities to later-stage demonstration projects that often require considerable resources. For the government, therefore, there should be an intimate relationship between setting policy and establishing programs designed to stimulate innovation.
  • The decisionmaking process must be integrated so that cost, technical performance, and environmental impact are weighed at each stage of development.
  • From the outset, every program should have a multiyear plan that clearly establishes a role for the federal government, industry, universities, and laboratories. This will help to ensure sustained (and disciplined) support and project management.
  • All later-stage demonstration projects should be carried out on as close to commercial terms as possible in order to provide the private sector with the information it needs to make large investments in new energy technologies. This can best be achieved by using indirect financing methods and significantly easing federal procurement regulation.
  • There is opportunity for substantial international participation in selected energy RD&D projects. An important goal of many energy programs is to develop technologies that are attractive not only to U.S. companies but to foreign countries and investors as well. There is a wide range of mechanisms for international cooperation across the energy RD&D spectrum, and the United States should pursue new opportunities to coordinate the energy research efforts of countries around the world. Expensive long-term projects such as magnetic fusion energy attract significant international participation, as is the case with the $13 billion International Thermonuclear Experimental Reactor (ITER) project.

In the future, the greatest opportunity may well lie in transferring technology developed in the United States or other industrialized countries to rapidly emerging countries such as China and India. Such transfers could help to induce rapidly emerging countries to participate in a global regime to limit greenhouse gas emissions. The Joint Implementation and Clean Development Mechanisms created under the Kyoto Protocol are examples of such an approach. These mechanisms are currently restricted to carbon-mitigating technologies, but the transfer of a broader range of technologies, addressing renewable energy, biofuels, and energy efficiency, could also be envisioned. It is unlikely, however, that technology transfer alone will be sufficient to bridge the gap between how developed and developing countries control carbon emissions.

The proposal to establish within DOE an Advanced Research Projects Agency for Energy (ARPA-E) that is modeled on the Defense Advanced Research Projects Agency (DARPA) is intended to replicate many elements of the innovation model that has been successful for the DOD, but it is unlikely to have a similar transformative effect on the energy sector. The DARPA model is technology-driven, not demand-driven; the focus is on performance, not cost. In the DARPA model, industry is an R&D contractor paid on a cost-plus basis with no indirect financial incentive mechanisms to encourage industry to demonstrate the commercial feasibility of new technology.

In order to accelerate energy innovation in the United States, the following five steps should be taken:

Create a new interagency group, the Energy Innovation Council (EIC), responsible for developing a multiyear National Energy RD&D Strategy for the United States. The mandate of the EIC would be to construct a plan that integrates the RD&D programs of the involved federal agencies over a multiyear period. The RD&D program would include both direct expenditures to support technology development and indirect financial incentives or regulations that are intended to promote demonstration of the new technology.

The EIC would be housed in the Executive Office of the President and composed of representatives from each of the federal agencies involved in energy and energy-related environmental RD&D, including DOE, the EPA, USDA, the DOC, and NSF. The president would appoint a chairperson who would manage the affairs of the council and oversee the development of the national strategy. Examples of suitable EIC chairs are the director of the Office of Management and Budget (OMB), the national economic advisor, or the director of the Office of Science and Technology Policy (OSTP).

The National Energy RD&D Strategy should include program priorities, schedules, and resource requirements. Though federal agencies could and should undertake some energy-related work outside of this RD&D interagency program (such as in fundamental research), such endeavors should be limited in number and scope in order not to detract from the larger integrated RD&D effort.

In developing the strategy, the EIC would make use of sophisticated modeling and simulation tools, as well as relevant engineering and cost data. This will enable it to assess alternative technology pathways and make the necessary tradeoffs. An advisory group made up of individuals from a range of industries, universities, and public interest organizations should be appointed by the president to support the council. When completed, the National Energy RD&D Strategy would be submitted to Congress for its review and endorsement. This strategy could then serve as the basis of a five-year authorization and appropriation for energy RD&D programs.

Increase the national energy RD&D budget to at least twice today’s level. Even a well-designed RD&D program will not be able to achieve the necessary rate of innovation at the current level of funding. According to NSF, federal nondefense energy R&D has declined sharply in real terms from almost $7 billion (in 2000 dollars) in 1980 to about $1 billion in 2006. Although about $1.5 billion of this decline is explained by a change in accounting methodology in the late 1990s, the decline of energy R&D funding is striking. There have been increases in some areas of energy RD&D in the past year, but much greater resources are still required. The additional funding could come from a portion of the new revenue generated by a petroleum use tax, carbon-emission charge, or revenues from the sale of allowances in a cap-and-trade system.

The question is how much to allocate and to which agencies. To answer this question, one must know, among other things, the expenditures of the various agencies on energy RD&D. As uncompromising management specialists say, “If you cannot measure it, you cannot manage it.” But although we have much information on DOE’s RD&D spending, numerous important participating agencies—the DOC, USDA, NSF, the EPA, and the DOD—do not disaggregate their RD&D expenditures by application, making it impossible to get a complete and detailed budgetary picture.

In part this is due to a genuine problem with classification. For example, NSF expenditures on materials science or chemistry that are principally motivated by the objective of advancing the basic understanding of a disciplinary subject also may have important implications for energy (such as catalysis and materials for batteries), yet are not classified as such. However, it is also true that agencies are reluctant to report expenditures by application for fear that the OMB, the OSTP, or Congress may insist on a reallocation of the agency’s effort from its functional interest to broader national purposes.

For a few areas that it views as especially important or promising, the White House mounts a multiagency planning effort. One such initiative is the Climate Change Science Program/Global Change Research Program. With funding from the National Aeronautics and Space Administration, the DOE, NSF, the EPA, the DOC, USDA, the Department of the Interior, and others, this program receives a great deal of public and congressional attention, as it should. Its multiagency cross-cutting budget is also valuable for program analysis. In this case, its history of funding reveals erratic financial contributions from the numerous agencies involved, which indicates how difficult it is to maintain sustained funding for federal R&D efforts.

Although noteworthy, this climate program is much smaller in scope than a truly comprehensive energy plan, which would require managing all of the budgetary resources devoted to energy RD&D by all government agencies. Based on the available information, and in particular on the DOE budget, we believe that the comprehensive energy RD&D budget should be at least twice what it is today.

Launch a sustained and integrated energy R&D program. A robust technology base program has multiple purposes:

  • Discover and explore new ideas for energy supply and efficient end use. This research and exploratory development activity is less costly to pursue than commercial-stage demonstration projects.
  • Acquire scientific and engineering data that, when combined with modeling and simulation, provide a practical design base for deployment and scale-up. This implies much greater reliance on development at the process-development-unit scale, augmented with rigorous theory and analysis.
  • Construct and support the needed experimental facilities for the R&D program located at DOE laboratories, universities, and industry consortia.
  • Establish mechanisms for interaction between technology experts and those who design and operate demonstration projects. In many cases, early and consistent involvement of research specialists can solve technical issues that arise during project development. The innovation process is not one-directional.
  • Educate scientists and engineers for careers in the energy sector. Professional organizations such as the National Petroleum Council and the American Nuclear Society have noted the looming shortage of individuals with the technical skills needed for U.S. energy industries.

Energy efficiency, for instance, is one area that deserves greater research effort, as it is likely to yield important long-term and short-term payoffs. This new initiative on energy R&D should also embrace efforts at DOE and other agencies such as NSF, the EPA, and the DOC. Research is needed on nanoparticles to improve high-temperature ceramic materials and on basic separation technologies for hydrogen storage. Development efforts could be productive in fenestration, lighting, metering instruments, and advanced vehicles.

Create an Energy Technology Corporation (ETC) to manage demonstration projects. One of the recurring weaknesses in federal RD&D is the demonstration phase. Too often, this expensive stage in the energy innovation process is carried out in a manner that provides little useful information to the private sector.

What is needed is an ETC. This new semipublic organization, governed by an independent board of individuals nominated by the president and confirmed by the Senate, would have a single function: to finance and execute select large-scale demonstration projects in a manner that is commercially credible. To this end, the ETC should be composed of people who have expertise in areas where DOE officials traditionally have little experience: market forecasting, the use of indirect financing mechanisms, and industry requirements. Because it would not be a federal agency, the ETC would be free from the federal procurement regulations and mandated production targets that currently make it difficult to demonstrate a new technology’s commercial viability under real market conditions. In addition, the ETC would be funded in a single appropriation, which would reduce the influence of Congress and special interest groups on its decisionmaking. All of this makes the ETC uniquely suited to manage demonstration projects in a way that will accelerate the adoption of new technologies by private industry and, ultimately, the transformation of the U.S. economy.

There are many examples of demonstration projects that would dramatically improve the pace of energy innovation:

  • Cellulosic biomass–to–biofuels plants
  • Carbon sequestration
  • Integrated coal-fired electricity generation and CO2 capture
  • Smart electricity networks
  • Production of natural gas hydrates
  • Nuclear power projects based on the once-through fuel cycle
  • Superconducting transmission lines

The ETC we propose here differs fundamentally from proposals sometimes advanced for a new Manhattan or Apollo project for energy. The Manhattan and Apollo projects had solely technological purposes: the former to produce a nuclear weapon, the latter to put a human on the moon. The government was the only user of the output, there was no private market, and cost was not an object. In contrast, the ETC would be structured as a quasipublic corporation that operates in the manner of a private corporation embarked on an expensive first-of-a-kind technology deployment.

The ETC also differs from the industry-managed technology consortia that DOE has sponsored for a number of decades in an attempt to increase private-sector participation [such as the Partnership for a New Generation of Vehicles, the Advanced Battery Consortium, the Electric Power Research Institute, and the Gas Research Institute (now part of the Gas Technology Institute)]. In spite of some successes, the rate of innovation achieved through these consortia has not exceeded that of other RD&D models.

The ETC does resemble the U.S. Synthetic Fuels Corporation (SFC) that was established in 1980 for the purpose of reducing U.S. dependence on imported oil by producing synthetic gas and liquid fuel from coal, oil sands, and shale. Its mandate was to subsidize the construction of plants that would reach a target production level of 500,000 barrels per day by 1987. This production target was justified on the assumption that oil prices would double in the near future. In fact, prices fell by more than half, thereby rendering the enormously expensive SFC undertaking commercially unfeasible and making apparent the risks of funding demonstration projects that are designed to reach a fixed production level regardless of prevailing market conditions.

The essential difference between the ETC and SFC is that the ETC is exclusively concerned with demonstrating the operational and economic readiness of new technologies, whereas the SFC was concerned with achieving production targets without regard to the difference between production cost and market price. The ETC does, however, adopt the philosophy underlying the SFC’s structure (properly conceived at the time): that DOE and other energy-related government agencies do not have the flexibility, tools, and competence to execute successful large-scale projects that must operate in the private sector.

Create an energy technology career path within the civil service. The new approach to RD&D that we are proposing requires a new type of civil servant to implement it. Federal agencies must develop or recruit a set of specialists who have the technical, financial, and management skills to participate in the integrated effort needed for successful energy innovation. This will require establishing a new career path with a distinct set of rules covering compensation, conflicts of interest, and promotion. Initially, the cadre should be limited to approximately 200 individuals.

An important motivation for the creation of this elite career service is that energy innovation is intrinsically interdisciplinary, requiring the integration of a number of disciplines for a successful RD&D program. For example, biomass requires the involvement of individuals with expertise in plant biology, agronomy, chemical engineering, economics, and environmental science. International experience in the Department of State or U.S. Agency for International Development would also be valuable. A career service that provides the opportunity, or even the requirement, that an individual have experience in a number of different agencies will strengthen the capability of the country to manage energy innovation successfully.

The country desperately needs dedicated public servants who have the capability to manage the sophisticated and expensive energy innovation challenge ahead. Establishing an elite service has the additional benefit of attracting a new generation of specialists who have the requisite skills but currently do not see government service as a sufficiently rewarding or prestigious career path.

A Blind Man’s Guide to Energy Policy

The United States has seemingly reached a consensus that energy is a serious problem. Unfortunately, there is no consensus on the solution. Three major constituencies are dominating discussion of the problem, and each approaches the issue from a different viewpoint. The constituency that is worried about climate change sees profligate use of fossil fuel that has dramatically changed our atmosphere. The energy security group sees dangerous reliance on foreign oil held by countries hostile to the United States. The economic vitality group sees high energy prices and market volatility threatening the economy and the U.S. standard of living. These three are certainly not the only constituencies, but they are the three that define the public interest aspects of the current policy debate in the United States, and more important, they are each pushing an agenda that does not mesh with that of the others.

Like the proverbial blind men describing an elephant, the three major constituencies participating in the energy debate have vastly different perceptions of the problem. And just as with the blind men, although each perspective is accurate as far as it goes, it is only by merging the views together that one achieves a complete and useful understanding of the energy problem. We must begin by reviewing the details of the three perspectives.

Environmentalists focus on the scientific consensus that greenhouse gases are contributing to rapid climate change with potentially catastrophic consequences. The recently released Intergovernmental Panel on Climate Change (IPCC) report concludes with 90% certainty that global warming is influenced by anthropogenic activity. Many individual scientists assert that the magnitude of predicted climate changes presented by IPCC is scientifically conservative. Some suggest recent data show that CO2 emissions are rising faster than forecast; other research suggests that temperature change is more sensitive to CO2 emissions than assumed; still others say impacts (e.g., Arctic ice) are more dramatic per degree change than thought. These scientists and their supporters see evidence of impending catastrophes in decreasing water supply, extreme weather, sea level rise, disease-vector migration, ecosystem failure, agricultural declines, air quality degradation, increased wildfires, and a host of other undesirable impacts. This group believes that the world cannot afford to gamble that these predictions are wrong because the consequences of them being right are potentially devastating. Because CO2 is largely cumulative in the atmosphere, early action to reduce emissions will be much more effective than any future measures. This group perceives a desperate need to act this decade in order to avoid what could be a completely unmanageable situation in the future.

To this constituency, climate is first and foremost an ethical and moral issue that transcends short-term economic concerns. They speak of intergenerational equity and the need for physical limits on emissions. Choices that shrink humanity’s CO2 footprint are favored. Many in this group think the problem can be solved with efficiency, conservation, and renewable energy. This constituency encourages all people to make changes in their daily activities to move toward a more sustainable lifestyle. They are willing to accept a decline in the standard of living in the wealthy countries in order to obtain a more environmentally sound world.

The energy security constituency is concerned primarily with the geopolitics of oil and the nearly exclusive use of this fuel for transportation. They see the geopolitics of oil becoming increasingly dire and conflicts in the Middle East being driven by demand for oil. They quip that the Iraq war is the first war in which the United States is paying for both sides because it is the dollars used to pay for Middle East oil that are being used to support Islamic fundamentalism. In lighter moments, they ask, “How did our oil get under their sand?” They see oil and gas being used by Russia to exert political power over Europe. They are concerned that Venezuela, which is governed by U.S. critic Hugo Chavez, supplies 15% of U.S. oil imports. They see the vulnerability created by having 25% of the world’s oil pass through the Strait of Hormuz. In Africa, they note that conflicts in Nigeria over oil wealth and corruption have disrupted oil supplies. They worry that countries such as China are unwilling to join political action against oil-supplier countries such as Sudan, or, worse, are nurturing the next Sudan in Chad. They do not want to see the U.S. way of life threatened by countries and factions that are difficult or impossible for us to control.

This security constituency favors ending U.S. vulnerability by ending its “addiction to foreign oil.” This group thinks that there is no domestic source of energy that is bad; their mantra is “energy independence.” They favor rapid increase in domestic corn-based ethanol production, regardless of the consequences for food prices or the farm environment. They oppose importing ethanol from Brazil because it might discourage domestic production. They propose the production of liquid fuel from coal, a very expensive process that produces a fuel that has twice the carbon footprint per unit of energy as does oil. For example, coal interests have proposed legislation in the House that would provide federal financial support to coal-to-liquids manufacturers if the price of oil falls below $40/bbl. They want to expand domestic oil supplies even if this holds no promise for long-term security. They see clear and present danger in world conflict and wish to insulate the United States from this problem. They may even argue that these “expensive” initiatives are cheap in an opportunity cost sense when the real alternative is a defense budget that must pay for Iraq in all its dimensions.

The economic vitality group sees high prices for energy potentially strangling the economy and worries about interference in the market that would make prices even higher. They observe increasing international demand for oil occurring simultaneously with a peaking supply of light sweet crude. They see that higher prices drive new technology and increased production of oil, but this oil is heavier and more expensive to produce and refine. They worry that global demand will increase faster than supply, driving prices higher. With China, a country with little domestic oil, adding more than 200,000 cars per month to its roads, demand for imported oil will rise quickly and so will prices. The economic contingent also worries about disruptions to supply resulting from refinery fires, pipeline leaks, hurricanes, or terrorism. They worry that environmentally motivated standards such as renewable energy portfolio standards will decrease options and increase the cost of energy. They note that the spike in oil prices after Hurricane Katrina was alleviated by lowering environmental standards for gasoline and that California’s extremely rigorous environmental standards for gasoline create scarcities that are a major factor in driving up record prices at the pump.

This constituency wants expanded capacity and favors investing in more oil production, reservoirs, pipelines, and refineries. They would like to drill the North Slope in Alaska for oil and open more federal lands to exploration. They believe that the market should and will control the energy system, but they support federal incentives to investment in new energy supplies. They favor importing ethanol from Brazil without tariffs because this will lower the price. They are afraid of carbon caps because of possible economic side effects. The face-off between Representatives Nancy Pelosi and John Dingell over the proposed schedule for enacting climate change legislation is largely about Dingell’s worries about lost jobs and economic harm.

Sharing insights

As each of these groups tries to sell its vision to the public, it is being forced to confront the narrowness of that vision. Environmentalists are learning that climate change by itself may fail to gather broad enough support to achieve their goals. Although more and more people are becoming aware of the climate problem, many are still unsure of the need for dramatic action. These skeptics do not want to make choices that they think will lead to “shivering in the dark” for a climate problem that is intangible to them. So environmentalists have begun building coalitions with other constituencies. They have found broad support for state-level renewable energy portfolio standards (RPSs), which require a certain percentage of electricity to come from renewable sources, among political factions who want to reduce their state’s risk of an energy supply crisis or to promote economic development within their state. In Nevada, when the climate constituency tried to change the RPS to a low-carbon standard, which they found more directly beneficial to their cause, they lost the support of the economic and energy security constituencies, who saw no benefit. Environmentalists had to settle for a renewable energy standard in order to build a coalition and get something done. Environmentalists are also being forced to reconsider their objections to nuclear power because many people view it as a potentially large source of greenhouse gas-free energy. An example is Patrick Moore, a founder of Greenpeace, who is now promoting nuclear power as an important part of solving the climate problem. This constituency needs to factor in people’s fundamental desire to better their lives and increase their affluence while finding solutions that improve the environment.

The energy security constituency needs to face the reality that complete energy independence is a quixotic quest. They need to broaden their perspective to pursue energy resilience, which can be advanced by reducing energy demand and diversifying supplies as well as by boosting energy production. The United States uses about a quarter of the world’s energy; it imports 30% of all its energy and 50% of its oil. Eliminating imports is clearly out of the question for at least several decades and probably forever. Besides, exporting countries have a powerful motivation to keep selling oil to the United States. As prices rise with growing demand, they will increase exploration and look to new technology to help increase supply. U.S. consumers are going to want that oil, and all countries have an incentive to support global trade in principle.

The security constituency would do better to focus on controlling domestic demand for oil because demand is currently driving the price and making the country vulnerable to supply interruption. The gains in energy security from increased energy efficiency could be dramatically large compared to increasing domestic supply. Given that the United States imports half its oil, a 25% increase in the average fuel efficiency of vehicles could decrease imports by half. Finally, the security advocates have to face global political realities. U.S. allies and trading partners are making it clear that they will not let the United States ignore its responsibility to deal with carbon emissions. If the energy security constituency is willing to build coalitions and accept some responsibility for reducing carbon emissions, it can still be successful in its efforts to increase domestic energy supplies as well as supporting supply diversification and efficiency improvements that will also relieve some of the political pressure associated with being an energy importer.
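
As a rough sketch of that arithmetic (under the generous simplifying assumptions, introduced here only for illustration, that essentially all U.S. oil consumption goes to vehicle fuel and that any savings displace imports rather than domestic production): fuel use per mile scales as the inverse of fuel economy, so a 25% gain in miles per gallon cuts fuel use by

\[ 1 - \frac{1}{1.25} = 0.20, \]

and with imports \(I = 0.5\,C\), where \(C\) is total oil consumption, the resulting reduction in imports is

\[ \frac{\Delta I}{I} = \frac{0.20\,C}{0.50\,C} = 40\%, \]

which is of the order of the halving described above; the savings shrink proportionally to the extent that vehicles account for only part of oil use.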

The economic vitality constituency can learn that changing the energy system to meet climate and security needs could well be an economic stimulus. New industries are already springing up to meet the newly defined energy needs and are generating revenue and jobs. Countries such as Japan and the UK as well as states such as California are implementing policies aimed at making them leaders in new energy technology businesses. In California, venture capitalists were influential supporters of the enactment of AB32, the ambitious carbon cap law that mandates a return to 1990 levels of greenhouse gas emissions by 2020. These business people see opportunity for technology to address the climate problem and create wealth in the process. A study released by the University of California, Berkeley, last year projected that reducing greenhouse gas emissions in California would create 17,000 jobs and add $60 billion to the state gross domestic product by 2020. John Doerr, the venture capitalist who helped to start Google, has said, “Sustainable technologies are the next big thing … the mother of all markets,” and doubled the size of his investments in green technologies.

In early 2007, the executives of six major companies (Alcoa, BP, DuPont, Caterpillar, GE, and Duke Energy) spoke out in favor of carbon controls through cap and trade, which they felt could be imposed without economic harm and with economic opportunities if applied uniformly. These executives also see carbon caps coming and want a level playing field and known boundary conditions for business. They think they can make money and do the right thing for climate. The executives of major oil companies are also coming to share this opinion.

Common ground

How can these three constituencies find common solutions in the next decades? Choices in the first half of the 21st century will be affected by the 50-year or greater life spans of many parts of the energy system infrastructure and dominated by the use of existing technology, hopefully supplemented by urgently needed applied research. Long-term, the energy system of the second half of the century will likely be dominated by whatever emerges from the advanced research and development we must begin to do now to develop entirely new technology. The research to support this transformation is absolutely essential, but in the meantime it is imperative to act quickly with the tools at our disposal.

Let us examine two aspects of the energy system, electricity and individual transportation, to see how common ground might be found. All three constituencies can favor reducing demand for electricity through increasing energy efficiency. They can agree on an approach including education, regulation, and the development of policy and financial instruments to encourage conservation. For example, Congress is considering new lighting standards that would eliminate energy-gulping incandescent light bulbs. Estimates show that the shift to fluorescent light bulbs would save $18 billion in electricity costs every year and would reduce demand equivalent to that currently met by 80 coal-fired power plants. States such as New York are calling for more stringent building codes. The California Public Utility Commission has developed mechanisms that reward utilities for promoting energy efficiency.
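
A back-of-the-envelope check suggests the two figures are mutually consistent; the plant size, capacity factor, and electricity price used below are illustrative assumptions, not numbers from the cited estimates:

\[ 80 \times 500\,\text{MW} \times 8{,}760\,\text{h/yr} \times 0.7 \approx 245\,\text{TWh/yr}, \qquad 245\,\text{TWh/yr} \times \$0.08\text{–}\$0.10/\text{kWh} \approx \$20\text{–}25\ \text{billion per year}, \]

which is the same order of magnitude as the $18 billion in avoided electricity costs.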

All three constituencies can support increasing the use of renewable energy to produce electricity, provided that the economic constituency sees an opportunity to make money. Encouragingly, entrepreneurs are entering this market and changing the economic perspective. For example, Silicon Valley entrepreneurs are exploring the use of nanotechnology to produce flexible solar cells that will be easy to manufacture and will lower the cost from $10 to $1 per watt of capacity. About one-third of the states have renewable energy portfolio standards of one kind or another that will create a market for new renewable technologies, and Congress is considering adoption of a national standard.

Nuclear power is perfect from the point of view of energy security and climate, but it has formidable drawbacks. The high cost of new plants has been a barrier, but licensing reform is being implemented that should reduce the construction time and cost of new plants. Nuclear power is also burdened with concern about safety, waste management, and nuclear weapons proliferation. Although these are not central concerns of the three constituencies, they are issues that must be addressed cooperatively. The Bush administration’s Global Nuclear Energy Partnership program is one strategy to handle these concerns through reprocessing of spent fuel and careful management of the international fuel cycle.

Natural gas is the least harmful of all fossil fuels from a climate perspective, producing about half the CO2 per unit of energy as does coal. From the economic perspective, gas-fired electricity has an advantage in that the capital costs of building a gas generator are relatively small. However, this fuel is experiencing more price volatility than is oil or coal, and the largest reserves are in other countries. It is possible that we could face the same geopolitical problems with importing gas as we face with importing oil. Overcoming technical and economic obstacles to producing domestic gas could attract all three constituencies.
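
The “about half” comparison can be checked against commonly cited combustion emission factors; the figures used here are typical reference values rather than numbers from the text:

\[ \frac{\text{natural gas}}{\text{bituminous coal}} \approx \frac{53\ \text{kg CO}_2/\text{MMBtu}}{93\ \text{kg CO}_2/\text{MMBtu}} \approx 0.57, \]

so burning gas releases a bit more than half the CO2 of coal per unit of heat, and the advantage per kilowatt-hour is larger still because combined-cycle gas plants convert heat to electricity more efficiently than typical coal-fired units.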

Coal produces more CO2 per unit of energy than any other fossil fuel, making it perhaps the worst demon of the climate change constituency. However, coal is domestic and is the most abundant and inexpensive source of energy. If coal use is coupled with cost-effective carbon capture and geologic sequestration of CO2, this resource could be acceptable to all three constituencies. Carbon capture and storage (CCS) would allow use of this source while also addressing climate change and is arguably the most important technology to add to our current mix from the climate point of view. Since there are no direct energy benefits to sequestration, it will be viable only if carbon has value as a tradable allowance or if regulation limits allowable emissions. A strong CCS program would help to provide a lower-cost low-carbon energy system for the economic vitality constituency and remove obstructions to the use of the nation’s plentiful coal as part of an energy security strategy. It is not surprising that a suite of bills dealing with CCS is now moving through Congress with the idea of accelerating this important technology.

Meeting the needs of all three constituencies for individual transportation is quite difficult. The transportation problem has three parts: the vehicle, the fuel, and the vehicle miles traveled. Vehicle efficiency is of interest to all three constituencies, and they can support standards to increase vehicle mileage. It has been more than 30 years since the United States revised the Corporate Average Fuel Economy (CAFE) standards. Now the Senate is working on legislation that would mandate a CAFE average of 35 miles per gallon by 2020 for cars and light trucks, with further improvements of 4 percent per year after that. The constituencies might be willing to support policies similar to those in China, where the sales tax on gas-guzzler cars is 20%, but the tax on the most efficient cars is only 1%.

There are two major approaches to the fuel. The first is to use something besides oil to produce liquid fuels for mobility. The energy security constituency favors this approach as a first priority. However, some of these fuels will have little or no efficacy in addressing climate issues and can be quite expensive. These facts have motivated California to study a low-carbon fuel standard, and the European Union is following right behind them. The second is to use electricity instead of liquid fuel. From the climate perspective, this is not useful unless there is a complementary plan to produce the needed electricity without adding carbon to the atmosphere. Thus, meeting the needs of the three constituencies in the electricity sector could have the added benefit of helping to meet their needs in transportation.

Finally, the reduction of vehicle miles traveled (VMT) depends on the availability and attractiveness of public transportation and on land-use planning. The climate constituency generally understands the need to reduce VMT and supports solutions that have people live near where they work and commute by foot, bicycle, or public transit. The security constituency might note that these choices would reduce demand for oil imports. Either constituency might support policies such as those being considered in the UK to reduce VMT and congestion. The UK may construct a vast national system for monitoring driving and then billing drivers from one penny to one pound per mile traveled, depending on congestion. The economic constituency is going to be wary of any policy that imposes changes on individual lifestyle. They will be more supportive of efforts to build communities that are environmentally friendly and energy efficient if these communities also provide the attraction of a better quality of life.

Putting the pieces together

Although it is clear that the three constituencies could share common interests in sensible policies to promote efficiency, renewable energy, lower cost nuclear power, more domestic gas, coal with sequestration, new liquid fuels or electric cars, and bold new designs for urban living, it remains possible that any one of the constituencies on its own could drive bad choices. The current energy system is dominated by the economic constituency and is responsible for the problems we have now. If the climate constituency alone dominates future choices, we could choose solutions that require long-term financial subsidies and result in market inefficiencies and higher prices. If the security contingent dominates, we might pick solutions that have little effect on the climate problem (corn-based ethanol) or even increase greenhouse gas emissions significantly (coal-to-liquids). Interestingly, investors are now backing away from coal-to-liquids projects precisely because the associated emissions are causing economic uncertainty. Unless greenhouse gas emissions are curbed, rapid climate change is likely to have a widespread disruptive effect across the globe, particularly in the developing world. Stresses on water supplies, agriculture, fisheries, and the habitability of coastal land could leave millions hungry, homeless, desperate, and perhaps violent. Consider the mayhem that Hurricane Katrina created in the richest country in the world. Adapting to the consequences of climate change will push many poor countries to the edge of despair.

It is important to realize that finding solutions that work for two of these three viewpoints can lead to solving the problems of the third. If carbon reduction is made the organizing principle and the most cost-effective approaches are pursued, the result will also serve the purposes of the security group. The policy will aim to reduce energy demand through efficiency, do the research to make renewable energy less costly and more reliable, make carbon sequestration an economical option so that low-cost coal can be tapped, and perhaps make nuclear power a reasonable alternative. All of these actions will reduce dependence on oil imports and reduce the security risk from climate change. Solving the problem of reducing greenhouse gas emissions economically essentially solves the whole problem.

The case for the climate-plus-economic solution would be even more compelling if the security constituency would expand its focus beyond energy independence to include the risks to security from climate change. The fact that climate change will be a security problem has been highlighted recently in a report prepared by high-level military personnel titled National Security and the Threat of Climate Change and by a congressional action instructing the CIA and DOD to include security risks due to climate change in the next national intelligence estimate. This assessment will include pinpointing the regions at highest risk of humanitarian suffering, assessing the likelihood of wars erupting over diminishing water and other resources, and assessing the “direct physical threats to the United States posed by extreme weather events such as hurricanes.” Indeed, an understanding of the critical security implications of climate change could be the key to creating the consensus necessary for concerted action on energy policy.

Reaching this general consensus is only a first step. Important decisions will have to be made about the implementation of a carbon cap and trade system, the level of fuel efficiency standards, the details of building codes, R&D priorities, infrastructure investments, the ways to help developing countries, and a host of other details necessary to crafting an effective energy policy. But the nation will never reach the stage of optimizing policy if it fails to find the common thread in the concerns raised by the climate, security, and economic constituencies. This first step of building a foundation of agreement on the overall shape of energy policy is today’s essential task.

In his book The Open Society and Its Enemies, philosopher of science Karl Popper observes that “Instead of posing as prophets, we must become the makers of our fate.” That is exactly what we need to do with energy in the 21st century. We must become the makers of our energy, climate, economic, and security fate. Some of us are beginning to suspect that this challenge will demand new thinking and adaptability on a scale never managed before. Let us hope we have the wisdom and capacity to make one of the largest changes in human society ever required. Identifying the common ground for a broad societal vision of what needs to be done will provide us with a basis for hope.

Global Science Gaps Need Global Action

When it comes to the global state of science, technology, and innovation (ST&I), there’s more than one divide. Many readers of Issues in Science and Technology are familiar with the North-South divide between developed and developing countries—a divide that persists. But there’s another divide as well—a South-South divide—that is becoming increasingly pronounced within the developing world. The fact is that some developing countries are rapidly gaining strength in ST&I, whereas others continue to languish.

Yet, before considering what is happening in scientifically lagging developing countries, it might be helpful to provide a broad outline of the world of ST&I as it exists today. First, there are countries with strong ST&I capacity. This group of about 25 countries, consisting largely of countries that belong to the Organization for Economic Cooperation and Development (OECD), enjoys across-the-board strengths in all areas of science and technology (S&T) and has the capacity to transfer scientific and technological knowledge into products and services that boost their economies. Rich in ST&I, they are financially well off as well.

Second, there are countries with moderate ST&I capacity. This group of about 90 countries includes some of the largest countries in the developing world, among them China, India, and Brazil. But the list contains others as well: Argentina, Chile, Malaysia, Mexico, South Africa, and Turkey, to name just a few. It is a diverse group with wide-ranging capabilities. The majority have a degree of competence in a select number of fields. But broad pockets of weakness remain, and the quality of instruction and equipment in classrooms and laboratories, while improving, still often trails that found in countries with strong ST&I capacity.

The ability of these countries to bring their scientific and technical know-how to the marketplace is relatively weak, although recent indicators suggest that this transition is becoming less problematic in a few countries. In February 2007, for example, the World Intellectual Property Organization (WIPO) reported that although the United States still leads the world in patent applications, Asia is rapidly narrowing the gap. China filed nearly 4,000 patent applications in 2006, more than double the 2005 total. “New centers of innovation, particularly in northeast Asia, are emerging,” noted a WIPO official, “and this is transforming the geography of both the patent system and of future growth.”

That is the good news. The bad news is that there is a third category of countries marked by weak ST&I capacity. A survey conducted by the Academy of Sciences for the Developing World (TWAS) has identified 79 such countries, the majority of which are in sub-Saharan Africa and the Islamic region. These countries have very limited capacity in every field of S&T. They have poor teaching facilities, substandard laboratories, and scant ability to transfer their knowledge and know-how into products and services, especially products and services that can compete in the international marketplace. Researchers in these countries lack the capacity to participate in cutting-edge scientific endeavors, and many of their most promising young scientists migrate to other nations to pursue their careers. In the majority of these countries, there is minimal government support for ST&I. More generally, there is the absence of a culture of science.

Thus, the first and most significant challenge for international cooperation in ST&I is this: How can international cooperation help reduce the disparities among nations, particularly the disparities that exist between the scientifically stronger nations and the 79 countries that TWAS has identified as weak in ST&I?

Expanding the reach of ST&I to countries that have been largely left behind is one of the most critical problems of our time. But it is by no means the only one. The problems of sustainable well-being are increasingly complex and global in their dimensions. Yet the people who are most vulnerable to the risks posed by global assaults on the environment are often the most impoverished and marginalized people in the developing world.

In our interconnected world, which has become a truly global community thanks largely to the Internet and airline travel, no country can fully escape the acute problems that plague other nations. That is the message encapsulated in the Millennium Development Goals (MDGs) approved by member states of the United Nations in 2000. These goals set targets to address the world’s most pressing problems—problems that stand in the way of sustainable well-being in the developing world and that threaten peace and harmony everywhere: poverty, hunger, the spread of infectious diseases, poor education, gender inequality, and the lack of access to safe drinking water, sanitation, and energy.

To help make progress on all these fronts, the MDGs’ eighth goal calls for the creation of global partnerships that tap the collective talents of individuals and institutions in the developed and developing worlds. Experts agree that the MDGs have no chance of being met unless special attention is paid to problems of well-being (or should we say ill-being) that exist in Africa. More than 40% of all Africans do not have access to safe drinking water. Seventy percent do not have access to electricity. Twenty-five million Africans are infected with HIV, more than 60% of the world’s total. Ninety percent of the world’s malaria victims, numbering more than one million people each year, reside in Africa. Agriculture is the main source of sustenance and income for 70% of all Africans. Yet 30 million African children go to bed hungry every night.

Africa may be poor, but it is not small. It includes more than 20% of Earth’s landmass, an area larger than China, Europe, and the United States combined. And although Africa may be weak, it is home to nearly one billion people. Africa, in short, may be poor and weak, but it cannot be ignored. In many respects, the future of our planet lies with the future of Africa. Africa, simply put, is where global attention must be focused if we are to make progress in meeting the MDGs.

But that still leaves open the question of what tools must be summoned in our efforts to succeed. The fact is that the MDGs cannot be achieved without strong capacity to generate and use ST&I and without vigorous and sustained international partnerships to help build this capacity.

Other global issues, which affect the developed and developing world in equal measure, also carry growing significance. Global warming is at the top of this list. But in addition, there are issues related to energy security, access to adequate supplies of drinking water, and the overexploitation of natural resources such as fisheries and forests.

Consequently, the second major challenge is this: How can international collaboration in ST&I assist in solving urgent global problems facing the world today? Reducing the gap between rich and poor countries and ensuring that the most critical global issues are tackled with tools that only global ST&I can provide are daunting challenges that cannot be met unless a critical mass of well-trained scientists is present in all countries.

Today, experts estimate that more scientists who have been educated and trained in universities in sub-Saharan Africa have migrated to the United States than have remained in Africa. Experience has shown that brain drain cannot be stopped unless the most talented scientists find favorable working conditions in their homelands. Once a scientist has left and established roots in another country, it is difficult to lure him or her back home, although China, South Korea, and Taiwan have been exceptions to this rule. Yet, as Rajiv Gandhi, the eldest son of Indira Gandhi and former prime minister of India, once noted: “Better brain drain than brain in the drain.” Experience has also shown that a nation’s scientific diaspora can be tapped through international scientific exchange in ways that benefit both the scientists’ home and adopted countries.

So the third challenge for international cooperation in ST&I is this: How can global cooperation assist in converting the brain drain into brain circulation, providing benefits for both scientists and the scientific community regardless of where a scientist was born and where he or she chooses to live and work?

Science is a global enterprise, and excellence in science has always depended on the ability of scientists to associate freely with their colleagues around the world. Such movement not only benefits international science but also serves to deepen international understanding and appreciation of cultural diversity—a welcome byproduct in today’s troubled world. Yet as we all know, the free circulation of scientists, especially to the United States, has been severely restricted since the terrorist attacks in New York City and Washington, DC, on September 11, 2001.

The scientific community fully recognizes that security interests take precedence over scientific exchange. Nevertheless, it also recognizes that scientific exchange is an important instrument in the fight against ignorance, suspicion, hopelessness, and terrorism. The U.S. State Department, urged by the U.S. National Academy of Sciences and others, has taken steps to ease the burden of entry into the United States for scientists traveling from abroad. But many of our colleagues, particularly those from Africa and the Islamic region, hope that more can be done. Governments in the developing world are also discussing, and in some cases implementing, strategies to facilitate foreign travel by their scientists. For example, earlier this year the foreign ministers of the African Union (AU) endorsed a proposal to grant diplomatic passports to African scientists to ease their travel across Africa.

Although individual scientists from the developing world would benefit directly from these measures, no country would benefit more than the United States. Despite its inhospitable attitudes of the past few years, the United States remains the destination of choice for the most talented students and scientists from the developing world. As critics of the policy within the United States have noted: Many of the nation’s top graduate programs in science and engineering would be severely handicapped if foreign students stayed home. It is also worth pointing out that nearly half of all U.S. Nobel laureates since 1990 are foreign-born.

Therefore, the fourth major challenge is: How can the global scientific community persuade governments, especially the United States, to ease visa problems faced by scientists from the developing world and particularly those from the most impoverished and troubled regions of the developing world?

“Information wants to be free” is the clarion call of those of us who promote its free exchange. But what we often fail to emphasize is that information—that is, quality information—is expensive to produce. In recent years, the Internet and other forms of electronic communication have revolutionized the way in which scientific information is distributed and, increasingly, reviewed, edited, and published. These trends have had an enormously positive impact on global science. Never before have scientists in the developing world enjoyed access to such an extensive amount of current information. Never before have scientists been able to communicate so easily and directly with their colleagues in other parts of the world. And never before has international scientific collaboration been so easy to plan, organize, and implement.

But critical issues remain. Developing countries, particularly the poorest developing countries, often do not have sufficient resources and expertise to build and maintain up-to-date electronic communications systems. Broadband Internet connections are still rare in much of the developing world, and even online subscription rates are too high for many developing-world scientists to have access to the most current literature.

So the fifth challenge is: How does the global scientific community help ensure that scientists in all nations have electronic access to the new information and communication technologies and to the most current scientific literature?

The challenges for international cooperation in ST&I for sustainable well-being are many. I have just touched on the most significant ones. Now I would like to turn to the bright side of the equation: the opportunities for international cooperation, which are no less numerous and no less significant than the challenges. In some cases, they are one and the same.

There are new fields of science and new cutting-edge technologies that promise to have extraordinary impacts on global well-being.

  • Information and communication technologies (ICTs) are not just highly specialized fields in their own right but also enabling forces that help to advance all fields of S&T. ICTs, in fact, have led to a melding of fundamental and experimental research through the facilitation of mathematical modeling.
  • Biotechnology is having a strong impact on agriculture, public health, medicine, and environmental science, transforming each in new and unexpected ways.
  • Nanotechnology promises to revolutionize materials science; to bring physics, biology, and chemistry closer together; and ultimately to have broad-ranging implications in a variety of critical areas, including water, energy, human health, and the environment.
  • Space S&T help us to monitor environmental change (for example, assessing rates of deforestation and desertification) and devise effective responses to a host of ecological problems.

Several developing countries, especially those with growing scientific and technological capabilities, have been eager to embrace and pursue these new technologies. China and Brazil, for example, have partnered on a joint initiative leading to the launch of two satellites designed to chart land and ocean resources. Two more satellites are planned for 2008. Nigeria launched two remote-sensing satellites earlier this decade, and this May it launched its first communications satellite, in collaboration with China.

China is investing substantial sums of money in nanoscience and nanotechnology. That investment is paying off handsomely in terms of publications. In fact, a recent survey found that in 2004 Chinese scientists published the largest number of papers on nanotechnology in international peer-reviewed journals, exceeding the number of papers published by scientists in the United States. Brazil, India, and South Africa are also making substantial investments in nanotechnology.

India’s investment in ICTs is well known. The nation now enjoys world-class status in this field and is home to a number of corporations that rank among the largest and most influential in the world, including Infosys, Wipro, and Tata Consultancy Services. Pakistan, Brazil, Malaysia, South Africa, and many other developing countries have invested enormous resources in the development and expansion of ICTs. And let us not forget that South Korea, a nation that in 1962 had a gross domestic product (GDP) of just $2.3 billion (comparable to that of Uganda), embraced information technologies as one of the key sectors in its plans for long-term sustainable growth, first with telephony technologies and more recently with the Internet. Today, South Korea’s GDP exceeds $765 billion and ranks 11th in the world.
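
To convey the scale of that trajectory, here is a small illustrative calculation (mine, not the article’s) using only the two South Korean GDP figures cited above; the 45-year span and the use of nominal dollars are assumptions.

  # Illustrative sketch of the growth implied by the two GDP figures cited above.
  # The 45-year span (1962 to roughly 2007) and the use of nominal dollars are
  # assumptions; the point is only the order of magnitude.
  gdp_1962 = 2.3e9      # dollars, "comparable to that of Uganda"
  gdp_2007 = 765e9      # dollars, "ranks 11th in the world"
  years = 2007 - 1962

  multiple = gdp_2007 / gdp_1962                # roughly a 330-fold increase
  annual_growth = multiple ** (1 / years) - 1   # implied average annual growth

  print(f"Growth multiple: {multiple:.0f}x")
  print(f"Implied average annual (nominal) growth: {annual_growth:.1%}")  # about 14% per year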

Developing countries have also taken significant steps in joining the global biotechnology research community. Malaysia, for example, has embarked on a broad-based biotechnology program to increase national wealth and improve the well-being of its citizens. China has made biotechnology a top priority, launching five large biotechnology research centers. In Africa, Nigeria has developed and is now implementing a national biotechnology policy, and Ghana has drafted a biosafety law that is now awaiting legislative approval. Governments across Africa have acknowledged the need to develop capacity in biotechnology and are now trying to match their rhetoric with action.

All of this adds up to new opportunities for international cooperation in ST&I, opportunities that hold the promise of advancing both science and sustainable well-being across the globe.

Science for the sake of science is no longer sufficient justification for doing science in many parts of the world where budgets are limited. Today, increasing attention is being paid to creating organizations and even disciplines that focus on the complex interactions between human and environmental systems. We have seen this effort unfold in a series of United Nations conferences during the 1980s and 1990s, culminating in the World Summit on Sustainable Development held in Johannesburg in 2002. And we have seen it in the creation of an international project aimed at linking knowledge to action: the Initiative for Science and Technology for Sustainability (ISTS) at the Kennedy School of Government at Harvard University. TWAS is delighted to be a partner in these efforts, joining the U.S. National Academy of Sciences, the American Association for the Advancement of Science, and many other research institutions in the developed and developing world.

ISTS has done an excellent job in articulating the principles of sustainability science and of raising the profile of this concept in the scientific and development communities. It has done an equally impressive job of highlighting examples of sustainability science and creating a broad conceptual framework for understanding why certain institutions devoted to science-based sustainable development succeed, whereas others do not.

In addition to the initiatives in the mid-level nations mentioned above, political leaders in the poorest countries with limited scientific and technological capabilities are also making increasing commitments to R&D and to regional cooperation in S&T. For example, at the AU Summit in January 2007 in Addis Ababa, Ethiopia, African leaders discussed regional strategies for the promotion of S&T. They announced that 2007 would be the year of “African scientific innovation.” Africa’s leaders have expressed support for S&T in the past, but the meetings were followed by meager results and ultimately disappointment. This time the level of commitment and enthusiasm is different. And this time the results could well be different.

Leaders at the AU Summit strongly recommended that each African country spend at least 1% of its GDP on S&T. Previous pledges to increase S&T spending have not been realized, but prospects are better this time. In fact, several African nations, most notably those that have also embraced democracy and good governance (including Ghana, Kenya, Nigeria, Rwanda, South Africa, Tanzania, and Zambia), have substantially increased their investments in S&T.

The government of Nigeria, for example, has provided $5 million to launch an endowment fund for the African Academy of Sciences. Nigeria has also announced plans to launch its own national science foundation, modeled after the U.S. National Science Foundation. It has pledged $5 billion to the foundation’s endowment fund, money that is to be derived from revenues generated by the nation’s oil and gas industries. Only one nation in sub-Saharan Africa, South Africa, currently has a national science foundation.

At the AU Summit, the president of Rwanda, Paul Kagame, announced that his country has dramatically boosted expenditures on S&T from less than 0.5% of GDP a few years ago to 1.6% today. He also publicly committed his nation to increase investments in S&T to 3% of GDP within the next five years. That would make Rwanda’s investment in S&T, percentage-wise, comparable to that of South Korea and higher than that of most developed countries. A nation teetering on collapse less than a decade ago and still living in the shadow of genocide has embarked on a path leading to science-based sustainable development. Rwanda remains poor, but it is no longer hopelessly poor.

Last year, Uganda received a $25 million loan from the World Bank to support S&T within the country and the creation of centers of scientific excellence that will serve not only Uganda but also the entire region. The loan was awarded in part because of Uganda’s successful efforts to build its own scientific and technological capacities, particularly in the fields of public health and agricultural science.

This year, Zambia received a $30 million loan from the African Development Bank to support teaching and research at the University of Zambia and to provide postgraduate fellowships to some 300 students majoring in science and engineering. At the AU Summit, the president of Zambia, Levy Patrick Mwanawasa, proclaimed that building capacity in S&T is the only way to develop his country.

The president of Malawi, Bingu wa Mutharika, who heads one of the region’s poorest countries, acknowledged at the AU Summit that building scientific and technological capacity provides the only sure way to break the long-standing cycle of extreme poverty that has gripped the African continent for decades. “We have depended on donor countries for scientific development for so long,” he noted. “It is time we commit more resources in our national budget to advance S&T.” He urged his minister of finance to make S&T a budget priority and to provide additional funds for this effort on a sustained basis. He also pledged to create international centers of excellence in the fields of hydrology and biotechnology.

What makes the prospects for international cooperation in S&T for sustainable well-being so promising, even (or perhaps especially) when it comes to Africa, is that the global scientific community will not be acting alone in this effort. Over the past several years, there have been increasing commitments by governments in the developed world, and particularly in G8 countries, to support ST&I in low-income countries and especially in Africa.

In 2005, the Commission for Africa report Our Common Interest, commissioned by UK Prime Minister Tony Blair and released in the run-up to the G8 Summit in Gleneagles, Scotland, called on G8 countries to provide $5 billion to help rebuild Africa’s universities. The report also called for investing an additional $3 billion to help establish centers of scientific excellence in Africa. The G8 member countries unanimously pledged to support these recommendations, a decision that was greeted with enthusiasm in Africa and throughout much of the world.

Yet to date, G8 member countries have officially authorized only $160 million of support, targeted for the creation of networks of centers of excellence proposed by the AU’s New Partnership for Africa’s Development. Equally distressing, little of this money has actually been transferred to Africa. The international scientific community has an important stake in the success of this initiative, and it must continue to urge the G8 countries to fulfil the pledges that they made in Gleneagles.

The World Bank, through the Science Initiative Group, headquartered at the Institute for Advanced Study in Princeton, New Jersey, has provided loans for the creation of Millennium Science Institutes in Brazil, Chile, Turkey, and Uganda. The institutes offer scientists from developing countries an opportunity to conduct world-class research and to pursue cooperative projects with colleagues in a broad range of scientific fields. Several foundations have supported projects in science-poor countries that emphasize scientific and technological capacity building. Many of these efforts have focused on education and training for young scientists in the world’s least developed countries.

Rising levels of scientific excellence in developing countries, most notably Brazil, China, India, and South Africa, have opened new opportunities for South-South collaboration in education and research.

  • For example, agreements have been signed between TWAS and the governments of Brazil, China, India, and Pakistan providing more than 250 scholarships a year to graduate students and postgraduate researchers from poor developing countries to attend universities in the donor countries. TWAS pays for the plane ticket. The host countries pay for all other expenses, including housing and living costs. This is the largest South-South fellowship program in the world.
  • Brazil’s pro-Africa program supports scientific and technological capacity building in sub-Saharan Africa and especially in the Portuguese-speaking countries of Angola and Mozambique. The program includes research collaboration activities with Brazilian institutions.
  • China’s Development Fund for Africa, approved in 2006, will provide $5 billion over the next five years to assist African countries to achieve the MDGs through cooperation with China.
  • The joint Brazil, India, and Senegal Biofuels project in Senegal will seek to transfer Brazil and India’s expertise in the development of biofuels to one of Africa’s most scientifically proficient nations.
  • And the India, Brazil, and South Africa tripartite initiative has recently agreed to launch a joint S&T program that will fund collaborative problem-solving projects focused on developing products with commercial value.

What does all this rush of activity add up to? Is it just another episode of fleeting interest in countries and people that have been left behind? Or are we entering a new era marked by sustained investments in ST&I, not just in the developed world but increasingly in the developing world as well?

I believe that we have more reason for optimism than cynicism and that we may indeed be witnessing the beginning of a transformational moment in global science and science-based sustainable development. But for us to seize this moment, we need to develop and implement an action agenda designed to sustain and expand international cooperation in ST&I.

The Intergovernmental Panel on Climate Change, when issuing its summary for policymakers in February 2007, proclaimed that we had reached a “tipping point” in our understanding of climate change. As Susan Solomon and other scientists who participated in this sterling example of international cooperation in science noted, the warming of the climate system is now “unequivocal,” and it is more than 90% certain that human activities are responsible for most of the observed rise in average global temperatures.

We have reached another tipping point as well. It has to do with the growing capabilities in S&T across the globe. These capabilities are rapidly transforming our existing bipolar world of S&T, previously anchored in the United States and Europe, into a multipolar world of science marked by the growing capabilities of Brazil, China, India, Malaysia, South Africa, Turkey, and others.

As the list of developing countries gaining strength in S&T grows in the coming years, the key question is this: Will just a handful of additional countries become scientifically strong while the rest are left behind? Or will international cooperation in S&T help bring all countries into the fold, ultimately transforming science-based sustainable well-being into a truly global phenomenon?

The answer to this question lies, in part, in how the international scientific community responds to the challenges and opportunities before it. The chances for success have rarely been brighter. The consequences of neglect and indifference have rarely been more troubling. The international scientific community should seize this moment. If we do not, it could well fade into history as an opportunity that we, as both scientists and citizens, could ill afford to lose.

From the Hill – Winter 2008

Fiscal year 2008 R&D funding levels on hold

The federal government’s fiscal year (FY) 2008 began on October 1, but most agencies are still operating under a continuing resolution extending funding at 2007 levels through December 14. Congress would like to spend $23 billion more on domestic programs than the president’s request, but President Bush has threatened—and made good on the threat—to veto any appropriations bill that exceeds his request. Efforts are under way to split the difference between the two spending goals, but it remains to be seen whether bills at that level would be acceptable to the president.

Congress finalized and the president signed a FY 2008 Department of Defense (DOD) budget with $77.8 billion for R&D, a 0.5% decline from the current year. The bill includes $13 billion for science and technology programs (down 7%) but a 3% boost for basic research to $1.6 billion. The primary reason why DOD R&D would decline for the first time in more than a decade is that Congress has not yet considered a 2008 supplemental war funding package that now approaches $190 billion for the DOD portion, including $3.9 billion for R&D (nearly all for development). Because Congress is expected to approve most of the supplemental request intact in early 2008, the final DOD R&D total for 2008 is likely to show another large increase.

House and Senate appropriators were able to iron out differences in the Labor–Health and Human Services–Education appropriations bill that would have given the National Institutes of Health a 3.6% increase to $30.2 billion. But on November 13, President Bush vetoed the appropriation because it exceeded his budget request. The House attempted to override his veto two days later but fell several votes short.

Appropriators reached agreement on a Transportation/Housing and Urban Development appropriations conference report containing a 7% increase for R&D in the Department of Transportation. The House quickly approved the bill, but the Senate postponed its final vote until December.

Before the budget stalemate began, Congress had planned to add billions of dollars to the proposed budgets for federal R&D. Both the House and Senate endorsed large proposed increases for select physical sciences agencies as part of the president’s American Competitiveness Initiative and would continue to support administration plans to expand investments in new human spacecraft. But instead of cutting funding for other R&D programs as the president requested, the House and the Senate would provide increases to every major nondefense R&D funding agency and would turn proposed cuts into significant increases for the congressional priorities of biomedical research, environmental research (particularly climate change research), and energy R&D. Those proposed increases are now up in the air.

Expansion of FDA oversight power sought

In the wake of high-profile problems with drugs such as Avandia and Vioxx as well as concerns about the safety of imported food, members of Congress are continuing to push for an expansion of the Food and Drug Administration’s (FDA’s) oversight authority.

In September 2007, Congress approved and the president signed a major FDA reauthorization bill that gives the agency new powers to regulate prescription drug safety, enabling it to require pharmaceutical companies to conduct postmarket safety studies or to change the information on product labels. But senior members of Congress, including House Energy and Commerce Chairman John Dingell (D-MI) and Senate Finance Committee Ranking Member Charles Grassley (R-IA), have continued to call for an overhaul of federal drug and food safety practices.

In September, Energy and Commerce Committee Democrats proposed a bill that would impose user fees that would provide the FDA with more funding for inspecting food imports, give the agency authority to recall food products, and limit entry points for food imports to those near the FDA’s 13 inspection labs. The user-fee and port-limit proposals have drawn fire from some industry stakeholders.

In October, the committee released a report about China’s food inspection practices that Dingell said was “first-hand confirmation that food from China presents a clear and present danger to Americans under the current conditions of import.” The report indicated that China’s certification process is inconsistent. In addition, the FDA does not recognize that process, meaning that firms that don’t earn the certification can still export to the United States.

In November, a White House panel released an import safety plan that would give the FDA recall authority, though it cautioned that the U.S. government cannot “inspect its way to safety.” At the same time, the FDA announced its own food safety plan that seeks to focus its resources on the riskiest areas.

Rep. Rosa DeLauro (D-CT) and Senate Majority Whip Dick Durbin (D-IL) plan to take a different tack next year by introducing a bill that would split the FDA into two agencies: one with jurisdiction over food and the other over drugs and medical devices.

The FDA inspects only about 1% of imported food, although it electronically scans all of it. The share of imported drugs that is inspected is also in the single digits. A recent Government Accountability Office (GAO) report discussed at a November 1 Energy and Commerce subcommittee hearing estimated that the FDA inspects approximately 7% of the foreign pharmaceutical manufacturers that export goods to the United States in a given year. Although the agency must inspect domestic drug plants every two years, there is no requirement for inspecting foreign facilities, and the FDA lacks a dedicated overseas inspection staff as well as an adequate tracking system. Causing further consternation to committee members was the fact that the GAO examined the issue and came to some similar conclusions nearly 10 years ago. Dingell called it a case of “déjà vu.”

FDA Commissioner Andrew von Eschenbach said that the agency is taking steps to improve information flow, including meeting with Chinese government officials. He has also put the brakes on a proposal, strongly criticized by some members of Congress, to close half of the agency’s inspection labs.

Another oversight issue for the FDA involves a bill passed in August by the Senate Health, Education, Labor and Pensions Committee that would authorize the agency to regulate tobacco. Von Eschenbach sent testimony to the House Energy and Commerce Committee opposing the legislation on the grounds that the already stretched-thin agency is geared toward promoting health, not overseeing a harmful product.

An Institute of Medicine panel came to a different conclusion, however, saying in May 2007 that the FDA is the best federal agency to deal with tobacco. The bill’s supporters include the strange bedfellows of antismoking groups and the tobacco giant Philip Morris, which, as the market leader, would stand to benefit from overarching limits on tobacco advertising.

Bill to promote electronic health records proposed

Arguing that the use of electronic health records (EHRs) is a necessary first step toward more comprehensive use of information technology (IT) in health care, the House Committee on Science and Technology on October 24 passed a bill (H.R. 2406) supporting efforts toward creating a national interoperable system for EHRs. Introduced by Chairman Bart Gordon (D-TN), the bill authorizes $8 million annually for two years to expand IT initiatives at the National Institute of Standards and Technology (NIST).

H.R. 2406 directs the National High-Performance Computing Program to coordinate federal R&D programs in health IT and requires NIST, in consultation with the National Science Foundation (NSF), to establish a university grant program for multidisciplinary research in health IT, with an emphasis on promoting collaborations with for-profit and nonprofit entities.

The bill would also require NIST to create a Healthcare Information Enterprise Integration Initiative to deal with major concerns regarding a national system of EHRs, including interoperability, privacy, security, and the specification of standards for technology. The bill would expand NIST’s authority to work with the user and technology communities to support interoperability analyses, along with the development of standards and software conformance and certification protocols. The bill requires the establishment of a Senior Interagency Council on Federal Healthcare Information Technology Infrastructure to coordinate the development and deployment of health IT systems by federal departments and agencies.

Proponents claim that a national system for health records can increase efficiency and reduce error. With current systems, the health information of one patient is often scattered among various providers, making it difficult to construct a complete medical history, especially in the case of an emergency or for elderly individuals. EHRs would make it possible to streamline administrative tasks for providers and patients and avoid adverse drug interactions arising from incomplete medical information. EHRs could also potentially halve the number of medical tests performed, because duplicate tests due to an inability to access previous results account for 49% of the clinical diagnostic tests performed. In addition, separate studies by RAND and the Center for Information Technology Leadership both say that a national network could reduce U.S. spending on health care by 5% annually.
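
The arithmetic behind the claim that EHRs could roughly halve test volumes is straightforward; the short sketch below uses only the 49% figure quoted above and assumes, for illustration, that every such duplicate test would be avoided.

  # Illustrative check of the duplicate-test arithmetic cited above. Assumes the
  # 49% figure means that share of all clinical diagnostic tests are repeats
  # ordered only because earlier results could not be accessed, and that access
  # to EHRs would eliminate all of them (an optimistic simplification).
  duplicate_share = 0.49

  remaining_share = 1.0 - duplicate_share   # tests still needed if duplicates vanish
  print(f"Remaining test volume: {remaining_share:.0%} of today's total")
  # 51% of current volume, i.e., roughly half, which is the basis of the claim
  # that EHRs "could potentially halve the number of medical tests performed."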

In statements on the legislation, Gordon argued that the federal government should serve as a model in the field of health IT. However, witnesses at a hearing held by a subcommittee of the House Committee on Oversight and Government Reform argued that the federal government should do much more than act as a model; it should provide funding incentives for the adoption of health IT. Reports indicate that financing is one of the largest barriers to the implementation of EHRs, because doctors bear 80% of the cost burden in the form of equipment, software, training, and support for the systems, whereas they receive only 20% of the cost benefits.

Legislation would boost support for women in science

On September 10, 2007, Rep. Eddie Bernice Johnson (D-TX) introduced the Gender Bias Elimination Act of 2007 (H.R. 3514), which would implement many of the recommendations of the 2006 National Academies report Beyond Bias and Barriers: Fulfilling the Potential of Women in Academic Science and Engineering. The report asserted that the slower advancement of women in academic science is largely a result of its culture and structure.

Johnson’s bill takes almost verbatim the recommendations of the National Academies report, requiring federal granting agencies to provide mandatory workshops for department chairs, members of grant review boards, and agency program officers about methods to minimize gender bias. The bill also demands that agencies enforce nondiscrimination laws and conduct compliance reviews at universities as well as collect and publish data on the demographics and funding outcomes of all grant applications.

The report’s ideas were echoed in an October hearing on Women in Academic Science and Engineering held by the House Committee on Science and Technology’s Subcommittee on Research and Science Education. Chairman Brian Baird (D-WA) and Ranking Member Vernon Ehlers (R-MI), along with several witnesses, argued not only that scientific departments are often unwelcoming environments for women but also that the criteria used for advancement in these fields do not reward work, such as the support and mentoring of younger scientists, that is disproportionately provided by women.

Amid recommendations for reforming the scientific system, the NSF ADVANCE program received praise at the hearing as a model for encouraging institutional transformation. The program aims to enable the full participation of women in academic science and engineering by providing grants for comprehensive programs to facilitate institution-wide change, as well as awards that support the analysis, adaptation, and dissemination of practices for increasing the representation of women in these fields.

Freeman Hrabowski, president of the University of Maryland, Baltimore County, testified that the ADVANCE program should be expanded and the lessons learned through its grants should be applied at other institutions. Donna Shalala, president of the University of Miami and chair of the panel that produced the National Academies report, called for similar programs to be put in place at the National Institutes of Health and other funding agencies.

One recommendation of the Academies report that was not included in the legislation was allowing grant money to be used for dependent-care costs, an issue that Shalala continued to press in her subcommittee testimony. Shalala also took the recommendation for compliance reviews a step further by advocating the establishment of a regulatory body that would hold universities accountable for Title IX compliance in academia, as the National Collegiate Athletic Association does in athletics.

Even as Congress moves toward addressing the issues raised by the National Academies report, not all parties agree that biases against women in science exist. A conference held last month by the American Enterprise Institute questioned the evidence for such biases, examining alternative explanations for the underrepresentation of women in the sciences, such as sex differences in aptitude or interest in the subjects.

Climate Change Science Program under fire

As discussions in Congress shift from debating the causes of climate change to examining solutions to address it, increasing attention is being paid to the research that supports these decisions. In particular, the Climate Change Science Program (CCSP), which funds approximately $1.5 billion in R&D in 13 government agencies, has come under scrutiny, and several efforts are under way to refocus its research portfolio to emphasize information relevant to policymakers.

A National Research Council (NRC) report, Evaluating Progress of the U.S. Climate Change Science Program: Methods and Preliminary Results, captures many of the issues raised by members of Congress and other stakeholders. The NRC report found that the research program has been successful in identifying and attributing global temperature trends and their corresponding environmental effects. But the report notes that the program has been less successful in understanding local temperature trends and regional effects of climate change and their impact on society. In addition, the report found that the CCSP has failed to sufficiently analyze adaptation plans and mitigation tactics.

Most of the witnesses at a November 14 Senate Commerce, Science and Transportation Committee hearing shared these concerns and voiced their support for the Global Change Research Improvement Act (S. 2307) introduced by Sens. John Kerry (D-MA) and Olympia Snowe (R-ME). The bill seeks to realign the research program to “a comprehensive and integrated United States observation, research, assessment, and outreach program which will assist the nation and the world to better understand, assess, predict, mitigate, and adapt to the effects of human-induced and natural processes of global change.”

The bill calls for a new strategic plan for the program and would establish a program office within the White House Office of Science and Technology Policy (OSTP) to coordinate research activities and budget proposals. S. 2307 would create within the National Oceanic and Atmospheric Administration a National Climate Service that includes a network of regional and local facilities for operational climate monitoring and prediction. The bill also directs agencies to adopt policies that ensure the integrity of scientific communications.

A related bill, the Global Climate Change Research Data and Management Act of 2007 (H.R. 906), introduced by Reps. Mark Udall (D-CO) and Bob Inglis (R-SC), is included in the House’s energy package. This bill emphasizes the need to conduct and communicate adaptation and mitigation research of interest to policymakers, and it directs the president to develop a new research plan that will be updated every five years.

A related issue that emerged during the Senate hearing was the need for a national assessment. Under the Global Change Research Act of 1990, the administration must produce a national climate change assessment every four years. The only one produced so far was completed in 2000. In lieu of a single assessment, the Bush administration decided to issue a series of 21 technical reports. Thus far, only three reports have been completed, though others are well into the review process.

In response to the delay, the Center for Biological Diversity and several other environmental groups, supported by a memorandum from Sen. Kerry and Rep. Jay Inslee (D-WA), filed suit. In August 2007, a federal court ruled that the administration had violated the Global Change Research Act by failing to produce a national assessment and ordered completion of the reports by May 2008. OSTP Director John Marburger testified at the hearing that the administration is committed to meeting the deadline. The completion of these reports will likely do little to satisfy Kerry, who believes that the series of technical reports is not comparable to a single assessment.


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science (www.aaas.org/spp) in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

How to Use Technology to Spur Development

After decades of global antipoverty efforts in which nonprofit organizations operated on a separate track from the business sector, disappointment with the results is leading a diverse group of institutions to test a new approach. In recent years, groups as diverse as the United Nations (UN), the World Bank, the U.S. Agency for International Development (USAID), nongovernmental organizations, national governments, and corporate giants such as Microsoft and Visa have backed the idea that philanthropy and profitability are not opposing forces. The central premise is that increasing the well-being of the poor while increasing the profits of the private sector can simultaneously be a sound development and business strategy. Operationally, this means marketing productivity-enhancing goods and services to millions of people, often poor and rural, who form what is called the bottom of the pyramid (BOP). Although this approach has generated much enthusiasm and creativity in development circles, much remains unknown about how well this model works in practice. If implemented inappropriately, this well-intended approach will neither create opportunities for the poorest nor be financially self-sustaining for the private sector. We need to evaluate in detail what has been tried, as an essential step toward developing region-specific, pragmatic, and practice-based approaches for how companies and governments can serve the global poor and increase business opportunities.

The BOP model posits that the world is an economic pyramid with four billion people at the bottom who live on less than $2 (in purchasing-power parity terms) per day. The 100 million people at the top collectively control more wealth and resources than the bottom four billion. Even so, a joint report from the World Resources Institute (WRI) and International Finance Corporation (IFC) concludes that the BOP constitutes an enormous $5 trillion global market. For the most part, however, these consumers are not integrated into the global economy. They have significant unmet needs for financial services, technologies, water, sanitation, and health care. They often pay higher prices for basic goods and services than do their wealthier counterparts, a phenomenon known as the “poverty penalty.” The BOP business approach argues that the private sector should lead the effort to develop this untapped yet lucrative market. These poor and usually commercially overlooked consumers, it is argued, need low-cost, high-quality products, for which they are willing to pay, to raise their quality of life. The development of these markets would lead to poverty alleviation not through subsidies or handouts but through generating opportunities and choices for the poor.

This poor-as-consumers rather than poor-as-beneficiaries approach has received support and validation from a variety of influential stakeholders. The UN Development Programme endorsed the creation of innovative solutions to meet the demands of potential BOP consumers in its report Unleashing Entrepreneurship: Making Business Work for the Poor. USAID and the IFC recently joined forces to support a range of grassroots business-development projects that create sustainable economic opportunities specifically for the poor in the developing world. Also worth noting is that the number of small startups and entrepreneurs focused on the BOP is growing rapidly. Multinational and national companies are also attempting to meet the needs of the BOP, particularly with financial services, food, and consumer products. For example, Visa International has invested in BOP markets in Africa with low-cost banking technologies for use in rural locations.

The BOP model is in fact a continuum rather than a single model: Some proponents suggest that companies should be philanthropically oriented as well as profit-minded; others claim that simply doing business with the poor will lead to social and economic development; and yet others seem to conflate both positions without explicitly acknowledging the difference. Overall, the model represents a shift in business as well as development thinking in that it promotes private-sector–led efforts to serve the poor, instead of assuming that the government should take care of the poor while the for-profit sector caters to the middle and upper classes. The BOP model has obvious appeal to both sectors: The public sector is relieved of the huge cost of subsidizing basic services for the poor, and the private sector benefits from inroads into a consumer market of four billion people. And if the strategy works as promised, the poorest people in the world escape the poverty trap.

With the explosion of markets for low-cost cell phones, personal digital assistants, and personal computers, the information and communications technology (ICT) sector has been particularly influenced by the BOP business logic. More than half of the world’s population lives in rural or peri-urban areas outside the reach of ICT networks. To bridge this digital divide, the World Bank and IFC have invested $5 billion in loans to ICT projects in more than 80 countries. Most USAID programs worldwide have an ICT component, with its latest report indicating that the U.S. government spent a total of $120 million on ICT for development purposes (ICT4D). “Access to ICT for all” has also been identified as a means to achieve the UN’s Millennium Development Goals of sustainable development and poverty elimination. Many ICT4D projects strive for the dual goals of business viability and social development. The hope is that these technologies can be used to support health, e-governance, education, agricultural innovation, and market access, as well as create new business opportunities to lift communities out of poverty.

Mobile telephony represents the most dramatic ICT4D and BOP success story. According to the joint WRI and IFC report, between 2000 and 2005 the number of mobile subscribers in developing countries grew nearly fivefold, to almost 1.4 billion. Subscriber numbers are growing by more than 100% per year in some nations, notably in sub-Saharan Africa. Mobile phones increase mobility, reduce transaction costs, facilitate communication with relatives, and extend market competitiveness to rural sectors. The rural poor are increasingly purchasing and using mobile phones, which can provide access to jobs, medical care, commodity prices for fishermen or farmers, and, increasingly, financial services. This growing demand has translated into financial success for mobile phone companies, which now operate in some of the poorest regions of the world.
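
As a back-of-the-envelope illustration (not from the original report), the figures above imply the following compound annual growth rate; the 2000 baseline is simply inferred from the fivefold multiple, and all numbers are drawn only from the paragraph above.

  # Illustrative sketch: growth rate implied by the WRI/IFC figures quoted above,
  # a fivefold rise to roughly 1.4 billion subscribers in developing countries
  # between 2000 and 2005.
  subscribers_2005 = 1.4e9      # reported 2005 total (approximate)
  growth_multiple = 5.0         # "grew nearly fivefold"
  years = 5

  subscribers_2000 = subscribers_2005 / growth_multiple   # implied base: ~0.28 billion
  cagr = growth_multiple ** (1 / years) - 1                # compound annual growth rate

  print(f"Implied 2000 base: {subscribers_2000 / 1e9:.2f} billion subscribers")
  print(f"Implied average annual growth: {cagr:.0%}")      # roughly 38% per year
  # The 100%-per-year growth seen in some sub-Saharan markets therefore implies
  # subscriber numbers doubling annually, well above the developing-world average.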

India stands out as a leader in developing ICT4D projects, with over 150 private and public initiatives. Mobile subscribers per 1,000 people increased from 4 in the year 2000 to 48 in 2004. Internet users per 1,000 people went from 5 in 2000 to 23 in 2004. The Indian government has made a concerted effort to deliver low-cost connectivity and ICT-enabled services to the “common person” for development purposes. One of the most popular channels for the mass delivery of ICT4D services is through access to shared computers in rural ICT kiosks (also known as telecenters). The kiosks are equipped with one or more Internet-enabled computers and are generally owned and run by independent entrepreneurs. The Indian government is in the process of installing 100,000 ICT kiosks for business and government services throughout the country through a franchise model. Microsoft Corporation India has committed to initiating an additional 50,000 kiosks on the premise that such kiosks can be drivers of growth and facilitate development through business opportunities. The most recent company to seek its fortune in rural India is Google, with a simplified search engine and mobile phone applications, customized to provide weather information, crop patterns, and other relevant data to rural customers. Reaching the BOP while remaining financially viable is an explicit goal in almost all of these efforts, so India’s ICT4D projects provide a window into the BOP approach in practice.

The Akshaya project in the southern Indian state of Kerala is a public/private-sector collaboration that aims for rural development through access to information and computer literacy and financial viability through sustainable business models. The private partners in this case are local entrepreneurs. Akshaya began by establishing 630 Internet-enabled computer centers, each serving 1,000 households and each run by individual entrepreneurs selected and trained by the government. The government’s role is to subsidize a basic computer training course for the rural population. The government also provides business training for entrepreneurs, facilitates loans, establishes Internet connectivity, develops curricula, and computerizes government forms. The entrepreneurs’ role is to leverage the subsidized computer training phase to attract new customers and to maintain the profitability of the business. At the same time, the entrepreneurs provide services such as computer literacy training and electronic payments for both the poor and nonpoor. The kiosks are therefore a means by which the government can deliver education and other services to the rural population. BOP proponents support this emerging trend in which businesses and governments, individually and in partnership, invest in advanced technologies and low-cost services to meet the needs of the world’s poor. Our empirical investigation of Akshaya, however, uncovered three aspects of the project that complicate the implementation of the BOP model.

First, contrary to the objectives of the BOP model, we found that many entrepreneurs are not actually catering to the poorest populations but to people who earn much more than $2 per day. In the Akshaya project, as in other ICT4D kiosk initiatives in India, the individual entrepreneurs running the kiosks face tradeoffs between serving the poor and making their businesses viable. Although these efforts are launched in the name of, and aim to serve, the poorest, in reality it is rarely practical to work with those at the bottom of the social hierarchy. Entrepreneurs get more business from the better-off, who are a step (or possibly several steps) up the economic ladder. In other words, the people in need of development services such as e-literacy or local-language computer education are often distinct from the people who are regular and paying kiosk customers. So entrepreneurs face branding, pricing, and marketing challenges in attracting both groups. On the one hand, cost recovery requires selling to clients who are middle or near-middle class, more experienced in computer use, and more interested in advanced courses than in subsidized educational offerings. On the other hand, the kiosk entrepreneurs are being asked to serve the poorest, who may attend the subsidized basic course but often cannot afford to continue using the centers or do not find applications they are willing to pay for. Several kiosk entrepreneurs who had made a good-faith effort to offer services and programs that the poor were supposed to “need” were not doing well financially. Even in a state such as Kerala, with its 91% literate population, we found that the financially successful Akshaya kiosks were used mostly by middle-class students and aspiring professionals, not by those who needed basic educational or e-governance services.

Second, those entrepreneurs who did succeed in attracting poor as well as middle-class clients had to engage in continual trust-building efforts, regularly update their assessments of local needs and demands, and occasionally offer below-cost discounts for the very poor. At the same time they had to communicate effectively with the aspiring and emerging middle classes and to convince them that their kiosks were as good as privately run telecenters that had no mandate to serve the poor. In effect, and somewhat against the spirit of the BOP model, these socially conscious entrepreneurs subsidized the true BOP with the profits generated through serving the non-BOP. This balancing act was achieved by only a few of the Akshaya entrepreneurs, showing that the “strong” BOP model, which claims that savvy entrepreneurs can serve the poor profitably without being philanthropically inclined, is too simplistic.

Third, we found that public perceptions can make or break a business model. The BOP approach encourages partnerships between the private sector, local governments, and nongovernmental organizations, but its advocates are often ahistorical in their prescriptions. Determining the right level and nature of public support for the private sector is crucial for the implementation of this model. But these variables are highly dependent on the historical relations between the government and the private sector in specific locations. The Akshaya project was implemented in Kerala, a region with a long history of government leadership in development programs for the poor. The middle class and poor alike thus had preexisting perceptions of what public-sector services look like. Because the project is a public/private partnership, with social goals in mind, both users and nonusers of Akshaya services indicated that these services were cheap, of low quality, and targeted toward the rural poor. Many people did not realize that the kiosks were private businesses intended to benefit the middle classes as well as the poor with relevant products and services. Consumers therefore tended to self-select out of Akshaya, with the relatively better-off using privately run, non-Akshaya computer centers, even if Akshaya centers offered comparable courses and services.

But why underestimate the value of serving the emerging middle classes or those who earn above the $2-per-day threshold? Many of these people have also had limited access to high-quality low-cost products and services in the past. It is too soon to comment on the overall economic effects of kiosk projects for the poorest populations, but households earning between $6 and $10 a day could represent significant market opportunities and (possibly) development prospects through BOP-type projects. But the projects initiated thus far have been significantly motivated by the need to serve the poorest populations. Governments, corporations, and international donors need realistic expectations of who in fact can be served and can benefit from the market-oriented approaches espoused by the BOP model. At present, the rhetoric and expectations often do not match the actual outcomes on the ground.

USAID, the World Bank, WRI, and other leading organizations have all accepted versions of the BOP philosophy as a win-win situation for ICT4D and entrepreneurship. If their ICT4D efforts and interventions are to have a real impact on less developed economies, they need to take a more transparent and nuanced approach to the BOP. The existing model, despite its good intentions, is in practice ambiguous about how to target a vast and internally differentiated market and impractical in its insistence on profitably serving the poorest. We propose a set of locally specific practice-based recommendations that can help the BOP approach evolve from an appealing idea to an effective strategy.

For companies and entrepreneurs:

  • Supply will not create the expected demand. Only context-specific market research can assess the priorities and the purchasing power of the BOP. The USAID-brokered Internet telephony project in Vietnam is a successful example of serving the BOP, because it was implemented only after a market survey indicated strong local demand for cheap voice communications. In addition, because the transaction costs of launching a local project can be high, USAID’s facilitating role in bringing together multiple local, state, and private stakeholders helped ensure a long-term commitment to the project. For ICT4D projects to serve the BOP and remain commercially viable, committed stakeholders and accurate demand estimates are critical. This will lead to more viable ICT products, services, and pricing structures.
  • The BOP is not a monolithic block of 4 billion people. Entrepreneurs must learn to segment and leverage the enormous variation within even the local BOP.
  • With respect to ICT, the most popular model for shared access—the computer kiosk—is not necessarily the best way to serve the rural poor. Although ICT4D kiosks are widespread internationally, it is expensive to maintain kiosks with PCs and Internet connectivity, and it is a challenge to develop services that contribute to social development as well as profits in a differentiated market.
  • If the target market were just the emerging middle class and not the BOP, profitability would be less of a challenge.
  • Mobile phones, with their low power requirements, low upfront costs, durability, and short learning time, may be more useful than the personal computer in the BOP market. It has repeatedly been shown that the BOP, even the very poorest, has substantial communication needs.

For governments and international development organizations:

  • To determine the right level of public support for business-with-development partnerships, institutions must account for the legacy of past government services and their effect on consumer preconceptions.
  • The path from affordable products and services for the poor to social development is neither short nor direct. Investing in e-governance mechanisms for the BOP market will not automatically lead to meaningful development or an improved standard of living, especially if these consumers lack access to the most basic services such as water, roads, or health care.
  • In addition to enabling the poor to become consumers of products and services, it is important to enhance their capacity as producers and innovators. Buying from the poor and developing their marketing opportunities are at least as important as selling to them, because poverty reduction requires raising real incomes. Although BOP advocates have indeed voiced support for this idea, this is not the central proposition of the model.
  • Realistic expectations and policy transparency with respect to who can be served by BOP-based services are critical. Maintaining profitability with a customer base of the emerging middle classes is much more feasible than with a base of the rural poor and is valuable on its own terms. Serving the poorest may require targeted policies, some subsidized services, and facilitation efforts for longer periods than the current BOP discussions seem to recognize.

With a significant portion of the world still poor, hungry, and powerless, investing in the BOP is an uplifting idea for both companies and governments. But there is an inherent struggle between serving the poorest and commercial success. Governments must do more to encourage the private sector to make clear commitments to support the poorest. Companies and entrepreneurs may find that they need to cross-subsidize the true BOP, perhaps with a portion of their overall profits. We must think creatively but pragmatically about meeting both social and commercial goals so that the perceived rather than hypothetical needs of the BOP can be met and the capabilities of and opportunities for the poorest can be enhanced.

How to Fix Our Dam Problems

California is the world’s eighth largest economy and generates 13% of U.S. wealth. Yet Governor Arnold Schwarzenegger says high temperatures, low rainfall, and a growing population have created a water crisis there. A third of the state is in extreme drought and, if there’s another dry season, faces catastrophe. The governor fears that his economy could collapse without a $5.9 billion program to build more dams.

His concerns are widely shared in the United States—not to mention in dry Australia, Spain, China, and India. Yet as California desperately seeks new dam construction, it simultaneously leads the world in old dam destruction. It razes old dams for the same reasons it raises new dams: economic security, public safety, water storage efficiency, flood management, job creation, recreation, and adaptation to climate change. Dam-removal supporters include water districts, golf courses, energy suppliers, thirsty cities, engineers, farmers, and property owners.

With 1,253 dams risky enough to be regulated and 50 times that many unregistered small dams, California is a microcosm of the world. There are more than 2.5 million dams in the United States, 79,000 so large they require government monitoring. There are an estimated 800,000 substantial dams worldwide. But within the next two decades, 85% of U.S. dams will have outlived their average 50-year lifespan, putting lives, property, the environment, and the climate at risk unless they are repaired and upgraded.

Neither dam repair nor dam removal is a recent phenomenon. What is new is their scale and complexity as well as the number of zeros on the price tag. Between 1920 and 1956, 22 dams in the Klamath River drainage were dismantled at a total cost of $3,000. Today, the removal of four dams on that same river—for jobs, security, efficiency, safety, legal compliance, and growth—will cost upwards of $200 million.

Which old uneconomical dams should be improved or removed? Who pays the bill? The answers have usually come through politics. Pro-dam and anti-dam interests raise millions of dollars and press their representatives to set aside hundreds of millions more tax dollars to selectively subsidize pet dam projects. Other bills bail out private owners: A current House bill earmarks $40 million for repairs; another one sets aside $12 million for removals. The outcome is gridlock, lawsuits, debt spending, bloated infrastructure, rising risks, dying fisheries, and sick streams.

Dam decisions don’t have to work that way. Rather than trust well-intentioned legislators, understaffed state agencies, harried bureaucrats, or nonscientific federal judges to decide the fate of millions of unique river structures, there’s another approach. State and federal governments should firmly set in place safety and conservation standards, allow owners to make links between the costs and benefits of existing dams, and then let market transactions bring health, equity, and efficiency to U.S. watersheds. Social welfare, economic diversity, and ecological capital would all improve through a cap-and-trade system for water infrastructure. This system would allow mitigation and offsets from the vast stockpile of existing dams while improving the quality of, or doing away with the need for, new dam construction.

Big benefits, then bigger costs

A new dam rises when its public bondholder/taxpayer or private investor believes that its eventual benefits will outweigh immediate costs. When first built, dams usually fulfill those hopes, even if the types of benefits change over time. In early U.S. history, hundreds of dams turned water mills or allowed barge transport. Soon, thousands absorbed flood surges, diverted water for irrigation, or slaked the thirst of livestock. Later still, tens of thousands generated electrical power, stored drinking water for cities, and provided recreation. North America built 13% of its largest dams for flood control, 11% for irrigation, 10% for water supply, 11% for hydropower, 24% for some other single purpose such as recreation or navigation, and 30% for a mix of these purposes. Today, the primary reason for building a dam is drinking water storage, with hydropower and irrigation far behind.

Unfortunately, we usually fail to heed all the indirect, delayed, and unexpected downstream costs of dams. With planners focused primarily on near-term benefits, during the past century three large dams, on average, were built in the world every day. Few independent analyses tallied exactly why those dams came about, how they performed, and whether people have been getting a fair return on their $2 trillion investment. Now that the lifecycle cost is becoming manifest, we are beginning to see previously hidden costs.

First, it turns out that a river is far more than a natural aqueduct. It is a dynamic continuum, a vibrant lifeline, a force of energy. Dams, by definition, abruptly stop it. But all dams fill with much more than water. They trap river silt or sediment at rates of between 0.5% and 1% of the dam’s storage capacity every year. Layer by layer, that sediment settles in permanently. By restraining sediment upstream, dams accelerate erosion below; hydrologists explain that dams starve a hungry current that then must scour and devour more soil from the river bed and banks downstream. Silt may be a relatively minor problem at high altitudes, but it plagues U.S. landscapes east of the Rockies, where precious topsoil is crumbling into rivers, backing up behind dams, and flowing out to sea. Removing trapped sediment can cost $3 per cubic meter or more, when it can be done at all.
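To see how quickly those trapping rates erode storage, a back-of-the-envelope calculation (ours, offered only as an illustration) is enough:

\[
\text{years to lose half of a reservoir's capacity} \approx \frac{0.5}{0.005 \text{ to } 0.01 \ \text{per year}} = 50 \text{ to } 100 \ \text{years}.
\]

In other words, at 0.5% to 1% per year, a reservoir surrenders roughly half of its storage within 50 to 100 years, about the same horizon as the 50-year design lifespan cited above.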

NEITHER DAM REPAIR NOR DAM REMOVAL IS A RECENT PHENOMENON. WHAT IS NEW IS THEIR SCALE AND COMPLEXITY AS WELL AS THE NUMBER OF ZEROS ON THE PRICE TAG.

The second enemy is the sun. Whereas sediment devours reservoir storage from below, radiant heat hammers shallows from above. In dry seasons and depending on size, dam reservoirs and diversions can evaporate more water than they store. Rates vary from dam to dam and year to year, but on average evaporation annually consumes between 5% and 15% of Earth’s stored freshwater supplies. That is often faster than the cities those reservoirs serve can draw the water down. It’s one of the reasons why the Rio Grande and Colorado Rivers no longer reach the sea and why precious alluvial groundwater is shrinking, too. Nine freshwater raindrops out of 10 fall into the ocean, so the trick is to see the entire watershed—from headwater forest to alluvial aquifers through downstream floodplain—as potentially efficient storage and tap into water locked beneath the surface. Today, irrigators pump more groundwater than surface water. In arid landscapes, water is more efficiently and securely stored in cool, clean alluvial aquifers than in hot, shallow, polluted reservoirs.

The third threat to dam performance, as both a cause and a consequence, is climate change. Dams are point-source polluters. Scientists have long warned that dams alter the chemistry and biology of rivers. They warm the water and lower its oxygen content, boosting invasive species and algae blooms while blocking and killing native aquatic life upstream and down. Rivers host more endangered species than any other ecosystem in the United States, and many of the nation’s native plants and animals, from charismatic Pacific salmon to lowly Southern freshwater mussels, face extinction almost entirely because of dams.

What we didn’t appreciate until recently is that dams also pollute the air. The public may commonly see dams as producers of clean energy in a time of dirty coal and escalating oil prices. Yet fewer than 2% of U.S. dams generate any power whatsoever. Some could be retrofitted with turbines, and perhaps various existing dams should be. But peer-reviewed scientific research has demonstrated that dams in fact may worsen climate change because of reservoir and gate releases of methane. Brazil’s National Institute for Space Research calculated that the world’s 52,000 large dams (typically 50 feet or higher) contribute more than 4% of the total warming impact of human activities. These dam reservoirs contribute 25% of human-caused methane emissions, the world’s largest single source. Earth’s millions of smaller dams compound that effect.

Worse, as climate change accelerates, U.S. dams will struggle to brace for predicted drought and deluge cycles on a scale undreamed of when the structures were built. This brings us to the fourth danger. Dams initially designed for flood control may actually make floods more destructive. First, they lure people to live with a false sense of security, yet closer to danger, in downstream floodplains. Then they reduce the capacity of upstream watersheds to absorb and control the sudden impact of extreme storms. In mild rainstorms in October 2005 and May 2006 alone, three states reported 408 overtoppings, breaches, and damaged dams. Only half of the nation’s high-hazard dams even have emergency action plans.

The scariest aspect of dams’ liabilities is the seemingly willful ignorance in the United States of their long-term public safety risks. Engineers put a premium on safety, from design to construction through eventual commissioning. Yet after politicians cut the ceremonial ribbon, neglect creeps in. As dams age they exhibit cracks, rot, leaks, and in the worst cases, failure. In 2006, the Kaloko Dam on the Hawaiian island of Kauai collapsed, unleashing a 70-foot-high, 1.6-million-ton freshwater tsunami that carried trees, cars, houses, and people out to sea, drowning seven. This is not an isolated exception, but a harbinger.

These preventable tragedies happen because both public and private dams lack funds for upkeep and repair. In 2005, the American Society of Civil Engineers gave U.S. dams and water infrastructure a grade of D and estimated that nationwide, repairing nonfederal dams that threaten human life would cost $10.1 billion. The U.S. Association of State Dam Safety Officials (ASDSO) placed the cost of repairing all nonfederal dams at $36.2 billion. Yet Congress has failed to pass legislation authorizing even $25 million a year for five years to address these problems.

Cash-strapped states generally don’t even permit dam safety officials to perform their jobs adequately. Dozens of states have just one full-time employee per 500 to 1,200 dams. Hence state inspectors, like their dams, are set up to fail. Between 1872 and 2006, the ASDSO reports, dam failures killed 5,128 people.

As environmental, health, and safety regulations drive up the cost of compliance, owners of old dams tend to litigate or lobby against the rules. Others simply walk away. The number of abandoned or obsolete dams keeps rising: 11% of inventoried dams in the United States are classified under indeterminate ownership.

To date, warnings have been tepid, fitful, disregarded, or politicized. In 1997, the American Society of Civil Engineers produced good guidelines for the refurbishment or retirement of dams. They have been ignored. In 2000, the landmark World Commission on Dams established criteria and guidelines to address building, managing, and removing dams, but its report so challenged water bureaucrats that the World Bank, the commission’s benefactor, has tried to walk away from its own creation. Environmental organizations have published tool kits for improving or removing old dams, but activists often target only the most egregious or high-profile dozen or so problems that best advance their profile or fundraising needs.

Dams have always been politically charged and often the epitome of pork-barrel projects. For the same reasons, dam removal can attract support from leading Democrats and Republicans alike. The switch from the Clinton to Bush administrations led to attempted alterations of many natural resource policies, but one thing did not change: the accelerating rate of dam removals. In 1998, a dozen dams were terminated; in 2005, some 56 dams came down in 11 states. Yet despite bipartisan support, there has never been any specific dam policy in either administration. A dam’s demise just happened, willy-nilly, here and there. Dams died with less legal, regulatory, or policy rationale than accompanied their birth.

Thoreau had it right

No laws, no regulations, no policy? Federal restraint remains an alluring ideal in a nation that feels cluttered with restrictions. It’s a deeply ingrained American sentiment, embodied in Henry David Thoreau’s famous remark in Civil Disobedience: “That government is best which governs least.” Yet the founder of principled civil disobedience was also the first critic of seemingly benign dams because of their unintended effects.

While paddling with his brother on the Concord and Merrimack Rivers in 1839, Thoreau lamented the disappearance of formerly abundant salmon, shad, and alewives. Vanished. Why? Because “the dam, and afterward the canal at Billerica …put an end to their migrations hitherward.” His elegy reads like an Earth First! manifesto: “Poor shad! where is thy redress? …armed only with innocence and a just cause …I for one am with thee, and who knows what may avail a crowbar against that Billerica dam?”

Thoreau restrained himself from vigilante dam-busting, but 168 years later the effects of the country’s dams have only multiplied in number and size. Happily, the end of Thoreau’s tale might nudge us in the right direction. He did not complain to Washington or Boston for results, funds, or a regulatory crackdown. He looked upstream and down throughout the watershed and sought to build local consensus. Because the dam had not only killed the fishery but buried precious agricultural farmland and pasture, Thoreau advocated an emphatically civic-minded, consensus-based, collective, economically sensible proposal, in which “at length it would seem that the interests, not of the fishes only, but of the men of Wayland, of Sudbury, of Concord, demand the leveling of that dam.”

In other words, if those watershed interests were combined, they could sort out fixed liabilities from liquid assets. The economic beneficiaries of a flowing river, including the legally liable dam owner, should pay the costs of old dam removal, just as the beneficiaries of any new dam pay the costs of its economic, environmental, and security effects. In a few words, Thoreau sketched the outlines of what could emerge as a policy framework for existing dams that could be adapted to a river basin, a state, or a nation.

The most successful and least intrusive policies can be grouped under the strategic approach known as cap and trade. That is, the government sets a mandatory ceiling on effects, pollution, or emissions by a finite group of public and private property stakeholders. This ceiling is typically lower than present conditions. But rather than forcing individual stakeholders to comply with that target by regulatory fiat, each one can trade offsets, which amount to pollution credits, with the others. Those who do a better job of cutting waste, emissions, and effects may sell their extra credits to laggards or newcomers. This approach turns incentives to reform, innovate, and improve into a competitive advantage from which everyone benefits, and so does nature. Although it did not involve dams, a cap-and-trade policy was tested nationally under the 1990 Clean Air Act revisions aimed at cutting acid rain–causing sulfur dioxide emissions of U.S. factories in half. When it was announced, the utility industry gloomily predicted a clean-air recession, whereas environmentalists cried sellout over the lack of top-down regulatory controls. But cap and trade turned out to reduce emissions faster than the most optimistic projection. The industry grew strong and efficient, and the result was the largest human health gains of any federal policy in the 1990s. Annual benefits exceeded costs by 40:1.
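To make the accounting concrete, here is a minimal sketch, in Python, of how such a ledger might work. Everything in it (the Owner class, the allocate_cap and settle functions, the dam names, and the numbers) is invented for illustration and does not describe any existing program:

    # Illustrative only: a toy ledger for a cap-and-trade scheme among dam owners.
    # All owners, impacts, and prices are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Owner:
        name: str
        baseline: float      # impact before any action, in units of the capped effect
        current: float       # impact after upgrades, retrofits, or removal
        allowance: float = 0.0

    def allocate_cap(owners, cap_fraction):
        # Each owner's allowance is a uniform fraction of its baseline impact,
        # so the summed allowances equal the regulator's capped total.
        for o in owners:
            o.allowance = o.baseline * cap_fraction

    def settle(owners, price):
        # Owners below their allowance hold surplus credits to sell;
        # owners above it (negative surplus) must buy credits or cut further.
        for o in owners:
            surplus = o.allowance - o.current
            print(f"{o.name}: surplus {surplus:+.1f} credits, cash flow {surplus * price:+.1f}")

    owners = [
        Owner("orphan dam (removed)", baseline=30.0, current=0.0),
        Owner("hydro dam A (unchanged)", baseline=80.0, current=80.0),
        Owner("irrigation dam B (retrofitted)", baseline=40.0, current=25.0),
    ]
    allocate_cap(owners, cap_fraction=2/3)  # cap set at one-third below baseline levels
    settle(owners, price=10.0)

In this toy run, the removed orphan dam ends up holding surplus credits that the large, unchanged hydro dam must buy; any shortfall left after trading would have to be closed by further upgrades or removals, because the cap itself is fixed.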

Since then, cap-and-trade policies have proliferated from India to China to Europe. Though far from flawless, a cap-and-trade carbon policy is one success story to emerge from the troubled Kyoto Protocol to reduce emissions that accelerate climate change. Nations and multinational corporations such as General Electric and British Petroleum used it to reduce polluting emissions of carbon dioxide and methane while saving voters and shareholders money in the process. More recently, atmospheric cap and trade has been brought down to earth; the valuation and exchange in environmental offsets have been applied to land and water ecosystems. Certain states use cap and trade in policies to curb nitrogen oxides and nonpoint water pollution, others to reduce sediment loads and water temperature, and still others to trade in water rights when diversions are capped. California’s Habitat Conservation Plans work within the Endangered Species Act’s “cap” of preservation, yet allow “trade” of improving, restoring, and connecting habitat so that although individuals may die, the overall population recovers. Under the Clean Water Act, a cap-and-trade policy encourages mitigation banking and trading, which leads to a net gain in wetlands.

In each case the policy works because it lets democratic governments do what they do best—set and enforce a strict uniform rule—while letting property owners, managers, investors, and entrepreneurs do what they do best: find the most cost-effective ways to meet that standard. Given the documented risks of the vast stockpile of aging dam infrastructure in the United States, a cap-and-trade policy for dams could be tested to see if it can restore efficiency, health, and safety to the nation’s waters.

Making the policy work

The first step would be to inventory and define all the stakeholders. In air-quality cap-and-trade cases, these include factory owners, public utilities, manufacturers, refineries, and perhaps even registered car owners. In the case of dams, one could begin with the 79,000 registered owners in the National Inventory of Dams. Tracking down ownership of the estimated 2.5 million smaller unregistered dams may prove a bit challenging, until their owners realize that dismantling the dams can yield profit if removal credits can be bought and sold.

The second step would be to recognize the legitimate potential for trades. Dams yield (or once yielded) economic benefits, but every dam also has negative effects on air emissions and on water quality, quantity, and temperature, and therefore on human health and safety, economic growth, and stability. Even the most ardent dam supporter acknowledges that there is room for potentially significant gains in performance from dams as well as from the rivers in which they squat. Whereas the top-down goal in the past had been to subsidize or regulate new dams for their economic benefits, the aim in this case is horizontal: to encourage an exchange to reduce old dams’ economic and ecological costs.

Third, quantify the kind, extent, and nature of those negative effects. Our scientific tools have advanced considerably and are now ready to measure most if not all of the qualitative damages observed by amateurs since Thoreau. By breaking them down into formal “conservation units,” degrees Celsius, water quality, cubic meters of sediment, and so forth, we can quantify potential offsets in ecological and economic terms. The United States could set out rigorous scientific standards modeled on the Clean Air Act cap-and-trade policy or wetlands mitigation banking.

Fourth, start small, then replicate and scale up with what works best. The pilot exchanges could be structured by geography or by type of effect. But both kinds of pilot programs have already begun. One creative company in North Carolina, Restoration Systems, has begun to remove obsolete dams to gain wetlands mitigation credits that it can sell and trade, in most cases, to offset the destruction of nearby wetlands by highway building. In Maine, several dams in the Penobscot River watershed have been linked through mitigation as part of a relicensing settlement. On the Kennebec River, also in Maine, the cost of removing the Edwards Dam was financed in large part by upstream industrial interests and more viable dams as part of a package for environmental compliance. On the West Coast, the Bonneville Power Administration is using hydropower funds to pay for dam removals on tributaries within the Columbia River basin.

These early efforts are fine, but restricted geographically; each approach could be allowed to expand. The larger the pool of stakeholders, the greater are the economies of scale and the more efficient the result. But a national consensus and standards do not emerge overnight, nor should they, given that there are so many different dams. Each dam is unique in its history and specific in its effects, even though the cumulative extent and degree of those effects are statewide, national, and sometimes even global. A cap-and-trade policy will emerge nationally only as it builds on examples like these.

Finally, work within existing caps while using a standard that lets the amoral collective marketplace sort out good from bad. The beauty of this framework is that many of the national standards are already in place. Legal obligations to comply with the National Environmental Policy Act, Endangered Species Act, Clean Water Act, and Clean Air Act all have strong bearing on decisions to remove or improve dams. Some tweaking may be required, but perhaps not much. Recently, Congress revised the Magnuson-Stevens Act to pilot cap-and-trade policies in fishery management, in which fishermen trade shares of a total allowable or capped offshore catch of, say, halibut or red snapper.

Those overworked state and federal agencies responsible for enforcing laws—the ASDSO, the Army Corps of Engineers, the Fish and Wildlife Service, the National Marine Fisheries Service, and the Environmental Protection Agency—need not get bogged down in the thankless task of ensuring that each and every dam complies with each and every one of the laws. Dam owners may have better things to do than argue losing battles on several fronts with various government branches. All parties can better invest their time according to their mandate, strengths, and know-how: officials in setting the various standard legal caps and ensuring that they are strictly applied to the entire tributary, watershed, state, or nation; and dam owners in trading their way to the best overall result.

A cap-and-trade scenario

Suppose, for example, that a worried governor determines to cap at one-third below current levels all state dam effects: methane emissions, sedimentation rates, evaporative losses, aquatic species declines, habitat fragmentations, artificial warming, reduced oxygen content, and number of downstream safety hazards. He wants these reductions to happen within seven years and is rigorous in enforcing the ceiling. That’s the stick, but here’s the carrot: He would allow dam owners to decide how to get under that ceiling on their own.

At first, dam owners and operators, public as well as private, could reliably be expected to howl. They would label the policy environmentally extreme and say it was sacrificing water storage, energy, food, and flood control. But eventually, innovative dam owners and operators would see the policy for what it really is: a flexible and long-overdue opportunity with built-in incentives to become efficient and even to realize higher returns on existing idle capital. They would seize a chance to transform those fixed liabilities into liquid assets.

One likely effect would be private acquisition of some of the many thousands of small orphan dams. By liquidating these, an investor would accumulate a pool of offset credits that could be sold or traded to cumbersome dams with high value but low flexibility. This development has already emerged in isolated cases. In northern Wisconsin, the regional power company bought and removed two small, weak dams in exchange for a 25-year license to operate three healthier ones in the same watershed. Utilities in the West have taken notice and begun to package their relicensing strategies accordingly.

Another predictable outcome would be that, in order to retain wide popular and political support, big power, transport, and irrigation dam projects—think Shasta, Oroville, San Luis Reservoir, Glen Canyon, and Hoover—would mitigate their effects first by looking upstream at land and water users, then at other smaller dams that could be upgraded, retrofitted, or removed to gain efficiencies in ways easier or cheaper than they could get by overhauling their own operations and managements.

There would also be a likely expansion, outward and upward, in user fees collected from formerly invisible or subsidized beneficiaries of the services that existing dams provide. Those beneficiaries range from recreational boaters, anglers, and bird hunters to urban consumers, lakefront property owners, and even those who merely enjoy the bucolic view of a farm dam. These disaggregated interests have largely supported dams, but only as long as others foot the bill for maintenance and upkeep. Economists call them free riders, and a new cap-and-trade dam policy would reduce their ranks. Dams that failed to generate enough revenues to meet national standards could earn credits by selling themselves to those interests that could. This happened when viable upstream industries on the Kennebec River helped finance the removal of Edwards Dam.

Another effect would be an innovation revolution in the kinds of tools and technologies that are already in the works but that have lacked a national incentive to really flourish. These include new kinds of fish passages, dredging techniques, low-flush toilets, and timed-drip irrigation, along with a more aggressive use of groundwater that pumps reservoir water underground as soon as it is trapped. The range of tools would also include financial instruments; in the West, they might accelerate the trading in water rights between agricultural, industrial, urban, and environmental users that has begun in Oregon, Montana, Washington, and California.

This brings us to a final advantage of a cap-and-trade policy for existing dams: global competitiveness. Seventy years ago, the United States set off a macho global race to build the biggest dams on Earth, starting with Hoover. It’s not clear which country won the top-down competition, which displaced 80 million people and amputated most of Earth’s rivers. But a new horizontal policy can lead to a competitive advantage. Whether scaled to tributaries or based on federal standards, the United States gains through dam consolidation, efficiencies, and innovation. Flexibility and incentives in a coast-to-coast market lower the transaction costs of repair or removal. Economies of scale would spur a substantial new dam removal and mitigation industry akin to the clean-air industry of scrubbers, software, and innovative technology sparked by the Clean Air Act or the Kyoto Protocol cap-and-trade policy. Such an industry would not just bring down the costs of these policies at home; it would position the United States for a competitive advantage abroad. U.S. technology and skills will be in high demand beyond our borders, especially in China, Russia, and India, where most dams lie and where sedimentation and evaporation rates are high and dam safety and construction standards are low.

What is keeping this policy from emerging? Mostly it is because the competing governmental and nongovernmental organizations engaged in water think of dams as solitary entities locked within sectoral and jurisdictional cubicles. They fail to recognize that all dams have a national impact, positive and negative, on the life and livelihoods of communities throughout the United States.

A RIVER IS A DYNAMIC CONTINUUM, A VIBRANT LIFELINE, A FORCE OF ENERGY.

We regard as distinct each dam operated by the U.S. Bureau of Reclamation, Army Corps of Engineers, Tennessee Valley Authority, or Bonneville Power Administration. Together those public projects total half of the nation’s hydropower generation, but each is often seen as outside the laws that govern private hydropower authorized under the Federal Power Act. In turn, the 2,000 hydro dams overseen by the Federal Energy Regulatory Commission fall into one category and the 77,000 nonhydro (but federally registered) dams into another. We see 39,000 public dams as different from 40,000 private dams. We regulate irrigation dams differently from navigation dams and assign water rights to dams in western states but apply common law in eastern states, even when dams share the same river. Two dams on the same stream owned by the same company are subject to different environmental laws. We put 2.5 million small dams in a different category from 79,000 larger dams. The predictable mess is arbitrary and absurd and cries out for an overarching national policy.

Taking note of seemingly contradictory trends around dam construction and destruction worldwide, one might ask, “How far will the current trends go? How many old dams are we talking about repairing or removing? Hundreds? Thousands? A few big ones? A million little ones? Do we need more dams or fewer?”

Such questions largely miss the point of the policy envisioned here. We don’t need a specific number of dams, but rather we need healthier rivers, safer societies, and a more efficient and disciplined water-development infrastructure. How we get there is beyond the capacity of a single person to decide; only through a flexible horizontal market can we answer, together. A government policy can be the catalyst for and guide the direction of this market because it removes personal, political, ideological, and geographic biases from the equation. Nothing environmental and safety activists say or do can prevent new dam construction, and nothing dam supporters say or do can prevent old dams from coming down. But if the nation’s anti-dam and pro-dam interests were gathered collectively under the same fixed national ceiling and left to their own devices, Adam Smith’s “human propensity to truck, barter and exchange” could unite with the spirit of Thoreau’s civil “wildness.” A cap-and-trade dam policy’s embedded incentives would encourage the market’s invisible hand while ensuring its green thumb.

The United States once led the world in the construction of dams, but over time, many have deteriorated. Now, under a cap-and-trade policy, it can bring horizontal discipline to that vertical stockpile of fixed liabilities, reducing risks while improving the health and safety of living communities. The United States can once again show the way forward on river development. Through such a cap-and-trade policy it can help dams smoothly and efficiently evolve with the river economies to which they belong.

Let us close where we began, with Governor Schwarzenegger. If states are indeed the laboratories of U.S. democracy, he stands in a unique position to mount a market-based experiment for the United States as part of his agenda to build bigger, higher, and more new dams for water storage. He has already expanded in-state cap-and-trade schemes in water transfers, endangered species habitats, ocean fishery rights, and carbon emissions. He is open to the idea of removing the O’Shaughnessy Dam that has submerged Hetch Hetchy Valley in Yosemite National Park, even while he seeks more water storage elsewhere. Now, as the governor makes his pitch for big new multibillion dollar dams to save California from parched oblivion, he and other governors, not to mention heads of state from Beijing to Madrid to New Delhi to Washington, DC, could institute effective new policies to protect Earth’s liquid assets.

The Global Tour of Innovation Policy: Introduction

Innovation does not take place in a laboratory.

It occurs in a complex web of activities that transpire in board rooms and court rooms, in universities and coffee shops, on Wall St. and on Main St., and it is propelled by history, culture, and national aspirations. Innovation must be understood as an ecosystem. In the natural world life might begin from a tiny cell, but to grow and prosper into a mature organism, that cell needs to be supported, nurtured, and protected in a variety of ways. Likewise, the germ for an innovation can appear anywhere, but it will mature into a real innovation only if it can grow in a supportive social ecosystem.

The idea of an innovation ecosystem builds on the concept of a National Innovation System (NIS) popularized by Columbia University economist Richard Nelson, who describes an NIS as “a set of institutions whose interactions determine the innovative performance… of national firms.” Too often, unfortunately, analysts and policymakers perceive the NIS as the immutable outcome of large historical and cultural forces. And although there is no doubt that these large forces powerfully shape an NIS, many aspects of an NIS can be recast with deliberate action.

Among the essential components of an NIS are social norms and value systems, especially those concerning attitudes toward failure, social mobility, and entrepreneurship, and these cannot be changed quickly or easily—but they can change. Other critical components are clearly conscious human creations and are obviously subject to change; these include rules that protect intellectual property and the regulations and incentives that structure capital, labor, financial, and consumer markets. Public policy can improve innovation-led growth by strengthening links within the system. Intermediating institutions, such as public-private partnerships, can play a key role in this regard by aligning the actions of key players in the system, such as universities, laboratories, and large companies, as well as the self-interest of venture capitalists, entrepreneurs, and other participants, with national objectives. Some systems underemphasize the role of the public sector in providing R&D funds and support for commercialization activities; other systems sometimes overlook the framework conditions required to encourage risk, mitigate failure, and reward success.

Paradoxically, international cooperation is a hallmark of scientific progress, technology development, and the production of final goods. At the same time, there is fierce international competition for the growth industries of the future with the jobs, new opportunities, and synergies that high-tech industries bring to a national economy. In the past, many nations believed that their innovation systems were largely immutable, reflecting distinct national traditions. The winds of globalization have changed that perspective. The articles that follow describe the efforts of a handful of nations to deliberately shape their NIS. They are all works in progress that illustrate that there is no perfect innovation ecosystem. Each country is struggling to determine what can be changed and what must be accommodated in its particular circumstances. What works in one context will not necessarily work in another. What works in one decade will not necessarily work in the next. And with the global economic systems always in flux, every country must be ready to reexamine and revise its policies. These articles contain no easy answers. They offer something much more useful: candid and perceptive discussion of the successes and failures that are slowly leading all of us to a better understanding of how innovation can be tapped and directed to achieve human goals.

This collection of articles is an outgrowth of the Board on Science, Technology, and Economic Policy’s project on Comparative Innovation Policy: Best Practice for the 21st Century, which is an effort to better understand what leading countries and regions around the world are doing to enhance the operation of their innovation systems. The project’s publications include Innovation Policies for the 21st Century; India’s Changing Innovation System: Achievements, Challenges, and Opportunities for Cooperation; Innovative Flanders: Synergies in Regional and National Innovation Policies in the Global Economy; and Creating 21st Century Innovation Systems in Japan and the United States: Lessons from a Decade of Change.

Ethanol: Train Wreck Ahead?

The new vogue in energy policy is plant-derived alternative fuels. Corn-based ethanol, and to a lesser extent oilseed-based biodiesel, have emerged from the margins to take center stage. However, although ethanol and biodiesel will surely play a role in our energy future, the rush to embrace them has overlooked numerous obstacles and untoward implications that merit careful assessment. The current policy bias toward corn-based ethanol has driven a run-up in the prices of staple foods in the United States and around the world, with particularly hurtful consequences for poor consumers in developing countries. U.S. ethanol policies rig the market against alternatives based on the conversion of cellulosic inputs such as switchgrass and wood fibers. Moreover, the environmental consequences of corn-based ethanol are far from benign, and indeed are negative in a number of important respects. Given the tremendous growth in the corn-based ethanol market, it should no longer be considered an infant industry deserving of tax breaks, tariff protection, and mandates.

In place of current approaches, we propose initiatives that would cool the overheated market and encourage more diversified investment in cellulosic alternatives and energy conservation. First, we would freeze current mandates for renewable fuels to reduce overinvestment in and overreliance on corn-based ethanol. Second, we would replace current ethanol tax breaks with a sliding scale that would reduce incentives to produce ethanol when corn prices are high and thus slow the diversion of corn from food to fuel. Third, we would implement a wide-ranging set of federal fees and rebates that discourage energy consumption and encourage conservation. Fourth, we would shift federal support for cellulosic alternatives away from subsidies for inefficient production facilities and toward upstream investment in R&D to improve conversion technologies. Together, these four changes would still retain a key role for biofuels in our energy future, while eliminating many of the distortions that current policy has created.

Infant industry no more

Since 1974, when the first federal legislation to promote corn-based ethanol as a fuel was approved, ethanol has been considered an infant industry and provided with increasingly generous government subsidies and mandates. Ethanol’s first big boost came in the late 1970s in response to rising oil prices and abundant corn surpluses. A tax credit for blending corn-based ethanol with gasoline created a reliable market for excess corn production, which was seen as an alternative to uncertain export markets.

But the real momentum for ethanol resulted from environmental concerns about the use of lead to boost the octane rating of gasoline. The phase-out of lead as an additive began in 1973, and ethanol replaced it as a cleaner-burning octane enhancer. In recent years, it has replaced the oxygen additive MTBE, which was phased out because of concerns about groundwater pollution. Ethanol’s increasing value as a gasoline additive has allowed it to receive a premium price, and by 2005 corn-based ethanol production in the United States reached 3.9 billion gallons.

More recently, increases in oil prices during the past two years brought ethanol into national prominence. As oil rose from $52 a barrel in November 2005 to more than $70 in mid-2007, higher oil prices coincided at first with cheap corn: a prescription for supernormal ethanol profits. Investment in new capacity took off, and 2006 production topped 5 billion gallons.

Although high oil prices have given ethanol the headroom it needs to compete, the industry is built on federal subsidies to both the corn farmer and the ethanol producer. Direct corn subsidies equaled $8.9 billion in 2005, but fell in 2006 and 2007 as high ethanol-driven corn prices reduced subsidy payments. These payments may soon be dwarfed by transfers to ethanol producers resulting from production mandates, tax credits, grants, and government loans under 2005 energy legislation and U.S. farm policy. In addition to a federal ethanol tax allowance of 51 cents per gallon, many states provide additional subsidies or have imposed their own mandates.

In the 2005 energy bill, Congress mandated the use of 7.5 billion gallons of biofuels by 2012, and there is strong political support for raising the mandate much higher. President Bush, in his January 2007 State of the Union speech, called for increasing renewable fuel production to 35 billion gallons by 2017. Such an amount, if it were all corn-derived ethanol, would require about 108% of total current U.S. corn production.

In addition to providing domestic subsidies, Congress has also shielded U.S. producers from foreign competition. Brazil currently produces about as much ethanol as the United States (most of it derived from sugarcane instead of corn) at a significantly lower cost, but the United States imposes a 54-cent-a-gallon tariff on imported ethanol.

Negative effects

As the ethanol industry has boomed, a larger and larger share of the U.S. corn crop has gone to feed the huge mills that produce it. According to the Renewable Fuels Association, there were 110 U.S. ethanol refineries in operation at the end of 2006, another 73 were under construction, and many existing plants were being expanded. When these plants are completed, ethanol capacity will reach an estimated 11.4 billion gallons per year by the end of 2008, requiring 35% of the total U.S. corn crop even with a good harvest. More alarming estimates predict that ethanol plants could consume up to half of domestic corn supplies within a few years. Yet, from the standpoint of energy independence, even if the entire U.S. corn crop were used to make ethanol, it would displace less gasoline than raising fleet fuel economy by five miles per gallon, a step readily achievable with existing technologies.
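For a sense of scale, those capacity figures can be cross-checked with a rough conversion rate (our assumption, not a figure from this article) of about 2.8 gallons of ethanol per 56-pound bushel of corn:

\[
\frac{11.4 \ \text{billion gallons}}{2.8 \ \text{gallons per bushel}} \approx 4.1 \ \text{billion bushels},
\]

which is roughly 35% of a corn crop in the neighborhood of 11.5 to 12 billion bushels, consistent with the estimate above.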

As biofuels increasingly impinge on the supply of corn, and as soybeans and other crops are sacrificed to grow still more corn, a food-versus-fuel debate has broken out. Critics note that domestic and international consumers of livestock fed with grains face steadily rising prices. In July 2007, the Organization for Economic Cooperation and Development issued an outlook for 2007–2016, saying that biofuels had introduced global structural shifts in food markets that would raise food costs during the next 10 years. Especially for the 2.7 billion people in the world living on the equivalent of less than $2 per day and the 1.1 billion surviving on less than $1, even marginal increases in the cost of staple grains can be devastating. Put starkly: Filling the 25-gallon tank of a sport utility vehicle with pure ethanol would require more than 450 pounds of corn, enough calories to feed one poor person for a year.
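The same rough conversion rate makes the tank-versus-table comparison easy to verify:

\[
\frac{25 \ \text{gallons}}{2.8 \ \text{gallons per bushel}} \approx 9 \ \text{bushels} \times 56 \ \text{lb per bushel} \approx 500 \ \text{lb of corn},
\]

comfortably above the 450-pound figure cited above.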

The enormous volume of corn required by the ethanol industry is sending shock waves through the food system. The United States accounts for some 40% of the world’s total corn production and ships on average more than half of all corn exports. In June 2007, corn futures rose to over $4.25 a bushel, the highest level in a decade. Like corn, wheat and rice prices have surged to 10-year highs, encouraging farmers to plant more acres of corn and fewer acres of other crops, especially soybeans. The proponents of corn-based ethanol argue that yields and acreage can increase to satisfy the rising demand. However, U.S. corn yields have been trending upward by a little less than 2% annually during the past 10 years. Even a doubling of yield gains would not be enough to meet current increases in demand. If substantial additional acres are to be planted with corn, the land will have to be pulled from other crops and the Conservation Reserve Program, as well as other environmentally fragile areas.

BRUCE BABCOCK, IN A STUDY FOR THE CENTER FOR AGRICULTURAL AND RURAL DEVELOPMENT AT IOWA STATE UNIVERSITY, PREDICTED IN JUNE 2007 THAT ETHANOL’S IMPACT ON CORN PRICES COULD MAKE CORN ETHANOL ITSELF UNPROFITABLE BY 2008.

In the United States, the explosive growth of the biofuels sector and its demand for raw stocks of plants has triggered run-ups in the prices not only of corn, other grains, and oilseeds, but also of crops and products less visible to analysts and policymakers. In Minnesota, land diverted to corn to feed the ethanol maw is reducing the acreage planted to a wide range of other crops, especially soybeans. Food processors with contracts with farmers to grow crops such as peas and sweet corn have been forced to pay higher prices to keep their supplies secure. Eventually, these costs will appear in the prices of frozen and canned vegetables. Rising feed prices are also hitting the livestock and poultry industries. Some agricultural economists predict that Iowa’s pork producers will be driven out of business as they are forced to compete with ethanol producers for corn.

It is in the rest of the world, however, where biofuels may have their most untoward and devastating effects. The evidence of these effects is already clear in Mexico. In January 2007, in part because of the rise in U.S. corn prices from $2.80 to $4.20 in less than four months, the price of tortilla flour in some parts of Mexico rose sharply. The connection was that 80% of Mexico’s corn imports, which account for a quarter of its consumption, are from the United States, and U.S. corn prices had risen, largely because of surges in demand to make ethanol. About half of Mexico’s 107 million people live in poverty; for them, tortillas are the main source of calories. By December 2006, the price of tortillas had doubled in a few months to eight pesos ($0.75) or more per kilogram. Most tortillas are made from homegrown white corn. However, industrial users of imported yellow corn in Mexico (for animal feed and processed foods) shifted to using white corn rather than imported yellow, because of the latter’s sharp price increase. The price increase of tortillas was exacerbated by speculation and hoarding. In January 2007, public outcry forced Mexico’s new President, Felipe Calderón, to set limits on the price of corn products.

The International Food Policy Research Institute (IFPRI), in Washington, DC, has monitored the run-up in the demand for biofuels and provides some sobering estimates of their potential global impact. IFPRI’s Mark Rosegrant and his colleagues estimated the displacement of gasoline and diesel by biofuels and its effect on agricultural market prices. Given rapid increases in current rates of biofuels production with existing technologies in the United States, the European Union, and Brazil, and continued high oil prices, global corn prices are projected to be pushed upward by biofuels by 20% by 2010 and 41% by 2020. As more farmers substitute corn for other commodities, prices of oilseeds, including soybeans, rapeseed, and sunflower seed, are projected to rise 26% by 2010 and 76% by 2020. Wheat prices rise 11% by 2010 and 30% by 2020. Finally, and significantly for the poorest parts of sub-Saharan Africa, Asia, and Latin America where it is a staple, cassava prices rise 33% by 2010 and 135% by 2020.

Is ethanol competitive?

Although there are possible alternatives to corn and soybeans as feedstocks for ethanol and biodiesel, these two crops are likely, in the United States at least, to remain the primary inputs for many years. Politics will play a major role in keeping corn and soybeans at center stage. Cellulosic feedstocks are still more than twice as expensive to convert to ethanol as is corn, although they use far fewer energy resources to grow. And corn and soybean growers and ethanol producers have not lavished 35 years of attention and campaign contributions on Congress and presidents to give the store away to grass.

Yet because of the panoply of tax breaks and mandates lavished on the industry, the competitive position of the biofuels industry has never been tested. Today, however, the pressures and distortions it has created encourage perverse incentives: For ethanol to profit, either oil prices must remain high, further draining U.S. foreign exchange for petroleum imports, or corn prices must come off their market highs, allowing reasonable margins in the corn ethanol business. But high oil prices are what allow ethanol producers to pay a premium for corn. Hence, oil and corn prices are ratcheting up together, heedless of the effects on consumers and inflation. Bruce Babcock, in a study for the Center for Agricultural and Rural Development at Iowa State University, predicted in June 2007 that ethanol’s impact on corn prices could make corn ethanol itself unprofitable by 2008.

TO THE EXTENT THAT ETHANOL CREATES SHORTAGES AND DIVERTS CORN FROM FOOD AND FEED TO FUEL USES, IT WILL BECOME INCREASINGLY CONTROVERSIAL AND POLITICALLY VULNERABLE, AS WILL THE TARIFF WALLS ERECTED TO KEEP CHEAPER BRAZILIAN ETHANOL OUT OF THE U.S. MARKET.

Apart from ethanol-specific subsidies, tax breaks, and mandates, it is also important to recall that the ethanol market has been made in large part by shifts in U.S. transportation and clean air policies. When these policies are considered, it is clear that ethanol is not really competitive with petroleum, but has served instead as its complement. As increased production capacity allows ethanol to move beyond its traditional role as a gasoline enhancer (now a roughly 6-billion-gallon market) and become a gasoline replacement, several major concerns have arisen.

One critical factor involves a key ethanol liability: its energy content. Because it will drive a car only two-thirds as far as gasoline, its value as a gasoline replacement (rather than a gasoline additive) will probably gravitate toward two-thirds of gasoline’s price. A lower ethanol price would then lower the breakeven price that ethanol producers could pay for corn. Meanwhile, the domestic market for corn has been transformed from chronic surplus stocks and carry-forwards into bare shelves. Tighter supplies have led to higher prices, even in good-weather years. And what if dry hot weather produces a short corn crop? A 2007 report for the U.S. Department of Agriculture by Iowa State’s Center for Agricultural and Rural Development estimated that with a 2012 mandate of 14.7 billion gallons, corn prices would be driven 42% higher and soybean prices 22% higher by a short crop similar to that of 1988. Corn exports, meanwhile, would tumble 60%. In short, ethanol is switching from a demand-builder to a demand-diverter.

Another factor involves energy efficiency. If net energy efficiency is thought of as a dimension of competitiveness, a recent Argonne National Laboratory ethanol study summarized by the U.S. Department of Energy is revealing. It showed that ethanol on average uses 0.74 million BTUs of fossil energy for each 1 million BTUs of ethanol delivered to the pump. In addition, the total energy used to produce corn-based ethanol, including the solar energy captured by photosynthesis, is 1.5 to 2 million BTUs for each 1 million BTUs of ethanol delivered to a pump. If corn for ethanol is just an additional user of land, it is fair to ignore the “free” solar energy that grows the corn. But if corn-based ethanol is diverting solar energy from food or feed to fuel through subsidies or mandates, policymakers cannot so easily ignore it. Similarly, because ethanol has only two-thirds the energy content of gasoline, its greenhouse gas emissions per mile traveled (rather than per gallon) are comparable to those of conventional gasoline.
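Restated as a simple ratio (our arithmetic on the Argonne figures just quoted), the fossil-fuel balance is

\[
\frac{1 \ \text{million BTU of ethanol at the pump}}{0.74 \ \text{million BTU of fossil inputs}} \approx 1.35,
\]

a net gain of roughly a third on the fossil energy invested. Counting total energy inputs of 1.5 to 2 million BTUs, including the captured solar energy, the ratio falls to between 0.5 and 0.67.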

Yet another concern is the net environmental effect of ethanol. It takes from one to three gallons of water to produce a gallon of ethanol, which raises concerns about ground and surface water supplies. Although ethanol has some advantages over conventional gasoline in terms of its contribution to air pollution, it also has some disadvantages. One is its higher emissions of volatile organic compounds (VOCs), which contribute to ozone formation. Ethanol also increases concentrations of acetaldehyde, a carcinogen. In addition, corn and soybeans are row crops that encourage the runoff of fertilizers and pesticides into streams, rivers, and lakes. As acres come out of soybeans and into corn (of the 12 million acres of new corn planted in 2007, three-fourths came out of soybeans), they require more nitrogen fertilizer. This nitrogen runs off into waters, encouraging algae blooms that choke off oxygen for fish and other creatures. All of the above belie ethanol’s reputation as “greener” than gasoline.

Finally, the logic behind the renewable fuels standard is that the raw material used—such as corn for ethanol—is renewable. Corn is renewable in the sense that it is harvested annually. But corn production and processing consume fossil fuels. So what is the net renewable benefit? Most estimates place the net renewable energy contribution from corn-based ethanol at 25% to 35%. Using a midpoint of 30%, that means that a mandate of 7.5 billion gallons, if filled by corn-based ethanol, yields a net renewable energy gain of only 2.25 billion gallons. Other products or processes may be more cost-effective in replacing gasoline.
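
The net-renewable arithmetic is worth making explicit; the short calculation below applies the full 25% to 35% range of estimates, not just the 30% midpoint, to the 7.5-billion-gallon mandate.

```python
# Net renewable contribution of a corn-ethanol-filled mandate, across the
# range of estimates cited above.
mandate_gallons = 7.5e9  # renewable fuels mandate, in gallons

for net_fraction in (0.25, 0.30, 0.35):
    net_gain = mandate_gallons * net_fraction
    print(f"Net renewable contribution at {net_fraction:.0%}: {net_gain / 1e9:.2f} billion gallons")
# At the 30% midpoint, the 7.5-billion-gallon mandate yields roughly
# 2.25 billion gallons of genuinely new renewable energy.
```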

As these problems become clearer, so does the appeal of cellulose as the feedstock for ethanol. The best role for corn-based ethanol then becomes simply building a bridge to the more promising world of cellulosic ethanol. But it is not clear why building a corn-based ethanol industry much beyond its current size as a producer of a gasoline additive makes sense as a prelude to cellulosic ethanol, for a number of reasons. First, technological progress in producing corn-based ethanol is not likely to be relevant to the technology challenges facing cellulosic ethanol. Second, growing areas for cellulose may well be different from those for corn; if switchgrass is to be grown on current corn acres, it will have to beat high current corn prices in profitability. Third, the low energy density of cellulosic materials suggests that the handling and processing infrastructure they need is likely to be different in scale from that for corn-based ethanol. Fourth, cellulosic ethanol is currently a very high-cost option, and many other petroleum substitutes are likely to be attractive before it. Finally, land-use conflicts—between food/feed and fuel or between conservation and fuel—differ in degree, not kind, between corn and cellulose and are likely to constrain a cellulosic industry’s capacity to well below the 35 billion gallons called for by President Bush. And whatever plant material is used to make biofuels, an estimate in the August 17, 2007 issue of Science suggested that substituting just 10% of U.S. fuel needs with biofuels would require 43% of U.S. cropland.

In short, there is enough uncertainty about ethanol’s supply and demand prospects to argue for a pause in the headlong rush into ethanol production. Turning corn surpluses into a gasoline additive was a strategy that made food and fuel complementary. But turning a tightening corn market into a less rewarding gasoline-replacement strategy heightens the conflict between food and fuel uses, with major environmental externalities and limited environmental benefits.

Fundamental change needed

If we are to avoid a situation in which ethanol becomes a demand diverter for corn, a fundamental reorientation in farm and energy policies is required. The alternative policy model will require replacing the mandates, subsidies, and tariffs designed to help an infant industry with a new set of policy instruments intended to broaden the portfolio of energy alternatives and to create market-driven growth in renewable energy demand.

Today, politicians compete with one another to raise the biofuels mandate. Little apparent consideration is given to the potential consequences of building markets on political fiat rather than sound finances. The result is that capacity is built too fast, at uneconomic scale, and in the wrong locations. Competing interests such as domestic feeders and foreign consumers can get trampled in the process, especially during a short crop, when the mandate functions as an embargo on other uses. Eventually, competing suppliers take over the traditional markets imperiled by ill-considered mandates. As this scenario unfolds, the burden of false economics and competitive responses may become too much to bear, and the shaky superstructure will crash, stranding assets and bankrupting many. In order to avoid such a crash, the United States should not increase the biofuels mandate beyond the current level of 7.5 billion gallons.

Next, consider subsidies to ethanol. The blender’s tax credit of 51 cents per gallon enabled ethanol to compete with gasoline in a market characterized by low gasoline prices and surplus corn supplies. That market no longer exists. Gasoline prices have skyrocketed because of high petroleum prices. The fixed per-gallon subsidy generated high profit margins for ethanol producers, which led to excessive growth in production. Some suggest correcting for this effect by replacing the fixed subsidy with a variable one that would decline as oil prices rose. This approach essentially would link ethanol to the volatile petroleum market.

Linking the subsidy to the demand side of the equation (the price of oil), however, may not be the best avenue for reconciling food and fuel uses. Instead, we should consider the subsidy’s effect on the supply side of the equation: the corn market. To the extent that an ethanol subsidy reduces surpluses, it is likely to enjoy continued and significant political support. But if it creates shortages and diverts corn from food and feed to fuel uses, it will become increasingly controversial and politically vulnerable, as will the tariff walls erected to keep cheaper Brazilian ethanol out of the U.S. market.

For these reasons, we should replace today’s fixed subsidy policy with a variable subsidy linked to corn supplies. As corn prices rise, the subsidy should be phased down. This would provide an incentive to convert corn to energy when supplies are ample, while allowing food and feed (and other industrial) uses to compete on an equal footing as supplies tighten and prices rise. When corn prices rise above some set level, the subsidy would fall to zero. At the same time, we should lower the tariff on imported ethanol.
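
The sketch below illustrates the structure of such a variable subsidy. The article proposes the mechanism, not specific numbers; the price floor and cutoff used here are hypothetical placeholders, and only the 51-cent maximum corresponds to the existing blender’s credit.

```python
# Minimal sketch of a variable ethanol subsidy that phases down as corn
# prices rise. The $2.50 floor and $4.50 cutoff are hypothetical; the
# mechanism, not these numbers, is what the proposal describes.

MAX_SUBSIDY = 0.51    # $/gallon, the historical fixed blender's credit
FLOOR_PRICE = 2.50    # $/bushel: full subsidy at or below this corn price (assumed)
CUTOFF_PRICE = 4.50   # $/bushel: subsidy reaches zero at or above this price (assumed)

def ethanol_subsidy(corn_price):
    """Per-gallon subsidy that declines linearly as the corn price rises."""
    if corn_price <= FLOOR_PRICE:
        return MAX_SUBSIDY
    if corn_price >= CUTOFF_PRICE:
        return 0.0
    share_remaining = (CUTOFF_PRICE - corn_price) / (CUTOFF_PRICE - FLOOR_PRICE)
    return MAX_SUBSIDY * share_remaining

for price in (2.00, 3.00, 3.50, 4.00, 5.00):
    print(f"Corn at ${price:.2f}/bu -> subsidy of ${ethanol_subsidy(price):.2f}/gal")
```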

An approach to ethanol incentives along these lines has three distinct advantages over current policy. First, it will function more like a shock absorber for corn producers and corn users; in contrast, a fixed subsidy in a volatile petroleum market functions like a shock transmitter that amplifies the effect of price swings. Second, it should largely disarm the emerging food-versus-fuel and environment-versus-fuel debates by letting market forces play a larger role in the industry’s future expansion. Finally, it preserves incentives for developing fuel uses in surplus markets, which would encourage continued technological progress in the breeding, production, processing, and use of corn for ethanol. Such developments should continue to improve corn-based ethanol’s competitive position.

Now consider energy policy. With better throttle control on ethanol’s role in the farm-food-feed economy, a fresh approach could also be taken toward U.S. energy policy and ethanol’s place within it. Current policy is too dependent on the political process: picking winners and losers and anointing technologies such as ethanol as favored approaches. Such an approach confronts two huge risks. The first resembles the risk Alan Greenspan foresaw in the U.S. stock market at the beginning of this century: an “irrational exuberance.” In the case of ethanol, the concern is that the enthusiasm for ethanol’s political rewards may run ahead of the logic that governs its economic realities.

A third element in our proposed mix of policies would be the creation of a wide-ranging set of fees and rewards to discourage energy inefficiencies and encourage conservation. Milton Friedman once proposed a negative income tax in which taxes would be zero at a certain base income and families below that income would receive subsidies. We propose a broad-based set of fees on energy uses that are carbon-intensive and inefficient, paired with subsidies for energy efficiency improvements that exceed a national standard. Simple examples would be progressive taxes on automobile horsepower and rebates for hybrid vehicles; fees on housing spaces in excess of 3,500 square feet; and rebates for energy-compliant, economical use of housing space. These “negative pollution taxes” would encourage conservation, while discouraging energy-guzzling cars, trucks, and homes. In particular, these policies could help encourage full–life-cycle energy accounting, tilting the economy toward the use of renewable fuels based on cellulosic alternatives to corn.
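
A stylized example of the fee-and-rebate structure is sketched below: units that fall short of an efficiency standard pay a fee proportional to the shortfall, and units that beat it receive a rebate. The 30-mpg standard and the $100-per-mpg rate are invented for illustration and are not proposals from the article.

```python
# Stylized "negative pollution tax" schedule for vehicles: a fee for
# falling short of an efficiency standard, a rebate for beating it.
# Both the standard and the rate are hypothetical.

STANDARD_MPG = 30.0   # assumed national efficiency standard
RATE_PER_MPG = 100.0  # assumed fee/rebate in dollars per mpg of deviation

def fee_or_rebate(vehicle_mpg):
    """Positive result = fee owed; negative result = rebate received."""
    return RATE_PER_MPG * (STANDARD_MPG - vehicle_mpg)

for mpg in (18, 30, 45):
    amount = fee_or_rebate(mpg)
    label = "fee" if amount > 0 else "rebate" if amount < 0 else "neither"
    print(f"{mpg} mpg vehicle: ${abs(amount):.0f} {label}")
```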

Finally, instead of subsidizing the current generation of inadequate cellulosic or coal gasification technologies, we would invest government resources in upstream R&D to bring further innovation and lower costs to these technologies so that they could compete in the market.

To move from our current devotion to corn-based ethanol and toward a new set of policies for renewable fuels will require bravery on the part of those who lead the reforms. The courage to admit that current policies have stoked the ethanol engine to an explosive heat may be in short supply. But unless the ethanol train slows down, it is likely to go off the tracks.

Polishing Belgium’s Innovation Jewel

Situated in the northern part of Belgium, the Flanders region is a natural meeting point for knowledge and talent, which are attracted by its highly skilled population, splendid cultural heritage, outstanding quality of life, excellent research, and easy accessibility. Its capital city, Brussels, doubles as the capital of Belgium and the headquarters of the European Union (EU). Additional assets include an open economy, excellent transportation and logistical infrastructure, and EU funding for science and technology development. Flanders has a strong educational infrastructure of six universities and 22 non-university higher-education institutions. These institutions have been grouped into five associations (Leuven, Ghent, Antwerp, Brussels, and Limburg) to facilitate and consolidate the implementation of the Bologna process, which aims to coordinate higher education across Europe.

Over the past quarter of a century, Belgium was gradually transformed from a centralized state into a federal state. During this process, education and nearly all R&D-related responsibilities were devolved to the regional authorities at the level of governance best suited for implementing these policies.

One of the richest and most densely populated European regions (6 million people in an area the size of Connecticut), Flanders has few natural resources. Its open economy is dominated by the service sector and by small and medium-sized enterprises (SMEs). The primary activities in the services sector are education, business services, and health care. Strong economic sectors include the automotive industry, the chemical industry, information and communication technology (ICT), and life sciences. Foreign companies represent almost 25% of Flemish added value and 20% of jobs in Flanders. Exports are extremely important and continue to grow [98.8% of Flanders’ gross domestic product (GDP) in 2005].

Its highly skilled, multilingual population has one of the highest productivity rates in the world. Flanders’ social and economic progress is largely determined by its ability to face and adapt to the constantly changing challenges of the knowledge society in an ever-expanding global environment. The backbone of this knowledge society is the strong partnership among education, research, innovation, and entrepreneurship.

Reinforcing the scientific and technological innovation base is one of the government’s top priorities. In the mid-1990s, the Flemish government started to systematically increase its investment in science and technological innovation. Over the past 10 years, public outlays for R&D almost doubled. They are evenly distributed to support R&D at academic institutions and in companies. In 2005, R&D accounted for 2.09% of Flanders’ GDP, well above the EU average of 1.85%. Businesses provided 70% of the R&D spending.

Increased spending is a necessary condition for a successful R&D policy, but it is not sufficient on its own. The money must be spent wisely, and Flanders has studied successful programs in other countries to learn lessons that it can apply to its own programs. The result has been an R&D portfolio that seems to have the critical ingredients for success:

  • It maintains a balance between basic and applied research and between support for university and industry research.
  • It emphasizes a bottom-up approach where researchers are free to propose their own projects and funds are awarded on the basis of quality.
  • Universities and research institutes are given significant autonomy in directing research. The government provides a block grant to an institution, which must agree to long-term performance goals. Those that meet their overall goals continue to receive funding.

Flanders has been an active participant in EU discussions about innovation policy and has adapted its policies in response to what it has learned. At the Lisbon European Council in 2000, the EU heads of state expressed their ambition for the EU to become by 2010 “the most competitive and dynamic knowledge-based economy in the world, capable of sustainable economic growth with more and better jobs and greater social cohesion.” The 2002 Barcelona European Council set a target for every country to spend 3% of GDP on R&D by 2010, with two-thirds provided by industry and one-third by public authorities.

In 2003, the Flemish government signed an Innovation Pact with the key players from academia and industry. All parties subscribed to the 3% Barcelona target. The Flemish Science Policy Council (VRWB) has been designated to monitor the implementation of the Innovation Pact using a set of 11 key indicators. These include the number of patents, R&D personnel, higher-education degrees, and risk capital.

The most recent monitoring results, published in July 2007, indicate that Flemish innovative capacity remains average. We are not doing enough to transform the country’s excellent academic research results into innovative products, a problem encountered in many other European countries. In addition, only a few (mostly international) companies account for the majority of industrial research activities, making the Flemish economy particularly vulnerable to external events and corporate decisions.

In response, we have begun revising our R&D policy with the aim of resolving the innovation paradox by more effectively tapping into the practical applications of academic research, spreading innovation more broadly throughout the economy, and acquiring the strategic intelligence required to guide an evidence-based R&D policy.

R&D players

R&D in Flanders is carried out in many different places. The main players are the universities, the strategic research institutes, the hogescholen (non-university institutes for higher education), and industry.

Flanders has six universities, which share a threefold mission of education, research, and service to society and third parties. They are based in Leuven, Ghent, Antwerp, Hasselt, and Brussels. Since 2001, the University of Hasselt has been engaged in long-term cross-border cooperation with the Dutch University of Maastricht.

The 22 hogescholen form the second pillar of our dual system for higher education, providing higher education and advanced vocational training outside the universities. Their mission also includes scientific research and service to society. As stated earlier, universities and hogescholen have started to work together much more closely in what are known as associations.

Flanders also has four public strategic research centers, which are active in strategic scientific disciplines:

  • IMEC was founded in 1984 and has since developed into a world-renowned research and training institute for microelectronics. It currently employs more than 1,200 scientific and technical staff and has an extensive network of international contacts. Its commercial activities include technology transfers, cooperation agreements with companies, and participation in spin-offs. IMEC received a €39 million block grant from the Flemish government for 2007 (see sidebar).
  • The VIB, founded in 1996, is an inter-university research institute with more than 860 staff in several top-class university units operating in the field of biotechnology. Its core activities consist of fundamental research into cancer, gene therapies, Alzheimer’s disease, and protein structures, complemented by technology transfer and information dissemination. VIB’s public grant amounts to €38.2 million in 2007.
  • VITO was set up in 1992 and groups a dozen expertise centers for R&D; it also acts as a reference lab for the Flemish government. It employs more than 400 staff. Noteworthy activities include surface and membrane technologies, alternative sources of energy, and in vitro cell cultures. The public grant for 2007 is set at €35.2 million.
  • IBBT was established in 2003. Its primary mission is to gather highly competent human capital and perform multidisciplinary research made available to the Flemish business community and the Flemish government. This research looks at all aspects necessary for enabling the development and exploitation of broadband services, from technical and legal perspectives to the social dimension. Through investment in multidisciplinary research, the Flemish government wants to empower Flanders as an authoritative and international player in the information society of the future. In 2007, IBBT received €23 million from the Flemish government.

Last but not least, an abundance of market-oriented research is being done in and by companies, primarily SMEs. The government takes an active role in stimulating their participation in innovative research.


Policy priorities

The strategic priorities for Flemish R&D policy, as adopted by the Flemish government for the 2004–2009 period, can be summarized as follows:

  • A strong commitment to achieving the 3% of GDP spending target by 2010
  • The introduction of an integrated approach to innovation as a cross-cutting dimension
  • The strengthening of the building blocks for science and innovation (public funding, human resources, public acceptance of science and technology, research equipment, and infrastructure)
  • The efficient use of existing policy instruments for strategic basic and industrial research
  • The reinforcement of tools for knowledge transfer and marketing of research results
  • Continued attention to policy-oriented research and evaluation of existing policy measures
  • A strong emphasis on international cooperation, in both the bilateral and the multilateral context

Tackling the innovation paradox. University researchers have long worried that working with industry would somehow corrupt them and undermine the prestige of their work. Many industry leaders believe that there is little to be gained from working with universities because there is a structural mismatch between the academic research agenda and industry’s needs. In spite of these common preconceptions, Flemish universities and industries have found productive ways to work together. A study by the Catholic University of Leuven provides detailed evidence that Flanders is finding a way out of the innovation paradox and makes proposals on what can be done to advance this process. The key findings include:

  • In 2005, approximately 10% of all R&D expenditure in Flanders was generated in a collaborative partnership between industry and academia. According to 2003 data from the European Commission, industry in Belgium spends 10.9% of its R&D funding in university-related research settings, which is well above the EU average of 6.9% or the U.S. level of 6.3%.
  • Over the period 1991–2004, universities and public research centers have created 101 spinoff companies, including 54 over the past five years.
  • Research teams that work closely with industry also perform very well in basic research.

Encouraged by this evidence, Flanders is taking further steps to enhance university/industry collaboration. In 2004, the Industrial Research Fund (IOF) was established at the universities. The annual budget for this fund is currently around €11 million and is distributed over the six universities on the basis of performance-driven parameters, such as the number of spinoffs created, the number of patent applications, the volume of industrial contract research, and the budgetary share of each university in the European Framework Programme. Beginning in 2008, the annual budget will be increased to at least €16 million.
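
To make the idea of a performance-driven distribution key concrete, the sketch below allocates a budget in proportion to a weighted sum of each university’s share of the system-wide totals on the listed indicators. The equal weighting and the sample figures are invented for illustration; the actual IOF parameters are not given here.

```python
# Stylized performance-driven allocation in the spirit of the IOF key.
# Weights and university data are hypothetical.

BUDGET = 11_000_000  # euros, the approximate annual IOF budget cited above
WEIGHTS = {"spinoffs": 0.25, "patents": 0.25, "contracts": 0.25, "eu_framework": 0.25}

universities = {  # fictional indicator values
    "University A": {"spinoffs": 4, "patents": 12, "contracts": 9.0, "eu_framework": 3.1},
    "University B": {"spinoffs": 2, "patents": 20, "contracts": 6.5, "eu_framework": 2.4},
    "University C": {"spinoffs": 1, "patents": 5,  "contracts": 3.0, "eu_framework": 1.0},
}

totals = {k: sum(u[k] for u in universities.values()) for k in WEIGHTS}

def score(indicators):
    """Weighted sum of the university's share of each system-wide total."""
    return sum(WEIGHTS[k] * indicators[k] / totals[k] for k in WEIGHTS)

scores = {name: score(ind) for name, ind in universities.items()}
total_score = sum(scores.values())  # sums to 1.0 with share-based scoring
for name, s in scores.items():
    print(f"{name}: {BUDGET * s / total_score:,.0f} euros")
```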

The IOF provides funds to hire postdoctoral staff who will concentrate on research results that show great potential for market application in the near future. This group of researchers will also be evaluated on the basis of their application-oriented performance. In the near future, the IOF will also allow the universities to fund projects in strategic basic research.

The IOF allows every university and its associated hogescholen to pursue its own policy of creating a portfolio of strategic application-oriented knowledge. Research contracts of this nature will lead to a more permanent structure for cooperation with industry. The aim is thus to stimulate industry-oriented research and to support the creation and/or consolidation of excellent research groups in industry-relevant areas by providing longer-term funding.

A second important instrument is the creation of interface cells at the universities. Functioning as government-funded technology transfer offices (TTOs), these cells help market research results through spinoffs and patents and provide advice on intellectual property rights issues to academic researchers. The operational budget for these TTOs doubled between 2005 and 2007. This increase will help staff and services become more professional and help the offices to deal with the challenge of extending their services to the broader landscape of the associations.

Even though both the IOF and TTOs are in place and it can be reasonably expected that they will make a difference in reducing the innovation paradox, further initiatives are needed. These include facilitating the mobility of researchers between sectors and the use of foresight methodology to assess the potential economic impact of existing and future technologies.

Intersectoral mobility. The movement of researchers between academia and industry is of paramount importance to enhance the exchange of knowledge and methodologies, to refine the research agenda, and to put young researchers in contact with an industrial environment where they can acquire skills not normally taught in an academic program. The main existing fellowship scheme for Ph.D. students is managed by the IWT, which is the Flemish innovation agency for industrial research. Fellows submit an applied research project, typically for a four-year period, which allows them to obtain their Ph.D. In addition, the IWT runs a limited postdoctoral program, which funds, for example, researchers planning to set up their own spinoff company.

The focus of these fellowship programs is obviously on applied research, but they cannot be considered real intersectoral mobility because researchers are not moving back and forth between companies and university labs. In the months ahead, the Baekeland program will be launched as an alternative funding scheme, taking into account lessons learned from existing programs abroad. The program will establish four-year fellowships for Ph.D. students that are supported with a mix of government and private funds.

Foresight. In 2006, the VRWB undertook a major foresight exercise. With the support of many academic and industrial stakeholders, the VRWB embarked on the challenging task of trying to identify the major scientific and technological areas for the future, taking into account existing research potential, existing economic capacity, links with current international trends, and potential for future growth. The following six clusters have been identified:

  • Transport, services, logistics, and supply chain management
  • ICT and health care services
  • Health care, food, prevention, and treatment
  • New materials, nanotechnology, and the processing industry
  • ICT for social and economic innovation
  • Energy and environment for the service sector and the processing industry

Some might want to use these foresight results to send out a strong plea to reinstate thematic priorities in our existing funding channels. This, however, would be an unfortunate return to the past when Flanders had several top-down research programs, which didn’t leave enough breathing space for bottom-up initiatives and for smaller research actors. As said before, Flanders’ current research and innovation policy is based on an open no-strings-attached strategy, which allows and actively invites research proposals defined by the industrial and academic communities themselves. Funding is possible only after a thorough quality check, using the peer review principle as far as possible.

The results of the VRWB’s foresight exercise might become a useful reference instrument when deciding on the funding of new large-scale projects or research consortia. A potential area of application is the development of “competence centers,” which are bottom-up initiatives by industry to create a critical knowledge platform in their respective sectors. Open innovation is the underlying principle: Knowledge is accessible to all participants, and research is done in close collaboration with multiple industrial partners so that costs and risks can be shared. Of course, the necessary intellectual property rights and other legal agreements have to be put in place. About 10 competence poles are currently being funded, ranging from logistics and food to geographical information systems, to product development and industrial design. Foresight might come in handy when checking the feasibility and the potential economic impact of proposals for new competence poles.

Innovation as a horizontal policy dimension. Another major policy challenge is to broaden the concept of innovation to its nontechnological dimensions. Until very recently, Flemish innovation policy has been targeting only the technological dimension of innovation. There is a growing awareness, however, that innovation also depends on management, public and private governance structures, labor market organization, design, and other factors. The challenge is to develop suitable policy instruments to broaden the scope of innovation. The application of innovative public procurement is one of the instruments we are studying at the moment.

One of the policy priorities for the coming years is the “mainstreaming” of innovation; that is, to make sure that innovation becomes a horizontal dimension in all policy fields for which the Flemish government has responsibility. In 2005, the government approved the Flemish Innovation Plan, which puts forward nine main lines of action: stimulate creativity and innovation in all societal sectors; promote Flanders as an internationally recognized knowledge region; invest more in innovation; create an innovative environment; set a good example as a public authority; put more researchers to work; focus on the development of innovation hot spots in cities such as Ghent and Leuven; use innovation as leverage for sustainable development; and integrate innovative approaches into the social welfare system. The plan should lead to a horizontally integrated innovation approach across the board.

Strategic intelligence. The accelerating pace of globalization, and the complexities of an open innovation system in which governments no longer have at their disposal the full range of instruments needed to create an adequate policy mix, make it imperative that governments join forces across borders. We need to enhance mutual understanding of our science and innovation systems, both within the national context and internationally.

High-quality and evidence-based policy preparation is possible only if one can bring together a team of policy experts who combine a good knowledge of the more theoretical innovation framework with well-tuned affinities for the practical needs and obstacles encountered on a daily basis by research actors, such as universities, higher-education institutes, or companies. In other words, desk study work and field work need to be combined.

One of the main challenges in the years ahead will be to boost the pool of science and innovation management expertise in Flanders and to network the various agencies and organizations that carry out science and innovation analyses, very often on an ad hoc basis. The Flemish research landscape is so small and the capacity so limited that only a networked approach can yield efficient results. It does not make sense to have small and often isolated study cells at various organizations that are often unaware of each other’s activities. That fragmentation reduces efficiency and leads, for example, to similar questionnaires being sent repeatedly to the same research units by different senders. Coordination through a networked approach is clearly the way to go, and we will make this one of our policy priorities for the coming months and years.

As said before, we also need to increase the firsthand field knowledge of those charged with policy preparation. We therefore intend to set up a mobility program, which would allow the temporary exchange of staff among administrations, funding agencies, universities, public research institutes, higher-education institutes, and companies. Such an approach will make participants actively aware of the peculiarities of the “other” and often unknown environments. It will also greatly reduce the number of superfluous rules when designing new research programs or initiatives. Ultimately, greater mutual understanding is also a major contribution to innovation.

All actors in the Flemish research area also stand to benefit from up-to-date online statistical information; for example, on the number of publications and patents, scientific staff, or external contract revenues. This kind of information is not only necessary as an input for international data collection by international organizations but is also a valuable instrument for the government to monitor the impact of its science and innovation policy. Flanders has already taken steps in this direction. The Department of Economy, Science, and Innovation publishes in English and Dutch an annual budgetary overview on science and innovation in its Science, Technology and Innovation Information Guide. The Policy Research Centre for R&D Statistics has been entrusted with the biannual publication of the Flemish R&D Indicator Book. The next issue is planned for this year. As part of the recently approved action plan Flanders i2010, we will embark on the creation of an integrated online database with all relevant R&D data.

Given its strong and close international contacts, Flanders also stands to gain a lot from exchanging information and best practices with partners abroad. There are several instruments that help us in this effort. The European Commission has set up ERA-NETs, OMC-NETs, and INNO-NETs with the specific aim of enhancing innovation expertise and capacity in national administrations. At a bilateral level, Flanders engages in “innovation dialogues” with the Netherlands, Wallonia, and the United States.

After more than 15 years of continuous increases in public R&D spending, the Flemish funding system is reaching a state of completion; most of the funding instruments for curiosity-driven research, strategic basic research, and innovation are in place. The challenges ahead are to streamline these instruments, reducing overlaps and remaining obstacles, and to raise their effectiveness in tackling the innovation paradox. In this context, international policy-learning is extremely valuable, and this will be one of the priorities for the coming years.

From the Hill – Fall 2007

President Bush signs competitiveness bill

On August 9, President Bush signed into law the bipartisan America COMPETES Act (H.R. 2272), aimed at bolstering basic research and education in science, technology, engineering, and mathematics (STEM) to ensure the nation’s continued economic competitiveness. Despite signing the bill, however, the president expressed concerns about some of its provisions and said he would not support funding some of its authorized spending.

The passage of H.R. 2272 culminates two years of advocacy by the scientific, business, and academic communities, as well as by key members of Congress, sparked by the release of the 2005 National Academies’ report Rising Above the Gathering Storm.

The legislation, which incorporates many prior bills, authorizes $33.6 billion in new spending ($44.3 billion in total) in fiscal years (FY) 2008, 2009, and 2010 for a host of programs at the National Science Foundation (NSF), Department of Energy (DOE), National Institute of Standards and Technology (NIST), National Oceanic and Atmospheric Administration (NOAA), National Aeronautics and Space Administration (NASA), and Department of Education. It puts NSF and NIST on a track to double their research budgets over three years by authorizing $22.1 billion and $2.65 billion, respectively. It also authorizes $5.8 billion in FY 2010 for DOE’s Office of Science in order to complete the goal of doubling its budget.

The act’s sections on NSF, DOE, and the Department of Education all have significant educational aspects. They are broadly aimed at recruiting more STEM teachers, refining the skills of current teachers and developing master teachers, ensuring that K-12 STEM education programs suitably prepare students for the needs of higher education and the workplace, and enabling more students to participate in effective laboratory and hands-on science experiences.

At NSF, for example, the law expands the Noyce program of scholarships to recruit STEM majors to teaching. DOE’s role in STEM education will be expanded by tapping into the staff expertise and scientific instrumentation at the national laboratories as a resource to provide support, mentoring relationships, and hands-on experiences for students and teachers. The Department of Education will become involved in developing and implementing college courses leading to a concurrent STEM degree and teacher certification.

The act replaces the Advanced Technology Program at the Department of Commerce with the Technology Innovation Program, with the primary goal of funding high-risk, high-reward technology development projects.

It also authorizes DOE to establish an Advanced Research Projects Agency for Energy (ARPA-E) to conduct high-risk energy research. With authorized funding of $300 million in FY 2008, the new agency is to be housed outside of DOE’s Office of Science, ostensibly to ensure that it does not rob from the Office of Science’s budget.

At the White House signing ceremony, President Bush had kind words in general for the legislation but also said that some of its provisions and expenditures were “unnecessary and misguided.”

Noting that the legislation shares many of the goals of his American Competitiveness Initiative (ACI), such as doubling funding for basic research in the physical sciences and increasing the number of teachers and students participating in Advanced Placement and International Baccalaureate classes, he said, “ACI is one of my most important domestic priorities because it provides a comprehensive strategy to help keep America the most innovative nation in the world by strengthening our scientific education and research, improving our technological enterprise, and providing 21st-century job training.”

But he said he was disappointed that Congress failed to authorize his Adjunct Teacher Corps program to encourage math and science professionals to teach in public schools, and he criticized 30 new programs that he said were mostly duplicative or counterproductive, including ARPA-E, whose mission, he said, would be more appropriately left to the private sector.

Bush also said the legislation provides excessive funding authority for new and existing programs, adding that, “I will request funding in my 2009 budget for those authorizations that support the focused priorities of the ACI but will not propose excessive or duplicative funding based on authorizations in this bill.”

Among those at the signing ceremony were congressional leaders who were key to shepherding the bill through Congress, including Rep. Bart Gordon (D-TN), chair of the House Science and Technology Committee, who said, “I am very concerned that the next generation of Americans can be the first generation to inherit a national standard of living less than their parents if we don’t do something. This bill will help turn that corner.”

Climate bills address competitiveness concerns

Several bills have been introduced in the Senate aimed at alleviating concerns about the potential impact that addressing climate change could have on U.S. economic competitiveness.

Sens. Jeff Bingaman (D-NM) and Arlen Specter (R-PA) introduced the Low Carbon Economy Act of 2007 (S. 1766) on July 11. It features a cap-and-trade system with targets to reduce greenhouse gases to 2006 levels by 2020 and 1990 levels by 2030. The bill encourages the development and deployment of carbon capture and storage (CCS) technology with a system of bonus emissions credits for companies that implement the technology.

S. 1766 contains provisions on international engagement meant to assuage critics of climate policies that do not include growing emitters such as China and India. The bill requires that the United States attempt to negotiate an agreement with other nations to take “comparable action” to address climate change. Beginning in 2020, the bill allows the president to require importers from countries that are not taking action to submit emission allowances for certain high-carbon products such as cement. Prices for these “international reserve allowances,” which would constitute a separate pool from domestic allowances, would be equal to those for domestic allowances, fulfilling a key tenet of trade law that tariffs be applied equally to domestic and foreign products.

The bill also attempts to limit costs by incorporating a cap on the price of emissions, referred to in the bill as a technology-accelerator payment but known to many as a safety valve. The price starts at $12 per ton of carbon and rises at a rate of 5% above inflation annually. A safety valve has been embraced by many in industry for providing price certainty, but criticized by economists and environmentalists who say it interferes with the power of the market and may also prohibit linkages with other international trading schemes.
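
Because the safety-valve price rises 5% per year above inflation, its trajectory in constant dollars is easy to project; the sketch below assumes a 2012 start year purely for illustration.

```python
# Safety-valve ("technology-accelerator payment") price path implied by
# the bill's formula: $12 per ton rising 5% per year above inflation,
# expressed here in constant (inflation-adjusted) dollars. The 2012 start
# year is an assumption for illustration.

START_YEAR = 2012
START_PRICE = 12.0   # $/ton, from the bill
REAL_GROWTH = 0.05   # 5% per year above inflation

for year in (2012, 2020, 2030, 2050):
    price = START_PRICE * (1 + REAL_GROWTH) ** (year - START_YEAR)
    print(f"{year}: ${price:.2f} per ton (constant dollars)")
# Under these assumptions, the cap reaches roughly $29 per ton by 2030
# and about $77 per ton by 2050.
```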

Sens. John Warner (R-VA), Mary Landrieu (D-LA), Lindsey Graham (R-SC), and Blanche Lincoln (D-AR) are using a different tactic to limit the costs of climate change legislation in a proposal Warner called “an emergency off ramp.” Their bill, the Containing and Managing Climate Change Costs Efficiently Act (S. 1874), would create a Carbon Market Efficiency Board, modeled on the Federal Reserve Board, to regulate the market for carbon allowances. When prices are sustained above a certain threshold, the board could effectively reduce prices by borrowing credits from future years to expand the number of carbon permits available. The bill does not contain targets or timetables for greenhouse gas reductions, because its sponsors intend for the proposal to be incorporated into a broader cap-and-trade bill.

Warner, the ranking member of the Senate Subcommittee on Private Sector and Consumer Solutions to Global Warming and Wildlife Protection, ensured that the carbon market board provision would be included in at least one bill when he and Subcommittee Chair Joe Lieberman (ID-CT) incorporated it into the climate bill they plan to introduce in the fall of 2007. The America’s Climate Security Act will include provisions to establish a Carbon Market Efficiency Board, as well as provisions from the Bingaman/Specter bill to encourage other countries to address climate change. The draft calls for cuts in greenhouse gas emissions of 70% below 2005 levels by 2050. Initially, 24% of the credits would be auctioned, with that amount rising to 52% in 2035. The auction would be run by a new Climate Change Credit Corporation and the proceeds used to promote new technology, encourage CCS, mitigate the effects of climate change on wildlife and oceans, and provide relief measures for poor nations.

Confrontation looms on R&D budget

The Senate and House are poised to add billions of dollars above the president’s budget request to the FY 2008 R&D budget, with much of the proposed new funding targeted for environmental, energy, and biomedical initiatives, according to an August 6 report by the R&D Budget and Policy Program of the American Association for the Advancement of Science (AAAS).

Congressional funding proposals also would meet or exceed the president’s spending plans for physical sciences research in the president’s ACI and for dramatic expansion of spending to develop new craft for human space exploration, said Kei Koizumi, the program’s director.

Whereas the White House proposed a budget for the fiscal year beginning October 1 that would have cut overall basic and applied research investment for the fourth straight year, Congress would increase research budgets at every major nondefense R&D agency. And with Congress exceeding the president’s overall domestic spending plan by $21 billion, there is the possibility of a budget conflict that could extend into FY 2008. “Because the president has threatened to veto any appropriations bills that exceed his budget request, these R&D increases could disappear or diminish this fall in negotiations between the president and Congress over final funding levels,” Koizumi concluded. Koizumi noted that earmarks—funds designated by Congress to be spent on a specific project rather than for an agency’s general policy agenda—account for one-fifth of the proposed new R&D spending.

According to the report, the House has approved all 12 of its 2008 appropriations bills; the Senate Appropriations Committee has drafted 11 of its 12 bills, but the full Senate has approved only the spending bill for the Department of Homeland Security. The Senate still must draft a spending bill for the Department of Defense. In all, appropriations approved by the House total $144.3 billion for R&D, $3.2 billion or 2.3% more than the current budget and $4 billion more than the White House 2008 budget proposal. The Senate would spend $500 million more on R&D than the House for the appropriations it has drafted.

Based on action thus far, Koizumi summarized congressional moves in several critical science and technology areas:

Energy: DOE’s energy-related R&D initiatives had received significant increases in 2007, but the Bush administration requested cuts for 2008. Congress would keep increasing DOE energy R&D spending dramatically, by 18.5% in the House, to $1.8 billion, and by 29% in the Senate, to $2 billion, for the renewable energy, fossil fuel, and energy conservation programs, Koizumi reported.

Environment and climate change: Congress would turn steep requested cuts into increases for environmental research programs. Total R&D spending on environmental initiatives would rise 9.2% under House measures, compared to a 3% cut proposed by the administration. NOAA R&D, for example, would get a 9.9% increase in the House and 18.1% in the Senate. Among other prospective winners: the Environmental Protection Agency (EPA); the U.S. Geological Survey; and NASA. Some of the proposed funding for NASA would go to address concerns expressed by the National Research Council, the AAAS Board of Directors, and others that the number of Earth-observing sensors on NASA spacecraft could plunge in the years ahead if current NASA budget trends continue.

Biomedical advances: Lawmakers in both chambers would add more than $1 billion to the White House’s spending plan for the NIH budget, turning a proposed cut into an increase. But both the House and Senate would direct a significant part of that increase to the Global Fund for HIV/AIDS. As a result, the House plan would give most NIH institutes and centers raises of 1.5 to 1.7%, well short of the 3.7% rate of inflation expected next year in the biomedical fields; the institutes and centers would get 2.3 to 2.5% raises under the Senate bills.

STEM education: In addition to their support of STEM education measures in the ACI and the America COMPETES Act, lawmakers would add significantly to NSF education programs. NSF’s Education and Human Resources budget, after years of steep budget cuts, would soar 18% in the House and 22% in the Senate. Overall NSF R&D spending was cut in 2005 and 2006 but would jump to a record $4.9 billion in FY 2008 under both House and Senate plans.

NASA: After a decade of flat funding, overall NASA R&D funding would jump 9.8% under the House plan and 8.4% in the Senate. Both chambers would endorse large requested increases for the International Space Station facilities project and the $3.1 billion Constellation Systems development project to replace the Space Shuttle and carry humans toward the moon.

Energy bills face veto threat

After a contentious debate, the House passed two energy bills on August 4, but the bills will now have to be reconciled with a Senate bill that has different provisions and will face a veto threat from President Bush, who said the bills “are not serious attempts to increase our energy security or address high energy costs.”

The House approved the New Direction for Energy Independence, National Security, and Consumer Protection Act (H.R. 3221) and the highly contested Renewable Energy and Energy Conservation Tax Act (H.R. 2776). H.R. 3221, the broader energy package promised by House Speaker Nancy Pelosi, includes a renewable electricity standard but does not include higher corporate average fuel economy (CAFE) standards. H.R. 2776, a $16 billion bill that has received much criticism, increases tax incentives for renewable energy by reducing existing incentives for the oil and gas industries. The two bills were rolled into one after their passage under the rule for floor debate.

The House managed to push the speaker’s broad energy bill through with a vote of 241 to 172. The legislation’s star provision is a renewable electricity standard, which mandates that utilities produce 15% of their power from renewable sources by 2020. Utilities will be allowed to meet some of that requirement with energy efficiency measures. Originally, the standard was pegged at 20%, but it was reduced to 15% after many members noted that it might be difficult for some states with limited renewable sources to meet the requirement. The mandate does not apply to rural electric cooperatives and municipalities.

H.R. 2776 was intentionally kept separate from the broader energy package because of its doubtful acceptance on the House floor. The legislation ran into staunch opposition from the White House, Republicans, and oil-state Democrats immediately after being introduced, because it reduces tax incentives for the oil and gas industries in order to pay for renewable energy sources. A similar Senate package failed last month, but the House bill passed 221 to 189 with 11 Democrats defecting and 9 Republicans voting yes.

The conference between the House and Senate to reconcile the different bills will be a challenge. For example, the Senate bill includes a higher CAFE standard but not a renewable electricity standard. The Senate bill also includes provisions to increase ethanol and alternative fuel production, but the House bill does not.

Senators agreed to increase the CAFE standard from the current level of 27.5 miles per gallon (mpg) for cars and 22.5 mpg for light trucks to a combined fleet average of 35 mpg by 2020. The Senate bill mandates the use of 36 billion gallons of renewable fuel by 2022, a more than sevenfold increase from 2006 levels. In response to concerns about the environmental and economic effects of corn-derived ethanol, 21 billion gallons of this standard must be met with “advanced” biofuels such as cellulosic ethanol.
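
For readers unfamiliar with how a combined fleet average is computed, the sketch below shows the standard sales-weighted harmonic-mean calculation; the 50/50 car/light-truck sales split is an assumed illustration, not a figure from the bill.

```python
# Combined fleet-average fuel economy: a sales-weighted harmonic mean,
# since fuel consumption scales with 1/mpg. The 50/50 sales mix is assumed.

car_mpg, truck_mpg = 27.5, 22.5     # current CAFE levels cited in the text
car_share, truck_share = 0.5, 0.5   # assumed sales mix

fleet_mpg = 1.0 / (car_share / car_mpg + truck_share / truck_mpg)
print(f"Combined fleet average at current standards: {fleet_mpg:.1f} mpg")
# Moving that combined average to 35 mpg by 2020 would be roughly a 40%
# improvement over this baseline under these assumptions.
```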

Proposals to fund coal-to-liquid technology were defeated by Democrats, who are opposed to supporting a fuel that would emit more carbon dioxide than conventional gasoline. However, measures to increase funding for carbon capture and storage (CCS) R&D were incorporated.

The Senate did not include language from a tax package prepared by the Finance Committee, though it may be inserted when the bill goes to conference. The tax package, worth $32.2 billion, would create incentives and subsidies for conservation and alternative energy, including clean coal technologies, CCS, cellulosic ethanol, and wind power. These programs would be funded by raising taxes and eliminating tax breaks now available to the oil industry. Many opposing the tax package said it would raise the cost of oil and gas production at a time when it is already unmanageable.


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science (www.aaas.org/spp) in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.