Forum – Fall 2009

Too few fish in the sea

Carl Safina’s review of how traditional fisheries management strategies have failed both the fish and the fishermen is right on target, as are his recommendations for new management strategies (“A Future for U.S. Fisheries,” Issues, Summer 2009). But I encourage all of us who think about the ocean to expand our thinking: Fishing isn’t the only activity that affects fish, and the National Marine Fisheries Service (NMFS) isn’t the only agency with jurisdiction over the ocean.

Traditional fisheries management has focused explicitly on fish, with plans addressing single species or groups of similar species. Traditional governance of resources has similarly focused on individual resources, ceding management of different resources to different agencies: fish to NMFS, oil to the Minerals Management Service, navigation to the Coast Guard. But increasingly, science is showing us that these seemingly disparate resources are delicately interconnected.

Discussion abounds in the scientific and popular media about dead zones and harmful algal blooms in the ocean and the nutrients from runoff that create them. Debate rages about the future of oil drilling and offshore renewable power sources in America’s energy future. Tensions and tempers continue to run high as the Navy develops sonar technology that could harm marine mammals. Congress is working up climate change legislation to reduce greenhouse gas emissions.

All of these activities affect the ocean and its inhabitants, and all of these activities affect one another. As we begin to move toward more integrated systems for managing fisheries, we must recognize the broader ocean and global ecosystem—including humans—and integrate it as well.

Toward that end, I have been working in Congress for nearly a decade on a piece of legislation, Oceans-21, which establishes a national ocean policy for the United States and creates a governance system based on the notion that the multiagency approach we currently employ must be streamlined so we can continue to enjoy the many benefits the ocean provides us.

For too long we have reaped those benefits. We’ve harvested fish and extracted oil. We’ve hidden away our waste and pollution, pumping or dumping it offshore and out of sight. But the ocean’s bounty isn’t infinite, nor is its capacity to absorb our refuse. The time has finally come for us to recognize these truths and take action: We must learn to use the ocean in a sustainable way.

The ocean is the planet’s single largest natural resource and is critical to its habitability. It’s not just our fisheries that are at risk, but our very future.

REP. SAM FARR

Democrat of California

Co-chair of the House Oceans Caucus


Food for all

In “Abolishing Hunger” (Issues, Summer 2009), Ismail Serageldin presents a cohesive argument for action to abolish hunger while ensuring sustainable management of natural resources. I agree with the action he proposes, but I am concerned that such action will not be prioritized by those in charge. History and research have long taught that agricultural growth and development are the key to successful economic growth and poverty alleviation in virtually every low-income developing country. That is so because most poor people are in rural areas and because agriculture usually is the best driver of general economic growth, generating $2 to $3 of general economic growth for every $1 of agricultural growth. Yet most African countries have failed to prioritize agricultural growth and development. Low-income developing countries with stagnant agriculture are almost certain to experience a stagnant economy and high levels of hunger and poverty.

The recent global food crisis is a warning of what can happen when the food system is ignored by policymakers. National governments must now make the necessary investments in public goods for agricultural development, such as rural infrastructure, market development, agricultural research, and appropriate technology. These public goods are essential for farmers and other private-sector agents to do what it takes to generate economic growth and reduce poverty and hunger within and outside the rural areas. Farmers and traders in areas without such public goods cannot expand production, increase productivity and incomes, reduce unit costs of production and marketing, and contribute to the eradication of hunger. When food prices increased during 2007 and the first half of 2008, farmers responded by increasing production. But virtually all the increase came from countries and regions with good infrastructure and access to modern technology. A large share of the world’s poor farmers could not respond. In fact, the majority of African farmers cannot even produce enough to feed their own families. They are net buyers of food and as such were negatively affected by the food price increase.

This has to change if hunger is to be abolished. The knowledge is available to do so. What is missing is the political will among most, but not all, national governments in both developing and developed countries. Large food price decreases from the mid-1970s to 2000 created a false complacency among policymakers and low incomes among farmers in developing countries. Both caused very low private and public investments in agriculture and rural public goods. The recent global food crisis changed all that; or did it? What we have seen is a great deal of conference activity, talk, and hand-wringing but very little action, except irresponsible trade policies and short-term policy interventions to protect urban consumers, who may be a threat to existing governments (remember food riots). The majority of the world’s poor, who reside in rural areas, are still being ignored, and in some cases exploited, and little investment is being made to produce more food, improve productivity, and reduce unit costs on small farms.

At each of several high-level international conferences dedicated to the food and hunger situation, promises were made for large amounts of money to be made available for long-term solutions. Most recently, the G8 meeting in Italy promised up to $20 billion for that purpose. “Up to” are key words. So far, only very small amounts of the money promised at the earlier conferences have in fact come forth, and much of what did materialize was merely transferred from other development activities instead of being additional funds. Will the $20 billion materialize and will governments in developing countries now begin to prioritize agricultural and rural development? If the answer to the second question is no, the global food crisis of 2007–2008 is going to look like child’s play compared to what is to come as climate change increases production fluctuations and makes large previously productive areas unproductive because of drought or floods, and as countries lose their trust in the international food market and pursue self-serving policies at the expense of neighboring countries, something that has already begun. The planet is perfectly capable of producing the food needed in the foreseeable future and eradicating hunger without damaging natural resources, but only with enlightened policies. The time to act is now.

PER PINSTRUP-ANDERSEN

H. E. Babcock Professor of Food, Nutrition and Public Policy

Cornell University

Ithaca, New York


Better science education

Reading Bruce Alberts’ article was nothing if not painful (“Restoring Science to Science Education,” Issues, Summer 2009). More than 40 years ago, I was deeply involved in the work of PSSC Physics, the Elementary Science Study, and the Introductory Physical Sciences program. The work of some of the nation’s most gifted scientists, mathematicians, and teachers, these instructional programs were brilliant responses to the very challenges that Alberts correctly describes. Those programs are now gone, replaced by precisely the kind of standards, materials, and tests they were themselves designed to replace.

But Alberts is right. We should be discouraged. There is a developing consensus that the instructional system is a shambles. Growing numbers of people understand that the standards we have been using—even those of states like California that have been widely admired—suffer from all the crippling defects Alberts catalogues; that American-style multiple-choice, computer-scored tests will never adequately capture the qualities of the greatest interest in student performance; that the materials we give our students stamp out wonder and competence and produce instead aversion to science. But there is a chance that we might get it right this time.

We have been benchmarking the performance of the countries with the most successful education systems for more than two decades. Unlike the United States, many use what we have come to call board examination systems. At the high-school level, these systems consist of a core curriculum in which each course is defined by a well-conceived syllabus emphasizing the conceptual foundations of the discipline, deep understanding of the content, and the ability to apply those concepts and use that deep content mastery to solve difficult and unfamiliar problems. Each course ends with an examination that calls on the student to go far beyond recall of facts and procedures to demonstrate a real analytical understanding of the material and the ability to use it well. Grades are also based on substantial student projects undertaken during the year, work that could not possibly be done in a timed examination. Most of the questions on the exams are essay questions, and most of the scoring is done by human beings.

My guess is that if Alberts looked carefully at the curriculum and exams produced by the best of the world’s board examination organizations, he would agree that this country would be well served by simply asking our high schools to offer them to their students and then train their teachers to teach those courses well. This is a solution that is available today, and, if it were implemented, could vault the science performance of our high-school students from the bottom of the pack to the top, especially if we were to prepare our students in elementary and secondary school for the board examination programs they should be taking in high school.

MARC TUCKER

President

National Center on Education and the Economy

Washington, D.C.


Nurturing sustainability

Fully half of the “five-year plan” articles in the Summer 2009 edition of Issues called for research to advance various aspects of sustainable development. Unasked was the question, “Who is going to do the work?”

The present system for training and nurturing young scientists is almost certainly not up to the job. To foster “The Sustainability Transition” outlined by Pamela Matson, we need researchers capable not only of (i) doing very demanding cutting-edge science, but also of (ii) working with decisionmakers and other knowledge users to define and refine relevant questions; (iii) integrating the perspectives and methods of multiple disciplines in their quest for answers; and (iv) promoting their inevitably incomplete findings in the rough-and-tumble world of competing sound bites and conflicting political agendas. But such skills are neither taught in most contemporary science curricula nor rewarded in the climb up most academic ladders.

The challenge of doing the work to harness science for sustainable development is compounded by the fact that there is so much work to be done. Elinor Ostrom has argued forcefully that the dependence of human/environment interactions on local contexts means that we must move beyond panaceas in our quest for effective sustainability solutions. This also means, however, that we need lots of scientists and engineers working close to the ground in order to design interventions appropriate to particular places and sectors.

To be sure, a number of initiatives have been launched during the past several years that are beginning to address the human capital needs of harnessing science for a sustainability transition. As documented on the virtual Forum on Science and Innovation for Sustainable Development, an increasing flow of use-inspired fundamental research on sustainability is finding its way not only into top journals but also into effective practice at scales from the local to the global. Elements of the training needed to help a new generation of scholars contribute to this flow are increasingly available from the undergraduate to the postdoctoral level, backed by an increasing number of novel fellowships. New programs, centers, and even schools of sustainability science and its relatives have been springing up around the world. But as encouraging as these signs of progress surely are, they almost certainly remain wholly inadequate to the challenges before us. An essential complement to the “five-year plans” sketched in Issues’ summer edition must therefore be a systematic and comprehensive effort to characterize, and then to build, the workforce we will need to carry them through. The President’s Office of Science and Technology Policy, together with the National Academies, ought to give high priority to providing the U.S. leadership for such an initiative in building the science and technology workforce for a sustainability transition.

WILLIAM C. CLARK

Harvey Brooks Professor of International Science, Public Policy and Human Development

Kennedy School of Government

Harvard University


Pamela Matson’s article provides a compelling vision of the connection among science, nature, and the changes brought by a globalizing economy. This is a connection often described in terms of crises (climate change) or unbounded opportunity (the Internet). Matson takes a measured, sensible view, drawing attention to long-term transitions already in progress—in population, the economy, and the rising human imprint on nature. We do not yet know whether these transitions will lead toward sustainability: a durable, dynamic relationship with the natural world, in which the activities of one generation enhance the opportunities of future generations and conserve the life support systems on which they depend.

Examples from philanthropy show how science is essential to the search for sustainability. The Packard Foundation is probing consumption by supporting and evaluating the certification of products such as responsibly harvested fish. Does such an approach work to curb irresponsible harvest? Will organic food or carbon offsets scale up to dominate their markets? These are questions of sustainability science.

Together with the Moore Foundation, Packard has supported large-scale science-driven monitoring in the California Current. The Partnership for Interdisciplinary Studies of Coastal Oceans has provided, in turn, much of the scientific basis for declaring marine reserves in California waters. Reserves are now under study in Oregon as well. Ecosystem-scale sustainability science has been developing the essential methods of integrating oceanography with nearshore biology. Ahead lies the exciting challenge of demonstrating, in partnership with government, the value of these approaches as the oceans acidify and warm, currents and upwellings change, and sea level rises. Without surveillance, humans will fail to see the warning signs of the thresholds that we and other living things are now crossing and will need to navigate.

For more than a decade, the Aldo Leopold Leadership Program has identified leading young environmental scientists; provided them training in science communications; and strengthened their voice in regional, national, and international policy debates. This too is sustainability science, in service of improved governance.

These three initiatives all aim at forging durable links between knowledge and action: science guided by the needs of users, relevant and timely, legitimate in a mistrustful world, and credible as a guide to difficult, consequential social choices.

Private donors’ means are meager in comparison to the needs of a sustainability transition. We and our grantees in the nonprofit sector need to collaborate far more, and more effectively, with business and government. As Matson urges, there are institutions to be engaged—not least the National Academies and the global resources of science. It is not that there is no time. But there may not be enough time, unless we seize this day.

KAI N. LEE

Conservation and Science Program Officer

David and Lucile Packard Foundation

Los Altos, California


Energy nuts and bolts

Vaclav Smil’s “U.S. Energy Policy: The Need for Radical Departures” (Issues, Summer 2009) is filled with wise insights and judgments. However, the use of vague language detracts from the article’s value for public policy (for example, “vigorous quest,” “relentless enhancement,” “fundamental reshaping,” “responsible residential zoning,” and “serious commitment”). These kinds of broad, unbounded adjectives appear in numerous reports on energy policy nowadays. They give the illusion of meaningful policy prescription, when in fact nothing useful can follow without the hard work of drawing boundaries on the scope, cost, and timing of programs—work that presumably is thought to be so trivial that it can be left to legislative bodies, nongovernmental organizations, and corporations to fill in the details.

Admittedly, there is only so much an analyst can do in one brief article. And laying out an abstract goal for Congress and other leaders, as Smil does so well, is in itself useful, particularly when the goal has a realistic time frame. However, we do not live in a world where wise people sit down together, think about what would be best, and join hands to put wise proposals into effect. We live in a messy, contentious world, where different stakeholders with different ideological biases, philosophies, and personal interests argue with each other, often to the point of paralysis. Programs are put into place by one party, only to be replaced by the other party after subsequent elections. A major challenge is to develop and negotiate energy transition policies that the great majority of stakeholders can accept; policies that are likely to survive party transitions. This is not just the job of legislators and lobbyists. Academics and other outside observers knowledgeable about energy issues could usefully spend time developing policies that are optimal from the perspective of negotiated conflict resolution, rather than from an individual’s view of an optimal world.

JAN BEYEA

Core Associate

Consulting in the Public Interest

Lambertville, New Jersey


Science politicization

Daniel Sarewitz’s “The Rightful Place of Science” (Issues, Summer 2009) is naive. Can he actually believe that U.S. presidents rely on science to drive policy, rather than using science to promote political ends? The White House Office of Science and Technology Policy, for example, exists less for providing advice and guidance than as a means to support and promote the policy views of the president.

By acknowledging President Obama’s support for increased spending on scientific research as the price to pay for congressional passage of his economic stimulus package, Sarewitz does show some recognition of the link between science policy and politics. He makes reference to the attitude toward science by several presidential administrations since the Eisenhower years, but inexplicably fails to mention President Bill Clinton or his science and technology czar, Al Gore. One wonders whether Sarewitz is amnesic about that period, or as a loyal Democrat and science policy wonk is just embarrassed by it. In spite of the blunders and clumsy manipulation by George W. Bush and his minions, arguably the blatant and heavy-handed politicization of science by Vice-President Gore was the most egregious of all.

In government, the choice of personnel is often tantamount to the formulation of policy, and as vice-president, Gore surrounded himself with yesmen and anti-science, anti-technology ideologues: presidential science adviser Jack Gibbons; Environmental Protection Agency (EPA) chief and Gore acolyte Carol Browner, whose agency’s policies were consistently dictated by environmental extremists and were repeatedly condemned by the scientific community and admonished by the courts; Food and Drug Administration (FDA) Commissioner Jane Henney, selected for the position as a political payoff for politicizing the agency’s regulation while she was its deputy head; State Department Undersecretary Tim Wirth, who worked tirelessly to circumvent Congress’s explicit refusal to ratify radical, wrong-headed, anti-science treaties signed by the Clinton administration; and U.S. Department of Agriculture Undersecretary Ellen Haas, former director of an anti-technology advocacy group, who deconstructed science thusly, “You can have ‘your’ science or ‘my’ science or ‘somebody else’s’ science. By nature, there is going to be a difference.”

Many Obama appointees who will be in a position to influence science- and technology-related issues are overtly hostile to modern technology and the industries that use it: Kathleen Merrigan, the deputy secretary of agriculture; Joshua Sharfstein, deputy FDA commissioner; Lisa Jackson, EPA administrator; and Carol Browner (she’s baaack!), coordinator of environmental policy throughout the executive branch. None of them has shown any understanding of, or appreciation for, science. Browner was responsible for gratuitous EPA regulations that have slowed the application of biotechnology to agriculture and to environmental problems; Jackson worked in the EPA’s notorious Superfund program for many years; and Merrigan relentlessly promoted the organic food industry, in spite of the facts that organic foods’ high costs make them unaffordable for many Americans, thereby discouraging the consumption of fresh fruits and vegetables, and that their low yields make them wasteful of farmland and water. While a staffer for the Senate Agriculture Committee, Merrigan was completely uneducable about the importance of genetically improved plant varieties to advances in agriculture.

As Sarewitz says, the “rightful place” of science is hard to find. The Obama administration’s minions are not likely to take us there, and to pretend otherwise is disingenuous.

HENRY I. MILLER

The Hoover Institution

Stanford University

Stanford, California


Paying for pavement

Martin Wachs’ “After the Motor Fuel Tax: Reshaping Transportation Financing” (Issues, Summer 2009) paints a clear picture of what ails the current U.S. system of revenue-raising that uses motor fuel taxes and other indirect user fees to fund road and mass transit needs. The article also points to a new direction: pricing travel more fairly using a flexible approach that is based on vehicle-miles traveled (VMT).

The past two to three decades have dramatically exposed the weaknesses of the motor fuel tax: substantial gains in fuel economy, the development of hybrid and other alternative-fuel vehicles, and a concerted effort to reduce the United States’ overreliance on fossil fuels. We now find ourselves in the questionable position of relying on taxing the very fuels whose consumption we are trying to curtail or eliminate. When we add the growth in truck VMT and overall travel and the eroding effects of inflation on a federal fuel tax that has not been raised in more than 20 years, it is not surprising that the Federal Highway Trust Fund has experienced, and will continue to experience, deficits that require periodic infusions from the General Fund.

The Federal Highway Trust Fund, which is based on the motor fuel tax, was set up more than 50 years ago to ensure a dependable source of financing for the National System of Interstate and Defense Highways and the Federal Aid Highway Program. Given the shortcomings described above, we can conclude that the motor fuel tax is no longer a “dependable source of financing” for the transportation system.

What is the alternative? Recent commission reports and studies point to distance-based charges or VMT fees as the most promising mid- to long-term solution to replace the fuel tax. The use of direct VMT fees can overcome most, if not all, shortcomings of the fuel tax. Furthermore, because VMT fees relate directly to the amount of travel, rates can be made to vary so as to provide incentives to achieve policy objectives, including the greater fuel economy and use of alternative-fuel vehicles that the current fuel tax encourages. In addition, rates could vary to reflect factors such as vehicle weight, emissions level, and time of day. For example, Germany’s national Toll Collect system for trucks, which is distance- and global positioning system (GPS)–based, varies the basic toll rate as a function of the number of axles and the truck’s emission class.

Non-GPS technology is currently available that makes it possible to conduct a large-scale implementation of distance-based charges within two to five years. This approach would use a connection to the vehicle’s onboard diagnostic port, installed in all vehicles since 1996, to obtain speed and time. An onboard device would use these inputs to calculate distance traveled and the appropriate charge, which would be the only information sent by the vehicle’s onboard unit to an office for billing purposes. This approach goes a long way toward addressing public concern about a potential invasion of privacy. (There is a widespread perception that a GPS-based VMT charging system “tracks” where a driver is. This is an unfortunate misconception.)
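As a rough sketch of the kind of calculation such an onboard device might perform, the illustrative Python below integrates speed samples over time to estimate distance and applies a per-mile rate. The sampling interval, rate values, and function names are hypothetical assumptions chosen for clarity; they are not drawn from any actual VMT system.

    # Hypothetical sketch of an onboard VMT charge calculation.
    # Speed samples (mph) are assumed to arrive from the onboard
    # diagnostic port at a fixed interval; only the resulting distance
    # and charge would be reported for billing, not location data.

    SAMPLE_INTERVAL_S = 1.0  # assumed sampling interval, in seconds

    def miles_traveled(speeds_mph):
        """Integrate speed over time to estimate distance in miles."""
        hours_per_sample = SAMPLE_INTERVAL_S / 3600.0
        return sum(s * hours_per_sample for s in speeds_mph)

    def vmt_charge(miles, base_rate=0.015, peak=False, peak_multiplier=1.5):
        """Apply an illustrative per-mile rate; a real system could also
        vary the rate by vehicle weight or emissions class, as Germany's
        Toll Collect system does for trucks."""
        rate = base_rate * (peak_multiplier if peak else 1.0)
        return miles * rate

    # One minute of driving at a steady 60 mph is about 1 mile.
    speeds = [60.0] * 60
    miles = miles_traveled(speeds)
    print(f"distance: {miles:.2f} mi, charge: ${vmt_charge(miles):.3f}")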

Given the crisis that road and transit funding is facing, we strongly endorse Wachs’ “hope that Congress will accept the opportunity and begin specifying the architecture of a national system of direct user charges.”

LEE MUNNICH

Director, State and Local Policy Program

FERROL ROBINSON

Research Fellow

Hubert H. Humphrey Institute of Public Affairs

University of Minnesota

Minneapolis, Minnesota


Goal-oriented science

Lewis Branscomb’s “A Focused Approach to Society’s Grand Challenges” (Issues, Summer 2009) offers a good measure of astute analysis and worldliness to research and innovation policymaking in the Obama administration. Yet there is an irony underlying the “Jeffersonian science” approach that, if not dealt with head-on, may derail even the most intelligent and wise of prescriptions.

In my reading of the ongoing attempts to re-label (rather than actually reorient) publicly sponsored knowledge and innovation activities, Jeffersonian science has much in common with Donald Stokes’ earlier “Pasteur’s Quadrant,” as well as with the older, more prosaic, and more institutionally grounded “mission-oriented basic research.” Whatever you call it, this kind of research is not the pure, curiosity-motivated stuff that the National Science Foundation (NSF) was supposed to fund in the Endless Frontier model, but rather the stuff that the Departments of Defense, Commerce, Health and Human Services, and, eventually, Energy were supposed to fund.

Nevertheless, it seems—and here is the irony, as well as the way in which both Jeffersonian science and Pasteur’s Quadrant conceal it—that NSF has found better ways of eliciting and sponsoring this kind of research than the mission agencies have. NSF has done this through several mechanisms, including the increasing importance it has placed on centers, such as the Nanoscale Science and Engineering Centers that are the centerpiece of its involvement in the National Nanotechnology Initiative (full disclosure: I direct one such center); its commitment, however partially realized, to evaluating all peer-reviewed proposals according to a “broader impacts” criterion as well as an “intellectual merit” criterion; and its support (again, contrary to its initial Bushian conception) of a robust set of research programs in the social sciences, in particular in the ethics, values, and social and policy studies of science and technology.

This suite of mechanisms has meant, in my experience, a much more receptive atmosphere at NSF than at the mission agencies for engaging in the kind of transdisciplinary collaborations among social scientists, natural scientists, and engineers that are necessary to understand and manage, as Branscomb delineates, policies to address the “institutions and organizations that finance and perform R&D; … attend to the unanticipated consequences of new technologies; and manage matters related to such issues as intellectual property, taxes, monetary functions, and trade.”

I am thus skeptical that the “four-step analytical policy framework” that Branscomb advances from Charles Weiss and William Bonvillian will have much success in the mission agencies unless it is wed to a new understanding in those agencies; for example, that the “weaknesses and problems [emerging energy technologies] might face in a world market” have to do with many more dimensions than scientific novelty, engineering virtuosity, and economic efficiency, or that the “barriers to commercialization” that must be “overcome” in part involve how innovation happens and who does it and for and to whom, as well as what attitudes toward innovation people and institutions have.

A Jeffersonian science program that fulfills Branscomb’s ambitions would need much more than “greater depth scientifically.” It would need robust incentives for understanding the multiple disciplinary and societal dimensions of innovation and a merit review process, including written solicitations, program officers, and peer reviewers, that embraced the Jeffersonian ideal and was not beholden to outmoded ideas of research and development.

DAVID H. GUSTON

Professor, School of Politics and Global Studies

Co-Director, Consortium for Science, Policy and Outcomes

Director, NSEC/Center for Nanotechnology in Society

Arizona State University

Tempe, Arizona


Forum – Summer 2009

Human enhancement

I read with interest Maxwell J. Mehlman’s “Biomedical Enhancements: Entering a New Era” (Issues, Spring 2009).

My principal association with biomedical enhancements has been in connection with sport as a member of the International Olympic Committee and, from 1997 to 2007, as chairman of the World Anti-Doping Agency. This is a somewhat specialized perspective and is only a subset of the full extent of such enhancement, but I think it does provide a useful platform from which to observe the phenomenon.

Let me begin by saying that the advancement of science and knowledge should not be impeded. Neither science nor knowledge is inherently “bad.” We should be wary of any society that attempts to prohibit scientific research or to tie it in some way to ideology.

On the other hand, once knowledge exists, there may well be value judgments to be made regarding the circumstances and application of such knowledge. Some may be confined to the personal and the freedom of individuals to do what they wish with their own bodies, however grotesque may be the outcome. Other judgments, perhaps even impinging on the personal, may justify some effort, even if perceived as paternalistic, to be certain that decisions are fully informed and risks understood. Still others may require collective or selective prohibition, either by the state or by direct agreement.

Sport has generally proceeded by agreement in relation to enhancements. Participants agree on all aspects of sport, including the rules of the game, scoring, equipment, officiating, and other areas, as well as certain substances or enhancement techniques that the participants will not use. In this respect, the initial concern was the health of the athletes (many of whom have little, if any, knowledge of the risks involved), to which was added an ethical component, once consensual rules were in place to prohibit usage. This consensual aspect is what makes drug use in sport “bad”—not the drugs themselves, but the use of them notwithstanding an agreement among participants not to do so. It follows, of course, that anything not prohibited is allowed.

Given the ubiquitous character of sport in today’s society, it is often the lightning rod for consideration of enhancements, and we should resist those who try to lump all enhancements into the same category, or excuse all because one might be justifiable. If military or artistic or attention deficit syndrome–affected individuals can benefit from enhancements within acceptable risk parameters, and the resulting enhancement is considered acceptable, it does not follow that such societal approbation should necessarily spread to sport, whose specific concerns are addressed within its particular context. If that context changes over time, mechanisms exist to change the sport rules, but until they are changed, they represent the deal willingly agreed to by all participants, who are entitled to insist that all participants abide by them. There is no reason why any athlete should be forced to use enhancements simply because another athlete, who agreed not to do so, is willing to cheat.

RICHARD W. POUND

Chancellor

McGill University

Montreal, Canada

Richard W. Pound is a member of the International Olympic Committee, former chairman of the World Anti-Doping Agency, and a former Olympic swimming competitor.


Maxwell J. Mehlman outlines a compelling case for why banning human enhancements would be ineffective and, most likely, more harmful to society than beneficial. He and I share this view. Thus, he highlights the inadequacy of expanding arbitrary enhancement prohibitions that are found in normative practices, such as sport, to the wider world. He also explains why prohibition for the sake of vulnerable people cannot apply to the competent adult, although he acknowledges that certain human endeavors often compromise informed consent, such as decisions made within a military environment. He doubts that traditional medical ethics principles would be suitable to govern the expansion of medical technologies to the nonmedical domain. In so doing, Mehlman points to the various examples of enhancement that already reveal this, such as the proliferation of dietary supplements. Mehlman also draws attention to the likely “enhancement tourism” that will arise from restrictive policies, rightly arguing that we should avoid this state of affairs.

However, Mehlman’s argument on behalf of human enhancements offers the negative case for their acceptance. In response, we might also derive a positive case, which argues that our acceptance of enhancement should not arise just because prohibition would be inadequate or morally indefensible. Rather, we should aspire to find positive value in its contribution to human existence. The story of how this could arise is equally complex, though Mehlman alludes to the principal point: When it comes to enhancement, one size doesn’t fit all.

One can describe the positive case for human enhancement by appealing to what I call biocultural capital. In a period of economic downturn, the importance of cultural capital, such as expert knowledge or skills, becomes greater, and we become more inclined to access such modes of being. In the 21st century, the way we do this is by altering our biology, and the opportunities to do this will become greater year after year. In the past, mechanisms of acquiring biocultural capital have included body piercing, tattooing, or even scarification. Today, and increasingly in subsequent years, human enhancements will fill this need, and we see their proliferation through technologies such as cosmetic surgery and other examples Mehlman explores.

Making sense of an enhancement culture via this notion is critical, as it presents a future where humanity is not made more homogeneous by human enhancements, but where variety becomes extraordinarily visible. Indeed, we might compare such a future to the way we individualize clothing and other accessories today. Thus, the problem with today’s enhancement culture is not that there are too many ways to alter ourselves, but that there are too few. The analogy to fashion is all the more persuasive, because it takes into account how consumers are constrained by what is available on the market. Consequently, we are compelled to interrogate these conditions to ensure that the market optimizes choice.

The accumulation of biocultural capital is the principal justification for pursuing human enhancements. Coming to terms with this desire ensures that institutions of scientific and health governance will limit their ambitions to temper the pursuit of enhancement to providing good information, though they are obliged to undertake such work. Instead, science should be concerned with more effectively locating scientific decisionmaking processes within the public domain, to ensure that they are part of the cultural shifts that occur around their industries.

ANDY MIAH

University of the West of Scotland


A necessary critique of science

Michael M. Crow’s “The Challenge for the Obama Administration Science Team” (Issues, Spring 2009) doesn’t call spades digging implements. It’s unexpected to find a U.S. university president acknowledging and deploring the system of semi-isolated, discipline-oriented research in our universities. It’s also noteworthy to find his critique in a publication sponsored by the National Academies of Science and Engineering. Crow’s article may signal recognition that longstanding complaints about flaws in U.S. science policy can no longer be ignored at this time of national crisis. Even Daniel S. Greenberg, dean of U.S. science policy observers and a man fiercely devoted to the independence of scientific inquiry, has noted that public support without external oversight or responsibilities has not been healthy for U.S. science.

In a just-released five-year study of the origin of U.S. conflicts over environmental and energy policy, I identified additional and continuing adverse effects of the manner in which federal support for basic research was introduced to U.S. academia after World War II. Although the cost of the National Science Foundation and other federal research outlays was relatively modest, at least initially, the prestige associated with the basic research awards caused discipline-oriented, peer-reviewed publications to become the basis of academic appointments, promotion, and tenure. The quality of research products was generally high, but applied science, engineering, and larger societal issues became relegated to second-class status and interest. University curricula and the career choices of gifted scientists were affected. Scientific leaders failed to oppose the wholesale abandonment of mandatory science and math courses in secondary schools in the 1960s, so long as university science departments got their quota of student talent. Additional adverse direct or indirect effects of the new paradigm included initial neglect of national environmental policy and impacts on federal science and regulatory agencies and industry.

Finally, the entropic fragmentation of conceptual approaches to complex problems seems to contribute to the willingness of scientists and other leaders to ignore insights or ideas outside their preferred associations. This may complicate the task, for citizens as well as scientists, of gaining a holistic understanding of major issues in society.

FRANK T. MANHEIM

Affiliate Professor, School of Public Policy

George Mason University

Fairfax, Virginia


Squaring biofuels with food

In their discussion of land-use issues pertaining to biofuels (“In Defense of Biofuels, Done Right,” Issues, Spring 2009), Keith Kline, Virginia H. Dale, Russell Lee, and Paul Leiby highlight results from a recent interagency assessment [the Biomass Research and Development Initiative (BRDI), 2008] based on analyses of current land use and U.S. Department of Agriculture baseline projections. The BRDI study finds that anticipated U.S. demand for food and feed, including exports, and the feedstock required to produce the 36 billion gallons of biofuels mandated by the renewable fuel standard are likely to be met by the use of land that is currently managed for agriculture and forestry.

Looking at what would happen based on current trends is an important part of the overall biofuels picture, but only a part. Developing a strategic perspective on the role of biofuels in a sustainable world over the long term requires an approach that is global in scope and looks beyond the continuation of current practices with respect to land use as well as the production and consumption of both food and fuel. Although there is a natural reluctance to consider change, we must do so, because humanity cannot expect to achieve a sustainable and secure future by continuing the practices that have resulted in the unsustainable and insecure present.

We—an international consortium representing academic, environmental advocacy, and research institutions—see increasing support for the following propositions:

  1. Because of energy density considerations, it is reasonable to expect that a significant fraction of transportation energy demand will be met by organic fuels for the indefinite future. Biofuels are by far the most promising sustainable source of organic fuels, and are likely to be a nondiscretionary part of a sustainable transportation sector.
  2. Biofuels could be produced on a scale much larger than projected in most studies to date without compromising food production or environmental quality if complementary changes in current practices were made that foster this outcome.

Consistent with the first proposition, we believe that society has a strong interest in accelerating the advancement of beneficial biofuels. Such acceleration would be considerably more effective in terms of both motivating action and proceeding in efficacious directions if there were broader consensus and understanding with respect to the second proposition. Yet most analyses involving biofuels, including that of Kline et al., have been undertaken within a largely business-as-usual context. In particular, none have explored in any detail on a global scale what could be achieved via complementary changes fostering the graceful coexistence of food and biofuel production.

To address this need, we have initiated a project entitled Global Feasibility of Large-Scale Biofuel Production. Stage 1 of this project, beginning later this year, includes meetings in Malaysia, the Netherlands, Brazil, South Africa, and the United States, aimed at examining the biofuels/land-use nexus in different parts of the world and planning for stage 2. Stage 2 will address this question: Is it physically possible for biofuels to meet a substantial fraction of future world mobility demand while also meeting other important social and environmental needs? Stage 3 will address economics, policy, transition paths, ethical and equity issues, and local-scale analysis. A project description and a list of organizing committee members, all of whom are cosignatories to this letter, may be found at . On behalf of the organizing committee,

LEE LYND

Professor of Engineering

Dartmouth College

Hanover, New Hampshire


The debate over the potential benefits of biofuels is clouded by widely varying conclusions about the greenhouse gas (GHG) emissions performance of certain biofuels and the role these biofuels are thought to play in food price increases. Misunderstanding is occurring because of limitations in methodology, data adequacy, and modeling, and the inclusion or exclusion of certain assumptions from the estimation exercise. These issues surface most prominently in characterizing indirect land-use change emissions produced by certain biofuel feedstocks. Initial estimates of GHG emissions thought to be associated with indirect land-use change made in February 2008 by Searchinger et al. exceeded 100 grams of carbon dioxide equivalents per megajoule (CO2 eq/MJ) of fuel. More recent estimates by other researchers have ranged from 6 to 30 grams of CO2 eq/MJ. The attribution of food price increases to biofuels is also driven by the degree to which relevant factors and adequate data are known and considered in the attribution analysis.

The four primary assumptions underpinning the attribution of induced land-use change emissions to biofuels are:

  1. Biofuels expansion causes loss of tropical forests and natural grasslands.
  2. Agricultural land is unavailable for expansion.
  3. Agricultural commodity prices are a major driving force behind deforestation.
  4. Crop yields decline with agricultural expansion.

Keith Kline et al. bring clarity to the debate about the role of biofuels in helping to reduce GHG emissions and dependence on petroleum. They identify important factors that must be considered if the carbon performance of biofuels, in particular corn ethanol and biodiesel from soybeans, is to be addressed credibly in an analytical, programmatic, policy, or regulatory framework.

Kline et al. point out that the arguments against biofuels are not valid because other factors must be taken into account, such as:

  1. Agricultural land is available for expansion.
  2. Yield increases minimize or moderate the demand for land.
  3. Abandoned and degraded land can be rehabilitated and brought into crop production, obviating the clearing of natural land and tropical forest for planting biofuel feedstocks.
  4. Deforestation has many causes and cannot be isolated to a single one. Fires, logging, and cattle ranching are responsible for more deforestation than can be attributed to increased demand for biofuels.
  5. Improved land management practices involving initial clearing and the maintenance of previously cleared land may prevent soil degradation and environmental damage.

These are valid and relevant points. The authors correctly point out that land-use change effects are due to a complex interplay of local market forces and policies that are not adequately captured in analyses, which lead to incorrect attributions and potentially distorted results. Better information from field- and farm-level monitoring, beyond relying on satellite imagery data, is required to substantiate these influencing factors. The proper incorporation of better data is likely to yield refined analyses that clarify the GHG performance of biofuels. Citing several studies, including a few by the U.S. Department of Agriculture, the authors argue that corn ethanol’s impact on food prices is modest (about 5% of the 45% increase in global food cost that occurred in the period from April 2007 to April 2008). At the same time, the International Monetary Fund documented a drop in global food prices of 33% as oil prices declined, suggesting that food price increases were due more to other factors, such as energy price spikes. A balanced review of the literature would suggest that the findings the authors report are valid.

Estimates of indirect land-use change emissions are driven largely by assumptions made about trade flow patterns, commodity prices, land supply types, and crop yields. Land-use change impacts are in turn influenced by the soil profiles of the world’s agricultural land supply, as broken down into agro-ecological zones (AEZs). Soil organic carbon content varies from 1.3% to 20% in the 6,000 or so AEZs around the world. Combined with soil profiles and trade flow patterns, analysts’ choice of land supply types can have a big influence on the induced land-use change effect attributed to corn or soybeans diverted for biofuels production.

The issues Kline et al. highlight also point to opportunities for treating this important topic differently. One such approach envisions accounting for direct and indirect emissions separately rather than simply combining them, as is current practice. Under this approach, regulations could be based on direct and indirect emissions as determined by the best available methods, models, and data, subject to a minimum threshold. Above the defined threshold, the indirect emissions of a fuel pathway would be offset in the near term, reduced in the medium term, and eliminated in the long term. This approach is attractive because it incorporates both direct and indirect emissions in an equitable framework, which treats both emission components with the importance they warrant, while allowing improvements in the science characterizing indirect emissions, without excluding any fuels from participating in the market or stranding investments. Under this framework, the marketplace moves toward biofuels and other low-carbon fuels that use feedstocks unburdened with the food-versus-fuel or induced land-use emissions impacts. On the other hand, as estimates of indirect land-use change emissions improve and indirect emissions of all fuel pathways are considered on an even playing field, it is possible that the conventional and alternative approaches might produce similar results.
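A minimal numerical sketch of how such a threshold rule might operate follows. The threshold, the example pathways, and every emission figure are invented for illustration (the indirect values are simply placed within the 6 to 30 g CO2 eq/MJ range of estimates cited earlier) and do not come from any adopted regulation.

    # Illustrative accounting of direct and indirect (ILUC) emissions,
    # with a minimum threshold above which indirect emissions would have
    # to be offset. All numbers are hypothetical.

    ILUC_THRESHOLD = 10.0  # assumed threshold, g CO2 eq/MJ

    def assess_pathway(name, direct, indirect):
        """Return total carbon intensity and the indirect emissions,
        if any, exceeding the threshold that would need to be offset."""
        return {
            "pathway": name,
            "total_g_per_MJ": direct + indirect,
            "indirect_to_offset_g_per_MJ": max(0.0, indirect - ILUC_THRESHOLD),
        }

    # Two hypothetical pathways; indirect values fall within the 6-30
    # g CO2 eq/MJ range of recent estimates discussed above.
    print(assess_pathway("example crop-based pathway", direct=65.0, indirect=25.0))
    print(assess_pathway("example cellulosic pathway", direct=30.0, indirect=6.0))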

Kline et al. have provided an invaluable perspective for understanding factors not considered in current analyses regarding the benefits of biofuels. Program analysts, researchers, and regulators will find their low-carbon fuel goals and objectives strengthened by taking a closer look at these issues in the shared desire to capture the benefits of biofuels in a sustainable manner. They’ll find that biofuels can be “done right.”

MCKINLEY ADDY

Program Manager, Fuel Cycle Analysis Research

California Energy Commission

Sacramento, California


A strenuous argument is going on about the proper role of agricultural biofuels in helping to respond to climate change. One group argues that agricultural biofuels will lead to an increase in atmospheric CO2, threaten food production and the poor, and increase the loss of biodiversity. Others argue that agricultural biofuels can make an important contribution to energy independence, provide for increased rural prosperity, and contribute to less atmospheric CO2.

Keith Kline and his coauthors favor the development of agricultural biofuels. They are convinced that biofuel feedstock production and development in many third-world countries could stimulate local improvements in agricultural technology and promote related social developments that increase prosperity and the long-term prospects of many poor farmers. They argue that large amounts of land are available for this purpose, and that using that land for biofuels would not significantly affect areas of high biotic diversity or release significant quantities of terrestrial carbon to the atmosphere as CO2. Under certain plausible conditions, the development of biofuel production could increase the terrestrial storage of carbon, while also substituting for the carbon released from petroleum. To support their assertions, they cite data sources and provide reasonable narratives for the ways in which land use undergoes changes in many tropical or semitropical locations in the developing world.

Their narrative contradicts one offered last year by Searchinger et al. (2008) and adopted by Sperling and Yeh (2009), who asserted that biofuel production from cropland in the United States, the European Union, and elsewhere results in large releases of carbon in remote locations, produced by the conversion of tropical forests and pastures to farmland. Market-mediated pressures felt in remote areas far from the point of biofuel production are the drivers of this effect. This argument caused a rethinking about the benefits of agricultural biofuels in the environmental community and some policy circles.

California is farthest along in developing carbon policies affecting the use of alternative transportation fuels. Its low-carbon fuel standard (LCFS) will probably soon be adopted. As proposed, the regulations embrace the logic of the Searchinger et al. argument by calculating an indirect land-use change (ILUC) carbon cost that is added to any biofuel produced from feedstock grown on currently productive agricultural land. These costs may be high enough to deter blenders from using crop-derived biofuels, effectively making them useless in attaining the reduction in fuel carbon intensity required by the LCFS. The GTAP model (the Global Trade Analysis Project, https://www.gtap.agecon.purdue.edu/default.asp), a computable general equilibrium (CGE) model developed at Purdue University, is used in California’s proposed LCFS to estimate this indirect carbon cost by inferring land change in developed agricultural regions, where land rents are available to support inferences about the operation of markets. No claim is made about land change in any specific instance. If adopted as currently defined, and if weighted in ways proposed by Monahan and Martin (2009) that emphasize the immediate potential effects of adding terrestrial carbon to the atmosphere, the use of ILUC will probably exclude crop-based biofuels from use in meeting the LCFS and similar standards.
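To make concrete how an ILUC adder can change a fuel's standing under such a standard, here is a small illustrative sketch. The baseline, reduction target, and carbon-intensity values are assumptions chosen for arithmetic clarity, not the numbers in California's regulation.

    # Hypothetical illustration of how an indirect land-use change (ILUC)
    # adder affects whether a biofuel meets a low-carbon fuel standard.
    # None of these numbers are taken from the actual LCFS.

    BASELINE_CI = 95.0       # assumed gasoline baseline, g CO2 eq/MJ
    REDUCTION_TARGET = 0.10  # assumed 10% required reduction

    def meets_standard(direct_ci, iluc_adder):
        """Check whether direct carbon intensity plus the ILUC adder
        falls at or below the standard's target carbon intensity."""
        target = BASELINE_CI * (1.0 - REDUCTION_TARGET)
        return (direct_ci + iluc_adder) <= target

    # With no adder this example fuel qualifies; a large adder pushes
    # the same fuel above the target, deterring blenders from using it.
    print(meets_standard(direct_ci=70.0, iluc_adder=0.0))   # True
    print(meets_standard(direct_ci=70.0, iluc_adder=30.0))  # False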

That agricultural markets have effects on land use around the world is unarguable. But the scale and local significance are not. It is the use of generalized arguments and modeling methods for policy that Kline et al. suggest is misleading, resulting in harmful policies that neither save endangered landscapes nor help reduce carbon emissions from petroleum, but do stifle economic opportunity for many, especially in developing countries. They offer an alternative and compelling assessment of land-use change and its likely effects that differs from the assessment produced by using a CGE model for regulatory purposes. The choice of method in this instance is strongly connected to the modeling outcome. Land-use change is specified in the CGE model, not discovered, with the amount determined from currently available data on land in countries where land rents and market values are known. Alternatively, Kline et al. assert that the most important land to consider when discussing new biofuel production in third-world settings is not adequately accounted for in the GTAP or the original Searchinger et al. assessment.

At a minimum, strongly opposing yet reasonable views underscore the complexity of assessing the causes of land-use change and the limits of any single modeling approach. If Kline et al. are correct, prudence suggests that land-use change should be assessed using more than one approach and the results compared, before a single method is adopted. But prudence in adopting assessment methods may not prevail. The need for quantifiable policy instruments has been made urgent by deadlines imposed by legislative or executive timelines, based in turn on a sense of urgency about the potential effects of climate change. These are forcing rapid policy adoption, before alternative and comparable methods of modeling indirect land-use change become available. Monahan and Martin are concerned that sound science be used for the LCFS, but might be impatient with the caution urged by scientists such as Kline et al. about the science they prefer.

STEPHEN KAFFKA

Department of Plant Sciences

University of California, Davis

California Biomass Collaborative


Nurturing the U.S. high-tech workforce

Ron Hira’s analysis of America’s changing high-tech workforce (“U.S. Workers in a Global Job Market,” Issues, Spring 2009) makes several timely and important points. As he quotes IBM CEO Sam Palmisano, “businesses are changing in fundamental ways—structurally, operationally, culturally—in response to the imperatives of globalization and new technology.” Companies can pick and choose where to locate based on proximity to markets; the quality of local infrastructure; and the cost of labor, facilities, and capital. When historically U.S.-centered companies diversify globally and integrate their operations, the interests of the companies and their shareholders may diverge from the interests of their U.S. employees.

The first loyalty of a company is to its shareholders. Rather than pitting employees against shareholders (some will be both), we should make it more advantageous, economically and operationally, for companies to invest in and create high-value jobs in the United States. We need to make domestic sourcing more attractive than off-shoring.

Hira offers some good suggestions. The most important is that we should work harder to build and maintain our high-technology workforce. Brainpower is the most important raw material for any technology company, yet today we squander a lot of it. Many of our high-school graduates are not academically prepared to pursue college degrees in science, technology, engineering, and mathematics (STEM) fields. We need to improve the quality of STEM education and we need to make it accessible to a much broader spectrum of our young people. We should encourage the best and brightest young technologists from around the world to build their careers here, contribute to our economy, and become Americans. And we should discourage foreign companies from temporarily bringing employees to the United States to learn our business practices and then sending them home to compete with us.

Several things he did not mention could also encourage domestic sourcing. The digital communications technology that makes it possible to outsource many high-tech jobs to India can do the same for low-cost regions of America. We need to extend affordable and effective broadband service to all parts of the country.

We also need to distribute our research investments more widely. University research creates jobs, many of them in small companies near campuses. So we need to increase investments in universities in areas with low living costs. Adjustments in the Small Business Innovation Research (SBIR) program could also help stimulate development in low-cost areas.

And we need to restore and maintain a healthy banking and investment community, and make sure it extends to low-cost regions. Given risk-tolerant capital, technologists strengthen companies and start new ones, creating jobs and ratcheting the economy upward.

The best way for the United States to be competitive in the 21st century is to build on its strengths. We must empower talented citizens throughout our country and encourage others attracted to our culture and values to join us and pursue their careers here. And we must encourage and stimulate development in low-cost regions, as both a competitive tactic and an initiative to spread prosperity more widely.

GORDON W. DAY

President, IEEE-USA

Institute of Electrical and Electronics Engineers

Washington, DC


Ron Hira’s insightful article adds a new perspective to globalization and workforce discussions. Too often these discussions are polarized and reduced to fruitless debates about protectionism and immigration. Instead, Hira argues that we need to recognize the new conditions of globalization, correctly noting that policy discussions have not kept pace with the realities of those changes.

It is important to note the two flawed premises of many current discussions: that the rise of China, India, and other countries threatens U.S. science and engineering dominance, and that the solution is to sequester more of the world’s talent within U.S. borders while beggaring competitors by depriving them of their native talent. Although this approach was successful in fostering U.S. innovation and economic growth in the past century, it is woefully inadequate for this century.

Hira makes a number of quite reasonable policy recommendations, as far as they go. However, underlying his analysis is the much larger question of how the United States might ensure its prosperity in the new global system. I would add a few other factors to his analysis of the nature of global labor markets and U.S. workforce strategies.

First, global labor markets are different from those contained within national borders. This means that we need to separate policy discussions about guest worker programs from immigration policy discussions. Our immigration programs are not designed primarily to advance labor or economic development, but instead reflect a wide range of social objectives. Our guest worker program, however, was specifically developed to supply labor for occupations that experienced domestic skill shortages. Neither program is key to strengthening the U.S. workforce or boosting prosperity within global labor markets. U.S. competitiveness will not be enhanced by reviving a colonial brain drain policy for building the U.S. STEM workforce while depriving other countries of the benefit of the best of their native population. Moreover, I would suggest that it is not a good way to build global economic alliances.

Second, the guessing game about which jobs can be made uniquely American does not provide a clear roadmap for future workforce development. Some jobs in their current form can go offshore; other jobs can be restructured so that a large portion of them can go offshore; and in other cases, customers themselves can go offshore for the service (as in “medical tourism”), and no one can predict which jobs are immune. Instead, STEM workforce development should focus on strengthening U.S. education across the spectrum of disciplines and ability levels, rather than the impossible task of targeting one area or another and focusing only on the top tier of students. Having a greater breadth and depth of workforce skills to draw on will propel the U.S. economy successfully through the unpredictable twists and turns of global transformation.

America’s key competitive advantage is its interrelated system of innovation, entrepreneurship, talent, and organizations—not one based on a few imported superstars. Public policy aimed at short-term labor market advantage is therefore a precarious growth strategy. The labor-market impacts of policy, whether positive or negative, need to be discussed in context, across the full range of effects on U.S. workers as well as on national prosperity.

HAL SALZMAN

Bloustein Policy School and Heldrich Center for Workforce Development

Rutgers University

New Brunswick, New Jersey


Manufacturing’s hidden heroes

Susan Helper’s “The High Road for U.S. Manufacturing” (Issues, Winter 2009) raises many excellent points. Similar arguments for a manufacturing renaissance were made in Eamonn Fingleton’s landmark book In Praise of Hard Industries—Why Manufacturing, Not the Information Economy, Is the Key to Future Prosperity (Houghton Mifflin, 1999).

Many companies have retooled with the most advanced robotics, lasers for precision inspection, and computer software. Today’s factory workforce is largely made up of technicians with two-year associate’s degrees in advanced manufacturing or electronic control systems.

Two-year community and technical colleges are on the front lines of training tomorrow’s manufacturing workforce. Corporate recruiting emphasis is shifting away from elitist four-year universities to the two-year colleges. Associate’s degree graduates are redefining the entire landscape of U.S. manufacturing.

Organizations such as SkillsUSA (www.skillsusa.org), the National Council for Advanced Manufacturing (www.nacfam.org), and the American Technical Education Association (www.ateaonline.org), along with state workforce boards, are leading the way.

The people who are propelling U.S. manufacturing to world-class status are not engineers or MBA managers, but the technicians. They have the technical expertise to overlap with engineers and innovate with improved manufacturing technologies, and are the true driving forces of our “new” manufacturing age.

GLEN W. SPIELBAUER

Dallas, Texas


Environmental data

In “Closing the Environmental Data Gap” (Issues, Spring 2009), Robin O’Malley, Anne S. Marsh, and Christine Negra provide a good summary of the need for improvements in our environmental monitoring capabilities. The Heinz Center has been a leader in this field for more than 10 years and has developed a very deep understanding of both the needs and the processes by which they can be addressed.

I would like to emphasize two additional points:

  1. Adaptation to climate-driven environmental change will require much more effective feedback loops, in which environmental monitoring is just the initial step. Better feedback is crucial because we must learn as a society, not just as scientists or policy wonks, how to respond to the changes. Better feedback starts with better monitoring, but includes statistical reporting, interpretation, and focused public discourse.
  2. O’Malley et al. recommend that “Congress should consider establishing a framework … to decide what information the nation really needs …” Such a framework is only a part of a broader set of institutional arrangements for developing and operating a system that can produce comprehensive, high-quality, regularly published indicators and other statistics on environmental conditions and processes.

In order to have an effective system of feedbacks, we will need to develop new organizations and new relationships among existing organizations so that the mission of regular reporting on the environment can be carried out with the same sort of repeated impact on public understanding as is regular reporting on economic conditions.

The Obama administration and Congress have an opportunity to take important steps in this direction in the coming months because of the past efforts of the Heinz Center and its partners in the State of the Nation’s Ecosystems Project. Other efforts to provide the foundations for a statistical system on the environment have been underway in federal agencies for many years, including the Environmental Protection Agency’s Report on the Environment and a current effort, the National Environmental Status and Trends Indicators.

THEODORE HEINTZ JR.

Atlanta, Georgia

Former indicator coordinator at the White House Council on Environmental Quality.

To Teach Science, Tell Stories

Charles Darwin turned 200 in 2009, and his myriad admirers marked the occasion at events throughout the Western world. Some of the speakers examined the man himself, whereas others focused on what has been learned since Darwin published his epochal On the Origin of Species By Means of Natural Selection in 1859. Nearly all alluded, at least in passing, to a startling statistic: Fully half the people in some countries profess not to believe that we human beings have ourselves evolved. That list includes, of course, the United States, the ostensible world leader in science and medicine.

Darwin challenged the world to rethink the questions: Who am I? Where do I come from? And how do I fit into the scheme of things? Although he merely hinted at the possibility of human evolution, Darwin nonetheless tellingly remarked that “there is grandeur in this view of life.” In his own elegant, subtle way, Darwin was inviting us to compare his storyline with the much older one lying at the heart of the Judeo-Christian tradition. Resistance to evolution still often comes in the simplistic, stark terms of science versus religion: an either-or; take-it-or-leave it; I’m right, you’re wrong collision between two utterly incompatible world views. It is, I think, more productively seen as a preference for stories, and the Judeo-Christian account had a nearly 2,000-year head start in commanding our attention. I no longer think it is especially surprising that Darwin’s take on things still meets with resistance in some quarters.

Why does this matter? In the end, what any individual chooses to believe about the ultimate origins of human existence in itself does not change the nature of things. As the saying goes, “every creationist wants his flu shot”: a witty reference to the arms-race interaction between mutable viruses (and pathogens generally) and the ability of the medical profession to devise the correct vaccine for a particular year’s expected dominant flu strain (not to mention the unanticipated eruption of newly evolved, never-before-seen viruses such as the H1N1 flu of 2009). Although this interaction is the quintessence of evolution in action, patients can accept their inoculations without pondering the intricacies of evolving drug resistance, much as I can use my cell phone without stopping to think about the physics that underlies its operation. Yet understanding the dynamics of such evolving systems is essential to holding up the medical end of the battle.

But the importance of Darwin and the resistance his ideas still meet lie far deeper. I was a freshman in high school when the Soviet Union launched Sputnik into space in October 1957, an event that unleashed a debate about U.S. shortcomings in science education. Among these, of course, was the absence of evolution from the curricula of many school systems, the lasting hangover of the 1925 Scopes trial in Tennessee. Indeed, the lack of adequate, comprehensive, unfettered teaching of Darwin’s legacy has since become synonymous with the inadequacies of science teaching in the United States.

Why does this matter? Frankly, when we ask teachers to ignore evolution, or to explore its purported weaknesses as a scientific theory, or to give equal time to religiously imbued pseudoscience such as intelligent design, what we are actually doing is asking our teachers to lie to our kids. In a science curriculum, kids simply must be taught what scientists currently think about the universe, how it is constructed, how it came to be, and where we think it is going. If we do this for gravity but not for biological (including human) evolution, we water down and distort the heart of the scientific enterprise. That makes us liars, and keeps U.S. science teaching inherently weak.

Not that there has been no post-Sputnik movement to counter creationist initiatives against Darwin’s legacy in the classroom. The Biological Sciences Curriculum Study, a group founded in 1959, is still operating. Notably, so too is the National Center for Science Education (NCSE), with founding director Eugenie Scott still at the helm. Since 1983, NCSE has been a positive force for improving primarily the evolutionary side of science curricula, as well as a literal friend in need to school boards, teachers, parents, and students faced with an onslaught of creationism- and intelligent design–inspired attacks on evolution. And most recently, I have joined forces with my son Gregory, a special education science teacher, as co–editors-in-chief of the journal Evolution: Education and Outreach (EEO) (free online at www.Springer.com/12052). Our aim is to strengthen the connections between the professional world of evolutionary science, broadly construed, and the K-16 classroom as we reach out to teachers, curriculum developers, and other educational professionals. Now in its second year, EEO has made rapid strides in attracting the attention of scientists and educators alike. Together with the growing number of educational materials available online, these efforts mean that teachers probably have more useful classroom materials at hand than ever before.

Yet something is still missing from the rich array of texts, lesson plans, and other presentations available in traditional print and new media formats. Many kids approach science warily. For every child who sees the beauty of a simple addition problem or chemical formula, or learns why birds sing songs in spring, there seem to be many more who find it hard—hard, I suspect, primarily because it strikes them as alien and seemingly irrelevant to their lives. This need for meaning, in the form of personal relevance, to help ensure actual learning has become a dominant theme in modern educational philosophy.

All of which makes me think back to stories and the fact that the Judeo-Christian storyline of creation has retained such a firm grip on our cultural psyches for so long. Scientists tend to react in horror to the suggestion that their results can be rendered as mere stories. But what’s wrong, as my late colleague Stephen Jay Gould used to tell me in graduate school, with good writing? Indeed, Steve used to say that there should be no discernible gap in style between hard-core technical and more popular writing—except that jargon is useful and inevitable in technical writing.

One way to inject a sense of story into the K-16 curriculum would be to fold the lives of scientists more directly and frequently into the narrative. As the curator of the wildly popular American Museum of Natural History exhibition “Darwin,” I was struck by the comments of friends and relatives who were not scientists—but rather teachers of English, writers of novels, or professional musicians—who kept telling me how human Darwin seemed, and how his creative work in evolutionary biology reminded them of their own working experiences in supposedly wholly disparate, even estranged fields in the humanities. The storyline in the exhibition was that Darwin was a human being, with his own circumstances of birth, upbringing, education, and experiences. Darwin came across as something of a workaholic, totally engrossed in the patterns of the natural world, as well as a devoted family man and father of 10 children. Above all, he came across as a man of passion who, being in the right places at the right times, managed to see farther than his predecessors and contemporaries had. Now that is a story! We managed to show many people that science is really not all that different a category of creative human endeavor than, say, writing a beautiful haiku.

I learned something else from mounting that exhibit. As my colleague Ian Tattersall at the American Museum told me years ago, people come to museums to see stuff—real things. Part of our success in conveying the message that science is a creative human endeavor much like any other came from having 36 original pieces of writing from Darwin’s own hand in the exhibition. But of course we had a lot more: real specimens, some of which Darwin collected in South America as a young voyager, and others from our general collections. These were augmented by models, films, and simulations—a multimedia experience.

Strikingly, though, what seemed to work best were the bones of vertebrates. A display that compared the arms of whales, bats, and humans immediately made it clear to the least tutored eye the common theme underlying them all. One student, a professed evolutionary skeptic, said something to the effect of “say no more … I get it!” after seeing the skeleton of a baby chimp in the Darwin show. Stories lie in the eye and mind of the beholder, whether in three-dimensional “real” time, a teacher’s verbal account, or the lines on a printed page.

So stories—well-crafted stories—are an important way to bridge the gap between children and the content of science. Which suggests another look at the three questions that Darwin invited the world to reconsider, three questions that people are still having trouble grappling with: Who am I? Where did I come from? How do I fit in?

Kids usually think where they live, who their friends are, who their family is, are “nothing special.” All the really cool places and people are living someplace else. At least that is what I thought, even though I grew up in the suburbs of New York City, a totally “cool” place. What I think we need now is a curriculum that teaches kids about the world: an integrated history of the Earth and life incorporating plate tectonics and evolution, recounting the origins of the continents, oceans, and living things, and then moving the story forward to the present day. We need to simultaneously teach all of this from the ground up as well as the top down. As we tell kids the story of Earth and the life it supports, we need to tell them their story as well as giving them the big picture. Big pictures tend to be too abstract, too impersonal. They need to investigate their own stories to see how it all fits together.

Every place is unique and offers a piece of the puzzle. Some places are near dinosaur quarries; others are in deserts that tell the story of extreme environments. Cities are phenomena in themselves, a riot of human diversity. Every place is a part of the puzzle, and kids should learn how where they live fits into that puzzle. In learning the story of humans evolving on the planet, then moving around in a vast exodus, and eventually showing up where they find themselves living, they will be located as dramatis personae in the Big Story.

With modern publishing and online media techniques, workbooks specific to a locale can be easily devised that will dovetail with a grander text that will tell the story of life and much of the rest of the sciences. Such an approach could help to reverse the trend that has seen chasms steadily growing between academic fields. Too much specialization can be utterly detrimental to true intellectual growth and, even more so, to the teaching of children who wonder why any of this increasingly recondite stuff should matter to them in the first place.

So for evolution I want to see a vertical integration of genes with, say, dinosaurs. Or better yet, of genes with humans as we trace Darwin’s grandeur in the ineffable story of hominid evolution, initially on the plains of Africa, later throughout the world: a story that is about each and every one of us. I want to see a horizontal integration of fields: of geology and plate tectonics with paleontology, ecology, and evolutionary biology, and (why not?) with chemistry, physics, and mathematics.

Our narratives—our stories—should give kids a sense of the intellectual (and sometimes derring-do!) adventures of actually doing science. If we let storytelling like this into the science curriculum, we instantly humanize science, make it relevant to the random child, and automatically make it seem more inviting, less hard. We can do this without watering down scientific rigor, with its canons of evidence that are justly the hallmark of scientific research, innovation, and progress.

Two approaches to jump-starting an enhanced quality of science education are within our grasp, offering at least the potential for major improvements in the next five years.

First, the Obama administration has announced a strong, innovative science education initiative, embracing the STEM concept (science, technology, engineering, and mathematics). The aim is to graduate students from high school with strong backgrounds in science and, more generally, to “unleash the creativity and curiosity of (all) our students.” Specifically, the president proposes to attract more and better-qualified teachers to science through a scholarship program tied to service in communities of need; to create a matching Technology Investment Fund to integrate technology more fully into the classroom, teacher assessment practices, and science curricula; and to create a national committee to “develop coherence among federal STEM education efforts.” The hope is to stimulate a national dialogue on the importance of science education. These are realistic, immediate goals.

Second, from a grass-roots perspective, advances in electronic publishing have recently brought books and other materials geared to the needs of specific locales and interests closer to affordable reality. We can use this new technology to link students’ personal lives more closely to the bigger, more abstract generalities of science. We can deliver a coherent storyline that connects our most up-to-date information, conclusions, and theories with the places and lives of our kids. All of us involved in science education can immediately begin to use these resources, devising better-integrated science curricula.

All of this will take lots of imagination as well as simple, clear, engaging writing. Who knows? If we tell the stories well enough, they might be good enough to become parts of the English and history curricula as well. That would be good education.

Five-Year Plans

The comically ambitious and perennially unrealized five-year plans that were a hallmark of the dysfunctional Soviet political system throughout the middle of the 20th century have discouraged serious thinkers from using the phrase. Perhaps because of fear of being associated with Soviet blunders, forward-looking policy gurus seem to prefer 10-year or 25-year outlooks. But for anyone who has experienced the dizzying pace of developments in information technology and genetics in recent years, attempting long-term forecasts based on anticipated scientific and technological progress requires a rich blend of hubris and foolishness.

Although we have never used the phrase five-year plan, for the past 25 years Issues in Science and Technology has been publishing such plans. As a quarterly, we cannot keep pace with action on pending legislation. And because the problems we address are complex and the scientific and technological components often not yet well understood by policymakers, it does not make sense to expect effective action to be taken quickly. But because U.S. election cycles range from two to six years, it is difficult to convince members of Congress or administration officials to consider actions that extend beyond five years. Our only practical option has been to publish five-year plans.

For our special 25th anniversary edition, we decided to come clean. We specifically asked a very distinguished group of leading thinkers in science, technology, and health policy to produce five-year plans.

Part of our inspiration can be found in our lead article. The election of Barack Obama as president of the United States and his enthusiastic statements about the importance of science and technology (S&T) have produced an audible buzz in the S&T policy community. A wave of optimism has produced a flood of ideas about what can be achieved with S&T, and the highly respected scientists whom President Obama has appointed to a number of key positions are open, even eager, to hear suggestions for action. We hope that the articles published in this issue will receive attention, and since presidential science advisor John Holdren was a longtime member of the Issues editorial board, our hope seems reasonable.

Even though most Issues articles aim to influence U.S. policymakers, a large number also aim to understand a topic in its global context. In some cases, global action is necessary to tackle a problem effectively; in other cases, national policies have effects far beyond that country’s borders. Two of the authors in this issue are particularly alert to the global dimensions of S&T policy. Koji Omi is the founder and director of the Science and Technology in Society forum, which convenes an international annual meeting of leaders in government, industry, academia, and nongovernmental organizations to address the world’s most pressing problems. Ismail Serageldin chaired the Consultative Group on International Agricultural Research, founded and chaired the Global Water Partnership, and served on the board of the Academy of Sciences of the Developing World.

Other authors also attack problems with a keen awareness that U.S. action alone will not be sufficient. A global perspective clearly informs the articles by Vaclav Smil on energy, Pamela Matson on sustainability, Carl Safina on fisheries, Gilbert Omenn on genetics in medicine, Lewis Branscomb on goal-focused research, Michael Nelson on cloud computing, and Bruce Alberts and Niles Eldredge on science education. Indeed, the global dimensions of many U.S. policy discussions are reflected in Issues Web site traffic, about one-third of which comes from outside the United States.

But a clear vision of the future requires not just a transcendence of national boundaries, but a determination to escape the blinders of disciplinary boundaries. Scientific researchers are reaping the benefits of interdisciplinary work, and we are erasing the artificial divide between basic and applied work, between science and engineering. But research breakthroughs and novel products and processes will not be sufficient to address the world’s needs. Eliminating hunger will require economic and institutional change. The world’s fisheries will not be saved unless we understand the needs of fishing communities. Electronic health records will not advance unless we understand the individual’s desire for and right to privacy. Advances in computing will deliver their full potential only when we attend to the social structure of innovation that takes place not just in corporate research labs but in thousands of small companies and in the creativity of millions of individual inventors and entrepreneurs. S&T undoubtedly can contribute much to creating a healthier, more prosperous, and more equitable world, but only if they are integrated with the insights of the social sciences and the humanities. The authors in this issue are working to break down the disciplinary walls.

The final ingredient in tackling big problems that transcend boundaries and disciplines is in a sense to think small; that is, to focus on self-knowledge and self-interest. Words of praise from an inspiring new U.S. president make a heady potion, but as Daniel Sarewitz points out, we as individuals and as a community are not immune to self-delusion. As he warns, “a scientific-technological elite unchecked by healthy skepticism and political pluralism may well indulge in its own excesses.” Indeed, many of the problems that we aim to solve with S&T are to a large extent the result of the unwise use of S&T. Making wise use of the ever more powerful tools we are developing will require concomitant growth in our understanding of our social relations and of our individual nature.

After devoting most of his speech to the potential of S&T to help address the world’s problems, President Obama ended with an acknowledgement of its limits: “Science can’t answer every question, and indeed, it seems at times the more we plumb the mysteries of the physical world, the more humble we must be.” I suspect that most of us have expressed a similar sentiment. The challenge is to apply this principle to our own behavior.

Perhaps the exercise of developing five-year plans can introduce the discipline necessary to confront our human plight in all its humbling complexity. When gazing far into the future, one can soar beyond considerations such as institutional intransigence, ideological rancor, religious certainty, political machinations, and general human pettiness and myopia. But if one needs to accomplish something in five years, the messy realities of the humanity we are stuck with cannot be ignored. It adds new meaning to the phrase think globally, act locally.

A key reason for the failure of the Soviet five-year plans was the planners’ stubborn belief that they could escape history, create a completely different social and economic order, and transform human nature. They simply narrowed their field of vision to screen out anything that might derail their single-minded notion of progress. A laser-like focus on a single problem can be an effective strategy in some types of scientific research, but tunnel vision is almost always a liability in conceiving public policy. As the poet/songwriter Leonard Cohen writes in his song “Anthem”:

Ring the bells that still can ring

Forget your perfect offering

There is a crack in everything

That’s how the light gets in.

On a 25th anniversary it is common to talk about the next 25 years of triumph and glory. That sounds wonderful, but let’s begin with another five years of muddling through and see what we can learn along the way.

Electronic Health Records: Their Time Has Come

In 1991, when portable computers were the size of sewing machines and the World Wide Web was aborning, the Institute of Medicine proposed a plan for how emerging technologies could be used to improve medical recordkeeping. The plan highlighted the potential of health information systems in general, and computer-based patient records specifically, to support health care professionals as they make decisions at the point of care. It also called for developing a national health information infrastructure. The goal was to achieve ubiquitous use of such patient records by all U.S. health care delivery organizations by 2001.

The goal was overly ambitious. But the proposed plan proved to be an important milestone in the evolution of thinking about patient data and the health information infrastructure needed within organizations and the nation. And such thought is now turning into action. In early 2009, with passage of the American Recovery and Reinvestment Act, the government committed its first serious investment in electronic health records (EHRs) and in developing a national health information infrastructure. The act calls for achieving widespread use of EHRs by 2014, and it provides $36 billion to support the use of EHRs in clinical settings and another $2 billion to coordinate their implementation.

EHRs are much more than computer-based versions of paper medical records or stand-alone data repositories, and their successful implementation is not without challenges. Indeed, the federal government’s newly appointed national coordinator for health information technology, David Blumenthal, said in his first public statement that technical assistance is a “critical factor” in advancing EHRs to reduce health risks.

As an illustration of how EHRs and EHR systems may bring about multiple benefits in medicine, consider how two other industries have used similar technologies to provide convenient, efficient, and customer-centered services. In the banking industry, automatic teller machines and online Web sites provide customers with ways to conduct their banking when and where they choose and with confidence that their personal information is protected. Banks also provide alerts to customers about sensitive activity in their accounts and reminders about payment deadlines. These easy-to-use tools depend on a secure, seamless information infrastructure that enables data to cross organizational and national lines. In the online retail industry, companies such as Amazon.com not only offer convenience in shopping but also provide personalized shopping recommendations based on past purchases or selections made by other customers who have shown similar interests. This feature depends on the ability to capture and analyze data on individual and population levels. Amazon also provides a mechanism for used-book sellers to offer their products via its Web site—a process that is possible, in part, because there is a shared format (technically, interoperability standards) for the information presented to customers.

Now consider how data, information, and knowledge could securely and seamlessly flow through health care organizations. As a case in point, begin with a patient who has a chronic condition and is tracked by an electronic record of her health history, including any unusual symptoms, an accurate list of numerous medications, and reminders of when lab work is needed to ensure that the medications are not causing kidney damage. Lab results are forwarded directly to her electronic health record, which is maintained by her primary care clinician. If lab results are outside of the normal range, the physician receives an alert and, in turn, sends the patient an e-mail requesting that she repeat the lab work and schedule an appointment.
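
As a purely illustrative sketch of the alerting step in this scenario, the rule might look something like the following Python fragment. The field names, test names, and reference ranges are invented for the example and do not come from any actual EHR product or interoperability standard.

```python
# Minimal sketch, not a real EHR interface: file an incoming lab result and
# flag it for the clinician when it falls outside its reference range.
# Test names, ranges, and identifiers are hypothetical.

from dataclasses import dataclass

@dataclass
class LabResult:
    patient_id: str
    test: str
    value: float

# Hypothetical reference ranges for two kidney-function tests.
REFERENCE_RANGES = {
    "serum_creatinine_mg_dl": (0.6, 1.3),
    "egfr_ml_min": (60.0, 200.0),
}

def route_result(result: LabResult) -> str:
    """File the result and report whether a clinician alert is needed."""
    low, high = REFERENCE_RANGES[result.test]
    if low <= result.value <= high:
        return f"{result.test} for {result.patient_id} filed: within range"
    # Out of range: a real system would queue an alert in the clinician's
    # worklist and prompt a follow-up message to the patient.
    return (f"ALERT for {result.patient_id}: {result.test}={result.value} "
            f"outside reference range {low}-{high}")

print(route_result(LabResult("patient-001", "serum_creatinine_mg_dl", 1.8)))
```

The point is simply that the “alert” in the narrative is a rule evaluated automatically as data arrive in the record, rather than something a clinician must remember to check.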

During the appointment, the physician has a comprehensive view of the patient’s health history and receives a reminder that the recommended protocol for treating her condition has been changed. After reviewing options with the patient, the physician prescribes a new medication, and the prescription is sent directly to the patient’s preferred pharmacy and to the patient’s EHR. The physician also recommends increased physical activity, and so the patient elects to receive weekly exercise programs and commits to recording her daily exercise in her health record.

After the visit, selected data elements without personal identifiers are automatically forwarded to a larger population data set maintained by the health organization in which the physician works. The organization can use the data to compare outcomes for its patients to regional or national benchmarks. For example, a hospital may learn that its post-surgery infection rate is higher than the national trend and then compare its practices to those used by other organizations with lower infection rates. Or a physician practice group may learn that its outcomes for a particular diagnosis meet national norms, but that there are less expensive alternatives that yield comparable results.
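
The benchmarking step described above is, at bottom, a simple comparison once the de-identified data have been pooled. A hedged sketch, with invented counts and an invented national rate:

```python
# Illustrative only: compare a hospital's post-surgery infection rate, computed
# from de-identified counts, against a national benchmark. All numbers are
# invented for the sketch.

def infection_rate(infections: int, surgeries: int) -> float:
    """Fraction of surgeries followed by a recorded infection."""
    return infections / surgeries

local_rate = infection_rate(infections=42, surgeries=1500)  # hypothetical hospital data
national_benchmark = 0.021                                  # hypothetical national rate

if local_rate > national_benchmark:
    print(f"Local rate {local_rate:.1%} exceeds the benchmark {national_benchmark:.1%}: "
          "compare practices with lower-rate organizations")
else:
    print(f"Local rate {local_rate:.1%} is at or below the benchmark {national_benchmark:.1%}")
```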

On a broader scale, outside authorized users—say, university researchers—can access the population data in conducting clinical research. Pooling data from an entire region or state, or even nationwide, will enable more comprehensive and efficient research on the effectiveness of treatments and clinical processes. For example, bioinformaticians might benefit from using large data sets as they seek to advance the intellectual foundation of medicine from its current focus on organs and systems to one based on molecules. Public health professionals can use the data to monitor health trends across various populations. Further, selected EHR data elements may flow into biosurveillance systems so that analysts can detect new outbreaks of disease, whether due to natural infections or bioterrorism.

For the full impact of EHRs and EHR systems to be realized, the results from these studies, when fully verified, must flow back to clinical professionals and patients so that they can base their decisions on the most current knowledge available. This cycle of using knowledge to support decisions, capturing data on the outcomes of those decisions, analyzing the data, and using insights gained to refine the knowledge base is the essence of how to develop a “learning” health care system that is safe, timely, efficient, effective, equitable, and patient-centered. EHRs are the beginning and end of that cycle. Each time that an EHR is used, there is an opportunity to enhance current and future decisions. But EHRs are only a part of the complex infrastructure that is needed to enable learning cycles within health care organizations across the country.

The challenges

Because information and communications technologies continue to evolve at a rapid pace, technology itself is not a current constraint. There is, however, a need to improve the design of EHR systems to take advantage of technological capabilities. For instance, user interfaces (how the system presents information to the user and how the user interacts with the system) must support rather than interfere with clinician-patient encounters. An EHR system that requires a clinician to focus on a screen rather than a patient may adversely affect the quality of care and will probably result in user resistance to adopting the system. Such a system may also lead clinicians to record observations on paper and then enter the data into the EHR later, adding extra time to the care process. One cure is for the clinician and patient to share the screen and work as a team to ensure that data are both accurate and understood. EHR interfaces must be easy to learn and use, capture data with minimal intrusion during a patient visit, and provide information in ways that are intuitive to the user. Further, EHR functions such as clinical decision support tools (such as alerts or patient-specific recommendations) must be a byproduct of the user’s professional routine rather than multiple add-on tasks. Thus, system designers must work with clinicians to understand the information and work needs of caregivers at various points in the care process and align the processing power of computers with the cognitive abilities of people.

The medical community also needs to establish a set of accepted terminologies that will underpin consistent understanding of their meanings across settings, among countries, and eventually over time. The terminology standards will need to be continually updated to reflect advancements in the practice of medicine. Although there has been genuine progress nationally and internationally in this area, the efforts have been chronically underfunded.

In addition, new standards are needed to improve the flow of data within health systems. A variety of data exchange (or interchange) standards now enable EHRs to receive data from or send data to certain segments of the health care system, such as physicians’ offices, pharmacies, or laboratories. But considerable work remains to develop standards that go beyond the routine exchange of data to enable users to query distributed databases and process responses. Advancing this area of research will require additional public funding.

The ability to access and transfer health data also depends on consistency in state laws governing such activities. A long and sometimes heated debate on the appropriate approach to the privacy of patients and confidentiality of their data has stalled the development of a framework that protects confidentiality while supporting the legitimate use of data for improving quality, research, and public health. Unquestionably, privacy is a valuable and valued social good. But so too are altruism, health, and freedom. Currently, health information policy seems to be giving too much weight to privacy at the expense of freedom and health.

For example, it is peculiar that national public policy does not help citizens create and maintain individualized personal health identifiers to support their health care and their privacy, while at the same time citizens, at public expense, can call a telephone number to keep phone calls from disturbing the privacy of their evening meal. Public policy must find a way to allow citizens who are so inclined to opt to share their data with researchers for legitimate biomedical and health research. Similarly, citizens should be allowed to choose to let researchers use their genetic information for research purposes.

To secure a foundation for progress, the nation needs to expand efforts to develop a health care workforce that is prepared to develop robust EHRs and EHR systems and to use the technologies to their full potential. To capitalize on the clinical transformations promised by EHRs and EHR systems, clinical teams will need to undergo a basic culture shift in their beliefs and working habits. Achieving this shift will require rigorously training some clinicians to become “champions” who will lead their organizations into the new way of working together, networked via EHRs. The American Medical Informatics Association has demonstrated the potential of this approach, as its 10×10 program has trained over 1,000 health care professionals in applied health and biomedical informatics. (The program takes its name from its stated goal: to train 10,000 health care professionals in these skills by the year 2010.) In another effort to train more clinical informaticians, the American Board of Preventive Medicine is working with various other groups to develop a new medical subspecialty of clinical informatics. Other training and certification efforts will be needed to prepare workers across the health care disciplines, including nursing, dentistry, pharmacy, public health, and psychology, among others.

The current cadre of public health informaticians, joined by coming additions to their ranks, will need to help states move into the future world of medical information technology. States will need to be encouraged to develop and maintain public health reporting requirements linked to EHRs as well as to develop regional networks for information exchange. Health care workers and patients need help in learning how to use Web portals and how to protect patients’ personal health data. All health professionals need to know how to use information and communications technology to work in teams, to practice evidence-based medicine, to continuously improve quality, and to keep the patient at the center of their care.

Underlying all of these challenges is a critical need to develop a robust clinical research capacity to take advantage of the expected deluge of data. Thus, research and evaluation must be part of the national agenda. The aim should be not only to improve how EHRs and other related technologies are used, but also to develop new products and processes that will provide even greater health benefits.

The path forward

The challenges are clear, and fortunately the newly enacted American Recovery and Reinvestment Act creates a formal structure for addressing them. The act codifies the Office of the National Coordinator for Health Information Technology, housed in the Department of Health and Human Services (HHS), and assigns a focus on “development of a nationwide health information technology infrastructure that allows for electronic use and exchange of information.” In addition, the Health Information Technology for Economic and Clinical Health (HITECH) Act, enacted as part of the larger stimulus package, establishes within the national coordinator’s office two committees to directly oversee policies and standards development and creates a position of chief privacy officer. Further, the HITECH Act stipulates that the HHS develop an initial set of standards covering various aspects of health information technology, as well as implementation specifications and certification criteria, by December 31, 2009. The standards will be voluntary for the private sector.

Another important provision of the HITECH Act provides assistance to educational institutions “to establish or expand health informatics education programs, including certification, undergraduate, and masters degree programs for both health care and information technology students.” This funding is essential to address workforce needs and particularly to foster implementation of EHR systems and the competent use of EHRs.

Perhaps most important, the HITECH Act provides rewards for those who make meaningful use of EHRs and penalties for those who fail to act. Beginning in 2011, approximately $17.2 billion will be distributed through Medicare and Medicaid to physicians and hospitals who use EHRs at least for electronic prescribing and information exchange to improve health care quality. Beginning in 2015, Medicare fees will be progressively reduced for physicians who do not use EHRs. This policy could provide the impetus for those wary of EHRs to take the plunge. Expanding demand for EHRs will also probably increase pressure to resolve the other barriers to EHR use.

So what might the next five years bring? It is my hope that the nation will be in the midst of a true transformation in health care and that the benefits of EHRs will spread to organizations across the land. I do not expect that every physician group or practice will be using EHRs, but the majority of them should be. And all hospitals should be moving toward adopting a learning culture through the implementation of robust EHRs; this learning curve should include expanding technical capabilities to support the use of EHRs and doing more to encourage patients to participate.

Advocates of expanded use of EHRs and EHR systems are emerging in all quarters. Growing numbers of health care professionals and organizations are expressing support. Students are showing increasing interest in careers in informatics and information technology. Patients are voicing their demands that EHRs be added to their arsenals for managing their own health. In government, officials at many levels are recognizing the growing support for developing a framework that protects patient confidentiality while enabling authorized use of patient data for public health and research.

For those of us who have committed our professional lives to the use of information and communications technologies to advance health and health care, the next five years are certain to be exciting. Some observers compare the current and future challenges to the Project Apollo Moon-shot program of the 1960s. Indeed, the tasks in aggregate are comparable in complexity. Changing the health care system will involve change at the most basic level in how health professionals do their daily work, while also empowering the public and patients to take a more active role in protecting their health. Both national efforts represent major investments in people and technology to achieve transformative visions. But as history shows, the United States did successfully reach for the Moon.

A Focused Approach to Society’s Grand Challenges

The United States faces a number of “grand challenges”: technically complex societal problems that have stubbornly defied solution. At or near the top of any list is the need to develop new energy sources that are clean, affordable, and reliable. The list also would include creating high-quality jobs for the nation’s population, finding cures for cancer, developing more effective ways of teaching and learning, identifying new ways to improve health and reduce the cost of care, improving the management of water resources, developing more nutritious foods and improving food safety, and speeding up the development of vaccines for deadly diseases, among others.

Many observers at many times have declared that such challenges are best solved by bringing science to bear on them. But how effective has this approach been? Regarding energy, for instance, the answer is, not very. Two of the most favored—and least effective—tools that have been tried are appointing a White House “czar” to coordinate the fruitless technological efforts already in place and creating publicly funded technology demonstrations that are supposed to convince industry to adopt the technologies, even while evidence is lacking that they are affordable or effective.

Still, the Obama administration has declared its intent to try a new level of reliance on science to solve the energy mess. The administration has expressed its commitment to changing the nation from an energy economy based on fossil fuels to one based on renewable sources of energy. This transition is expected to benefit from a major investment in science, which must create the ideas on which radical new technologies rest. Such a government effort must also ensure that these technologies are affordable and will be exploited, under appropriate incentives, by a transformed energy industry.

On a broader level, the administration plans to harness science to achieve other public goods as well, as expressed in the newly enacted American Recovery and Reinvestment Act. Plus, the government is backing up this policy commitment with increased budgets for the 2010 fiscal year for research, as well as for educating students for an increasingly technological future.

Without doubt, these actions are valuable. Few critics would deny that improving schools, encouraging more U.S. students to take up science and engineering, and devoting more resources for research by the nation’s best academic laboratories will lead to new science and some valuable innovations. They are critical inputs to the needed system of innovation that creates new industries.

But is this policy focus on science sufficient to the tasks at hand? I believe that two other policy elements are necessary if the grand challenges are to be met in a reasonable amount of time. The first element involves a new type of policy that is geared to promoting science that is more focused than pure research, yet more creative than most applied research. Advocates of this tool call it “Jeffersonian science.” The second element involves a new set of policies that are tailored to moving the products of science into innovations and from there to new industries. This is the heart of the innovation challenge. Advocates of these tools call them research and innovation (R&I) policies, as contrasted with more conventional research and development (R&D) policies. Such R&I policies would cover institutions and organizations that finance and perform R&D; finance investments in technology-based startups and new ventures; attend to the unanticipated consequences of new technologies; and manage matters related to such issues as intellectual property, taxes, monetary functions, and trade.

Nevertheless, many members of the science community continue to try to persuade politicians that sufficient investments in basic science will, in due course, lead to discoveries that create inventions. In turn, the argument goes, these inventions will be snatched up by angel investors who create new companies that someday may become industries that solve the nation’s problems.

History shows that this has, indeed, happened. Studies that have traced current technologies back to their scientific origins have often found an interval of around 30 years between discovery (the laser, for example) and industrial maturity (the LASIK tool for correcting vision in the eye). But it is instructive to examine such historic events, note their origins, and explore how more effective government policies might have accelerated this process, reducing the elapsed time from 30 years to perhaps 10 years.

Growth of deep pockets

A quick review of how U.S. science policy has evolved will help reveal a new approach to shortening the time and cost of the innovations needed to meet grand challenges. Over the past half century, the U.S. government became a deep-pockets source of support for science. This development happened, in large measure, because of the contributions of applied science and engineering to winning World War II, the vision of people such as Vannevar Bush, and the emergence of a threatening Soviet Union. At first, many academic science administrators were deeply suspicious of government as a sponsor, fearing constraints on their intellectual freedom and uncertain continuity of support. But the research universities, under leaders such as Frederick Terman of Stanford, soon saw the opportunity and took the risk to build their science and engineering programs around “soft” government research grant support.

Government, for its part, saw science as a means of sustaining its military primacy. In 1960, the Department of Defense funded fully one-third of all R&D in the Western world. The Office of Naval Research (ONR) set the standard for government funding of academic science, encouraging unsolicited proposals from universities in disciplines the Navy thought might be useful. This approach was necessary because the academic environment, where the best talent could be found, insisted on freedom to be creative; and because the military, with its long development cycles, could afford to be patient. ONR managers, themselves often former heads of wartime research programs, also understood that until civilian institutions willing to support academic science were created (the National Science Foundation was not created until nine years after World War II), the Department of Defense would have to make that academic investment. Finally, the military was content to fund basic science without much concern for the translation of scientific knowledge to manufactured products, because the military services make their own markets.

As the nation’s commercial economy grew, however, it became clear that economic progress depended on both innovations born of government-promoted science and the development of the innovations into viable new industries, accomplished through commercial markets and private investment. Thus, a marriage was consummated by two partners—science and politics—who needed each other, but with quite different motives and quite different systems of values.

In a government where conservatives were reluctant to see government funds used to prime commercial markets and liberals were eager to see public funds used to accelerate industrial innovation, the compromise was self-evident. The government would support academic science, engineering, and medical research, leaving the management and finance for transforming scientific discoveries into economic value to the incentives of private financial markets. By this route, the United States has built the most powerful science knowledge engine in the world. But now the issue has become whether the nation’s science policies and institutions can meet the various grand challenges quickly enough.

Policy scholars are rising to the task of answering this question. Michael Crow, president of Arizona State University and a former adviser on science and technology to various government agencies, criticizes “a culture that values ‘pure’ research above all other types, as if some invisible hand will steer scientists’ curiosity toward socially useful inquiries.” He continues: “This is not about basic versus applied research; both are crucial…. Rather it is about the capacity of our research institutions to create knowledge that is as socially useful as it is scientifically meritorious.”

Irwin Feller, a senior visiting scientist at the American Association for the Advancement of Science (AAAS) and professor emeritus of economics at Pennsylvania State University, and Susan Cozzens, director of the Technology Policy and Assessment Center at the Georgia Institute of Technology School of Public Policy, argue that “the government role in science and innovation extends far beyond money.” They cite a National Academy of Sciences study titled A Strategy for Assessing Science (2007) that says, “No theory exists that can reliably predict which activities are most likely to lead to scientific advances for societal benefit.” Reliable prediction of utility from basic science is surely the province of applied science and engineering, not of basic science. On the other hand, if the nation abandons the creativity of basic science because its outcomes are unpredictable, the nation also will abandon a great reservoir of talent and opportunity for solving tough problems.

A third way forward

Here is where Jeffersonian science offers a third way. Proposed by Gerhard Sonnert and Gerald Holton of Harvard University (the former is a sociologist of science, the latter a physicist and historian of science), this approach combines top-down and bottom-up strategies to encourage basic science that may lead to knowledge that might make a grand challenge easier to solve.

Unfortunately, people who write about public policy too often fail to distinguish between the problems of policymaking for new knowledge (science policy) and policymaking for finding solutions to problems (innovation policy). They neglect the middle-ground strategy—the heart of Jeffersonian science—aimed at making the task of finding those solutions less difficult.

This approach begins with the top-down identification by experts of the technical, financial, and political issues to be faced in confronting a challenge, and then asks which bottom-up solutions are likely to be long delayed for lack of basic knowledge and the new tools that research might create. Government science agencies then provide competitive basic research funding in the disciplines thought most likely to yield critical knowledge.

The United States has some good experience with this approach. When Richard Klausner was director of the National Institutes of Health’s National Cancer Institute, from 1995 to 2001, he set the institute’s mission as finding cures for cancers. He knew that this goal was not likely to be reached in less than several decades unless science could find a way to make the cancer problem more tractable. The biomedical research community was then challenged to make discoveries and create new tools in the fields of immunology, cellular biology, genetics, and other areas that might make the problem easier for others to solve. This approach yielded new ways of diagnosing and treating cancer that would never have followed from focusing on a more short-term, clinical research approach.

If such a Jeffersonian approach were to be applied to fostering a revolution in the U.S. energy economy, a good place to start would be to build on an analysis made by Charles Weiss of Georgetown University and William Bonvillian of the Massachusetts Institute of Technology, both longtime experts and advisers to governments on issues of science and technology. They propose a new four-step analytical policy framework. The first step is to assess current energy technologies with a view to identifying the weaknesses and problems they might face in a world market. Second, each technology path must be studied to identify policy measures, from research to regulatory incentives, that can overcome barriers to commercialization. Third, it will be necessary to identify functional gaps in the current government policies and institutions that would, if not changed, distort the ability to carry out the two previous steps. The fourth step is to propose public- and private-sector interventions to close those gaps.

To see how this might work in practice, consider one item on which most experts already agree; that is, steps one and two have, in essence, already taken place. It is widely accepted that a breakthrough in energy storage that promised to be economical and sustainable would completely change the options for wind and photovoltaic electricity generation and for electric vehicle propulsion. But to produce the payoffs of the Jeffersonian approach, the analysis must be taken to a much greater depth scientifically, so that ideas at the disciplinary level that might seem remote from the practical problem can garner the attention they deserve. Experts would identify subdisciplines of promise: those that are intellectually lively and are struggling with questions that might bear on some part of the energy system analysis. Scientists would be encouraged to submit unsolicited proposals to a Jeffersonian science program. The motives of grant recipients would be largely indistinguishable from those of scientists funded to pursue pure science.

Even if the research never yielded a game-changing new idea for sustainable energy, good scientific knowledge would contribute to the broad progress of science. However, experience suggests that if the normal communications among scientists in a Jeffersonian science program are healthy, they will create an intellectual environment in which curiosity will bring to light many new ideas for making the energy grand challenge easier and quicker to master.

Archives – Summer 2009

Charles Darwin

In honor of the contributions made by Charles Darwin, the National Academy of Sciences commissioned a bronze replica of a bust of Darwin created by Virginia sculptor William Couper (1853-1942). The original bronze was commissioned by the New York Academy of Sciences (NYAS) in 1909 and was given to the American Museum of Natural History to inaugurate its Darwin Hall of Invertebrate Zoology. The bust of Darwin has since been returned to the offices of the NYAS, where it resides today. The March 1909 issue of The American Museum Journal reported that “The bust is pronounced by those who knew Darwin personally and by his sons in England… the best portrait in the round of the great naturalist ever made.”

The process used to reproduce the statue for the NAS combined traditional techniques with innovative digital technology. A virtual model was created by scanning the original sculpture in situ at the NYAS. Using a rapid prototype process, a form was created from which the bronze was cast. This process reduces potential damage to the original and gives artisans more flexibility in refining the details of the final work.

Restoring Science to Science Education

I love biology, and nothing in my four decades as a professional biological scientist has given me as much satisfaction as seeing that spark of passion for the subject ignited in a young person. So it should be no surprise that nothing frustrates me more than to see that spark extinguished by misguided educators and mind-numbing textbooks. As I write this article, I have just returned from a discussion with 7th-grade students in San Francisco, in which they described a year-long biology class they found tedious and anything but inspiring. The course was structured around a textbook that was among those officially selected by the state of California two years ago, after an elaborate and expensive process that California repeats every eight years. The exploration of the wonderful world of living things should be a fascinating delight for students. But in California, as in so many other parts of the United States and the world, most students gain no sense of the excitement and power of science, because we adults have somehow let science education be reduced to the memorization of “science key terms.”

How did this happen? And what can we do to recover from this tragic misuse of our young people’s time and effort in school?

Part of the answer to the first question lies in the fact that producing and selling textbooks is a big business, and the prevailing market forces have invariably led to mediocrity. Twenty years ago, the situation was elegantly described in a book whose title says it all: A Conspiracy of Good Intentions: America’s Textbook Fiasco. Sadly, the situation has not changed. Much of the problem lies in the simplistic ways in which these books are usually evaluated, stressing the coverage of science terms and computerized text analyses.

In response to the education standards movement of the 1990s, the 50 states set about establishing their own very different sets of detailed science education standards. Because of this heterogeneity, textbook companies are forced to waste great amounts of time and resources on producing books that can satisfy the needs of as many states as possible. Even before the standards movement made things worse, U.S. textbooks had become known around the world for being “an inch deep and a mile wide.” The result today is what I call science education as mentioning.

Take, for example, my field of cell biology, where for grades 5 to 8 the National Science Education Standards produced by the National Academies in 1996 emphasized understanding the essence of cells as the fundamental units of life, rather than learning the technical names of cell parts. The California state standards, on the other hand, stress all of these names. As a result, the adopted textbook for 7th grade contains five pages with 12 cell parts highlighted as key terms, including endoplasmic reticulum, Golgi body, lysosomes, mitochondria, and ribosomes. Because this 700-page book is forced by the California state standards to cover much of biology in similar detail, there is not enough room to explain most of these cell parts. Thus, for example, for the highlighted term “endoplasmic reticulum,” the book simply states that “The endoplasmic reticulum’s passageways help form proteins and other materials. They also carry material throughout the cell.” Why should memorizing these two sentences be of any interest or importance to a 12-year-old? And what, if anything, will even the best students remember a year later?

Another part of the answer to why the United States has let science education go badly astray is that it is much easier to test for science words than it is to test for science understanding. The new age of accountability in U.S. education has led to a massive increase in testing, and the individual states have generally selected simple, low-cost, multiple-choice tests that can be rapidly scored. Because these high-stakes tests drive teachers to teach to them, they are thereby defining what science education means in our schools. This is a great tragedy, inasmuch as it trivializes education for young people. For far too many of them, education appears to be a largely senseless initiation ritual that is imposed on them by adults.

Consider, for example, the following question that is offered in California as a sample item for its 5th-grade science test:

A scientist needs to take a picture of the well-ordered arrangements of the atoms and molecules within a substance. Which of the following instruments would be best for the scientist to use?

  1. A laser light with holograph
  2. A seismograph
  3. An electron microscope
  4. A stereoscope

There are two major problems with this question. The first is that there is no right answer; an electron microscope does not generally have the resolution to decipher the relative arrangement of atoms. But much more important to me is the fact that learning the names of the different machines that scientists use is neither interesting nor relevant to the education of 10-year-olds.

The following anecdote illustrates how far we have strayed from what should be the central purpose of education: empowering students to learn how to learn on their own. A scientist parent notices that her elementary school child has thus far not been exposed to any science in school. As a volunteer teacher, she begins a science lesson by giving the children samples of three different types of soil. Each child is told to use a magnifying glass to examine the soils and write down what they observe in each sample. She waits patiently, but the children are unwilling to write anything. Her probing reveals that after three years of schooling, the students are afraid to express their views because they don’t know “the right answer.”

In fact, we know that life is full of ambiguous situations and that as citizens and workers we will have to solve many problems to which there is no right answer. To quote former Motorola CEO Robert Galvin, “Memorized facts, which are the basis for most testing done in schools today, are of little use in an age in which information is doubling every two or three years. We have expert systems in computers and the Internet that can provide the facts we need when we need them. Our workforce needs to utilize facts to assist in developing solutions to problems.”

Life is nothing like a quiz show. If we adults allow students to believe that we think being educated means knowing all of the right answers, is it any wonder that nearly half of U.S. middle- and high-school students are found to be disengaged from their schooling?

The four strands of science learning

Ten years after producing the National Science Education Standards, the National Academies convened a distinguished committee of scientists and science education experts to take a fresh look at science education, considering all that had been learned in the interim. In 2007, this group produced the valuable report Taking Science to School: Learning and Teaching Science in Grades K-8. This analysis proposes that students who are proficient in science be expected to:

  • know, use, and interpret scientific explanations of the natural world;
  • generate and evaluate scientific evidence and explanations;
  • understand the nature and development of scientific knowledge; and
  • participate productively in scientific practices and discourse.

These four strands of science education were judged in the report to be of equal importance. Yet what is taught in most schools today, from kindergarten through introductory college classes, focuses almost exclusively on only a portion of the first of the four strands: teaching students to know scientific explanations of the natural world. Adopting the agenda in Taking Science to School will therefore require an ambitious effort to redefine the term “science education.”

The source of the problem is college. For the most part, those of us who are scientists have made a mess of science education. Scientists are deeply engaged in attempting to unscramble the puzzle of how the world works, and we are thrilled to read about each year’s startling advances that increase our understanding of the universe that surrounds us. It seems that each new finding raises new questions to be answered, providing an endless frontier for the next generation of scientists to explore. We believe passionately in the power of science to create a better world, as well as in the critical importance for everyone in society of the values and attitudes that science demands of scientists: honesty, a reliance on evidence and logic to make judgments, a willingness to explore new ideas, and a skeptical attitude toward simple answers to complex problems. But very little of this is conveyed to students in our teaching.

It is college science, both because of its prestige and because it is the last science course that most adults will take, that defines science education for future teachers and parents. And yet, when my science colleagues in academia teach a first-year course to college students, most will at best attempt to cover only the first of the four strands of science proficiency recommended in the National Academies report. Any redefinition of science education at lower levels will therefore require a major change in the basic college courses in biology, chemistry, physics, and earth sciences. Each must add an emphasis on the other three strands: on enabling college students to generate and evaluate scientific evidence and explanations; to understand the nature and development of scientific knowledge; and to participate productively in scientific practices and discourse. This requires that students actively experience science as inquiry in their classes, being challenged to collect data and solve problems in the way that scientists do. They will also need to explore a few aspects of the subject in depth and be challenged to come up with some of their own explanations, rather than simply parroting back what they have been told in lectures or in textbooks.

A four-part recipe for action

As in science, strategy is everything when attempting to tackle a difficult problem. And redefining science education along the lines recommended in the Academies’ Taking Science to School report will certainly be difficult. To be effective, we need focus, and I therefore propose the following four-part strategy. Much of what I say here about how to move forward is reflected in the new Opportunity Equation report from the Carnegie Corporation of New York-Institute for Advanced Study Commission on Mathematics and Science Education, on which I served.

  1. Enlist the National Academies, in collaboration with the National Science Teachers Association and the American Association for the Advancement of Science, to develop a pared-down set of common core standards for science education that reflect the principles in Taking Science to School. We have learned a great deal since 1996 from the response to the standards movement, and the governors and the chief state school officers of a majority of states now recognize the enormous disadvantages of having 50 different state standards for science education. The federal government should provide incentives to the states to sign on to this common standards movement. For example, it can help link the core standards to an energetic, nationwide development of high-quality curricula, to online teacher education and professional development resources, and to the development and continual improvement of a research-based system of quality assessments and standards, as described below.
  2. Initiate a high-profile effort to produce quality assessments that measure student learning of all four strands of science proficiency. Poor tests are currently driving poor teaching and learning, and the development of much better tests at all levels, from elementary school through introductory college courses, is therefore an urgent and challenging matter. Our nation’s leaders should make this a matter of national service, recruiting a group of the very best scientists and science assessment experts to work together over successive summers, as was done in the post-Sputnik era in the United States. At the K-12 level, two very different types of high-quality tests will need to be developed around the core standards: formative assessments that teachers can use to measure student progress, so as to adjust their teaching appropriately during the school year; and summative assessments that the states will use for accountability purposes. At the college level, I envision an effort to develop and disseminate quality questions to be given on the final exam in introductory science courses. These would be designed to test for an understanding of the last three strands of science proficiency in Taking Science to School and therefore be applicable to courses in a variety of scientific fields. Has the course enabled the students to understand “science as a way of knowing”, and has it prepared them to use scientific processes and evidence as adults? The professors who teach these courses are scientists and should therefore care deeply about the answer.
  3. Link the core science standards and their associated assessments to an intensive research program in selected school districts, so as to provide the “ground truth” needed for their continuous improvement. Education is much too complex to ever expect to get it permanently right. What is the effect of the use of these standards and assessments in actual schools? In what ways are they driving high-quality teaching and learning of science? How should they be revised and improved? Answers to these types of questions require collaborations between skilled researchers and teachers, and they are critical if we are to develop the science of education that our nation needs. The Strategic Education Research Partnership (SERP) is a nonprofit institution that resulted from two successive studies by the National Academies that addressed the question of why research knowledge is used effectively to improve health, agriculture, and transportation, but not education. Now in its fourth year, SERP has demonstrated how highly effective research can be produced when groups of academics and practitioners collaborate in real school settings, setting an example for the substantial research effort that is essential to continuously improve science education.
  4. Work to strengthen the human resources systems of states and school districts so as to recruit, retain, and deploy a corps of highly qualified science and math teachers. We must improve teacher retention by making school districts more attractive places to work. Teachers must be treated as professionals and teacher leaders recruited to help incorporate the wisdom of outstanding teachers into school, school system, and state education practices and policies. Without such advice from a district’s best teachers, continual improvement cycles are unlikely to be maintained. The United States should consider international models, such as Singapore’s, that incorporate rotating groups of outstanding teachers into the highest levels of the education policymaking apparatus. We should also consider the possibility of recruiting outstanding Ph.D. scientists into state and district offices, so as to readily connect our schools to national and local resources in the scientific and science education communities.

The broad goal for science education must be to provide students with the skills of problem solving, communication, and general thinking required to be effective workers and educated citizens in the 21st century. Business and industry need problem solvers throughout the enterprise, as witnessed by many studies. These same skills are also crucial to enable everyone to navigate the increasingly complex and noisy world that we live in. Thus, they are essential to empower the citizens in a democracy to make wise judgments for themselves and their communities, which they are required to do in the midst of a cacophony of voices striving to sway rather than enlighten them.

What Science Can Do

It is a great privilege to address the distinguished members of the National Academy of Sciences, as well as the leaders of the National Academy of Engineering and the Institute of Medicine who’ve gathered here this morning.

And I’d like to begin today with a story of a previous visitor who also addressed this august body. In April of 1921, Albert Einstein visited the United States for the first time. And his international credibility was growing as scientists around the world began to understand and accept the vast implications of his theories of special and general relativity. And he attended this annual meeting, and after sitting through a series of long speeches by others, he reportedly said, “I have just got a new theory of eternity.” So I will do my best to heed this cautionary tale.

The very founding of this institution stands as a testament to the restless curiosity, the boundless hope so essential not just to the scientific enterprise, but to this experiment we call America.

A few months after a devastating defeat at Fredericksburg, before Gettysburg would be won, before Richmond would fall, before the fate of the Union would be at all certain, President Abraham Lincoln signed into law an act creating the National Academy of Sciences—in the midst of civil war.

Lincoln refused to accept that our nation’s sole purpose was mere survival. He created this academy, founded the land grant colleges, and began the work of the transcontinental railroad, believing that we must add—and I quote—“the fuel of interest to the fire of genius in the discovery…of new and useful things.”

This is America’s story. Even in the hardest times, against the toughest odds, we’ve never given in to pessimism; we’ve never surrendered our fates to chance; we have endured; we have worked hard; we sought out new frontiers.

Today, of course, we face more complex challenges than we have ever faced before: a medical system that holds the promise of unlocking new cures and treatments, attached to a health care system that holds the potential for bankruptcy to families and businesses; a system of energy that powers our economy, but simultaneously endangers our planet; threats to our security that seek to exploit the very interconnectedness and openness so essential to our prosperity; and challenges in a global marketplace which links the derivative trader on Wall Street to the homeowner on Main Street, the office worker in America to the factory worker in China—a marketplace in which we all share in opportunity, but also in crisis.

At such a difficult moment, there are those who say we cannot afford to invest in science, that support for research is somehow a luxury at moments defined by necessities. I fundamentally disagree. Science is more essential for our prosperity, our security, our health, our environment, and our quality of life than it has ever been before.

And if there was ever a day that reminded us of our shared stake in science and research, it’s today. We are closely monitoring the emerging cases of swine flu in the United States. And this is obviously a cause for concern and requires a heightened state of alert. But it’s not a cause for alarm. The Department of Health and Human Services has declared a public health emergency as a precautionary tool to ensure that we have the resources we need at our disposal to respond quickly and effectively. And I’m getting regular updates on the situation from the responsible agencies. And the Department of Health and Human Services as well as the Centers for Disease Control will be offering regular updates to the American people. And Secretary Napolitano will be offering regular updates to the American people, as well, so that they know what steps are being taken and what steps they may need to take.

But one thing is clear—our capacity to deal with a public health challenge of this sort rests heavily on the work of our scientific and medical community. And this is one more example of why we can’t allow our nation to fall behind.

Unfortunately, that’s exactly what’s happened. Federal funding in the physical sciences as a portion of our gross domestic product has fallen by nearly half over the past quarter century. Time and again we’ve allowed the research and experimentation tax credit, which helps businesses grow and innovate, to lapse.

Our schools continue to trail other developed countries and, in some cases, developing countries. Our students are outperformed in math and science by their peers in Singapore, Japan, England, the Netherlands, Hong Kong, and Korea, among others. Another assessment shows American 15-year-olds ranked 25th in math and 21st in science when compared to nations around the world. And we have watched as scientific integrity has been undermined and scientific research politicized in an effort to advance predetermined ideological agendas.

We know that our country is better than this. A half century ago, this nation made a commitment to lead the world in scientific and technological innovation; to invest in education, in research, in engineering; to set a goal of reaching space and engaging every citizen in that historic mission. That was the high water mark of America’s investment in research and development. And since then our investments have steadily declined as a share of our national income. As a result, other countries are now beginning to pull ahead in the pursuit of this generation’s great discoveries.

I believe it is not in our character, the American character, to follow. It’s our character to lead. And it is time for us to lead once again. So I’m here today to set this goal: We will devote more than 3 percent of our GDP to research and development. We will not just meet, but we will exceed the level achieved at the height of the space race, through policies that invest in basic and applied research, create new incentives for private innovation, promote breakthroughs in energy and medicine, and improve education in math and science.

This represents the largest commitment to scientific research and innovation in American history. Just think what this will allow us to accomplish: solar cells as cheap as paint; green buildings that produce all the energy they consume; learning software as effective as a personal tutor; prosthetics so advanced that you could play the piano again; an expansion of the frontiers of human knowledge about ourselves and the world around us. We can do this.

The pursuit of discovery half a century ago fueled our prosperity and our success as a nation in the half century that followed. The commitment I am making today will fuel our success for another 50 years. That’s how we will ensure that our children and their children will look back on this generation’s work as that which defined the progress and delivered the prosperity of the 21st century.

This work begins with a historic commitment to basic science and applied research, from the labs of renowned universities to the proving grounds of innovative companies. Through the American Recovery and Reinvestment Act, and with the support of Congress, my administration is already providing the largest single boost to investment in basic research in American history. That’s already happened.

This is important right now, as public and private colleges and universities across the country reckon with shrinking endowments and tightening budgets. But this is also incredibly important for our future. As Vannevar Bush, who served as scientific advisor to President Franklin Roosevelt, famously said: “Basic scientific research is scientific capital.”

The fact is an investigation into a particular physical, chemical, or biological process might not pay off for a year, or a decade, or at all. And when it does, the rewards are often broadly shared, enjoyed by those who bore its costs but also by those who did not.

And that’s why the private sector generally under-invests in basic science, and why the public sector must invest in this kind of research—because while the risks may be large, so are the rewards for our economy and our society.

No one can predict what new applications will be born of basic research: new treatments in our hospitals, or new sources of efficient energy; new building materials; new kinds of crops more resistant to heat and to drought. It was basic research in the photoelectric field—in the photoelectric effect that would one day lead to solar panels. It was basic research in physics that would eventually produce the CAT scan. The calculations of today’s GPS satellites are based on the equations that Einstein put to paper more than a century ago.

In addition to the investments in the Recovery Act, the budget I’ve proposed—and versions have now passed both the House and the Senate—builds on the historic investments in research contained in the recovery plan. So we double the budget of key agencies, including the National Science Foundation, a primary source of funding for academic research; and the National Institute of Standards and Technology, which supports a wide range of pursuits from improving health information technology to measuring carbon pollution, from testing “smart grid” designs to developing advanced manufacturing processes.

And my budget doubles funding for the Department of Energy’s Office of Science, which builds and operates accelerators, colliders, supercomputers, high-energy light sources, and facilities for making nanomaterials, because we know that a nation’s potential for scientific discovery is defined by the tools that it makes available to its researchers.

But the renewed commitment of our nation will not be driven by government investment alone. It’s a commitment that extends from the laboratory to the marketplace. And that’s why my budget makes the research and experimentation tax credit permanent. This is a tax credit that returns two dollars to the economy for every dollar we spend by helping companies afford the often high costs of developing new ideas, new technologies, and new products. Yet at times we’ve allowed it to lapse or only renewed it year to year. I’ve heard this time and again from entrepreneurs across this country: By making this credit permanent we make it possible for businesses to plan the kinds of projects that create jobs and economic growth.

Second, in no area will innovation be more important than in the development of new technologies to produce, use, and save energy, which is why my administration has made an unprecedented commitment to developing a 21st century clean energy economy, and why we put a scientist in charge of the Department of Energy.

Our future on this planet depends on our willingness to address the challenge posed by carbon pollution. And our future as a nation depends upon our willingness to embrace this challenge as an opportunity to lead the world in pursuit of new discovery.

When the Soviet Union launched Sputnik a little more than a half century ago, Americans were stunned. The Russians had beaten us to space. And we had to make a choice: We could accept defeat or we could accept the challenge. And as always, we chose to accept the challenge. President Eisenhower signed legislation to create NASA and to invest in science and math education, from grade school to graduate school. And just a few years later, a month after his address to the 1961 Annual Meeting of the National Academy of Sciences, President Kennedy boldly declared before a joint session of Congress that the United States would send a man to the moon and return him safely to the Earth.

The scientific community rallied behind this goal and set about achieving it. And it would not only lead to those first steps on the moon; it would lead to giant leaps in our understanding here at home. That Apollo program produced technologies that have improved kidney dialysis and water purification systems; sensors to test for hazardous gasses; energy-saving building materials; fire-resistant fabrics used by firefighters and soldiers. More broadly, the enormous investment in that era in science and technology, in education and research funding produced a great outpouring of curiosity and creativity, the benefits of which have been incalculable. There are those of you in this audience who became scientists because of that commitment. We have to replicate that.

There will be no single Sputnik moment for this generation’s challenges to break our dependence on fossil fuels. In many ways, this makes the challenge even tougher to solve and makes it all the more important to keep our eyes fixed on the work ahead.

But energy is our great project, this generation’s great project. And that’s why I’ve set a goal for our nation that we will reduce our carbon pollution by more than 80% by 2050. And that is why I’m pursuing, in concert with Congress, the policies that will help us meet this goal.

My recovery plan provides the incentives to double our nation’s capacity to generate renewable energy over the next few years, extending the production tax credit, providing loan guarantees, and offering grants to spur investment. Just take one example: Federally funded research and development has dropped the cost of solar panels by tenfold over the last three decades. Our renewed efforts will ensure that solar and other clean energy technologies will be competitive.

My budget includes $150 billion over 10 years to invest in sources of renewable energy as well as energy efficiency. It supports efforts at NASA, recommended as a priority by the National Research Council, to develop new space-based capabilities to help us better understand our changing climate. And today, I’m also announcing that for the first time, we are funding an initiative—recommended by this organization—called the Advanced Research Projects Agency for Energy, or ARPA-E.

This is based, not surprisingly, on DARPA, the Defense Advanced Research Projects Agency, which was created during the Eisenhower administration in response to Sputnik. It has been charged throughout its history with conducting high-risk, high-reward research. And the precursor to the Internet, known as ARPANET, stealth technology, the Global Positioning System all owe a debt to the work of DARPA.

So ARPA-E seeks to do the same kind of high-risk, high-reward research. My administration will pursue, as well, comprehensive legislation to place a market-based cap on carbon emissions. We will make renewable energy the profitable kind of energy. We will put in place the resources so that scientists can focus on this critical area. And I am confident that we will find a wellspring of creativity just waiting to be tapped by researchers in this room and entrepreneurs across our country. We can solve this problem.

Now, the nation that leads the world in 21st century clean energy will be the nation that leads in the 21st century global economy. I believe America can and must be that nation. But in order to lead in the global economy and to ensure that our businesses can grow and innovate, and our families can thrive, we’re also going to have to address the shortcomings of our health care system.

The Recovery Act will support the long overdue step of computerizing America’s medical records, to reduce the duplication, waste and errors that cost billions of dollars and thousands of lives. But it’s important to note, these records also hold the potential of offering patients the chance to be more active participants in the prevention and treatment of their diseases. We must maintain patient control over these records and respect their privacy. At the same time, we have the opportunity to offer billions and billions of anonymous data points to medical researchers who may find in this information evidence that can help us better understand disease.

History also teaches us the greatest advances in medicine have come from scientific breakthroughs, whether the discovery of antibiotics or improved public health practices, vaccines for smallpox and polio and many other infectious diseases, antiretroviral drugs that can return AIDS patients to productive lives, pills that can control certain types of blood cancers, so many others.

Because of recent progress, not just in biology, genetics and medicine, but also in physics, chemistry, computer science, and engineering, we have the potential to make enormous progress against diseases in the coming decades. And that’s why my administration is committed to increasing funding for the National Institutes of Health, including $6 billion to support cancer research, part of a sustained, multiyear plan to double cancer research in our country.

Next, we are restoring science to its rightful place. On March 9th, I signed an executive memorandum with a clear message: Under my administration, the days of science taking a back seat to ideology are over. Our progress as a nation and our values as a nation are rooted in free and open inquiry. To undermine scientific integrity is to undermine our democracy. It is contrary to our way of life.

That’s why I’ve charged John Holdren and the White House Office of Science and Technology Policy with leading a new effort to ensure that federal policies are based on the best and most unbiased scientific information. I want to be sure that facts are driving scientific decisions, and not the other way around.

As part of this effort, we’ve already launched a web site that allows individuals to not only make recommendations to achieve this goal, but to collaborate on those recommendations. It’s a small step, but one that’s creating a more transparent, participatory, and democratic government.

We also need to engage the scientific community directly in the work of public policy. And that’s why, today, I am announcing that we are filling out the President’s Council of Advisors on Science and Technology, known as PCAST, and I intend to work with them closely. Our co-chairs have already been introduced: Dr. Varmus and Dr. Lander along with John [Holdren]. And this council represents leaders from many scientific disciplines who will bring a diversity of experiences and views. And I will charge PCAST with advising me about national strategies to nurture and sustain a culture of scientific innovation.

In biomedicine, just to give you an example of what PCAST can do, we can harness the historic convergence between life sciences and physical sciences that’s under way today; undertaking public projects in the spirit of the Human Genome Project to create data and capabilities that fuel discoveries in tens of thousands of laboratories; and identifying and overcoming scientific and bureaucratic barriers to rapidly translating scientific breakthroughs into diagnostics and therapeutics that serve patients.

In environmental science, it will require strengthening our weather forecasting, our Earth observation from space, the management of our nation’s land, water, and forests, and the stewardship of our coastal zones and ocean fisheries.

We also need to work with our friends around the world. Science, technology, and innovation proceed more rapidly and more cost-effectively when insights, costs, and risks are shared; and so many of the challenges that science and technology will help us meet are global in character. This is true of our dependence on oil, the consequences of climate change, the threat of epidemic disease, and the spread of nuclear weapons.

And that’s why my administration is ramping up participation in and our commitment to international science and technology cooperation across the many areas where it is clearly in our interest to do so. In fact, this week, my administration is gathering the leaders of the world’s major economies to begin the work of addressing our common energy challenges together.

Fifth, since we know that the progress and prosperity of future generations will depend on what we do now to educate the next generation, today I’m announcing a renewed commitment to education in mathematics and science. This is something I care deeply about. Through this commitment, American students will move from the middle to the top of the pack in science and math over the next decade, for we know that the nation that out-educates us today will out-compete us tomorrow. And I don’t intend to have us out-educated.

We can’t start soon enough. We know that the quality of math and science teachers is the most influential single factor in determining whether a student will succeed or fail in these subjects. Yet in high school more than 20% of students in math and more than 60% of students in chemistry and physics are taught by teachers without expertise in these fields. And this problem is only going to get worse. There is a projected shortfall of more than 280,000 math and science teachers across the country by 2015.

And that’s why I’m announcing today that states making strong commitments and progress in math and science education will be eligible to compete later this fall for additional funds under the Secretary of Education’s $5 billion Race to the Top program. And I’m challenging states to dramatically improve achievement in math and science by raising standards, modernizing science labs, upgrading curriculum, and forging partnerships to improve the use of science and technology in our classrooms. I’m challenging states, as well, to enhance teacher preparation and training, and to attract new and qualified math and science teachers to better engage students and reinvigorate those subjects in our schools.

And in this endeavor, we will work to support inventive approaches. Let’s create systems that retain and reward effective teachers, and let’s create new pathways for experienced professionals to go into the classroom. There are, right now, chemists who could teach chemistry, physicists who could teach physics, statisticians who could teach mathematics. But we need to create a way to bring the expertise and the enthusiasm of these folks—folks like you—into the classroom.

There are states, for example, doing innovative work. I’m pleased to announce that Governor Ed Rendell of Pennsylvania will lead an effort with the National Governors Association to increase the number of states that are making science, technology, engineering, and mathematics education a top priority. Six states are currently participating in the initiative, including Pennsylvania, which has launched an effective program to ensure that the state has the skilled workforce in place to draw the jobs of the 21st century. And I want every state, all 50 states, to participate.

But as you know, our work does not end with a high school diploma. For decades, we led the world in educational attainment, and as a consequence we led the world in economic growth. The G.I. Bill, for example, helped send a generation to college. But in this new economy, we’ve come to trail other nations in graduation rates, in educational achievement, and in the production of scientists and engineers.

That’s why my administration has set a goal that will greatly enhance our ability to compete for the high-wage, high-tech jobs of the future and to foster the next generation of scientists and engineers. In the next decade, by 2020, America will once again have the highest proportion of college graduates in the world. That is a goal that we are going to set. And we’ve provided tax credits and grants to make a college education more affordable.

My budget also triples the number of National Science Foundation graduate research fellowships. This program was created as part of the space race five decades ago. In the decades since, it’s remained largely the same size, even as the number of students who seek these fellowships has skyrocketed. We ought to be supporting these young people who are pursuing scientific careers, not putting obstacles in their path.

So this is how we will lead the world in new discoveries in this new century. But I think all of you understand it will take far more than the work of government. It will take all of us. It will take all of you. And so today I want to challenge you to use your love and knowledge of science to spark the same sense of wonder and excitement in a new generation.

America’s young people will rise to the challenge if given the opportunity, if called upon to join a cause larger than themselves. We’ve got evidence. You know, the average age in NASA’s mission control during the Apollo 17 mission was just 26. I know that young people today are just as ready to tackle the grand challenges of this century.

So I want to persuade you to spend time in the classroom, talking and showing young people what it is that your work can mean, and what it means to you. I want to encourage you to participate in programs to allow students to get a degree in science fields and a teaching certificate at the same time. I want us all to think about new and creative ways to engage young people in science and engineering, whether it’s science festivals, robotics competitions, fairs that encourage young people to create and build and invent—to be makers of things, not just consumers of things.

I want you to know that I’m going to be working alongside you. I’m going to participate in a public awareness and outreach campaign to encourage students to consider careers in science and mathematics and engineering, because our future depends on it.

And the Department of Energy and the National Science Foundation will be launching a joint initiative to inspire tens of thousands of American students to pursue these very same careers, particularly in clean energy. It will support an educational campaign to capture the imagination of young people who can help us meet the energy challenge and will create research opportunities for undergraduates and educational opportunities for women and minorities who too often have been underrepresented in scientific and technological fields but are no less capable of inventing the solutions that will help us grow our economy and save our planet. And it will support fellowships and interdisciplinary graduate programs and partnerships between academic institutions and innovative companies to prepare a generation of Americans to meet this generational challenge.

For we must always remember that somewhere in America there’s an entrepreneur seeking a loan to start a business that could transform an industry, but she hasn’t secured it yet. There’s a researcher with an idea for an experiment that might offer a new cancer treatment, but he hasn’t found the funding yet. There’s a child with an inquisitive mind staring up at the night sky. And maybe she has the potential to change our world, but she doesn’t know it yet.

As you know, scientific discovery takes far more than the occasional flash of brilliance, as important as that can be. Usually, it takes time and hard work and patience; it takes training; it requires the support of a nation. But it holds a promise like no other area of human endeavor.

In 1968, a year defined by loss and conflict and tumult, Apollo 8 carried into space the first human beings ever to slip beyond Earth’s gravity, and the ship would circle the moon 10 times before returning home. But on its fourth orbit, the capsule rotated and for the first time Earth became visible through the windows.

Bill Anders, one of the astronauts aboard Apollo 8, scrambled for a camera, and he took a photo that showed the Earth coming up over the moon’s horizon. It was the first ever taken from so distant a vantage point, and it soon became known as “Earthrise.”

Anders would say that the moment forever changed him, to see our world—this pale blue sphere—without borders, without divisions, at once so tranquil and beautiful and alone. “We came all this way to explore the moon,” he said, “and the most important thing is that we discovered the Earth.”

Yes, scientific innovation offers us a chance to achieve prosperity. It has offered us benefits that have improved our health and our lives, improvements we take too easily for granted. But it gives us something more. At root, science forces us to reckon with the truth as best as we can ascertain it.

And some truths fill us with awe. Others force us to question long-held views. Science can’t answer every question, and indeed, it seems at times the more we plumb the mysteries of the physical world, the more humble we must be. Science cannot supplant our ethics or our values, our principles or our faith. But science can inform those things and help put those values—these moral sentiments, that faith—can put those things to work—to feed a child or to heal the sick, to be good stewards of this Earth.

We are reminded that with each new discovery and the new power it brings comes new responsibility; that the fragility, the sheer specialness of life requires us to move past our differences and to address our common problems, to endure and continue humanity’s strivings for a better world.

As President Kennedy said when he addressed the National Academy of Sciences more than 45 years ago: “The challenge, in short, may be our salvation.”

Thank you all for all your past, present, and future discoveries. May God bless you. God bless the United States of America.

From the Hill – Summer 2009

Climate change legislation advances

In a major step forward for advocates of climate change action, the House Energy and Commerce Committee on May 21 passed a bill that would create a national cap-and-trade system to reduce greenhouse gas (GHG) emissions.

The American Clean Energy and Security Act (H.R. 2454), approved by a 33 to 25 vote, would set a cap on GHG emissions, which would be reduced over time, and would create permits to emit GHGs, which would be traded in a new market aimed at reducing emissions in the most economically efficient manner. Emissions would have to be reduced 17% below 2005 levels by 2020 and 83% below 2005 levels by 2050. Initially, 85% of the emissions permits would be given away and 15% auctioned. The share of permits auctioned would increase over time. The decision to initially give away most of the permits results from attempts to win the support of moderate Democrats, many of whom come from states with coal-related industry and are worried about the effects of the bill on their local economies.
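
To put the cap arithmetic in concrete terms, the short sketch below applies the bill’s stated reduction targets and initial permit split to a hypothetical 2005 emissions baseline; the baseline figure and all names in the code are illustrative assumptions, not numbers taken from H.R. 2454.

    # Illustrative sketch of the cap arithmetic in H.R. 2454 (hypothetical 2005 baseline).
    BASELINE_2005_MMT = 7200.0  # assumed 2005 U.S. GHG emissions, million metric tons CO2-equivalent

    def cap(reduction_below_2005):
        # Emissions cap implied by a fractional reduction from the 2005 baseline.
        return BASELINE_2005_MMT * (1.0 - reduction_below_2005)

    cap_2020 = cap(0.17)  # 17% below 2005 levels by 2020
    cap_2050 = cap(0.83)  # 83% below 2005 levels by 2050

    # Initial allocation of permits under an example year's cap: 85% given away, 15% auctioned.
    free_permits = 0.85 * cap_2020
    auctioned_permits = 0.15 * cap_2020

    print(f"2020 cap: {cap_2020:.0f} MMT; 2050 cap: {cap_2050:.0f} MMT")
    print(f"Example split: {free_permits:.0f} MMT given away, {auctioned_permits:.0f} MMT auctioned")

Because the share of permits auctioned would rise over time, the 85/15 split applies only to the program’s early years; the sketch is meant to convey the scale of the numbers, not to model the bill.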

Although a climate change bill has now passed one committee, it is not clear when and if the legislation will reach the House floor. Eight other committees have claimed jurisdiction over the issue. Nonetheless, Majority Leader Steny Hoyer (D-MD) has said that he expects the bill to be on the floor before the July 4 recess.

In another announcement with a potentially major effect on the climate change issue, the U.S. Environmental Protection Agency (EPA) said on April 17 that automobile emissions endanger public health and welfare and therefore must be regulated under the Clean Air Act. According to the EPA, “In both magnitude and probability, climate change is an enormous problem. The greenhouse gases that are responsible for it endanger public health and welfare within the meaning of the Clean Air Act.”

The EPA found that high concentrations of six gases (carbon dioxide, methane, nitrous oxide, hydrofluorocarbons, perfluorocarbons, and sulfur hexafluoride) due to human activities have caused increases in average temperatures and other climate changes. The EPA cited effects that include increases in drought, floods, heat waves, wildfires, sea-level rise, and intense storms, as well as harm to water resources, agriculture, wildlife, and ecosystems.

Once finalized, the EPA finding would trigger regulation under the Clean Air Act. Development of these regulations would be conducted through a separate rulemaking process, and no proposed rules are contained in the endangerment finding. The press release accompanying the finding notes the preference of President Obama and EPA administrator Lisa Jackson “for comprehensive legislation to address this issue and create the framework for a clean energy economy,” but EPA will be legally required to act in the absence of such a framework.

FY 2010 budget proposal backs more increases for R&D

President Obama’s fiscal year (FY) 2010 budget proposal includes a 3.6% increase for nondefense R&D and a 3.4% increase for basic research. Under the proposal, overall R&D spending would hit $147.6 billion, up 0.4% from FY 2009, excluding the stimulus funding in the American Recovery and Reinvestment Act (ARRA). Defense R&D would decline by 2%, and applied research would decline by 2.2%.
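
As a quick check on those top-line figures, the stated FY 2010 total and growth rate imply an FY 2009 base of roughly $147.0 billion; the few lines below show the arithmetic (a minimal sketch, with variable names of our own choosing rather than the budget’s).

    # Back out the implied FY 2009 R&D base from the stated FY 2010 total and growth rate.
    fy2010_total_billion = 147.6   # overall R&D, excluding ARRA stimulus funds
    growth_over_fy2009 = 0.004     # "up 0.4% from FY 2009"

    fy2009_base_billion = fy2010_total_billion / (1.0 + growth_over_fy2009)
    print(f"Implied FY 2009 R&D base: ${fy2009_base_billion:.1f} billion")  # about $147.0 billion

    # The same relation turns any of the stated percentage changes into a dollar estimate
    # once the corresponding FY 2009 base is known.
    def apply_change(base_billion, pct_change):
        return base_billion * (1.0 + pct_change)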

Agencies slated for significant R&D increases above the FY 2009 estimate (not including ARRA funds) include the National Aeronautics and Space Administration (NASA), up 10%; the National Science Foundation (NSF), up 9.4%; the National Institute of Standards and Technology (NIST), up 15.8%; and the Department of Education, up 18.9%. Some agencies would receive cuts: the Department of Defense (DOD), down 2.4%; the U.S. Department of Agriculture (USDA), down 6.32%; and the National Oceanic and Atmospheric Administration (NOAA), down 8%.

Some agency highlights:

NSF. With a goal of tripling the number of Graduate Research Fellowships by FY 2013, the administration would increase this program by 6% in FY 2010.

NASA. The Exploration Directorate would receive the biggest increase: $458 million, or 13%, to almost $4 billion, almost all of which is targeted to the Constellation Systems program. Although the R&D budget would rise by 10%, the Science Directorate would see a cut of 0.6%. The Aeronautics Directorate budget would rise by 1.4%.

Department of Energy (DOE). DOE’s R&D portfolio ($10.74 billion) would account for slightly more than 40% of the overall DOE budget, up from a 32% share in FY 2009. Spending at the Office of Science would rise by 3.5%, with all major areas of research in that office seeing increases. The Advanced Research Projects Agency-Energy (ARPA-E), which was first authorized in the America COMPETES Act of 2007, would receive $10 million for FY 2010, after receiving start-up funding of $400 million from the ARRA and $15 million in the final FY 2009 omnibus appropriation. DOE’s Office of Energy Efficiency and Renewable Energy (EERE) would receive a 39.4% increase to just over $2 billion.

Fossil energy programs would be cut 29.5% to $618 million, after being given a $3.4 billion boost in the ARRA. The majority of the ARRA funds would go toward carbon capture and storage and clean coal initiatives, whereas the reduction in the FY 2010 request reflects congressionally designated projects funded in 2009 that will not be continued in 2010. Nuclear energy R&D would be cut by 22% to $403 million in order to allow for an increased focus on renewable energy programs. Funding would be eliminated for the Yucca Mountain nuclear waste repository, with some funding redirected to studying alternatives to the Yucca Mountain site.

National Institutes of Health (NIH). NIH’s R&D budget would rise 1.5% to $30.2 billion. A key administration priority is to double funding for cancer research over eight years. The FY 2010 budget request includes a 6% increase to $6 billion. Another priority is research into the causes and treatments of autism spectrum disorders, which would receive $141 million.

NIST. Once targeted for elimination by the Bush administration, two programs would receive generous increases. The budget for the Technology Innovation Program (a descendant of the Advanced Technology Program) would increase 7.5% to almost $70 million, and the Manufacturing Extension Partnership program would receive almost $125 million, up 13%.

NOAA. The $568 million in R&D spending includes increases in initiatives on ocean acidification, drought early warning, models for decadal climate predictions, and priorities in the Ocean Research Priorities Plan.

DOD. The R&D portfolio would decrease $1.9 billion to $79.7 billion because of the proposed cancellation of a number of weapon programs. DOD’s basic research would remain at the FY 2009 enacted levels for a total request of $1.8 billion, whereas applied research would decrease to $4.25 billion in FY 2010. Medical Research programs at DOD would decline by 32% to $613 million.

Department of Homeland Security (DHS). The R&D portfolio would increase $29 million to $1.1 billion. The Science and Technology Directorate would receive an overall boost of approximately 4% to $968 million.

USDA. Overall R&D funding would fall 6.2% to $2.3 billion. The White House Office of Science and Technology Policy said, however, that the decrease stems from congressionally earmarked projects that will not continue in FY 2010. Specific areas of R&D at USDA that will see the largest funding increases include biomass R&D, which would jump 40% to $28 million, and research on organic agriculture, which would rise by 11% to $20 million.

Department of the Interior. R&D activities would receive $730 million in FY 2010, up 5.5%. The largest portion of this total is accounted for by the U.S. Geological Survey, which would receive a 6.2% increase to $649 million. Global Change Science would receive $58.2 million, 43% higher than in FY 2009 and more than double its FY 2008 budget.

Environmental Protection Agency (EPA). The R&D portfolio would increase 7% to $619 million.

Department of Transportation (DOT). R&D funding would hit $939 million, up 3%. Within DOT, the Engineering, Research, and Development Fund at the Federal Aviation Administration would rise 5.3% to $180 million, and funding for vehicle safety research and highway safety R&D at the National Highway Traffic Safety Administration would rise by 1.7% to $237 million.

R&D in the FY 2010 Budget by Agency

(budget authority in millions of dollars)

Columns, left to right: FY 2008 Actual | FY 2009 Estimate | FY 2009 ARRA* | FY 2010 Budget | FY 09-10 Change (Amount) | FY 09-10 Change (Percent)

Total R&D (Conduct and Facilities)

Department of Defense 80,278 81,616 300 79,687 -1,929 -2.4%

Dept. of Health and Human Services 29,265 30,415 11,103 30,936 521 1.7%

Nat’l Institutes of Health 28,547 29,748 10,400 30,184 436 1.5%

All Other HHS R&D 718 667 703 752 85 12.7%

NASA 11,182 10,401 925 11,439 1,038 10.0%

Department of Energy 9,807 10,621 2,446 10,740 119 1.1%

National Science Foundation 4,580 4,857 2,900 5,312 455 9.4%

Department of Agriculture 2,336 2,421 176 2,272 -149 -6.2%

Department of Commerce 1,160 1,292 411 1,330 38 2.9%

NOAA 625 700 1 644 -56 -8.0%

NIST 498 550 410 637 87 15.8%

Department of the Interior 683 692 74 730 38 5.5%

U.S. Geological Survey 586 611 74 649 38 6.2%

Department of Transportation 875 913 0 939 26 2.8%

Environmental Protection Agency 551 580 0 619 39 6.7%

Department of Veterans Affairs 960 1,020 0 1,160 140 13.7%

Department of Education 313 323 0 384 61 18.9%

Department of Homeland Security 995 1,096 0 1,125 29 2.6%

All Other 761 818 0 947 129 15.8%

Total R&D 143,746 147,065 18,335 147,620 555 0.4%

Defense R&D 84,337 85,426 300 83,760 -1,666 -2.0%

Non-defense R&D 59,409 61,639 18,035 63,860 2,221 3.6%

Non-defense R&D excluding NASA 48,227 51,238 17,110 52,421 1,183 2.3%

Basic Research 28,613 29,881 11,365 30,884 1,003 3.4%

Applied Research 27,413 28,766 1,920 28,139 -627 -2.2%

Total Research 56,026 58,647 13,285 59,023 376 0.6%

Development 83,254 83,887 1,408 84,054 167 0.2%

R&D Facilities and Equipment 4,466 4,531 3,642 4,543 12 0.3%

Source: AAAS, based on OMB and OSTP data for R&D for FY 2010, agency budget justifications, and information from agency budget offices.

Note: The projected inflation rate between FY 2009 and FY 2010 is 1.0 percent.

FY 2010 figures exclude pending supplementals.

* Based on preliminary distribution of funding from the American Recovery and Reinvestment Act of 2009 (P.L. 111-5). Figures may change.
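
As a quick check on the change columns above, here is a minimal Python sketch that recomputes the FY 2009-to-FY 2010 differences from the budget-authority figures in the table (values in millions of dollars); the subset of agencies is chosen only for illustration.

    # Recompute the FY 09-10 change columns from the table above (millions of dollars).
    fy2009 = {"DOD": 81_616, "NIH": 29_748, "NASA": 10_401, "NSF": 4_857, "USDA": 2_421}
    fy2010 = {"DOD": 79_687, "NIH": 30_184, "NASA": 11_439, "NSF": 5_312, "USDA": 2_272}

    for agency in fy2009:
        change = fy2010[agency] - fy2009[agency]
        percent = 100 * change / fy2009[agency]
        print(f"{agency}: {change:+,} ({percent:+.1f}%)")
    # Output matches the table: DOD -2.4%, NIH +1.5%, NASA +10.0%, NSF +9.4%, USDA -6.2%.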

NIH releases guidelines for stem cell research

On April 17, the National Institutes of Health (NIH) released draft guidelines for federal funding of human embryonic stem cell research, just over a month after President Obama signed an executive order expanding federal support for the research.

According to the guidelines, NIH would permit funding for research on stem cells derived from embryos left over from fertility treatments, provided that certain conditions are met. Because provisions in annual appropriations bills prevent NIH from funding the destruction or creation of embryos, the actual derivation of the cells must be done in the private sector. In addition, NIH does not plan to fund research on embryos created specifically for research or on stem cells derived by research-cloning techniques or by parthenogenesis (a method that uses unfertilized egg cells). Acting Director Raynard Kington justified the approach by stating that the method approved by NIH has broad public support.

Scientists appeared to be divided in their opinions of the new rules, with some applauding the guidelines as a step forward and others disappointed that they did not allow funding for enough types of research. Obama’s executive order mandates that NIH periodically revisit the guidelines.

The guidelines would require strict informed consent provisions that appear to be modeled largely on NIH guidelines from 2000 as well as guidelines devised by the National Academies in 2005. Donors cannot receive money or other incentives for their embryos, and the decision to donate must be free of the influence of researchers and separate from the decision to seek fertility treatments. Researchers and their institutions must provide documentation for several requirements, including that the donor was aware of all options for use of the embryos, that the donor understood what would occur to the embryos in research, and that the donor was not able to direct use of the stem cells to any particular individual’s medical care.

Although virtually all science organizations recognize the importance of informed consent rules, the fact that the specific requirements for the documentation of informed consent have changed over the years has made many groups nervous about the eligibility of stem cell lines that are already in use. The American Association for the Advancement of Science, the International Society for Stem Cell Research, and other groups have asked NIH to grandfather in stem cell lines that met the ethical requirements in place at the time of their derivation, including the lines that were eligible for funding under Bush administration policy.

NASA to review human space flight activities

The White House said it would organize a review of human space flight activities at the National Aeronautics and Space Administration (NASA), and President Obama nominated a new leadership team for the agency.

On May 23, Obama nominated Gen. Charles Bolden as the next administrator of NASA. Bolden, who flew four times on the space shuttle, would be the second astronaut at the helm of the agency and the first African American. The president also nominated former NASA official and campaign adviser Lori Garver as deputy administrator.

On May 7, John Holdren, the president’s science advisor and director of the Office of Science and Technology Policy (OSTP), said an independent review commission will be chaired by Norman Augustine, a former Lockheed Martin chief executive who led a NASA review in 1990. NASA acting administrator Christopher Scolese will name the other members of the panel in consultation with OSTP.

Holdren said the commission would assess how best to support use of the International Space Station and planned missions to the Moon and other destinations; how to stimulate commercial space flight capabilities; and how best to fit NASA exploration activities into the agency’s budget. The panel will also assess the amount of R&D and complementary robotic activity needed for human space flight and evaluate opportunities for missions extending International Space Station operations beyond 2016. It will present its findings by August 2009.

Meanwhile, Scolese testified at three congressional hearings and answered questions about the five-year gap that NASA anticipates between the scheduled retirement of the space shuttle in 2010 and the advent of a new human spaceflight vehicle. NASA has planned eight more missions before the shuttle is retired, but members of Congress are skeptical that the plan can be completed. After the shuttle’s retirement, the Russian Soyuz vessel will transport astronauts to and from the International Space Station until at least 2015. “There is no Plan B,” Scolese said, noting that in addition to its standard transportation capabilities, the Soyuz could function as an escape vehicle if necessary.


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science (www.aaas.org/spp) in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

The Sustainability Transition

One of the greatest challenges confronting humanity in the 21st century is sustainability: how to meet the basic needs of people for food, energy, water, and shelter without degrading the planet’s life support infrastructure, its atmosphere and water resources, the climate system, and species and ecosystems on land and in the oceans on which we and future generations will rely. Although the precise definition of sustainability continues to be discussed and debated, general agreement has emerged about what areas deserve most attention, and actions are being taken in all of them. Although we won’t meet the sustainability goal overnight, humanity is beginning to make decisions based on criteria that show concern both for people and for our life support systems. We are embarked on a transition toward sustainability.

With a still-growing human population, rapidly increasing consumption, and ever-increasing stresses on the environmental services on which we rely, however, this transition needs to accelerate. The engagement of the science and technology (S&T) community will be essential, though not sufficient, for achieving that acceleration. Like the fields of medical science and agricultural science, the emerging field of sustainability science is not defined by disciplines but rather by problems to be addressed and solved. It encompasses and draws on elements of the biophysical and social sciences, engineering, and medicine, as well as the humanities, and is often multi- and interdisciplinary in effort. The substantive focus of sustainability science is on the complex dynamics of the coupled human/environment system. The field reaches out to embrace relevant scholarship on the fundamental character of interactions among humans, their technologies, and the environment, and on the utilization of that knowledge by decisionmakers to address urgent problems of economic development and environmental and resource conservation.

Sustainability research is appearing in scientific journals and influencing some real-world decisions, and the field is now in need of a well-thought-through plan that engages the broad research, educational, and funding communities. Although sustainability, like all long-term societal challenges, will ultimately benefit from an S&T approach that “lets a thousand flowers bloom,” such an approach is not enough. We need solution-oriented, use-inspired R&D for a sustainability transition, and we need it fast; hence the need for a clear research plan. And although literally every discipline can and needs to contribute gems of scientific and technological knowledge to help meet the sustainability challenge, none can make sufficient progress working alone; thus, we need concerted efforts to bring those disciplines together to work on integrative challenges.

A research plan

The need for integrative, problem-focused research becomes clear when addressing some of the grand challenges of sustainability. Consider energy. The vast majority of the world’s energy is provided by fossil fuels, and demand for energy is rapidly increasing in the developing nations. Much attention has been paid to the “end of oil” and the security concerns about increasing worldwide competition for oil and gas, but the most critical and immediate sustainability challenge is the energy system’s effect on climate and on air and water pollution. Research endeavors that focus simply on new energy resources are critical, but from a sustainability standpoint, they’re just part of the puzzle. Research must focus at the interface of the technology/environment/social system to develop energy sources that reduce environmental consequences and are broadly implementable and available to the world’s poorest people. The challenge is to understand not just what new technologies are necessary, but also how to implement them in a way that avoids unintended consequences for people and the planet. Our recent experience with biofuels from food crops shows what can happen when we focus too narrowly on a specific energy goal in isolation from its interaction with food production, water and air pollution, trade, climate, and other environmental and social needs.

Another key challenge in sustainable development is biodiversity conservation. The disciplines of evolutionary biology and ecology have provided fundamental insights into factors that maintain species or prevent them from flourishing, but research on biodiversity is by itself not sufficient for sustainability science. Efforts that focus on the connections between biodiversity conservation and the economic and social needs of people are also needed. The emergence of research and on-the-ground efforts to account for benefits derived from ecosystems and use them explicitly for the well-being of people is a clear illustration of sustainability science.

The climate challenge similarly provides an illustration of the particular needs of sustainability science. A research endeavor that addresses climate change needs to focus not only on understanding change in the physical climate system through observations and models, but also on the ways in which people and ecosystems respond, adapt, and mitigate. Doing these things separately from each other, without a coherent program, leads to critical gaps. Moreover, many such challenges in the coupled human/environment system play out at different scales. Efforts focused on evaluating and reducing the vulnerability of human/environment systems to climate change, for example, require interdisciplinary efforts at local, regional, and national scales, with vulnerability potentially playing out very differently across space, scale, and time.

As these examples suggest, research for the sustainability transition needs to be integrative and coordinated. Great progress in one narrow area does not ensure success. We need to make sure we are covering the right research territory so that solutions are possible and new technologies and approaches can actually be useful and used. This will require a coordinated research endeavor with a balanced allocation of research funds.

Such a coordinated plan is important not just for its near-term practical benefits in challenging resource areas, but also for the fundamental development of the field and its longer-term advances. Although sustainability science addresses a broad and sometimes seemingly unrelated range of specific science needs, it is linked by core themes and questions that emerge no matter what set of resources or environmental challenges is being addressed. For example, questions about driving forces of change such as consumption, behavior and values, and population trends that underlie resource use and depletion are common to all. Likewise, questions about the responsiveness of the human/environment system—its vulnerability and adaptive capacity, its limits and thresholds—are relevant whether one is addressing the climate/energy nexus or the interactions between food security and environment. Underlying questions about institutions, incentives, and governance structures are critical across all. Although the core questions of sustainability science have been discussed in the literature and around the conference tables of the National Academy of Sciences and other institutions, there has been no recent international effort to outline the fundamental components of the research agenda for sustainability science. This needs to be a part of the five-year plan and perhaps can best be led by organizations such as the U.S. National Academies and their international equivalents.

Institutional changes

The research agenda of sustainability science will be an enormous one. Mobilizing the S&T community to support the sustainability transition will entail a concerted effort and will require doing things differently than we typically have in the past. A number of changes are needed in our research institutions and in our funding organizations if we are to move more quickly down the path to sustainability.

For many academic institutions, the sustainability challenge is a particularly difficult one because it requires us to work together in ways that are not particularly well supported by the institutional structures of our universities; because it requires a focus on problem solving along with fundamental learning; and because it requires the S&T community to actively engage with decisionmakers rather than assume a one-way handoff of knowledge, followed by its automatic use. These difficulties suggest a number of necessary actions within academia.

First, our academic institutions need to find ways to facilitate interdisciplinary efforts that draw on the strengths of many different disciplines, allowing them to combine and integrate their knowledge around specific sustainability challenges. A number of universities are now engaged in experiments around this theme. Some have identified new schools or colleges within the university with the explicit role of interdisciplinary problem solving. Others have developed umbrella institutions that are meant to harness the dispersed disciplinary strengths of the university and facilitate and incentivize research interactions that integrate them. Still others have instituted free-standing centers that operate more or less independently from the academic portions of the university. And some have employed more than one approach. In many cases, these experiments provide room not just for the coming together of different disciplines, but also for the emergence of new interdisciplinary foci and the development and training of experts who work in them. We should learn from these experiments.

Increasingly, these universities are also training a next generation of leaders who understand and work within the broad context of sustainability; who sometimes carry the strengths of more than one discipline; and who can combine multiple disciplines, either themselves or through team efforts, to address questions that most of us who were trained in traditional disciplines would struggle to tackle alone. There is no doubt that demand for such interdisciplinary programs is on the rise, and at rates much greater than in many of the core disciplines.

Another key aspect of academic programs around sustainability science is the purposeful intent to link knowledge to action. Much of sustainability science is hard-core fundamental research, but the field is essentially use-inspired and is oriented toward decisionmaking of all sorts. Just as in the agricultural and medical fields, public outreach and knowledge extension are crucial aspects of sustainability science, yet most universities do not have well-honed mechanisms for the kind of dialogue and partnerships that are needed for sustainability science to be actually useful and used in decisionmaking. A multidirectional flow of information is required, both to help the academic community understand the key challenges from the decisionmaker’s perspective and to engage the academic community in integrative efforts that focus not just on the development of new innovations and approaches, but also on their actual implementation. Again, experiments are taking place with new kinds of research partnerships, dialogues and workshops, communication strategies, and the development of in-house “boundary organizations” that purposefully link researchers and decisionmakers. Such efforts are exceptionally challenging, especially to universities, because they represent costs for which there are no traditional sources of funds.

The linking of S&T to decisionmaking is made even more difficult because sustainability challenges differ by location as a function of characteristics of both the social and biophysical systems of the place. It is in specific places that the interactions among local- to global-scale environmental changes, public policies, geographic and resource endowments, and individual decisionmaking and action play out. This argues for place-based analytical frameworks and mechanisms that link work across spatial scales. This is more easily said than done; national-scale programs and centers established to address sustainability challenges may not be effective at the place-based scale if there is no regional or local entity to provide integration and connection with the local actors. There is a need for regional or state-level experiments in the development of sustainability resource and research centers and knowledge systems. Such centers could quite logically be partnerships of academic, public, and private institutions, as has been suggested in discussions about the possible organization of a national climate service.

Perhaps most important, sustainability challenges cannot be dealt with effectively if the federal research and mission agencies are not engaged. Understanding the fundamental functions of human/environment systems and their ability to adapt and respond to multiple environmental and social challenges has emerged as a critical scientific need, yet there is very little focused effort on this need in the agencies. The development of innovative knowledge, tools, and approaches that simultaneously address the needs of people while protecting environment and resources needs to be a focus of attention, but today is being done only piecemeal in various programs across numerous agencies. Again, one can look at the history of the development of biofuels in the United States to see the consequences of a lack of dialogue and coordination among agencies focused on energy, food, water resources, and environment. And within the area of climate science and policy, recent analyses have identified the critical gaps that have resulted from the lack of integration and coordination across the physical and social sciences.

This integration should be another critical step of the five-year plan to develop a coordinated interagency sustainability effort at the national scale, focused on fundamental research that is use-inspired and well linked to decision makers and that ultimately contributes to our understanding of the world and its sustainable management. If sustainability science has as its analog the health or agricultural sciences, perhaps the call should be for a new agency focused specifically on sustainability challenges. I believe, however, that the sustainability challenge requires a coordinated effort that includes all of us, in all fields and disciplines and all programs and agencies. Organizations that leave some of us on the outside run the risk of foregoing critical knowledge, tools, and perspectives. What is needed is a careful coordinating effort that ensures that we are taking into account all the dimensions of a problem. This will involve more than dialogue. It will require coordinated national and international R&D efforts; and in the United States, joint ventures, public/private partnerships, coordinated research programs, and the engagement of new players. Institutional change is hard, but it is needed today at local, national, and international levels if we are to successfully engage the S&T community in a transition to sustainability.

The Cloud, the Crowd, and Public Policy

The Internet is entering a new phase that represents a fundamental shift in how computing is done. This phase, called Cloud computing, includes activities such as Web 2.0, Web services, the Grid, and Software as a Service, which are enabling users to tap data and software residing on the Internet rather than on a personal computer or a local server. Some leading technologists have forecast that within 5 to 10 years, 80% or even 90% of the world’s computing and data storage will occur “in the Cloud.”

Although the move toward the Cloud is clear, the shape of the Cloud—its technical, legal, economic, and security details—is not. Public policy decisions will be critical in determining the pace of development as well as the characteristics of the Cloud.

The evolution of personal computing has occurred in three distinct phases. In phase 1, computers were standalone devices in which software and data were stored; typical applications were word processing and spreadsheets. Phase 2 was marked by the emergence of the World Wide Web, which made it possible to access a wealth of data on the Internet, even though most users still relied on software that ran on individual machines; the quintessential application was the Web browser. In phase 3, most software as well as data will reside on the Internet; a wide variety of applications will proliferate because users will no longer have to install applications software on their machines.

Most of the work we do with computers is still done using phase 1 or phase 2 tools, but more and more people, especially among the younger generation, are starting to take advantage of the power of the Cloud, which offers:

  • Limitless flexibility. By being able to use millions of different pieces of software and databases and combine them into customized services, users will be better able to find the answers they need, share their ideas, and enjoy online games, video, and virtual worlds.
  • Better reliability and security. No longer will users need to worry about the hard drive on their computers crashing or their laptops being stolen.
  • Enhanced collaboration. By enabling online sharing of information and applications, the Cloud provides new ways for working (and playing) together.
  • Portability. Users can access the data and tools they need anywhere they can connect to the Internet.
  • Simpler devices. Since both their data and the software they use reside in the Cloud, users don’t need a powerful computer to tap into it. A cell phone, a PDA, a personal video recorder, an online game console, a car, even sensors built into clothing could serve as the interface.

Cloud computing has the potential to reduce the cost and complexity of doing both routine computing tasks and computationally intensive research problems. By providing far more computing power at lower cost, Cloud computing could enable researchers to tackle hitherto impossible challenges in genome research, environmental modeling, analysis of living systems, and dozens of other fields. Furthermore, by enabling large distributed research teams to more effectively share data and computing resources, Cloud computing will facilitate the kind of multidisciplinary research needed to better understand ecosystems, global climate change, ocean currents, and other complex phenomena.

Combining the power of Cloud computing with data collected by thousands or even millions of inexpensive networked sensors will give scientists new and exciting ways to track how our planet and its ecosystems are changing. At the same time, such sensor nets will give entrepreneurs new ways to provide new services, ranging from traffic monitoring to tracking livestock to improving surveillance on the battlefield or in high-crime neighborhoods.

The government role

The pace of development and deployment of the Cloud will depend on many different factors, including how quickly the basic technology matures, how quickly the computer and telecommunications industries agree on standards, how aggressively companies invest in the needed infrastructure, how many cost-effective, compelling applications are developed, and how quickly potential users accept and adopt this new way of purchasing computing resources.

Government policy can influence each of these factors. And there are other ways in which governments can accelerate or hinder the growth of the Cloud. Just as the pace of development of the Internet has varied by country and industry, the pace of development of the Cloud will vary widely. The key policy factors that will influence the pace of progress include:

Research. Giving researchers around the world access to Cloud computing services will lead to a further internationalization of science and a broadening of the base of first-class research. It will make it much easier to participate directly in multi-site projects and to share data and results immediately.

But how this happens will depend on decisions made by government research agencies. Will they make the investments needed to provide Cloud services to a large portion of the research community? Or will separate Cloud initiatives be funded that are restricted to a narrow subset of researchers with especially large computational needs? Precommercial research is still needed on some of the building blocks of the Cloud, such as highly scalable authentication systems and federated naming schemes. Will there be sufficient funding for this critical R&D? Will government agencies (and the politicians who determine their budgets) be willing to fund Cloud services that will be increasingly international? Will they be willing to invest government money in international collaborative projects when the benefits (and funding) will be spread among researchers and businesses in several countries?

Privacy and security. Many of the most successful and most visible applications of Cloud computing today are consumer services such as e-mail services (Google Mail, Hotmail, and Yahoo Mail), social networks (Facebook and MySpace), and virtual worlds such as Second Life. The companies providing these services collect terabytes of data, much of it sensitive personal information, which is then stored in data centers in countries around the world. How these companies, and the countries in which they operate, address privacy issues will be a critical factor affecting the development and acceptance of Cloud computing.

Who will have access to billing records? Will government regulation be needed to allow anonymous use of the Cloud and to put strict controls on access to usage records of Cloud service providers?

Will government regulators be able to adapt rules on the use of private, personal information when companies are moving terabytes of sensitive information from employees and customers across national borders? Companies that wish to provide Cloud services globally must adopt leading-edge security and auditing technologies and best-in-class practices. If they fail to earn the trust of their customers by adopting clear and transparent policies on how their customers’ data will be used, stored, and protected, governments will come under increasing pressure to regulate privacy in the Cloud. And if government policy is poorly designed, it could stymie the growth of the Cloud and commercial Cloud services.

Access to the Cloud. Cloud computing has the potential to dramatically level the playing field for small and medium-sized businesses (SMBs) that cannot currently afford to own and operate the type of sophisticated information technology (IT) systems found in large corporations. Furthermore, SMBs will also be in a position to offer their local knowledge and specialized talents as part of other companies’ services. Likewise, researchers, developers, and entrepreneurs in every corner of the world could use Cloud computing to collaborate with partners elsewhere, share their ideas, expand their horizons, and dramatically improve their job prospects—but only if they can gain access to the Cloud. Telecommuters and workers who are on the road will also have access to the same software and data used by those in the office, provided that we increase broadband access in the home and over wireless connections.

As a result, development of the Cloud will increase pressure on governments to bridge the digital divide by providing subsidies or adopting policies that will promote investment in broadband networks in rural and other underserved areas. Unfortunately, the main impact of many previous efforts to promote network deployment has been to distort the market or protect incumbent carriers from competition. As Cloud computing becomes critical for a large percentage of companies, governments will need to find cost-effective ways to ensure that homes and businesses have affordable access to the Cloud no matter where they are located.

E-government and open standards. Cloud computing could provide huge benefits to governments. The Cloud is not a magic wand for solving hard computing and managerial problems, but it will reduce barriers to implementation, eliminate delays, cut costs, and foster interagency cooperation. A few pioneers, such as the government of Washington, DC, have already demonstrated the huge potential of Cloud computing for e-government. Vivek Kundra, then the chief technology officer for DC, led an effort to migrate thousands of DC government employees to Google e-mail and office software based in the Cloud. “Why should I spend millions on enterprise apps when I can do it at one-tenth the cost and ten times the speed?” he said in 2008. “It’s a win-win for me.”

Cloud computing will be particularly attractive to government users because of its increased reliability and security, lower maintenance costs, and increased flexibility. Running government operations on a unified Cloud infrastructure will be more secure and reliable, and less costly, than trying to maintain and manage hundreds of different systems. In addition, if done right, Cloud computing can help governments avoid being locked in to a small number of vendors.

Governments have the potential to be model users of Cloud computing. As the largest economic entity in most countries, government has the leverage to set standards and requirements that can influence actions throughout the economy. Just as U.S. federal government Web sites demonstrated the power of the Web and inspired state and local governments and companies to create online presences, national governments can be early adopters of Cloud computing, which would demonstrate and publicize the technology. But if governments are going to become early adopters of Cloud services, they must overcome bureaucratic, regulatory, and cultural barriers to resource sharing that could slow the adoption of Cloud computing. Government IT procurement rules covering purchase of hardware and software must be updated to enable purchase of Cloud services.

U.S. government procurement decisions in the 1980s, which led to the widespread use of the Internet Protocol to link together previously unconnected agency networks, were a critical driver at a crucial time in the development of the Internet. Likewise, major government users could play an important role by compelling industry to quickly reach consensus on open, international Cloud standards so that government suppliers, contractors, and partners would be able to easily tap into government-funded Cloud services.

Today, many different grid and Cloud architectures rely on incompatible proprietary software. Achieving the full potential of Cloud computing will require a “Cloud of Clouds”: different network-based platforms all linked together by common middleware, so that data and applications software residing on one company’s piece of the Cloud can be seamlessly combined with data and software on systems run by another Cloud service provider.
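
One way to picture that common middleware layer is a thin, provider-neutral interface that applications and data migrations are written against, with an adapter for each vendor's service behind it. The Python sketch below is a hypothetical illustration of the idea; ProviderA and ProviderB are invented stand-ins, not real vendor APIs or an existing standard.

    # Hypothetical sketch of a provider-neutral storage interface (a "Cloud of Clouds").
    # ProviderA and ProviderB are illustrative stand-ins, not real vendor APIs.
    from abc import ABC, abstractmethod

    class CloudStore(ABC):
        """Common interface that application code targets, regardless of provider."""
        @abstractmethod
        def put(self, key: str, data: bytes) -> None: ...
        @abstractmethod
        def get(self, key: str) -> bytes: ...

    class ProviderA(CloudStore):
        def __init__(self):
            self._objects = {}  # stand-in for provider A's object store
        def put(self, key, data):
            self._objects[key] = data
        def get(self, key):
            return self._objects[key]

    class ProviderB(CloudStore):
        def __init__(self):
            self._blobs = {}  # stand-in for provider B's blob service
        def put(self, key, data):
            self._blobs[key] = data
        def get(self, key):
            return self._blobs[key]

    def migrate(src: CloudStore, dst: CloudStore, keys) -> None:
        """Because both providers honor the same interface, data can move between them."""
        for key in keys:
            dst.put(key, src.get(key))

    store_a, store_b = ProviderA(), ProviderB()
    store_a.put("report.txt", b"quarterly data")
    migrate(store_a, store_b, ["report.txt"])
    assert store_b.get("report.txt") == b"quarterly data"

The same pattern is what would make the migration plans discussed below practical: a customer that codes to the shared interface can switch Cloud service providers without rewriting its applications.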

Competition and antitrust. The structure of the Cloud will be defined over the next few years as key players establish the standards and technologies for Cloud services and as business models and business practices evolve. Perhaps the most important factor determining how the Cloud evolves is whether one company or a handful of companies are able to achieve a dominant position in the market for Cloud services or whether the Cloud becomes an open interoperable system where hundreds or even thousands of different companies are able to build and run part of an interlinked, interoperable Cloud capable of running different applications developed by millions of developers around the globe.

With the Internet, strong economic benefits and customer demand both pushed network service providers to link their different networks and create a network of networks. The situation may not be as clear-cut with the Cloud, and some companies building the infrastructure of the Cloud may be able to use economies of scale, ownership of key intellectual property, and first-mover advantage to block or slow competitors. Governments will need to watch carefully to see that companies do not use their dominant position in one sector of the IT or telecommunications market to gain an unfair advantage in the market for Cloud services. A Cloud built by only one or two companies and supporting only a limited set of applications would not be in the best interest of either individuals or corporate customers.

Governments need to take cautious rather than radical actions at this time, and to promote open international standards for the Cloud so that users will be able to switch Cloud service providers with a minimum of cost and risk. Flexible, far-sighted government policy and procurement decisions could promote interoperability, without dictating a particular architecture or set of standards for the Cloud. Since the Cloud is still evolving rapidly, governments need to allow and encourage different companies and groups to experiment. For instance, in government procurements for cloud services, governments can require interoperability and migration plans in case an agency wishes to change Cloud service providers at a later date, without specifying a particular standard or a particular company’s service. In the 1980s and 1990s, when personal computers were being widely adopted, some governments took the wrong approach; they chose Microsoft Word as their government-wide word-processing standard, rather than embracing an open standard such as the Open Document Format and requiring all vendors to support it. Later, some of those same governments had to resort to antitrust actions against the Microsoft monopoly they helped create.

Wiretapping and electronic surveillance. One of the thorniest issues related to the Cloud may be electronic surveillance, particularly when it spans international borders. In the United States, citizens are protected by the Constitution against unreasonable search and seizure. In most cases, the police must get a search warrant to examine data on someone’s home computer. It is not at all clear that the same data are protected if they are backed up in a data center in the Cloud, particularly if that data center is in another country. And if the situation within the United States is unclear, it is even less clear how and when U.S. or other intelligence services can access data from noncitizens stored in the Cloud. If users believe that governments will be monitoring their activities, their willingness to use the Cloud for important functions will surely decrease.

Intellectual property and liability. Related to the question of wiretapping is whether governments will try to enforce laws against online piracy in ways that limit or slow the development of Cloud services. By giving customers access to almost unlimited computing power and storage, Cloud services could make it even easier to share copyrighted material over the Internet. Will Cloud service providers be required to take special measures to prevent that? Will they be liable for illegal activities of their customers? Would doing so make it impractical for companies to provide Cloud services to the general public?

Consumer protection. If companies and individuals come to rely on Cloud services such as e-mail, word processing, and data backup, and then discover that the services are down for a protracted period of time, or worse, that their data are lost, they will seek recourse—most likely in court. If the reliability of Cloud services becomes a serious problem, state and national governments may step in to ensure that customers get the service they expect.

What kind of liability will a company that provides Cloud services be expected to assume in the event that there are serious outages? If a program running in the Cloud malfunctions, it could affect other users. Yet tracking problems in the Cloud and assigning responsibility for failures will be difficult. The Internet is already causing telecommunications companies and the courts to adopt new approaches to assigning liability for outages and security breaches.

Crafting a consistent global approach to this problem will not be easy, but if it can be done, it could increase consumer trust and significantly accelerate the adoption of Cloud services. Given the difficulty of finding an international governmental approach to consumer protection in the Cloud, a global self-regulatory approach based on best practices, insurance, and contract law may be faster, more flexible and adaptable as technology evolves and new services are offered, and more effective.

Taking the lead

Governments will play a critical role in shaping the Cloud. They can foster widespread agreement on standards, not only for the basic networking and Cloud communication protocols, but also for service-level management and interaction. By using the power of the purse in their IT procurement policies, governments can pressure companies to find consensus on the key Cloud standards.

Governments need to assess how existing law and regulations in a wide range of areas will affect the development of the Cloud. They must both “future-proof” existing law and ensure that new policy decisions do not limit the potential of this revolutionary new approach to computing.

The greatest concern would be premature regulation. The Cloud will be a fundamental infrastructure for the economy, national security, and society in general. A natural reaction would be to demand uniformly high quality and to regulate a number of features and services that use it. But without a lot more experience, we simply do not know enough about what the right set of underlying services will be, what differences in price and quality of service are appropriate, what techniques will be best for providing reliable service, and where the best engineering tradeoffs lie.

Governments can add value by encouraging experimentation and new services. They must avoid locking in the wrong technology, which will either put a country at a competitive disadvantage or reduce the value of the Cloud as a whole. Governments must follow industrial practice as much as possible rather than mandating untried solutions.

Like the Internet itself, the Cloud is a disruptive technology that challenges existing business models, institutions, and regulatory paradigms. As a result, there is likely to be resistance from many different quarters to the widespread deployment of Cloud technologies. Governments must be willing to challenge and change existing policies that could be used to hinder the growth of the Cloud. Simply trying to adapt existing regulations to the Cloud might allow entrenched interests to significantly delay the investment and effort needed for widespread use of Cloud computing. Because Cloud computing is a fundamentally different approach to computing and communications, governments should consider fundamentally new approaches to telecommunications and information policy.

Many of the public policy issues, including privacy, access, and copyright protection, raised by Cloud computing are similar to Internet policy issues that governments have been struggling with for at least 15 years. However, addressing these issues for the Cloud will be at least twice as difficult—and five times more important. Because the Cloud is inherently global, policy solutions must be cross-jurisdictional. Because the Cloud is a many-to-many medium, it is not always easy to determine who’s responsible for what. And because the Cloud technology and Cloud applications are evolving so quickly, government policy must be flexible and adaptable. Because the challenges are so great and the opportunities so widespread, it is imperative that policymakers and the technologists developing the Cloud start now to look for innovative technical and policy solutions.

High-Performance Computing for All

The United States faces a global competitive landscape undergoing radical change, transformed by the digital revolution, globalization, the entry of emerging economies into global commerce, and the growth of global businesses. Many emerging economies seek to follow the path of the world’s innovators. They are adopting innovation-based growth strategies, boosting government R&D, developing research parks and regional centers of innovation, and ramping up the production of scientists and engineers.

As scientific and technical capabilities grow around the world, the United States cannot match the traditional advantages of emerging economies. It cannot compete on low wages, commodity products, standard services, and routine or incremental technology development. Knowledge and technology are increasingly commodities, so rewards do not necessarily go to those who have a great deal of these things. Instead, rewards go to those who know what to do with knowledge and technology once they get it, and who have the infrastructure to move quickly.

These game-changing trends have created an “innovation imperative” for the United States. Its success in large measure will be built not on making small improvements in products and services but on transforming industries; reshaping markets and creating new ones; exploiting the leading edge of technology creation; and fusing diverse knowledge, information, and technology to totally transform products and services.

The future holds unprecedented opportunities for innovation. At least three profound technological revolutions are unfolding. The digital revolution has created disruptive effects and altered every industrial sector, and now biotechnology and nanotechnology promise to do the same. Advances in these fields will increase technological possibilities exponentially, unleashing a flood of innovation and creating new platforms for industries, companies, and markets.

In addition, there is a great and growing need for innovation to solve grand global challenges such as food and water shortages, pandemics, security threats, climate change, and the global need for cheap, clean energy. For example, the energy and environmental challenges have created a perfect storm for energy innovation. We can move to a new era of technological advances, market opportunity, and industrial transformation. Energy production and energy efficiency innovations are needed in transportation, appliances, green buildings, materials, fuels, power generation, and industrial processes. There are tremendous opportunities in renewable energy production, from utility-scale systems and distributed power to biofuels and appropriate energy solutions for the developing world.

Force multiplier for innovation

Modeling and simulation with high-performance computing (HPC) can be a force multiplier for innovation as we seek to answer these challenges and opportunities. A simple example illustrates this power. Twenty years ago, when Ford Motor Company wanted safety data on its vehicles, it spent $60,000 to slam a vehicle into a wall. Today, many of those frontal crash tests are performed virtually on high-performance computers, at a cost of around $10.

Imagine putting the power and productivity of HPC into the hands of all U.S. producers, innovators, and entrepreneurs as they pursue innovations in the game-changing field of nanotechnology. The potential exists to revolutionize the production of virtually every human-made object, from vehicles to electronics to medical technology, with low-volume manufacturing that could custom-fit products for every conceivable use. Imagine the world’s scientists, engineers, and designers seeking solutions to global challenges with modeling, simulation, and visualization tools that can speed the exploration of radical new ways to understand and enhance the natural and built world.

These force-multiplying tools are innovation accelerators that offer an extraordinary opportunity for the United States to design products and services faster, minimize the time to create and test prototypes, streamline production processes, lower the cost of innovation, and develop high-value innovations that would otherwise be impossible.

Supercomputers are transforming the very nature of biomedical research and innovation, from a science that relies primarily on observation to a science that relies on HPC to achieve previously impossible quantitative results. For example, nearly every mental disease, including Alzheimer’s, schizophrenia, and manic-depressive disorders, in one way or another involves chemical imbalances at the synapses that disrupt synaptic transmission. Researchers at the Salk Institute are using supercomputers to investigate how synapses work (see http://www.compete.org/publications/detail/503/breakthroughs-in-brain-research-with-high-performance-computing/). These scientists have tools that can run a computer model through a million different simulations, producing an extremely accurate picture of how the brain works at the molecular level. Their work may open up pathways for new drug treatments.
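
The pattern behind running a million different simulations is an embarrassingly parallel parameter sweep, which is exactly the kind of workload HPC handles well. The Python sketch below is a generic, toy illustration of that pattern; it is not the Salk group's model or code, and every function and parameter in it is an assumption made for the example.

    # Generic sketch of an embarrassingly parallel simulation sweep (toy example only;
    # not the Salk Institute's model). Each run is independent, so it scales across cores.
    import random
    from multiprocessing import Pool

    def run_model(seed: int) -> float:
        """Toy stand-in for one simulation run: returns a single summary statistic."""
        rng = random.Random(seed)
        return sum(rng.gauss(0.0, 1.0) for _ in range(1_000)) / 1_000

    if __name__ == "__main__":
        n_runs = 10_000  # scale toward millions of runs on a real cluster
        with Pool() as pool:
            results = pool.map(run_model, range(n_runs))
        print(f"{n_runs} runs, mean statistic = {sum(results) / n_runs:.4f}")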

Farmers around the world need plant varieties that can withstand drought, floods, diseases, and insects, and many farmers are shifting to crops tailored for biofuels production. To help meet these needs, researchers at DuPont’s Pioneer Hi-Bred are conducting leading-edge research into plant genetics to create improved seeds (see http://www.compete.org/publications/detail/683/pioneer-is-seeding-the-future-with-high-performance-computing/). But conducting experiments to determine how new hybrid seeds perform can often take years of study and thousands of experiments conducted under different farm management conditions. Using HPC, Pioneer Hi-Bred researchers can work with astronomical numbers of gene combinations and manage and analyze massive amounts of molecular, plant, environmental, and farm management data. HPC cuts the time needed to answer research questions from days or weeks to a matter of hours, and it has enabled Pioneer Hi-Bred to operate a breeding program that is 10 to 50 times bigger than what would be possible without it, helping the company better meet some of the world’s most pressing needs for food, feed, fuel, and materials.

Medrad, a provider of drug delivery systems, magnetic resonance imaging accessories, and catheters, purchased patents for a promising interventional catheter device to mechanically remove blood clots associated with a stroke (see http://www.compete.org/publications/detail/497/high-performance-computing-helps-create-new-treatment-for-stroke-victims/). But before starting expensive product development activities, they needed to determine whether this new technology was even feasible. In the past, they might have made bench-top models, testing each one in trial conditions, and then moved to animal and human testing. But this approach would not efficiently capture the complicated interaction between blood cells, vessel walls, the clot, and the device. Using HPC, Medrad simulated the process of the catheter destroying the clots, adjusting parameters again and again to ensure that the phenomenon was repeatable, thus validating that the device worked. They were able to look at multiple iterations of different design parameters without building physical prototypes. HPC saved 8 to 10 months in the R&D process.

Designing a new golf club at PING (a manufacturer of high-end golf equipment) was a cumbersome trial-and-error process (see http://www.compete.org/publications/detail/684/ping-scores-a-hole-in-one-with-high-performance-computing/). An idea would be made into a physical prototype, which could take four to five weeks and cost tens of thousands of dollars. Testing might take another two to three weeks and, if a prototype failed to pass muster, testing was repeated with a new design to the tune of another $20,000 to $30,000 and six more weeks. In 2005, PING was using desktop workstations to simulate some prototypes. But one simulation took 10 hours; testing seven variations took 70 hours. PING discovered that a state-of-the-art supercomputer with advanced physics simulation software could run one simulation in 20 minutes. With HPC, PING can simulate what happens to the club and the golf ball when the two collide and what happens if different materials are used in the club. PING can even simulate materials that don’t currently exist. Tests that previously took months are now completed in under a week. Thanks to HPC, PING has accelerated its time to market for new products by an order of magnitude, an important benefit for a company that derives 85% of its income from new offerings. Design cycle times have been cut from 18 to 24 months to 8 to 9 months, and the company can produce five times more products for the market, with the same staff, factory, and equipment.
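
A quick back-of-the-envelope check of the PING figures quoted above, using only the numbers in the paragraph:

    # Back-of-the-envelope check of the simulation times cited for PING.
    desktop_hours_per_sim = 10      # one simulation on a desktop workstation
    hpc_minutes_per_sim = 20        # the same simulation on the supercomputer
    speedup = desktop_hours_per_sim * 60 / hpc_minutes_per_sim  # = 30x per simulation

    variations = 7
    workstation_hours = variations * desktop_hours_per_sim      # 70 hours
    hpc_hours = variations * hpc_minutes_per_sim / 60           # about 2.3 hours

    print(f"Per-simulation speedup: {speedup:.0f}x")
    print(f"{variations} variations: {workstation_hours} h on a workstation vs {hpc_hours:.1f} h with HPC")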

At Goodyear, optimizing the design of an all-season tire is a complex process. The tire has to perform on dry, wet, icy, or snowy surfaces, and perform well in terms of tread wear, noise, and handling (see http://www.compete.org/publications/detail/685/goodyear-puts-the-rubber-to-the-road-with-high-performance-computing/). Traditionally, the company would build physical prototypes and then subject them to extensive environmental testing. Some tests, such as tread wear, can take four to six months to get representative results. With HPC, Goodyear reduced key product design time from three years to less than one. Spending on tire building and testing dropped from 40% of the company’s research, design, engineering, and quality budget to 15%.

Imagine what we could do if we could achieve these kinds of results throughout our research, service, and industrial enterprise. Unfortunately, we have only scratched the surface in harnessing HPC, modeling, and simulation, which remain largely the tools of big companies and researchers. Although we have world-class government and university-based HPC users, there are relatively few experienced HPC users in U.S. industry, and many businesses don’t use it at all. We need to drive HPC, modeling, and simulation throughout the supply chain and put these powerful tools into the hands of companies of all sizes, entrepreneurs, and inventors, to transform what they do.

Competing with computing

The United States can take steps to advance the development and deployment of HPC, modeling, and simulation. First, there must be sustained federal funding for HPC, modeling, and simulation research and its application across science, technology, and industrial fields. At the same time, the government must coordinate agency efforts and work toward a more technologically balanced program across Department of Energy labs, National Science Foundation–funded supercomputing centers, the Department of Defense, and universities.

Second, the nation needs to develop and use HPC, modeling, and simulation in visionary large-scale multidisciplinary activities. Traditionally, much federal R&D funding goes to individual researchers or small single-discipline groups. However, many of today’s research and innovation challenges are complex and cut across disciplinary fields. For example, the Salk Institute’s research on synapses brought together anatomical, physiological, and biochemical data, and drew conclusions that would not be readily apparent if these and other related disciplines were studied on their own. No matter how excellent they may be, small single-discipline R&D projects lack the scale and scope needed for many of today’s research challenges and opportunities for innovation.

Increasing multidisciplinary research within the academic community will require overcoming a host of barriers, such as single-discipline organizational structures; dysfunctional reward systems; a dearth of academic researchers collaborating with disciplines other than their own; the relatively small size of most grants; and traditional peer review, publication practices, and career paths within academia. Federal policy and funding practices can be used as levers to increase multidisciplinary research in the development and application of HPC, modeling, and simulation.

Third, the difficulty of using HPC, modeling, and simulation tools limits the number of users in academia, industry, and government. And because the user base is currently small, there is little incentive for the private sector to create simpler tools that could be used more widely. The HPC, modeling, and simulation community, including federal agencies that support HPC development, should work to create better software tools. Advances in visualization also would help users make better use of scientific and other valuable data. As challenges and the technologies to solve them become more complex, there is greater need for better ways to visualize, understand, manage, monitor, and evaluate this complexity.

Fourth, getting better tools is only half the challenge; these tools have to be put into the hands of U.S. innovators. The federal government should establish and support an HPC center or program dedicated solely to assisting U.S. industry partners in addressing their research and innovation needs that could be met with modeling, simulation, and advanced computation. The United States should establish advanced computing service centers to serve each of the 50 states to assist researchers and innovators with HPC adoption.

In addition, the nation’s chief executives in manufacturing firms of all sizes need information to help them better understand the benefits of HPC. A first step would be to convene a summit of chief executive officers and chief technical officers from the nation’s manufacturing base, along with U.S. experts in HPC hardware and software, to better frame and address the issues surrounding the development and widespread deployment of HPC for industrial innovation and next-generation manufacturing.

If it takes these steps, the United States will be far better positioned to exploit the scientific and technological breakthroughs of the future and to fuel an age of innovation that will bring enormous economic and social benefits.

After the Motor Fuel Tax: Reshaping Transportation Financing

Congress will soon begin considering a new transportation bill that is expected to carry a price tag of $500 billion to $600 billion to support a huge number of projects nationwide. Public debate over the bill is certain to be intense, with earmarks and “bridges to nowhere” being prominently mentioned. But what could become lost in the din is that Congress may well take an important first step in changing the very nature of how the nation raises funds to support its roads and other components of the transit system. Or Congress may lose its courage. If so, the nation will miss a critical opportunity to gain hundreds of billions of dollars in needed revenue for transportation, to reduce traffic congestion, and to price travel more fairly than has been the case for a century.

At issue is whether Congress will continue to rely on the federal motor fuel tax and other indirect user fees as the primary source of revenue for transportation projects, or whether it will begin a shift to more direct user fees. Many observers expect that Congress will step up to the job, but it is far from a done deal. If Congress does act, it will begin what is likely to be a decades-long transition to some form of direct charging on the basis of miles driven.

In its reliance on user fees to support transportation projects, the United States operates differently from most other nations. Most countries tax fuels and vehicles, but they put the proceeds into their general funds and pay for roads and transit systems from the same accounts they use for schools, health care, and other government programs. The United States has preferred to link charges and payments for the transportation system more directly, through a separate system of user-based financing. User fees include gasoline taxes, tolls, vehicle registration fees, and truck weight fees. User fees, imposed by all 50 states as well as the federal government, are intended to charge more to those who benefit from the transportation system and who also impose costs on the system by using it. At the federal level, the largest source of revenue from users for half a century has been the federal excise tax on gasoline and diesel fuel. Proceeds are kept separate from the general budget at the federal level and in most states. Revenues are deposited into separate trust funds, with this money reserved for building, operating, and maintaining transportation systems to directly benefit those who paid the fees. User fees at the federal level, for example, paid more than 90% of the cost of building the national interstate highway system.

One problem, however, is that the federal motor fuel tax, which is a major source of transportation system support, has not been raised for many years; it has been set at 18.4 cents per gallon since 1993. As the price of gasoline rose during this period, Congress proved reluctant to charge drivers more for road improvements. In fact, when the price of gasoline spiked recently, Congress briefly considered lowering the federal motor fuel tax but backed away after considering the enormous backlog of infrastructure needs and the deteriorating condition of the nation’s transportation system. In addition to losing value to inflation over time, motor fuel tax revenue is falling in relation to road use because of improved vehicle fuel economy. Higher miles-per-gallon ratings are good for the economy, energy independence, and reduced air pollution. But better fuel economy also means that motorists drive more miles with each fill-up at the pump and pay substantially less in fuel taxes per mile of driving than they did in past years.

Many supporters of transportation investments continue to believe that the best way to raise desperately needed money to maintain and expand highways and mass transit would be to raise those user fees rather than to turn to general taxes, which are also under stress and are used to fund many other critical programs. But the trend is in the opposite direction. Gradually, faced with a genuine national shortage of funds for transportation infrastructure because fuel taxes have not kept pace with costs, voters in several states have been asked to approve increases in sales taxes to fill the growing gap between transportation needs and the revenues available from user fees. Also, as the balance in the federal highway trust fund dipped below zero in September 2008, Congress approved a “one-time” transfer of $8 billion from the nation’s general fund into the trust fund to avoid the complete shutdown of federal highway programs. Another such transfer may soon be needed because the transit account within the trust fund is now approaching a zero balance as well.

A century of taxes

In their common form, motor fuel taxes were invented before 1920. With intercity auto and truck traffic growing dramatically, states were strapped in their efforts to pay from general funds for desperately needed highways. Because the need for and costs of state roads varied roughly in proportion to traffic levels, it made sense to cover the costs of those roads by charging the users. Tolls were considered at the time to be the fairest way to charge users, but they had a major drawback. The cost of collecting tolls—constructing toll booths, paying toll collectors, revenue losses from graft and pilfering, and delays imposed on travelers—absorbed such a large proportion of toll revenue that in some instances collection costs exceeded the revenue generated. Further, developing interconnected road networks required the construction and maintenance of expensive-to-build links (over waterways or through mountain passes) and some lightly used links that could not be financed entirely by locally generated toll revenues.

The solution to this dilemma came when states, starting with Oregon in 1918, adopted an alternative form of user fee: motor fuel taxes. The state charged for road use in rough proportion to motorists’ travel, and charged heavier vehicles more than lighter vehicles because they used more fuel per mile of travel. Still, fuel taxes did not quite match tolls in terms of fairness, because they did not levy charges at precisely the time and place of road use. However, fuel taxes cost much less to collect and administer than tolls, and they soon became the nation’s principal means of financing its main roads. When the federal government decided in 1956 to implement intercity highways on a national scale, it increased federal fuel taxes and created the Federal Highway Trust Fund, emulating the user-pays principle that had been successful in the states.

Recently, however, two major changes suggest that even if the people and government of the United States prefer to continue to rely on user-based financing, the time may have come to end reliance on motor fuel taxes and to introduce a new approach. The first change is the result of recent improvements in technology. There is no longer a need to rely on toll booths and the manual collection of coins and bills to implement a more direct system of user fees. By charging road users more precisely for particular trips at particular times on specific roads, electronic toll collection—known in some regions as E-ZPass and FasTrak—is efficient and widely accepted by motorists.

The second change is more subtle but probably more important. Reliance on the taxation of motor fuels as a source of program revenue in an era of growing concern about fuel efficiency and greenhouse gas emissions creates an unacceptable conflict among otherwise desirable public policy goals. Although higher taxes on fuels might in the near term generate more revenue and encourage the production of more fuel-efficient vehicles that emit less carbon dioxide, the seemingly beneficial relationship between taxation and the achievement of environmental goals breaks down in the longer term. If the nation succeeds in encouraging the vast majority of truckers and motorists to rely on plug-in hybrids and, later, on electric vehicles, vehicles powered by fuel cells, or even vehicles powered by solar energy, it will still be necessary to pay for road construction, maintenance, and transit systems. It can be argued that users should still logically be responsible for bearing their costs, even if they drive nonpolluting vehicles. The nation should not continue programs that discourage government pursuit of dramatic gains in energy efficiency over the longer term for fear that it will lose the revenue needed to build and operate highways and mass transit systems. And quite simply, the nation cannot rely on the gas tax as a road user fee when cars are no longer powered by gasoline.

The road to direct user charges

Motor fuel taxes can continue to provide a great deal of needed revenue for a decade or two. But several types of more efficient, and more equitable, user charges are ready to be phased in. For example, current technology will enable government agencies to institute vehicle miles traveled (VMT) charges as flat per-mile fees. Gradually, agencies could charge higher rates on some roads and lower rates on others to reflect more accurately than do fuel taxes the costs of providing facilities over different terrain or of different quality. This would end cross subsidies of some travelers by others and make travel more efficient by encouraging the use of less congested roads. Unlike gasoline taxes, more direct road user charges also could vary with time of day, encouraging some travelers to make a larger proportion of their trips outside of peak periods, easing rush hour traffic.
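
To make the rate structure concrete, here is a minimal sketch of how such a per-mile charge might be computed. The road classes, peak hours, and rates are purely hypothetical illustrations, not values drawn from any actual proposal or trial.

    # Minimal sketch of a vehicle-miles-traveled (VMT) charge that varies by
    # road class and time of day. All categories and rates are hypothetical.

    PEAK_HOURS = set(range(7, 10)) | set(range(16, 19))  # 7-9 a.m. and 4-6 p.m.

    RATES_CENTS_PER_MILE = {
        ("urban_freeway", True): 10.0,   # congested road during peak hours
        ("urban_freeway", False): 4.0,
        ("rural_highway", True): 2.0,    # flat rate where congestion is rare
        ("rural_highway", False): 2.0,
    }

    def trip_charge_cents(road_class, hour_of_day, miles):
        """Charge for one trip segment, in cents."""
        peak = hour_of_day in PEAK_HOURS
        return RATES_CENTS_PER_MILE[(road_class, peak)] * miles

    # The same 12-mile freeway commute costs more at 8 a.m. than at noon.
    print(trip_charge_cents("urban_freeway", 8, 12))   # 120.0
    print(trip_charge_cents("urban_freeway", 12, 12))  # 48.0

A revenue-neutral introduction of the kind described below would amount to calibrating a single flat rate so that the average driver initially pays about what the fuel tax now collects per mile; differentiated rates could then be phased in gradually.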

In the short term, direct user fees could simply replace fuel taxes in a revenue-neutral switch, but they are attractive, in part, because they can become more lucrative as travel increases, while allowing charges to be distributed more fairly among road users. Initially, some vehicle operators might be allowed to continue paying motor fuel taxes rather than the newer direct charges, but eventually gas and diesel taxes would be phased out.

Several countries in Europe already are electronically charging trucks directly for miles they drive on major highways, and the Netherlands intends to expand its program to passenger cars. In the United States, Oregon and the Puget Sound Regional Council in the Seattle area have conducted operational trials demonstrating the feasibility of VMT fees, and the University of Iowa is carrying out six additional trials in other parts of the country. The results of these trials are quite encouraging, but questions remain, including questions about optimal technologies.

One thing is clear: Innovation is afoot. In the Oregon trial, for example, a clever innovation allowed drivers of vehicles equipped for the trial program to “cancel” their ordinary fuel taxes when filling up their tanks at service stations and to instead charge VMT fees as part of the bill. This enabled participating and nonparticipating vehicles to function in similar ways.

The most sophisticated trial systems make use of vehicles that are equipped with global positioning system (GPS) satellite receivers and digital maps that enable charges to be varied across political boundaries, by route, and by time of day. But GPS signals are not always available, so these systems also incorporate redundant means of metering mileage. For example, they may have a connection to the vehicle odometer or a link to an onboard diagnostic port that has been included in cars manufactured since 1996 to comply with environmental regulations. None of these systems is perfect, all have implementation costs, and not every vehicle is yet equipped to accommodate each device.
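
As a rough illustration of that redundancy, the sketch below prefers GPS-derived distance and falls back to the change in the odometer reading when no GPS fix is available. The data layout is hypothetical and is not drawn from any deployed system.

    # Hypothetical sketch of redundant mileage metering: prefer miles computed
    # from GPS positions; fall back to the odometer delta (e.g., read through
    # the onboard diagnostic port) when no GPS fix is available.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class MeterSample:
        gps_miles: Optional[float]  # miles derived from GPS for this interval, or None
        odometer_miles: float       # cumulative odometer reading at the end of the interval

    def metered_miles(prev: MeterSample, curr: MeterSample) -> float:
        """Miles traveled in the interval, using the odometer only as a backup."""
        if curr.gps_miles is not None:
            return curr.gps_miles
        return curr.odometer_miles - prev.odometer_miles

    # GPS drops out in the second interval, so the odometer delta (7.5 miles) is billed.
    a = MeterSample(gps_miles=5.2, odometer_miles=10_005.0)
    b = MeterSample(gps_miles=None, odometer_miles=10_012.5)
    print(metered_miles(a, b))  # 7.5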

It also is clear that any technological innovation affecting hundreds of millions of vehicles is bound to be complicated by many social and political concerns. Indeed, one of the greatest barriers to the implementation of VMT fees may well be the widespread perception that this approach constitutes an invasion of privacy. It is not yet apparent that metering road use is any more threatening to privacy than using cell phones to communicate, but there is genuine concern that somehow the government will be able to track the travel of each citizen without his or her knowledge. Most technology and policy experts agree, however, that these systems can be structured so that privacy is maintained—for example, by maintaining records in individual vehicles rather than in a central repository and by erasing them after payments are made. It also is possible that many motorists would prefer to forgo privacy protection in order to have access to detailed bills showing each and every trip so that they can audit their charges to be sure they are paying for trips they actually made.

Such issues will need to be addressed sooner rather than later in a reasoned public discussion. For its part, Congress, as it debates the new transportation bill, should consider alternative paths that can be followed in order to ease the adoption of direct user fees. Of course, Congress could still reject such a transition and instead simply raise motor fuel taxes to provide needed revenue. Or in a less likely move, it could commit the nation to funding an increasing portion of its road and transit bills from general revenues.

But the hope in many quarters is that Congress will accept the opportunity and begin specifying the architecture of a national system of direct user charges. This early effort could address a number of questions, such as whether there should be a central billing authority, whether travelers should be able to charge their road use fees to their credit cards, and whether drivers should pay VMT fees each time they fill up the tank or pay them periodically, as with vehicle registration fees. Congress also should consider expanding the current trials in various locations to demonstrate some technology options on a much larger scale. Even better, it should complement such efforts by putting an early system into actual application on a voluntary or limited basis.

For numerous reasons, then, the time is near for Congress to act, and for citizens to ensure that it does. The debate that is about to begin will indicate whether the nation’s system of governance has the ability to make complex technological choices that are both cost-effective and just.

U.S. Energy Policy: The Need for Radical Departures

Five years may be an entire era in politics, and as the recent global economic upheavals have shown, it is also a span long enough to hurl nations from complacent prosperity to panicky fears. Five years might also suffice to usher in, however belatedly, a sober recognition of the many realities that were previously dismissed or completely ignored. But five years is too short a period to expect any radical large-scale changes in the way in which affluent economies secure their energy supplies and use their fuels and electricity. Indeed, the same conclusion must apply to a span twice as long. This may be unwelcome news to all those who believe, as does a former U.S. vice president, that the United States can be repowered in a decade. Such a completely unrealistic claim is rooted in a fundamental misunderstanding of the nature of technical innovation.

Most notably, the process of accelerating innovation, habitually illustrated with Moore’s famous graph of an ever-denser packing of transistors on a microchip, is an entirely invalid model for innovations in producing large amounts of commercial energies, bringing them reliably to diverse markets, and converting them in convenient and efficient ways. The principal reason for this difference is the highly inertial nature of energy infrastructure, a reality that is especially germane for the world’s largest and exceptionally diversified energy market, which is also very dependent on imports. U.S. energy production, processing, transportation, and distribution—coal and uranium mines; oil and gas fields; pipelines; refineries; fossil fuel–fired, nuclear, and hydroelectric power plants; tanker terminals; uranium enrichment facilities; and transmission and distribution lines—constitute the country’s (and the world’s) most massive, most indispensable, most expensive, and most inertial infrastructure, with principal features that change on a time scale measured in decades, not years.

Similarly, as in any modern society, the United States relies on the ubiquitous services of enduring prime movers, some of which are only more efficient versions of converters introduced more than 125 years ago. Parsons’s steam turbine, Benz and Maybach’s and Daimler’s Otto-cycle internal combustion engines, and Tesla’s electric motor were all patented during the 1880s. Others have been with us for more than 100 years (Diesel’s engine) or more than 60 years (gas turbines, both in their stationary form and as jet engines). And, of course, the entire system of electricity generation/transmission/distribution originated during the 1880s and had already matured by 1950. Even more remarkable than the persistence of these concepts and machines is the very low probability that they will be displaced during the next 20 to 25 years.

But for scientists and engineers with an urgent need to engage in public matters and for policymakers responsible for charting a new course, the next five years should be a period long enough to accomplish three essential steps:

  • Create a broad consensus on the need for embarking on the protracted process of phasing out fossil fuels.
  • Engage in an intensive education effort that would make clear the transition’s true nature and requirements as a complex, protracted, and nonlinear process that is unpredictable in its eventual technical and managerial details; will last for decades; and will require sustained attention, continuous R&D support, and enormous capital expense for new infrastructure.
  • Offer a minimalist agenda for deliberate long-term action that would combine a no-regrets approach with bold departures from the existing policy prescriptions. This means that the agenda’s success would not be contingent on a single major variable influencing long-term energy actions, such as the actual progress and intensity of global warming or the future state of the Middle East, and that its eventual goals would envisage a system radically different from anything that would result from marginal tweaking of the existing arrangements.

What follows is a brief outline of an approach that I would advocate based on more than 40 years of interdisciplinary energy studies. Although it rests on first principles and on indisputable biophysical realities, it has a stamp of personal convictions, and its ultimate goal calls for a fundamental rethinking of basic positions and propositions.

Although the first of the three just-outlined near-term tasks has no formal policy standing, few would disagree that, as with all other affluent societies, the United States must reduce its overwhelming dependence on fossil fuels. The most common conviction is that the coming energy transition must rest on increasing the share of fuel and electricity derived from renewable energy flows, although no unbiased policymaker should exclude nuclear fission. It must be understood that the magnitude of U.S. energy needs, the diversified nature of its fuel and electricity consumption, and the inherent limits on the use of renewable energy flows will make this coming transition extraordinarily difficult. The public interest is not served by portraying it as just another instance of technological innovation or by asserting that it could be accomplished by a concentrated, government-sponsored effort in a short period of time. Calling for the energy equivalent of the Manhattan Project is utterly misguided and inevitably counterproductive.

It is also imperative to make clear that this transition should not be driven primarily by fears of the imminent exhaustion of fossil fuels (Earth’s crust contains ample resources) or by the near-term prospects of extraordinarily high energy prices (market adjustments have been fairly effective), but rather by a number of economic, strategic, and environmental factors. Energy trade has been creating large regional payment imbalances, including the now perennial U.S. deficits. Strategic concerns range from the future role of the Organization of Petroleum Exporting Countries and the stability of the Middle East to Russia’s designs. The foremost environmental justification for reducing dependence on fossil fuels is the need to minimize risk by reducing the emissions of carbon dioxide (CO2) from coal and hydrocarbon combustion.

And although all of the above factors apply equally well to the U.S., European, or Japanese situations, it is necessary that Americans understand the extraordinary nature of their energy consumption. In per-capita terms, Americans now consume energy at a rate that is more than twice the average in the European Union (EU)—almost 8.5 tons of oil equivalent (TOE) a year per capita as compared to about 3.7 TOE for the EU—and almost twice the level in the largest and the most affluent EU nations (Germany and France) or in Japan (all of which average about 4.2 TOE per capita). Normalizations taking into account differences in the size of territory and climate reduce this gross disparity, but the net difference remains large, particularly considering that the United States has been deindustrializing, whereas energy-intensive manufacturing remains much stronger in Germany and Japan.
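
For readers who want the arithmetic behind “more than twice” and “almost twice,” the ratios implied by the per-capita figures just cited work out roughly as follows:

\[
\frac{8.5\ \text{TOE}}{3.7\ \text{TOE}} \approx 2.3,
\qquad
\frac{8.5\ \text{TOE}}{4.2\ \text{TOE}} \approx 2.0
\]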

Yet this energy profligacy has not translated into any real benefits for the country. The overall U.S. quality of life is obviously not twice as high as in the EU or Japan. Measured by a number of critical socioeconomic indicators, it actually falls behind that of Europe and Japan. Maintaining this exceptionally high energy consumption in an increasingly globalized economy is both untenable and highly undesirable. Indeed, the greatest challenge for responsible leadership in the years ahead will be making it clear to the public that a deliberate, gradual, long-term reduction in energy use is both desirable and achievable.

A farsighted long-range energy policy would replace the standard call for a combination of increased energy production and improved efficiency of energy conversion with a new quest for gradually declining levels of per-capita energy use, a goal to be achieved by a simultaneous pursuit of two key long-term strategies.

The first should be a more vigorous quest for substantial efficiency gains by established converters in every sector of the economy. This approach must combine the diffusion of well-established superior techniques with the introduction of bold innovations. Fortunately, the potential for such gains remains in many ways as promising today as it was at the time of the first energy crisis in 1973 and 1974, with opportunities ranging from mandatory reliance on the best commercially available methods to a targeted introduction of innovative solutions. Adopting new DiesOtto engines (grafting the advantages of the inherently higher efficiency of Diesel’s machines onto standard gasoline-fueled engines), installing high-efficiency (in excess of 95%) natural gas furnaces in all new buildings, using power factor correction for electronic devices, switching to LED lighting, and recovering low-temperature waste heat for electricity generation are just a few prominent examples of this vast potential.

But better conversion efficiencies are not enough. Pursuing them must be combined with relentless enhancement of overall system performance. Above all, we must avoid consuming more energy more efficiently. Thus, the second component of an effective long-range energy policy must be a quest for significant overall reductions in energy use, a goal to be achieved by a gradual adoption of measures leading to a fundamental reshaping of consumption patterns and a redesign of energy-consuming infrastructures.

This would necessarily be a prolonged process, and its success would be impossible without redefining many long-established ways of measuring and judging fundamental realities and policies. For example, one of its key preconditions would be to move away from the existing incomplete and misleading ways of pricing goods and valuing services without examining their real cost, including environmental, strategic, and health costs, and without subjecting them to life cycle analyses. Although these ideas have yet to capture the economic mainstream, a considerable intellectual foundation for a transition to more inclusive valuations is already in place. Pursuing this course would be infinitely more rewarding than bizarre methods of producing more energy with hardly any net energy return (such as the cultivation of energy crops without taking into account the energy inputs and environmental burdens), hiding the emissions of CO2 (as favored by powerful carbon capture and sequestration lobbies), or making impossible claims for nonfossil forms of electricity generation (perhaps most notably, various exaggerated goals concerning the near-term contributions of wind turbines to national and continental electricity generation).

The goal of reduced energy use is actually less forbidding than it appears at first sight. Not only is U.S. energy consumption substantially higher than in any other affluent nation (making reductions without any loss of quality of life easier than in, say, France), but despite profligate use of fuels and electricity, the average per-capita energy use in the United States has increased only marginally during the past two generations (from about 8.3 TOE in 1970 to 8.4 TOE in 2007). Clearly, if more rational regulations (ranging from responsible residential zoning to steadily tightening vehicle mileage standards) had been in place between 1975 and 2005, the country would have avoided the enormous infrastructural burden of exurbia and of 20-mile-per-gallon SUVs, and average per-capita energy use might have already declined by an encouraging margin.

I believe that having in mind an ultimate—distant, perhaps unattainable, but clearly inspirational—goal would be helpful. Years ago, I formulated it as a quest for an economy of 60 gigajoules (GJ) per capita, or roughly 1.5 TOE. This amount is now the approximate global per-capita mean consumption of all fossil fuels. A European initiative led by the Swiss Federal Institute of Technology and coordinated by Eberhard Jochem ended up with a similar ultimate goal of a 2,000-watt society (an annual energy consumption of 60 GJ per capita corresponds to the power of 1,900 watts).
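
The watt figure in the parentheses is a simple unit conversion; with a year containing about 31.5 million seconds, the arithmetic is roughly:

\[
\frac{60 \times 10^{9}\ \text{J/yr}}{3.15 \times 10^{7}\ \text{s/yr}}
\approx 1.9 \times 10^{3}\ \text{W} \approx 1{,}900\ \text{W}
\]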

If the United States is to maintain its prosperity and its prominent role in world affairs, it must lead, not follow, and it must provide a globally appealing example of a policy that would simultaneously promote its capacity to innovate, strengthen its economy by putting it on sounder fiscal foundations, and help to improve Earth’s environment. Its excessively high per-capita energy use has done the very opposite, and it has been a bad bargain because its consumption overindulgence has created an enormous economic drain on the country’s increasingly limited financial resources without making the nation more safe and without delivering a quality of life superior to that of other affluent nations.

However, if grasped properly and used effectively, this very weakness contains a promise of new beginnings, but only if the United States’ traditional creativity and innovative drive are combined with a serious commitment to reduce its energy use. I realize that such a call will be seen as a non-starter in the U.S. energy policy debate and that its rejection will be supported by voices extending across most of the country’s political spectrum. Changing long-established precepts is always hard, but the current concatenation of economic, environmental, and strategic concerns offers an excellent opportunity for new departures.

Energy transitions are inherently prolonged affairs, and in large nations with massive and costly infrastructures, their pace cannot be dramatically speeded up even by the most effective interventions. Five or 10 years from now, the U.S. pattern of energy supply and the dominant modes of its conversion will be very similar to today’s arrangements, but the coming years offer an uncommon opportunity to turn the country’s policy in a more sensible direction and to lead along a difficult but rewarding path of global energy transition.

Abolishing Hunger

The first of the Millennium Development Goals, which were adopted by the world’s leaders at the United Nations in 2000, was a promise to fight poverty and reduce the number of the hungry by half by 2015, from 850 million to 425 million hungry souls on this planet. Shame on us all! By 2008, the figure had actually risen to 950 million and is estimated to reach 1 billion in a few years.

It is inconceivable that there should be close to a billion people going hungry in a world as productive and interconnected as ours. In the 19th century, some people looked at slavery and said that it was monstrous and unconscionable; that it must be abolished. They were known as the abolitionists, and they were motivated not by economic self-interest but by moral outrage.

Today the condition of hunger in a world of plenty is equally monstrous and unconscionable, and it too must be abolished. We must become the new abolitionists. We must, with the same zeal and moral outrage, attack the complacency that would turn a blind eye to this silent holocaust, which causes some 40,000 hunger-related deaths every day.

As we celebrate the bicentennial of Abraham Lincoln, the founder of the U.S. National Academy of Sciences and the Great Emancipator, it behooves us to become these new abolitionists. Lincoln said that a house divided cannot stand and that a nation cannot survive half free and half slave. Today, I say a world divided cannot stand; humanity cannot continue living partly rich and mostly poor.

Our global goal should be that all people enjoy food security: reliable access to a sufficient quantity, quality, and diversity of food to sustain an active and healthy life. Most developed countries have achieved this goal through enormous advances in agricultural techniques, plant breeding, and engineering schemes for irrigation and drainage, and these advances are making a difference in developing countries as well. The Malthusian nightmare of famine checking population growth has been avoided. Global population has grown relentlessly, but many lagging societies have achieved a modicum of security that would have been unthinkable half a century ago. India, which could not feed 450 million people in 1960, is now able to provide the food energy for a billion people, plus a surplus, with essentially the same quantities of land and water.

Still, much more needs to be done. Achieving global food security will require progress in the following areas:

  • Increasing production to expand the caloric output of food and feed at rates that will match or exceed the quantity and quality requirements of a growing population whose diets are changing because of rising incomes. This increase must be fast enough for prices to drop (increasing the accessibility of the available food to the world’s poor) and be achieved by increasing the productivity of the small farmers in the less-developed countries so as to raise their incomes even as prices drop.
  • Such productivity increases will require all available technology, including the use of biotechnology, an approach that every scientific body has deemed to be safe but is being bitterly fought by the organic food growers’ lobby and various (mainly European) nongovernmental organizations.
  • Climate change has increased the vulnerability of poor farmers in rain-fed areas and the populations who depend on them. Special attention must be given to the production of more drought-resistant, saline-resistant, and less-thirsty plants for the production of food and feed staples.
  • Additional research is needed to develop techniques to decrease post-harvest losses, increase storability and transportability, and increase the nutritional content of popular foods through biofortification.
  • Biofuels should not be allowed to compete for the same land and water that produce food for humans and feed for their livestock. We simply cannot burn the food of the poor to drive the cars of the rich. We need to develop a new generation of biofuels, using cellulosic grasses in rain-fed marginal lands, algae in the sea, or other renewable sources that do not divert food and feed products for fuel production.
  • Because it is impractical to seek food self-sufficiency for every country, we need to maintain a fair international trading system that allows access to food and provides some damping of sudden spikes in the prices of internationally traded food and feed crops.
  • The scientific, medical, and academic communities must lead a public education campaign about food security and sound eating habits. Just as we have a global antismoking campaign, we need a global healthy food initiative.
  • And we need to convince governments to maintain buffer stocks and make available enough food for humanitarian assistance, which will inevitably continue to be needed in various hot spots around the world.

New technologies to the rescue

No single action is going to help us solve all the problems of world hunger. But several paths are open to us to achieve noticeable change within a five-year horizon. Many policy actions are already well understood and require only the will to pursue them. But there are a few more actions that will become effective only when combined with the development of new technologies that are almost within our grasp. Critical advances in the areas of land, water, plants, and aquatic resources will enable us to take a variety of actions that can help put us back on track to significantly reduce hunger in a few short years.

Land. Agriculture is the largest claimant of land from nature. Humans have slashed and burned millions of hectares of forest to clear land for farming. Sadly, because of poor stewardship, much of our farmland is losing topsoil, and prime lands are being degraded. Pressure is mounting to further expand agricultural acreage, which means further loss of biodiversity due to loss of habitat. We must resist such pressure and try to protect the tropical rainforests in Latin America, Africa, and Asia. This set of problems also calls for scientists to:

  • Rapidly deploy systematic efforts to collect and classify all types of plant species and use DNA fingerprinting for taxonomic classification. Add these to the global seed/gene banks and find ways to store and share these resources.
  • Use satellite imagery to classify soils and monitor soil conditions (including moisture) and launch early warning campaigns where needed.
  • For the longer term, conduct more research to understand the organic nature of soil fertility, not just its chemical fertilizer needs.

Water. Water is life. Humans may need to consume a few liters of water per day for their survival and maybe another 50 to 100 liters for their well-being, but the food they eat requires on average about 2,700 liters of water per day to produce: approximately one liter per calorie, and more for those whose diet is rich in animal proteins, especially red meat. At present, it takes about 1,200 tons of water to produce a ton of wheat, and 2,000 to 5,000 tons of water to produce a ton of rice. Rainfall is also likely to become more erratic in the tropical and subtropical zones where the vast majority of poor humanity lives. Floods alternating with droughts will devastate some of the poorest farmers, who do not have the wherewithal to withstand a bad season. We absolutely must produce “more crop per drop.” Some of what needs to be done can be accomplished with simple techniques such as land leveling and better management of irrigation and drainage, but we will also need plants that are better suited to the climate conditions we expect to see in the future. Much can be done with existing knowledge and techniques, but we will be even more successful if we make progress in four critical research areas:

  • First, we know hardly anything about groundwater. New technologies can now map groundwater reservoirs with satellite imagery. It is imperative that an international mapping of the locations and extent of water aquifers be undertaken. New analysis of groundwater potential is badly needed, as it is likely that as much as 10% of the world’s grain is grown with water withdrawals that exceed the recharge rates of the underground reservoirs from which the water is drawn.
  • Second, the effects of climate change are likely to be problematic, but global models are of little help to guide local action. Thus, it is necessary to develop regional modeling for local action. Scientists agree on the need for these models to complement the global models and to assist in the design of proper water strategies at the regional and local scales, where projects are ultimately designed.
  • Third, we need to recycle and reuse water, especially for peri-urban agriculture that produces high-value fruits and vegetables. New technologies to reduce the cost of recycling must be moved rapidly from lab to market. Decision-makers can encourage accelerated private-sector development programs with promises of buy-back at reliable prices.
  • Finally, desalination of seawater is both possible and important: not in quantities capable of supporting all current agriculture, but enough to support urban domestic and industrial use, as well as hydroponics and peri-urban agriculture.

Plants. Climate change is predicted to reduce yields unless we engineer plants specifically for the upcoming challenges. We will need a major transformation of existing plants to be more resistant to heat, salinity, and drought and to reach maturity during shorter growing seasons. Research can also improve the nutritional qualities of food crops, as was done to increase the vitamin A content of rice. More high-risk research also deserves support. For example, exploring the biochemical pathways in the mangrove that enable it to thrive in salty water could open the possibility of adding this capability to other plants.

Too much research has focused on the study of individual crops and the development of large monoculture facilities, and this has led to practices with significant environmental and social costs. Research support should be redirected to a massive push for plants that thrive in the tropics and subtropical areas and the arid and semiarid zones. We need to focus on the farming systems that are suited to the complex ecological systems of small farmers in poor countries.

This kind of research should be treated as an international public good, supported with public funding and with the results made freely available to the poor. Such an investment will reduce the need for humanitarian assistance later on.

Aquatic resources. In almost every aspect of food production, we are farmers, except in aquatic resources, where we are still hunter-gatherers. In the 19th century, hunters almost wiped out the buffaloes from the Great Plains of the United States. Today, we have overfished all the marine fisheries in the world, as we focused our efforts on developing ever more efficient and destructive hunting techniques. We now deploy huge factory ships that can stay at sea for months at a time, reducing some species to commercial extinction.

We need to invest in the nascent technologies of fish farming. There is some effort being made to promote the farming of tilapia, sometimes called the aquatic chicken. In addition, integrating some aquaculture into the standard cropping techniques of small farmers has proven to be ecologically and economically viable. The private sector has invested in some high-value products such as salmon and shrimp. But aquaculture is still in its infancy compared to other areas of food production. A massive international program is called for.

Marine organisms reproduce very quickly and in very large numbers, but the scientific farming of marine resources is almost nonexistent. Proper farming systems can be devised that will be able to provide cheap and healthy proteins for a growing population. About half the global population lives near the sea. Given the billions that have gone into subsidizing commercial fishing fleets, it is inconceivable that no priority has been given to this kind of highly promising research. Decision-makers must address that need today.

Science has been able to eke out of the green plants a system of food production that is capable of supporting the planet’s human population. It is not beyond the ken of scientists to ensure that the bounty of that production system is translated into food for the most needy and most vulnerable of the human family.

Science, technology, and innovation have produced an endless string of advances that have benefited humanity. It is time that we turn that ingenuity and creativity to address the severe ecological challenges ahead and to ensure that all people have that most basic of human rights, the right to food security.

Most of the necessary scientific knowledge already exists, and many of the technologies are on the verge of becoming deployable. It is possible to transform how we produce and distribute the bounty of this earth. It is possible to use our resources in a sustainable fashion. It is possible to abolish hunger in our lifetime, and we need to do so for our common humanity.

The Rightful Place of Science

I stood with the throngs on the Washington Mall on January 20, 2009, watching a young new president (well, truth be told, watching him on a Jumbotron) announce to a discouraged nation the beginning of an era of hope and responsibility. Standing between the Lincoln Memorial and the Washington Monument, I was feeling, in the words I saw emblazoned on t-shirts for sale by hawkers on the street, that indeed it was “Cool to be American Again.”

Just moments into his presidency, Barack Obama promised to “restore science to its rightful place” in U.S. society, a pronouncement that signaled to many a transition from an old world where decisions were dictated by political calculus and ideological rigidity to a new one dedicated to action based on rationality and respect for facts. “The Enlightenment Returns” trumpeted the recent headline of a Science editorial authored by physicist Kurt Gottfried of the advocacy group Union of Concerned Scientists and Nobelist Harold Varmus of the Sloan-Kettering Cancer Center.

Incredibly, science is at the forefront of the national political agenda. Perhaps the last time science figured so prominently in the pronouncements of a president was when Dwight Eisenhower, in his famous “military-industrial complex” farewell speech of 1961, warned in Newtonian fashion that “in holding scientific research and discovery in respect, as we should, we must also be alert to the equal and opposite danger that public policy could itself become the captive of a scientific-technological elite.” President Obama, in contrast, was welcoming that scientific-technological elite back into the political family after eight long years during which, in the words of Gottfried and Varmus, “the precepts of the Enlightenment were ignored and even disdained with respect to the manner in which science was used in the nation’s governance.”

What moved science to center stage? Early in his presidency, George W. Bush alienated many scientists and biomedical research advocates with his decision to significantly limit public funding for embryonic stem cell research. His skepticism about climate change and his opposition to the Kyoto Protocol; a generally anti-regulatory stance on environmental, health, and consumer protection issues; support for abstinence as the preferred method of birth control; advocacy of nuclear power and the fossil fuel industry; and tolerance for the teaching of intelligent design, among other political preferences, contributed to a growing belief that the Bush administration generally found science to be an annoying inconvenience to its political agenda; an inconvenience that needed to be ignored, suppressed, or even manipulated for political purposes. High-profile reports issued in 2003 by Congressman Henry Waxman and in 2004 by the Union of Concerned Scientists highlighted numerous instances where “scientific integrity” had been undermined at federal agencies and in Bush administration policies. A February 2004 “Scientist Statement on Restoring Scientific Integrity to Federal Policy Making,” initially signed by 62 scientists, many with reputations as national leaders in their fields, summarized the charges: “When scientific knowledge has been found to be in conflict with its political goals, the administration has often manipulated the process through which science enters into its decisions … The distortion of scientific knowledge for partisan ends must cease if the public is to be properly informed about issues central to its well being, and the nation is to benefit fully from its heavy investment in scientific research.” The issue made it into the 2004 presidential campaign, where Democratic candidate John Kerry pledged: “I will listen to the advice of our scientists, so I can make the best decisions…. This is your future, and I will let science guide us, not ideology.” The allegations were memorably summed up in the title of Chris Mooney’s 2005 book, The Republican War on Science. In the run-up to the 2008 presidential election, a Democratic party Web site promised: “We will end the Bush administration’s war on science, restore scientific integrity, and return to evidence-based decision-making.”

And so, restoring science to its rightful place became good politics. But how are we to know the “rightful place” when we see it? One way to start might be to look at the young Obama administration for indications of what it is doing differently from the Bush administration in matters of science policy. The obvious first candidate for comparison would be embryonic stem cell research, which for many scientists and members of the public symbolized President Bush’s willingness to sacrifice science on the altar of a particularly distasteful politics: pandering to the religious right’s belief that the sanctity of human embryos outweighs the potential of stem cell research to reduce human suffering. When President Bush announced his stem cell policy in August 2001, he tried to walk a moral tightrope by allowing federal support for research on existing stem cell lines, thus ensuring that no embryos would be destroyed for research purposes, while loosening, though on a very limited basis, the ban on embryo research that Congress had established in 1994. The president reported that there were about 60 such existing cell lines, a number that turned out, depending on one’s perspective, to be either overly optimistic or a conscious deception; the actual number was closer to 20.

Lifting the restrictions on stem cell research was a part of the 2004 campaign platform of Democratic presidential candidate John Kerry, as it was of Barack Obama four years later. Less than two months into his presidency, Obama announced that he would reverse the Bush policies by allowing research on cell lines created after the Bush ban. The president instructed the director of the National Institutes of Health (NIH) to “develop guidelines for the support and conduct of responsible, scientifically worthy human stem cell research.”

In announcing the change, President Obama emphasized the need to “make scientific decisions based on facts, not ideology,” yet the new policy, as well as the language that the president used to explain it, underscores that the stem cell debate is in important ways not about scientific facts at all, but about the difficulty of balancing competing moral preferences. The new policy does not allow unrestricted use of embryos for research or the extraction of cell lines from embryos created by therapeutic cloning. In explaining that “[m]any thoughtful and decent people are conflicted about, or strongly oppose, this research,” President Obama was acknowledging that, even in its earliest stages, the small group of cells that constitutes an embryo is in some way different from a chemical reagent to be sold in a catalog or an industrially synthesized molecule to be integrated into a widget. Indeed, to protect women from economic and scientific exploitation, and in deference to the moral and political ambiguity that embryos carry with them, no nation allows the unrestricted commodification of embryos, and some, including Germany, have bans on destroying embryos for research purposes. Although most Americans favor a less restrictive approach to stem cell research than that pursued by President Bush, the issue is inherently political and inherently moral. Thus, some of the cell lines approved for research under the Bush restrictions might actually not be approved under the Obama guidelines because they may not have been obtained with the appropriate level of prior informed consent of the donor, a moral constraint on science that apparently did not concern President Bush.

Shortly after President Obama laid out his new approach, a New York Times editorial accused him of taking “the easy political path” by allowing federal research only on excess embryos created through in vitro fertilization. The accusation is ambiguous; it implies either that there is a “hard” political path, or that there is a path that is entirely nonpolitical. Given the state of public opinion, apparently President Bush took the hard political path and paid the political price. And the idea that there is a path beyond politics, one that is paved with “facts, not ideology,” is false—indeed, itself a political distortion—so long as significant numbers of people see human embryos as more than just a commodifiable clump of molecules. Moreover, there is nothing at all anti-science about restricting the pursuit of scientific knowledge on the basis of moral concerns. Societies do this all the time; for example, with strict rules on human subjects research. The Bush and Obama policies differ only as a matter of degree; they are fundamentally similar in that neither one cedes moral authority to science and scientists. When it comes to embryonic stem cells, the “rightful place of science” remains a place that is located, debated, and governed through democratic political processes.


Another common allegation about Bush administration abuse of science focused on decisions that ignored the expert views of scientists in deference to pure political considerations. An early signal of how President Obama will deal with apparent conflicts between expert scientists and political calculus came when the president decided to slash funding for the Yucca Mountain nuclear waste repository, the only congressionally approved candidate for long-term storage of high-level nuclear waste. Since the late 1980s, the Department of Energy has spent on the order of $13 billion to characterize the suitability of the 230-square-mile site for long-term geological storage, probably making that swath of Nevada desert the most carefully and completely studied piece of ground on the planet. At the same time, because of the need to isolate high-level waste from the environment for tens of thousands of years, uncertainty about the site can never be eliminated. Writing in Science last year, Isaac Winograd and Eugene Roseboom, respected geologists who have been studying the region longer than anyone, explained that this persistence of uncertainty is inherent in the nuclear waste problem itself, not just at Yucca Mountain, and that uncertainties can best be addressed through a phased approach that allows monitoring and learning over time. They suggest that the Nevada site is suitable for such an approach, echoing a recent report of the National Academies. But they also emphasize that the persistence of uncertainties “enables critics … to ignore major attributes of the site while highlighting the unknowns and technical disputes.”

Among those critics have been the great majority of the citizens of Nevada, a state that has acted consistently and aggressively through the courts and political means to block progress on the site since it was selected in 1987. A particularly effective champion of this opposition has been Harry Reid, majority leader in the U.S. Senate and one of the most influential and powerful Democratic politicians in the nation. Now add to the mix that Nevada has been a swing state in recent presidential elections, supporting George Bush for president in 2000 and 2004, and Bill Clinton in 1992 and 1996. President Bush strongly supported the Yucca Mountain site, as did 2008 Republican candidate John McCain. All of the major Democratic presidential candidates, seeking an edge in the 2008 election, opposed the site; shutting it down was one of Barack Obama’s campaign promises, which he fulfilled by cutting support for the program in the fiscal year 2010 budget, an action accompanied by no fanfare and no public announcement.

At this point it is tempting to write: “It’s hard to imagine a case where politics trumped science more decisively than in the case of Yucca Mountain, where 20 years of research were traded for five electoral votes and the support of a powerful senator,” which seems basically correct, but taken out of context it could be viewed as a criticism of President Obama, which it is not. But the point I want to make is only slightly more subtle: Faced with a complex amalgam of scientific and political factors, President Obama chose short-term political gain over longer-term scientific assessment, and so decided to put an end to research aimed at characterizing the Yucca Mountain site. This decision can easily be portrayed in the same type of language that was used to attack President Bush’s politicization of science. John Stuckless, a geochemist who spent more than 20 years working on Yucca Mountain, was quoted in Science making the familiar argument: “I think it’s basically irresponsible. What it basically says is, they have no faith in the [scientists] who did the work … Decisions like that should be based on information, not on a gut feeling. The information we have is that there’s basically nothing wrong with that site, and you’re never going to find a better site.”

It turns out that the nostrums and tropes of the Republican war on science are not easily applied in the real world, at least not with any consistency. “In a blow to environmental groups and a boost for ranchers, the Obama administration announced Friday that it would take the gray wolf off the endangered species list in Montana and Idaho,” the New York Times reported on March 7. “This was a decision based on science,” said a spokesperson for the Department of the Interior, as reported in the Washington Post. In contrast, the spokesperson for a Democratic member of Congress from Idaho said, in lauding the de-listing, “I can’t emphasize how important it is to have a Western rancher as secretary of the interior,” presumably implying that it wasn’t really all about science. Soon afterwards, the author Verlyn Klinkenborg, writing on the Times editorial page, argued that the decision “may indeed have been based on the science” but that hunting would quickly drive the wolves back to the brink of extinction. Representative Norm Dicks (D-WA) didn’t even buy the science claim: “I don’t think they took enough time to evaluate the science.” The environmental group EarthJustice issued a press release claiming that “independent scientists” did not believe the wolf population was big enough to justify de-listing; Defenders of Wildlife noted that the plan “fails to adequately address biological concerns about the lack of genetic exchange among wolf populations”; and the Sierra Club said that the Department of the Interior “should be working with the state of Wyoming to create a scientifically sound wolf management plan…. It’s inappropriate to delist wolves state-by-state. Wolves don’t know political boundaries.”

The “rightful place” of science is hard to find. Or perhaps we are looking for it in all the wrong places? When President Obama was urgently seeking to push his economic stimulus package through Congress in the early days of his administration, he needed the support of several Republican senators to guard against a Republican filibuster and to bolster the claim that the stimulus bill was bipartisan. Senator Arlen Specter, who suffers from Hodgkin’s disease, agreed to back the stimulus package on the condition that it include $10 billion in additional funding for NIH. For this price a vote was bought and a filibuster-proof majority was achieved.

Now there is nothing at all wrong with making political deals like this; good politics is all about making deals. What’s interesting in this case is the pivotal political importance of a senator’s support for science. If Senator Specter (who, perhaps coincidentally, underwent a party conversion several months later) had asked for $10 billion for a new weapons system or for abstinence-only counseling programs, would his demand have been met? In promoting the stimulus package to the public, one of the key features highlighted by congressional Democrats and the Obama administration was strong support for research, including $3 billion for the National Science Foundation, $7.5 billion for the Department of Energy, $1 billion for the National Aeronautics and Space Administration, and $800 million for the National Oceanic and Atmospheric Administration, in addition to the huge boost for NIH. These expenditures are on one level certainly an expression of belief that more public funding for research and development is a good thing, but they are also a response to the discovery by Democrats during the Bush administration that supporting science (and, equally important, accusing one’s Republican opponents of abusing or undermining science) is excellent politics; that it appeals to the media and to voters and is extremely difficult to defend against. Democrats were claiming not simply that money for science was good stimulus policy but that it was a necessary corrective to the neglect of science under the Bush administration. Speaker of the House Nancy Pelosi quipped: “For a long time, science had not been in the forefront. It was faith or science, take your pick. Now we’re saying that science is the answer to our prayers.”

Is money for science good stimulus policy? Experts on economics and science policy disagreed about whether ramming billions of dollars into R&D agencies in a short period of time was an effective way to stimulate economic growth and about whether those billions would be better spent on more traditional stimulus targets such as infrastructure and increased unemployment benefits. Lewis Branscomb, one of the nation’s most thoughtful observers of U.S. science policy, summed up the dilemma in a University of California, San Diego, newsletter article: “If the new research money is simply spread around the academic disciplines, it will be great for higher education, but will be a long time contributing to national problem-solving.” And beyond the stimulus question, were such sharp increases in science funding good science policy? Writing in Nature, former staff director for the House Science Committee David Goldston observed that “A stimulus bill is not the ideal vehicle for research spending, and if scientists and their proponents aren’t careful, the bill is a boon that could backfire.” Goldston highlighted three concerns: first, “that being included in the stimulus measure could turn science spending into a political football,” second, that “a brief boom could be followed by a prolonged bust,” and “third, and perhaps most troubling … that inclusion in the stimulus bill means the science money must be awarded with unusual, perhaps even reckless, speed.”

As a matter of national politics, however, the immediate benefits were obvious for the president and for Democratic politicians. Democrats are finally discovering a politically powerful symbol of what they want to stand for, a symbol that captures the American reverence for progress and exemplifies a positive role for government that cannot easily be tarred by Republicans as “tax-and-spend” or anti-market but on the contrary is widely believed by Americans to be the key to a better tomorrow. Consider, for example, the complete sentence that President Obama used in his inauguration speech: “We will restore science to its rightful place and wield technology’s wonders to raise health care’s quality and lower its costs.” Science in its “rightful place” is linked to the curing of disease and the reduction of health care costs. Who could be against such things? They stand in for a more general belief in human progress.

Never mind that the president’s claim—that more medical technology created by more scientific research will reduce health care costs—is, to put it mildly, implausible. Every person listening to the inauguration speech has experienced the inflationary spiral of health care costs in recent decades, a spiral that continual scientific and technological advance helps to perpetuate. New medical knowledge and technology will undoubtedly relieve suffering and extend life for many, and it will probably reduce costs in some cases when it replaces current types of care, but the overall effect of progress in medical science and technology will be the same as it has been for decades: to increase total health care spending. But a small inconsistency is the hobgoblin of policy wonks; surely the key point is that science, linked to progress, is change we can all believe in.

Perhaps the best way to understand what seems to be happening to science as a political symbol for Democrats is to consider, in contrast, the value of “national defense” as a political symbol for Republicans. President Bush made powerful use of the idea that Republicans are more concerned about national security, and more able to protect it, than are Democrats, both in justifying his prosecution of the war in Iraq and in attacking John Kerry during the 2004 election campaign. In the 1980 presidential campaign, Ronald Reagan made devastatingly effective use of the notion that President Carter was soft on defense, and a signal priority for the Reagan administration from its earliest days was to greatly increase expenditures on the military, just as President Obama is now doing for science.

Because “national security” and, it now turns out, “science” are tropes that resonate powerfully with significant parts of the voting public, they make highly potent political symbols—not just for communicating values, but also for distinguishing one’s self from the opposition. These sorts of symbols are particularly effective as political tools because they are difficult to co-opt by the other side. It is harder for a Democrat than for a Republican to sound sincere when arguing for a strong national defense. As a matter of ideology, Democrats are often skeptical about the extent to which new weapons systems or new military adventures truly advance the cause of national security or human well-being. And similarly, it is harder for a Republican than a Democrat to sound sincere when arguing for the importance of science. Scientific results are commonly used to bolster arguments for government regulatory programs and policies, and as a matter of ideology Republicans are often skeptical about the ability of government to wisely design and implement such policies or about their actual benefits to society.

Neither of these ideological proclivities amounts to being, respectively, “soft on defense” or “anti-science,” but each provides a nucleus of plausible validity to such accusations. Trying to go against this grain—as when Michael Dukakis, the 1988 Democratic presidential candidate, sought to burnish his defense credentials by riding around in a tank, or when George Bush repeatedly claimed that he would make decisions about climate change and the environment on the basis of “sound science”—inevitably carries with it the aura of insincerity, of protesting a bit too much.

And so perhaps we have now discovered the rightful place of science: not on a pedestal, not impossibly insulated from politics and disputes about morality, but nestled within the bosom of the Democratic Party. Is this a good place for science to be? For the short term, increased budgets and increased influence for the scientific-technological elite will surely be good for the scientific enterprise itself. Serious attention to global environmental threats, to national energy security, and to the complex difficulties of fostering technological innovation whose economic outcomes are not captured largely by the wealthy is a salutary priority of the Obama administration and a welcome corrective to the priorities of his predecessor.

But ownership of a powerful symbol can give rise to demagoguery and self-delusion. President Bush overplayed the national defense card in pursuit of an ideological vision that backfired with terrible consequences in Iraq. In turn, a scientific-technological elite unchecked by healthy skepticism and political pluralism may well indulge in its own excesses. Cults of expertise helped bring us the Vietnam War and the current economic meltdown. Uncritical belief in and promotion of the redemptive power of scientific and technological advance is implicated in some of the most difficult challenges facing humans today. In science, Democrats appear to have discovered a surprisingly potent political weapon. Let us hope they wield it with wisdom and humility.

A Future for U.S. Fisheries

For the fishing industry in the United States, and for the fishery resources on which the industry depends, there is good news and bad news. Bad news still predominates, as many commercial fishers and their communities have suffered severe financial distress and many fish stocks have declined considerably in numbers. Poor management by the National Marine Fisheries Service (NMFS), which regulates the fishing industry, and some poor choices by many fishers have contributed to the problems. But there are some bright spots, small and scattered, that suggest that improvements are possible.

Starting with the bad news, the federal government’s fisheries management remains primitive, simplistic, and, in important cases, ineffectual, despite a fund of knowledge and conceptual tools that could be applied. In many regions—New England and the Pacific Northwest, among others—the cost of failed management exceeds the receipts from the fisheries themselves. This does not suggest that management should be given up as a lost cause, leaving the industry in a free-for-all, although this strategy might, in fact, be cheaper and not much less effective.

As a key problem, most management efforts today are based primarily on catch quotas that regulate how much fishers can harvest of a particular species in some set period, perhaps a season or a year. The problem is that quotas are set according to estimates of how much of the resource can be taken out of the ocean, rather than on how much should be left in. This may sound like two sides of the same coin, but in practice the emphasis on extraction creates a continual bias on the part of fisheries agencies and unrealistic short-term expectations among fishers. For example, a basic tenet of these approaches is that a virgin fish population should be reduced by about two-thirds to make it more “productive.” But this notion is belied in the real world, where it has been proven that larger breeding populations are more productive.
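
The quantitative intuition behind that bias can be seen in a minimal surplus-production sketch in Python, using a logistic growth curve and hypothetical parameters; real stock assessments rely on far richer models, so this is an illustration, not a description of agency methods.

    # Minimal sketch (hypothetical parameters): a logistic surplus-production
    # model of the kind that sits behind "how much can we take?" quota setting.
    # Surplus production g(B) = r * B * (1 - B / K) is the catch that can be
    # taken each year while holding the stock at biomass B.

    r = 0.4          # hypothetical intrinsic growth rate (per year)
    K = 100_000.0    # hypothetical unfished ("virgin") biomass, in tonnes

    def surplus_production(biomass):
        """Annual surplus production at a given standing biomass."""
        return r * biomass * (1.0 - biomass / K)

    for fraction_of_virgin in (1.0, 0.75, 0.5, 0.33, 0.2, 0.1):
        b = fraction_of_virgin * K
        print(f"stock at {fraction_of_virgin:>4.0%} of virgin biomass: "
              f"sustainable yield ~ {surplus_production(b):6.0f} t/yr")

In this toy model the sustainable yield peaks when the stock is fished down to about one-half of its virgin size and falls steeply below roughly one-third, the statutory floor discussed below; the point is simply that the same curve can be read as “how much can we take” or as “how much must we leave.”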

The failure of this approach is readily apparent. The Sustainable Fisheries Act of 1996, reaffirmed by Congress in 2006, states that fish populations may not be fished down below about one-third of their estimated virgin biomass. It also states that in cases where fish stocks already have been pushed below that level, they must be restored (in most cases) to that level within a decade. On paper, this act looked good. (Full disclosure: I drafted the quantitative overfishing and recovery goals and triggers mandated by the act.) Unfortunately, the NMFS wrote implementing regulations interpreting the mandates as meaning that overfishing could continue for some time before rebuilding was required. This too-liberal interpretation blurred the concept and delayed benefits. In its worst cases, it acknowledged that fish populations must be rebuilt in a decade but said that overfishing could continue in the meantime.

Clearly, the nation needs to take a different approach, based solidly on science. As a foundation, regulatory and management agencies must move from basing their actions on “how much can we take?” to concentrating on “how much must we leave?” The goal must be keeping target fish populations and associated living communities functioning, with all components being highly productive and resilient.

The nation must confront another reality as well. So many fisheries are so depleted that the only way to restore them will be to change the basic posture of regulations and management programs to one of recovery. Most fish populations could recover within a decade, even with some commercial fishing. But continuing to bump along at today’s depleted levels robs fishing families and communities of income and risks resource collapse.

Ingredients for success

Moving to a new era of fisheries management will require revising some conventional tools that are functioning below par and adopting an array of new “smart tools.” Regulations that set time frames for overfishing and recovery can play a valuable role, if properly interpreted. For example, traditional catch quotas must be based firmly on scientific knowledge about fish stocks, and they must be enforced with an eye toward protecting the resource. Newer tools, adapted to specific environments and needs, would include:

Tradable catch shares. In this approach, now being used in some regions in varying degrees, fishery managers allot to fishers specific shares of the total allowable catch and give them the flexibility and the accountability for reaching their shares. Thus, fishers do not own the fish; rather, they own a percentage of the total allowed catch, which may fluctuate from year to year if management agencies adjust it up or down.
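
The bookkeeping is simple enough to show in a few lines of Python; the vessel names, share percentages, and total allowable catches below are invented for illustration.

    # Minimal sketch (hypothetical names and numbers): individual quotas under a
    # catch-share program are fixed percentages of a total allowable catch (TAC)
    # that managers may adjust from year to year.

    shares = {            # each holder's share of the TAC (fractions sum to 1.0)
        "vessel_A": 0.40,
        "vessel_B": 0.35,
        "vessel_C": 0.25,
    }

    for year, tac_tonnes in [(2009, 10_000), (2010, 8_500), (2011, 11_000)]:
        quotas = {holder: share * tac_tonnes for holder, share in shares.items()}
        print(year, {holder: round(q) for holder, q in quotas.items()})

Each holder’s percentage stays fixed while the tonnage behind it rises or falls with the stock, which is why share holders acquire a direct stake in rebuilding.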

In expanding the use of such programs, managers must establish the shares based on the advice of independent scientists who are insulated from industry lobbying. Managers also should allot shares only to working fishers, not to corporations or processors. Of course, finding equitable ways of determining which fishers get catch shares will be critical. Methods of allocating shares may vary from location to location, but the key is ensuring an open process that accounts for fishers’ legitimate interests and maintains conservation incentives. In many cases, fewer fishers will be eligible to keep fishing. But those not selected would likely have been forced out of business anyway by the combination of pressure from more successful fishers and reduced fish stocks.

By significantly reducing competition that breeds a race for fish, this approach offers several benefits. For one, it makes for safer fishing. Fishers who own shares know that they have the whole season to fill their quota regardless of what other boats are catching, so they are less likely to feel forced to head out in dangerous weather. In addition, owning a share helps ensure (other factors permitting) that a fisher can earn a decent living, so local, state, or regional politicians will feel less pressure to protect their fishing constituents and push for higher catch quotas. At the same time, marginal operators granted shares would no longer feel trapped, because they would have something to sell if they wished to exit the fishery. By promoting longer-term thinking among fishers and politicians alike, catch-share programs help foster a sense of future investment in which quota holders will benefit from high or recovered fish populations.

The impact of tradable catch shares can be seen in experiences in several regions. In Alaska, where fisheries managers once kept a tight cap on the halibut catch, the fishing season shrank to two days annually because there were so many competing boats. After managers introduced tradable catch shares, the number of boats fell precipitously and the season effectively expanded to whenever the fishers wanted to work toward filling their shares. Safety improved markedly, and the halibut population remained robust. In New England, where the industry resisted tradable shares, the story ended differently. Managers allotted individual fishers a shrinking number of days at sea, which progressively crippled their economic viability, gave them no option to exit the fishery short of foreclosure, and kept fishing pressure so high that the fish stocks never recovered.

Area-based fisheries. Although this concept may be relatively new in Western fisheries management, it has underpinned the management of fishing in Pacific islands for millennia. In practice, this approach is most applicable where fish populations spawn in localized areas and do not migrate far from their spawning area. For example, consider the case of clams, which spawn in limited areas and never move far away. In many regions, clamming is regulated on a township-by-township basis. Thus conserving clams off one port will benefit that port, even if (especially if) the next port eliminates its own clam beds. This model holds promise for greater use with various fish species as well. In New England waters, cod once spawned in many local populations, many of which are now extinct. Overall regional quotas and regional mobility of boats contributed to their extinction. Had managers established local area-based restrictions, these populations might well have been saved, to the benefit of local communities.

In implementing area-based fisheries, managers will need to move deliberately, being mindful of what is scientifically supported and careful not to unduly raise people’s expectations. If managers move too hastily, the restrictions may meet a lot of social skepticism and may not work as well as advertised, setting back not only the health of the fish stocks but also the credibility of the managers and scientists who support such actions.

Closed areas. In recent years, fisheries managers have decided that some stocks are so threatened that the only choice is to close all or part of their habitat to fishing. Such efforts are to be applauded, although they have been too few and too limited in scale to achieve major success. Still, the lessons are instructive, as closures have been found to result in increases in fish populations, in the size of individual fish, and in greater diversity of species.

On Georges Bank in the north Atlantic, for example, success has been mixed, but tantalizing. Managers closed some of the grounds in an effort to protect northern cod, in particular, whose stocks had become severely depleted. So far, cod stocks have not rebounded, for a suite of reasons. But populations of several other important species, notably haddock and sea scallops, have mushroomed. These recovered populations have yielded significant financial benefits to the region, although in the case of sea scallops, fishing interests successfully lobbied to be allowed back into the closed areas, hampering full recovery of the resource.

Mixed zoning. In many resource-based industries, even competing interests often agree on one thing: They do not want an area closed to them. Yet regarding fishing, conservationists too often have insisted that protected areas be closed to all extraction, and their single-minded pursuit of all-or-nothing solutions has made it easy for commercial interests to unite in demanding that the answer be nothing. A more nuanced approach is needed.

A comprehensive zoning program should designate a mix of areas, including areas that are entirely open to any kind of fishing at any time, areas that are closed to fishers using mobile gear, areas that are closed to fishers using gear that drags along the seafloor, areas that are closed in some seasons, and areas that are fully protected no-take zones. Such integrated zoning would better protect sensitive seafloor habitats and aquatic nursery areas from the kinds of activities that hurt those areas, while allowing harmless activities to proceed. For instance, tuna fishing could be banned in tuna breeding or nursery areas, yet allowed in ocean canyons, even those with deep coral and other important sedentary bottom communities. This type of zoning would also be most likely to gain the support of competing interests, as each party would get something it wants.

Reduction of incidental catch. Almost all methods of commercial fishing catch undersized or unmarketable individuals of the target species. Few of these can be returned alive. Fortunately, a number of simple changes in fishing methods and gear, such as the use of nets with larger mesh size, have been developed that can reduce incidental kill by more than 90%, and the government should adopt regulations that require use of these cleaner techniques. In some cases, however, it may be appropriate to require fishers to keep all fish caught—no matter their size, appearance, or even species—in order to reduce the waste that otherwise would result.

Commercial fishers also often catch creatures other than fish, with fatal results. For some creatures, such as sea turtles, capture may endanger their species’ very survival. Here, too, advances in fishing technology are helping, but regulators must pay increased attention to finding ways to reduce this problem.

Protection based on size. Managers may be able to protect some fish stocks by setting regulations based on graduated fish sizes. This approach, taken almost by default, has led to a spectacular recovery of striped bass along the Atlantic coast. At one time, this population had become deeply depleted, and reproduction rates had fallen precipitously. But one year, environmental conditions arose that favored the survival of eggs and larvae and led to a slight bump in the number of young fish. After much rancor and debate, federal fisheries managers forced states to cooperate in shepherding this class of juveniles to adulthood. They did this primarily by placing a continually increasing limit on the minimum size of fish that fishers could keep. Over the course of more than a decade, the limits protected the fish as they grew and, ultimately, began reproducing. The limits also protected fish hatched in subsequent years, and they, too, grew into adulthood. This simple approach—protecting fish until they have had a chance to reproduce—did more to recover a highly valued, highly sought species than all of the complex calculations, models, and confused politics of previous management efforts.

Subsidy reform. The federal government provides various segments of the fishing industry with major subsidies that have resulted in a number of adverse consequences. Improperly designed and sized subsidies have propped up bloated and overcapitalized fisheries that have systematically removed too many fish from the seas. Of course, some subsidies will remain necessary. But in most cases, subsidy amounts should be reduced. Also, many subsidies should be redirected to support efforts to develop cleaner technologies and to ease the social pain that fishers and their communities might face in adopting the improved technologies.

Ecologically integrated management. Perhaps the worst mistake of traditional fisheries management is that it considers each species in isolation. For example, simply focusing on how much herring fishers can take from the ocean without crashing herring stocks does not address the question of how much herring must be left to avoid crashing the tuna, striped bass, and humpback whales that feed on herring. Management regulations must be revised to reflect such broader food-web considerations.

Sustainable aquaculture. During the past quarter-century, many nations have turned increasingly to aquaculture to supplement or even replace conventional commercial fishing. Although not at the head of this effort, the United States offers various forms of assistance and incentives to aid the development of the industry. But fish farming is not a panacea. Some operations raise unsustainable monocultures of fish, shrimp, and other aquatic species. Some destroy natural habitats such as marshes that are vital to wild fish. Some transfer pathogens to wild populations. Some pollute natural waters with food, feces, or pesticides necessary to control disease in overcrowded ponds and pens.

As the nation expands fish farming, doing it right should trump doing it fast. Generally, aquaculture will be most successful if it concentrates on raising smaller species and those lower on the food chain. Fish are not cabbages; they do not grow on sunlight. They have to be fed something, and what most fish eat is other fish. Just as the nation’s ranchers raise cows and not lions, fish farmers should raise species such as clams, oysters, herring, tilapia, and other vegetarian fish, but not tuna. Farming large carnivores would take more food out of the ocean to feed them than the farming operation would produce. The result would be a loss of food for people, a loss of fish to other fisheries, and a loss to the ocean. Done poorly, aquaculture is as much of a ticking time bomb as were overcapitalized fisheries.

Working together

Given the magnitude of the problems facing the nation’s commercial fishers and fisheries, the various stakeholders must draw together. Although some recent experiences may suggest otherwise, fishers and scientists need each other in order to succeed. Fishers might lack the training to understand the scientific techniques, especially data analysis, that underpin improved management tools, and scientists might lack the experience required to understand the valid concerns and observations of fishers. But without more trust and understanding, adversarial postures that undermine wise management will continue to waste precious time as resources continue to deteriorate and communities and economies suffer. This need not be the case.

Similarly, fishers, fishery managers, and scientists should work together to better inform the public about the conditions and needs of the nation’s fishing industry and fish stocks. Consider the example of marine zoning. The less people understand about fishing, the more they insist that closed, no-take marine reserves are the answer. Similarly, the less people understand about conservation, the more they insist that traditional methods of fisheries management, which typically ignore the need for reserves, are adequate tools for protecting fish stocks. As in many other areas, knowledge breeds understanding—and very often solutions.

Alternative Energy for Transportation

Science and technology (S&T) has brought economic growth and helped raise living standards. In recent years, S&T has progressed rapidly and brought tremendous benefits to our lives. For example, the development of transportation has dramatically extended the range of human activities, genome research is making personalized medicine possible, and the advancement of information and communications technology (ICT) has shrunk time and distance in communications.

However, S&T casts shadows as well as light. Advances in S&T have contributed to serious problems for humanity, such as climate change, ethical concerns in the biosciences, nuclear proliferation, and privacy and security issues in ICT. Therefore, it is essential to curb the negative effects while developing the positive ones.

In this context, we need appropriate midterm strategies to advance two aims: economic growth and sustainability for our planet. S&T must help make economic growth compatible with sustainability, and one current challenge is to develop sources of alternative energy for transportation.

The downside of fossil fuels

In the 20th century, many advanced countries relied on fossil fuels such as coal and oil for generating energy. These energy resources have brought great benefits for large-scale economic activities, mass production, and global transportation. However, fossil fuels have a downside for humankind. Consumption of oil is responsible for emissions of greenhouse gases to the atmosphere, climate change, and air pollution. And because oil is a limited resource, it is subject to great increases in price. Therefore, Japan and the world face a daunting array of energy-related challenges.

In view of the expected increase in global energy needs and of environmental concerns, we need to make rapid progress in energy efficiency and further develop a broad range of clean alternative energy sources to reduce emissions and solve climate change problems.

Many developed countries have been making concentrated efforts to develop alternative energy sources, such as nuclear energy and solar power. I strongly believe that nuclear energy should be the main alternative to fossil fuels. In Japan, power from nuclear generation is less expensive than power generated from oil. Furthermore, climate change and escalating oil prices have persuaded some countries that had adopted a cautious stance toward nuclear energy to change their minds and seriously consider it as an alternative. The importance of power generation using nuclear energy, premised on the “3S’s” of safeguards, safety, and security, is clear and indisputable. Although developing other alternative energy sources, including solar power, is also undoubtedly important, ever-increasing energy demands cannot be met unless we use atomic energy.

Where human mobility is concerned, however, almost all types of transportation remain highly dependent on fossil fuels, because gasoline- and diesel-powered vehicles predominate throughout the world. Even countries that develop and use alternative energy generation systems cannot function without the petroleum-derived fuels that power transport. In other words, there is currently no effective alternative. This lack of substitutes contributes to skyrocketing oil prices, and every country’s dependence on gasoline and other petroleum-based fuels has given oil-exporting countries tremendous economic and political clout since the middle of the 20th century. Oil is produced in only a handful of countries, and because it is indispensable for transportation, those countries exert outsized influence on the rest of the world. The oil-producing countries sometimes restrict production and export volumes, leaving other countries to cope with higher oil prices. For the harmonious development of the world’s economy, we must take major steps to overcome the problems arising from the uneven distribution of oil.

Some advanced countries have highly developed electric public transportation networks, such as trains or subways, but these have two main drawbacks. First, such large-scale public transportation systems are applicable mainly in urban areas. In rural areas where the population is relatively sparse, such systems are not really practical. Second, automobiles offer people the freedom to move about at will. Economic development gives people the freedom to work and engage in leisure as they please, so personal mobility is important. Thus, it is sometimes difficult for people who are accustomed to personal mobility to shift to mass public transportation.

In the last decade of the 20th century, some farsighted automobile manufacturers developed hybrid vehicles. Toyota has been making and selling petroleum-electric hybrid vehicles since 1997. This system improves energy efficiency, but such vehicles still depend on gasoline for fuel. Transportation accounts for nearly 30% of all energy consumption worldwide, but because there are few sources of alternative energy in this sector, demand for oil sometimes causes prices to spike. Demand shows little price elasticity, so the pricing mechanism does not work effectively.

This illustrates the fact that transportation is much more oil-dependent than electricity generation, and existing technologies offer few fundamental solutions for alternatives in the transportation sector.

Alternative energy for transportation

It is clearly necessary to develop alternative energy sources for transportation to replace fossil fuels. Two promising technologies are electric vehicles (EVs) and fuel cell vehicles (FCVs). Developing these two key technologies in the next five years will have a decisive impact on our future and will help establish an economic mechanism by which oil prices can be contained within a reasonable range. If these two systems can be commercialized, they will help lower both oil prices and carbon dioxide emissions.

In Japan, the cost of generating nuclear power is competitive with that of thermal power generation such as oil-fired power plants. And in terms of energy for transportation, the energy source for both EVs and FCVs is electricity, which can be generated from nuclear power.

For example, EVs use electricity directly to charge their batteries, and FCVs are powered by hydrogen, which is produced using electricity. In this way, it is possible to inject nuclear energy into transportation. This structure would stabilize the price of oil and at the same time save fossil fuels and alleviate climate change, achieving sustainability for our planet.

R&D of alternative energy systems for EVs and FCVs is under way. If the technical challenges in these two promising technologies can be overcome, the energy costs of these systems could effectively place an upper limit on oil prices. In other words, it is essential to concentrate on cutting the cost of these new systems, in addition to solving the technical difficulties.

One of the key merits of FCVs is high power-generation efficiency, because unlike a conventional generation system, a fuel cell is not bound by the Carnot efficiency limit of heat engines. EVs also offer advantages. The first is that electric motors are mechanically very simple and release almost no air pollutants in operation. The second is that, whether at rest or in motion, electric vehicles typically produce less vibration and noise pollution than vehicles powered by an internal combustion engine.
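
The efficiency point can be made concrete with rounded textbook thermodynamic values; the engine temperatures in the sketch below are hypothetical, and actual vehicles of both kinds operate well below these theoretical ceilings.

    # Minimal sketch, using rounded textbook values, of why a fuel cell's
    # theoretical efficiency is not bound by the Carnot limit of a heat engine.

    # Carnot limit for a heat engine: eta = 1 - T_cold / T_hot (temperatures in kelvin)
    t_hot, t_cold = 800.0 + 273.15, 25.0 + 273.15   # hypothetical operating temperatures
    carnot_limit = 1.0 - t_cold / t_hot

    # Ideal hydrogen fuel cell at 25 C: eta = delta_G / delta_H for H2 + 1/2 O2 -> H2O(l)
    delta_g = 237.1   # kJ/mol, Gibbs free energy released by the reaction
    delta_h = 285.8   # kJ/mol, enthalpy released (higher heating value)
    fuel_cell_limit = delta_g / delta_h

    print(f"Carnot limit at {t_hot - 273.15:.0f} C / {t_cold - 273.15:.0f} C: {carnot_limit:.0%}")
    print(f"Ideal hydrogen fuel cell limit at 25 C: {fuel_cell_limit:.0%}")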

However, there are challenges to overcome before these two technologies can be widely applied. One main technological hurdle for FCVs is maintaining the integrity of the pressure vessel and the separation membrane, which degrade over the vehicle’s operating life. In the case of EVs, the drawbacks are the relatively short driving range on a single charge, limited battery life, and the large amount of electricity needed to charge the battery.

Researchers thus need to concentrate their efforts on solving these technological challenges. The sooner these new energy systems become competitive with conventional gasoline-powered vehicles, the further ahead we will be in achieving sustainability. I would like to see this new technology in viable form within the next five years, and that will require more government investment and creation of model projects in these areas.

Developing alternative energy for transportation, in sum, will stimulate competition among fuels, keep fossil fuel prices at reasonable levels, and slow climate change. Therefore, the technologies making this possible must be further developed.

Accepting limits

Up until the 20th century, Earth’s resources were effectively unlimited for our economic activity and our needs. But in the 21st century, we have come to recognize that these resources are finite. With progress in technology, automobiles are everywhere, almost everyone uses electricity, large quantities of energy are consumed, and the population has grown. Humankind has prospered up to now, but for the sake of our future survival, we must change our economic behavior and daily life to reflect the fact that Earth is finite.

A strategy for reducing transportation’s dependence on oil in the next five years is vital. However, this strategy will not become reality unless all of us, including policymakers, scientific experts, and the general public, recognize that we need to preserve our finite and priceless planet. Unless we all accept the finite capacity of Earth to sustain us, governments will not invest large amounts of money in initial R&D of new energy systems for transportation. Similarly, the public will not be motivated to shift completely from gasoline-powered vehicles to new systems if they lack awareness that natural resources are limited and that reducing greenhouse gas emissions is essential for Earth’s survival. If new energy sources for transportation become competitive, public behavior will change and vehicles propelled by new energy will sell. Energy for transportation will become less expensive and carbon dioxide–free transportation will become a reality, thus contributing to sustainability.

At the Science and Technology in Society (STS) forum, which I founded in 2004, many of our discussions have been about the relationship between humankind and nature from the perspective of S&T. Today, some may believe that nature can be controlled as a consequence of the progress of S&T. But we must recognize that human activities are also part of the universe. What we can do to harmonize our lives with nature in the future is the most important issue for humankind today.

We must ensure that economic growth and environmental preservation can coexist. But whether this sustainability will work for 50 or 100 years or whether it will last for 500 or 1,000 years into the future depends on shared awareness that the planet is finite. Our discussions at the STS forum are based on the idea that humankind is part of the universe and on the philosophy of harmony with nature.

The United Nations Climate Change Conference will be held in Copenhagen in December. At this conference, a post–Kyoto Protocol framework should be built up with the participation of all countries, including the United States, China, and India. Everyone must realize that taking this action is for the benefit of humankind.

This year’s STS forum, which will take place in Kyoto in early October, will discuss, among other themes, alternative energy for transportation, including electric- and hydrogen-powered vehicle technologies that will provide new energy sources to make transportation less oil-dependent within the next five years.

Humankind shares a common destiny. I hope that technological progress and policy action on alternative energy for transportation will benefit society and lead us on the road to sustainability, in harmony with nature for a long and bright future for humanity.

From Human Genome Research to Personalized Health Care

“Big Science” in the life sciences was launched in 1986 with a bold plan to develop the technologies to determine the sequence of the 3 billion nucleotide base pairs (letters of DNA code) in the human genome. The Human Genome Project declared success by 2001 and has stimulated a wealth of related research. Analyses of the genomes of many organisms have yielded powerful evidence of sequences conserved during evolution. Analyses of microorganisms set the stage for pathogen/host interaction studies. Essentially all fields of life sciences research have been transformed by knowledge of protein-coding genes, recognition of genomic variation across individuals, findings of new mechanisms of regulation of gene expression, and patterns of proteins and metabolites in generating the features of living organisms. From the beginning, there have been high expectations that such knowledge would enhance clinical and public health practice through understanding of predispositions to disease, identification of molecular signatures and biomarkers for stratification of patients with different subtypes of a disease, earlier diagnoses, and discovery of molecular targets for therapeutic and preventive interventions.

There has been compelling evidence for at least 150 years that genetics plays a major role in many traits and diseases. Identical twins are much more likely to manifest similar traits and develop similar diseases than are fraternal twins (or regular siblings). Modern researchers first tested individual genes that seemed scientifically related to a particular disease. Now gene chips can probe 500,000 sequences throughout the genome for variation in single-nucleotide polymorphisms (SNPs) and segments of chromosomes. Genome-wide association studies have demonstrated genetic influence on height; glucose, cholesterol, and blood pressure levels; and risks for childhood-onset and adult-onset diabetes, macular degeneration of the retina, various cancers, coronary heart disease, mental illnesses, inflammatory bowel disease, and other diseases. Enthusiasm about these statistical associations stimulated the formation of companies to offer testing services with direct-to-consumer promotion. However, the market was leaping way ahead of the science.

Serious limitations in this approach have now been recognized. First, stringent statistical criteria are required to reduce the likelihood of false-positive associations, since such large numbers of genomic variants (SNPs) are tested. Second, very few of the highly associated genomic variants actually alter protein-coding gene sequences; this is no surprise, since our 20,000 protein-coding genes take up only 1.5% of the genome sequence. Tying genomic variants to nearby protein-coding genes is highly speculative, making predictions of the functional effects of the variation quite uncertain. Third, the 20 genomic variants associated with height together account for only 3% of the actual variation in height; similarly, 20 or more genomic variants associated with a risk of diabetes account for less than 10% of the risk. The results are not a sufficient basis for predictive medicine. Undeterred, geneticists are screening a far larger set of SNPs to identify more variants of small effect and are searching for less common variants that might have larger effects on disease risk. They are also using new sequencing methods that aim to find all variation, not just sample the SNP sites. The cost of SNP genotyping is now under $1,000 per person. The cost of sequencing, meanwhile, has dropped from the original investment of $3 billion to obtain the first sequence to an estimated $10,000 to sequence an individual with the latest technology, and may reach $1,000 in the next few years.
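
The arithmetic behind the first point is easy to sketch; the 500,000-SNP figure comes from the preceding paragraph, while the 0.05 cutoff and the Bonferroni-style correction are a standard textbook illustration rather than a description of any particular study.

    # Minimal sketch of the multiple-testing arithmetic behind genome-wide
    # association studies: a nominal p < 0.05 cutoff applied to every SNP would
    # flag thousands of chance associations, so the per-SNP threshold is divided
    # by the number of tests (a Bonferroni-style correction).

    n_snps = 500_000     # SNPs probed per chip, as cited above
    alpha = 0.05         # conventional single-test significance level

    expected_false_positives = n_snps * alpha    # if no SNP were truly associated
    genome_wide_threshold = alpha / n_snps       # corrected per-SNP cutoff

    print(f"Expected false positives at p < 0.05: ~{expected_false_positives:,.0f}")
    print(f"Bonferroni-corrected threshold: p < {genome_wide_threshold:.0e}")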

I believe that much of the unexplained variation in susceptibility will be explained by nongenetic environmental and behavioral risk factors that interact with genetic variation to mediate the risk and severity of disease. We will return to this topic of “ecogenetics” and its policy implications below.

Functional genomics

DNA sequences encode inherited information. Proteins and RNA molecules interact with the DNA and histone proteins in chromosomes to regulate the expression of genes. In fact, all nucleated cells in each individual start with the same DNA; gene regulation and mutations during embryonic and later development and during the rest of life create differences among organs and cells. In concert with nongenetic variables, they influence the risk of various diseases. Just as we now have technologies to sequence genomic DNA and databases and informatics tools to interpret the laboratory output, we have developed proteomics technologies to characterize large numbers of proteins. Proteins are much more challenging to analyze, because they undergo numerous chemical modifications that generate a large number of different forms of the protein, with major differences in function. There may be as many as 1 million protein forms generated from the 20,000 genes. One way that we have evolved to have such complex functions with many fewer genes than the 50,000 to 100,000 that scientists expected to find is alternative splicing of RNA transcripts, which generates multiple protein products from a single gene; these splice isoforms represent a new class of potential protein biomarkers for cancers and other diseases.

Powerful computational methods are required for multidimensional analyses that capture variation in genome sequence, chromosome structure, gene regulation, proteins, and metabolites. Such molecular signatures can be useful for deeper understanding of the complex biology of the cell and for tests of diagnosis and prognosis. However, it has been difficult to design and validate clinical tests with the high specificity (few false positives) and high sensitivity (few false negatives) needed to be useful in screening populations with low prevalence of a disease; the Food and Drug Administration (FDA) has approved very few new diagnostic tests in the past decade. Numerous publications have reported molecular signatures based on gene or protein expression for cancers and other diseases, but replication of this work in additional patients and laboratories depends on promising new technologies.
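
A minimal sketch with hypothetical numbers shows why low prevalence is so punishing: even a test that detects 95% of true cases and correctly clears 95% of healthy people yields mostly false positives when only one person in a thousand actually has the disease.

    # Minimal sketch (hypothetical numbers): the positive predictive value of a
    # screening test collapses when the disease being screened for is rare.

    prevalence = 0.001      # 1 case per 1,000 people screened
    sensitivity = 0.95      # fraction of true cases the test detects
    specificity = 0.95      # fraction of healthy people correctly cleared

    true_positives = prevalence * sensitivity
    false_positives = (1.0 - prevalence) * (1.0 - specificity)
    ppv = true_positives / (true_positives + false_positives)

    print(f"Chance that a positive result is a true case: {ppv:.1%}")

Under these assumptions, fewer than 2 of every 100 positive results reflect a true case, which illustrates why validating molecular signatures for population screening is so demanding.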

Systems biology/systems medicine

Complex biological functions may be disrupted by mutations in individual genes. Diagnosing these usually rather rare disorders has been quite successful, often with specific gene tests, followed by counseling for families. Common diseases are much harder. We now recognize that the generation of complex functions requires many gene-gene and gene-environment interactions acting over time. The field of systems biology is devoted to identifying and characterizing the pathways, networks, and modules of these genes and gene-regulatory functions. The significance of this field is profound, because therapies or preventive interventions in medicine may require subtle modification of entire networks rather than highly targeted, high-dose action on just one gene product such as a cell receptor or an enzyme. This concept will drastically alter our approaches to drug discovery and may enhance the ratio of therapeutic benefit to adverse effects. Understanding the interactions of pathways in cancers, cardiovascular diseases, the nervous system, or inflammatory disorders is likely to lead us to target more than one pathway. In the case of cancers, we should take a hint from combination therapy for microbial infections and design multi-target therapies that could both hit cancer stem cells and prevent the emergence of resistant cancer cells. These approaches may require major revisions of FDA policies governing the drug-development/drug-approval process, which are barriers to combination therapies.

Pharmacogenetics and pharmacogenomics

Although it is well known that patients vary remarkably in their responses to most drugs, drug development and clinicians’ prescriptions generally are still designed for the average patient. For example, effective tests developed 50 years ago to identify patients at high risk for potentially lethal effects from muscle relaxants used in anesthesia are still not incorporated into standard medical practice. Recently, the FDA recommended the use of two gene tests to help doctors choose the initial dose for the anticoagulant warfarin, but many physicians are rightly skeptical about the practical value of the tests because they may not yield information in time for the initial doses and are often no more informative than the response to a standard first dose. Knowing the genotype in advance might help the few percent of patients with very high (or very low) sensitivity to the drug. Comparative effectiveness/cost-benefit analyses for such testing are under way.

Genomics and proteomics will eventually be important in earlier detection of adverse effects of drugs and dietary supplements in susceptible individuals, transforming toxicology from a descriptive to a predictive science.

Ecogenetics

One of the biggest challenges for realizing the medical and public health benefits of genomics is the capture of the variation in nongenetic environmental and behavioral risk factors for disease and the discovery of gene-environment interactions. Environmental factors include infections, diet, nutrition, stress, physical activity, pollutants, pesticides, radiation, noise and other physical agents, herbal medicines, smoking, alcohol, and other prescription and nonprescription drugs. Such exposures may cause mutations in genes and transient or heritable modifications in the methylation patterns of histone proteins and DNA in the chromosomes. These variables also affect responses to therapy.

Infectious diseases offer numerous opportunities for personalized treatment, because both the patient and the infectious agent can be genotyped, and interactions may be critical for the choice of therapy. The development of vaccines for particularly troublesome infections such as HIV, tuberculosis, malaria, and influenza requires much more knowledge than we currently have about the pathogens and susceptible human subgroups. Genomics is being incorporated into surveillance outposts around the world to detect the emergence of new strains of pathogens in animals and animal handlers, which may reduce the risks of future pandemics.

Personalized health care

This phrase is understandably very popular. It reflects the admirable goal of tailoring the treatment to the patient and the fact that different people with the same diagnosis may have multiple underlying mechanisms of disease and may require quite different therapies. With many widely used drugs, fewer than 30% of patients treated actually experience a benefit, and some of these may be getting better on their own or through placebo effects. The path to the ideal of predictive, preventive, personalized, and participatory (P4) health care must proceed through several complex steps. There must be sufficient evidence at molecular, physiological, and clinical levels to subtype patient groups and stratify them for targeted therapy or prevention. For example, specific subgroups of leukemia, breast cancer, and colon cancer patients can now be treated with molecularly targeted drugs. Conversely, anticancer drugs that target epidermal growth factor receptors have no benefit in the approximately one-half of colon cancer patients who have a particular gene variant in a complementary pathway. There is a big leap from carefully selected patients in a randomized clinical trial of efficacy to evidence of effectiveness in patients with many coexisting diseases being cared for in the community. Similarly, the comparative effectiveness of medical devices and surgical procedures may depend on many practical details of access to timely care in the real world. In addition, physicians may continue to use a drug or device with even a low probability of benefit if no better therapy is available. Proving no benefit is difficult, and patients often demand a specific treatment.

Information about variation in responses to drugs and tests to guide clinical decisionmaking is available in the PharmacoGenomics Knowledge Base (www.pharmGKB.org). With the present push to install electronic health records, complex results for individual patients could be click-linked to resources for the interpretation of such tests. As information from molecular tests and imaging becomes much more complex, the routine admonition to “ask your doctor” must be supplemented by effective guidance to the doctor to tap into additional online resources.

Policy challenges

The long-awaited passage in 2008 of the Genetic Information Non-discrimination Act (GINA) helps clarify the rules for ensuring the privacy and confidentiality of personal health information and prohibits discrimination in health insurance and employment tied to genetic traits. Senator Ted Kennedy (D-MA) described GINA as “the first major new civil rights bill of the new century.” Of course, such protections should apply to all personal health information, especially in this electronic age; many privacy issues remain unresolved. The Department of Health and Human Services (DHHS) Office of the National Coordinator for Health Information Technology and the DHHS and National Institutes of Health (NIH) Offices of Protection of Participants in Biomedical Research will be important players as medical information becomes increasingly electronic.

A major federal policy plan with commitment to interagency cooperation is needed in the domain of ecogenetics. Linking medical and environmental data sets is complicated because patient information in genomic studies is routinely de-identified to protect patient privacy and confidentiality. Proper management of coded information could facilitate links between genomic labs and large-scale monitoring such as the periodic National Health and Nutrition Examination Survey of the Centers for Disease Control and Prevention; the air, water, and waste-site pollution monitoring conducted by the Environmental Protection Agency and state and metropolitan agencies; and population-based epidemiology studies of conditions such as childhood cancers. Statisticians have methods for imputing reasonable estimates of exposures in neighborhoods and for individuals; these data could be merged with information from increasingly affordable molecular and genomic assays. The Genes and Environment Initiative within NIH, co-led by the National Institute of Environmental Health Sciences and the National Human Genome Research Institute, has invested in new exposure-measurement technologies. Making these links work is critical to realizing the benefits of our rapidly accelerating knowledge of genomic variation.
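
To make the kind of linkage described above concrete, the short Python sketch below joins de-identified genomic results to neighborhood-level exposure estimates using only a coded study identifier and a coarse geographic code. The column names and values are hypothetical and invented for illustration; they are not any agency’s actual schema, and in practice such a link would operate under the privacy and consent protections discussed in this article.

import pandas as pd

# De-identified genomic results: coded study IDs and census-tract codes only.
genomic = pd.DataFrame({
    "study_id": ["A001", "A002", "A003"],
    "tract": ["06085-5009", "06085-5010", "06085-5011"],
    "variant_x_carrier": [True, False, True],
})

# Imputed neighborhood-level exposure estimates keyed by the same tract codes.
exposure = pd.DataFrame({
    "tract": ["06085-5009", "06085-5010", "06085-5011"],
    "pm25_annual_ugm3": [11.2, 9.8, 13.5],
})

# Link on the geographic code rather than on any personal identifier.
linked = genomic.merge(exposure, on="tract", how="left")
print(linked)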

The new concepts for drug discovery and biomarkers that emerge from systems biology and pharmacogenomics are receiving attention at the FDA. Standardized requirements for clinical chemistry must be incorporated into academic/industry partnerships for drug studies and trials of biomarkers.

The National Coalition for Health Professional Education in Genetics aims to increase genetic literacy as a foundation for consumer discussions and decisions. The focus of several state health departments and consumer protection agencies on the tests and advertisements for tests for personalized genomic risks is timely, because the genome variants associated with particular diseases presently account for too small a portion of the risk to support credible conclusions about an individual’s risk.

With the necessary research of all kinds and with effective interagency partnerships, we can expect to see the following benefits emerge in the near future:

  • Enormous expansion of information about the complex molecular biology of many common diseases from the sequencing of DNAs and RNAs and the study of proteins and metabolites, with costs as low as $1,000 for an individual human genome sequence;
  • Gene-, organ-, and cause-specific molecular signature tests for several diseases;
  • Systems-, pathway-, and network-based foundations for some new drugs and drug combinations, probably for the treatment of cancers, brain disorders, and cardiovascular and liver diseases;
  • Advances in pharmacogenomics for drug approvals and e-prescribing guidelines, providing advice for patients centered on more-refined diagnoses and more-effective, safer therapies;
  • Information about modifiable environmental and behavioral factors tied to genotypes and disease risks for public health and personal actions; and
  • A better basis for consumer genomics, starting with advice that broad public health measures—increased physical activity, good nutrition, and control of blood pressure, cholesterol, weight, blood glucose, and infectious and chemical exposures—have multi-organ benefits that surely swamp the effects of statistically associated genome-based risk factors. Hopefully, we will gain evidence about whether knowledge of genetic predispositions motivates people to pursue healthier behaviors.

From the Hill – Spring 2009

Economic stimulus bill provides major boost for R&D

The $790-billion economic stimulus bill signed by President Obama on February 17 contains $21.5 billion in federal R&D funding—$18 billion for research and $3.5 billion for facilities and large equipment. The final appropriation was more than the $17.8 billion approved in the Senate or the $13.2 billion approved in the House version of the bill. For a federal research portfolio that has been declining in real terms since fiscal year (FY) 2004, the final bill provides an immediate boost that allows federal research funding to see a real increase for the first time in five years.

The stimulus bill, which is technically an emergency supplemental appropriations bill, was approved before final work had been completed on funding the federal government for FY 2009. Only 3 of 12 FY 2009 appropriations bills have been approved (for the Departments of Defense, Homeland Security, and Veterans Affairs). All other federal agencies are operating at or below FY 2008 funding levels under a continuing resolution (CR) through March 6.

Under the CR and the few completed FY 2009 appropriations, the federal research portfolio stands at $58.3 billion for FY 2009, up just 0.3% (less than inflation), but after the stimulus bill and assuming that final FY 2009 appropriations are at least at CR levels, the federal research portfolio could jump to nearly $75 billion.

Basic competitiveness-related research, biomedical research, energy R&D, and climate change programs are high priorities in the bill. The National Institutes of Health (NIH) will receive $10.4 billion, which would completely turn around an NIH budget that has been in decline since 2004 and could boost the total NIH budget to $40 billion, depending on the outcome of NIH’s regular FY 2009 appropriation.

The National Science Foundation (NSF), the Department of Energy (DOE) Office of Science, and the National Institute of Standards and Technology (NIST)—the three agencies highlighted in the America COMPETES Act of 2007 and President Bush’s American Competitiveness Initiative—would all be on track to double their budgets over 7 to 10 years. NSF will receive $3 billion, DOE’s Office of Science $1.6 billion, and NIST $600 million.

DOE’s energy programs are also winners, with $3.5 billion for R&D and related activities in renewable energy, energy conservation, and fossil energy, part of the nearly $40 billion total for DOE in weatherization, loan guarantees, clean energy demonstration, and other energy program funds. DOE will receive $400 million to start up the Advanced Research Projects Agency–Energy (ARPA-E), a new research agency authorized in the America COMPETES Act but not funded until now.

The bill will provide money for climate change–related projects in the National Aeronautics and Space Administration and the National Oceanic and Atmospheric Administration (NOAA). There is also money for science and technology–related programs outside of R&D, for higher education construction, and for other education spending of interest to academia.

The bill provides billions of dollars for universities to construct or renovate laboratories and to buy research equipment, as well as money for federal labs to address their infrastructure needs. The bill provides $3.5 billion for R&D facilities and capital equipment to pay for the repair, maintenance, and construction of scientific laboratories as well as large research equipment and instrumentation. Considering that R&D facilities funding totaled $4.5 billion in FY 2008, half of which went to just one laboratory (the International Space Station), the $3.5-billion supplemental will be an enormous boost in the federal government’s spending on facilities.

Obama cabinet picks vow to strengthen role of science

Key members of President Obama’s new cabinet are stressing the importance of science in developing policy as well as the need for scientific integrity and transparency in decisionmaking.

In one of his first speeches, Ken Salazar, the new Secretary of the Interior, told Interior Department staff that he would lead with “openness in decisionmaking, high ethical standards, and respect to scientific integrity.” He said decisions will be based on sound science and the public interest, not special interests.

Lisa Jackson, the new administrator of the Environmental Protection Agency (EPA), said at her confirmation hearing that “science must be the backbone of what EPA does.” Addressing recent criticism of scientific integrity at the EPA, she said that “political appointees will not compromise the integrity of EPA’s technical experts to advance particular regulatory outcomes.”

In a memo to EPA employees, Jackson noted, “I will ensure EPA’s efforts to address the environmental crises of today are rooted in three fundamental values: science-based policies and programs, adherence to the rule of law, and overwhelming transparency.” The memo outlined five priority areas: reducing greenhouse gas emissions, improving air quality, managing chemical risks, cleaning up hazardous waste sites, and protecting America’s water.

New Energy Secretary Steven Chu, a Nobel Prize–winning physicist and former head of the Lawrence Berkeley National Laboratory, emphasized the key role science will play in addressing the nation’s energy challenges. In testimony at his confirmation hearing, Chu said that “the key to America’s prosperity in the 21st century lies in our ability to nurture and grow our nation’s intellectual capital, particularly in science and technology.” He called for a comprehensive energy plan to address the challenges of climate change and threats from U.S. dependence on foreign oil.

In other science-related picks, the Senate confirmed Nancy Sutley as chair of the Council on Environmental Quality at the White House. Awaiting confirmation as this issue went to press were John Holdren, nominated to be the president’s science advisor, and Jane Lubchenco, nominated as director of NOAA.

Proposed regulatory changes under review

As one of its first acts, the Obama administration has halted all proposed regulations that were announced but not yet finalized by the Bush administration until a legal and policy review can be conducted. The decision means at least a temporary stop to certain controversial changes, including a proposal to remove gray wolves in the northern Rocky Mountains from Endangered Species Act (ESA) protection.

However, the Bush administration was able to finalize a number of other controversial changes, including a change in implementation of the ESA that allows agencies to bypass scientific reviews of their decisions by the Fish and Wildlife Service or the National Marine Fisheries Service. In addition, the Department of the Interior finalized two rules: one that allows companies to dump mining debris within a current 100-foot stream buffer and one that allows concealed and loaded guns to be carried in national parks located in states with concealed-carry laws.

Regulations that a new administration wants to change but that have already been finalized must undergo a new rulemaking process, often a lengthy procedure. However, Congress can halt rules that it opposes, either by not funding implementation of the rules or by voting to overturn them. The Congressional Review Act allows Congress to vote down recent rules with a resolution of disapproval, but this technique has been used only once and would require separate votes on each regulation that Congress wishes to overturn. House Natural Resources Chairman Nick Rahall (D-WV) and Select Committee on Global Warming Chairman Ed Markey (D-MA) have introduced a measure that would use the Congressional Review Act to freeze the changes to the endangered species rules.

Members of Congress have introduced legislation to expand their options to overturn the rules. Rep. Jerrold Nadler (D-NY), chair of the House Judiciary Subcommittee on the Constitution, Civil Rights and Civil Liberties, has introduced a bill, the Midnight Rule Act, that would allow incoming cabinet secretaries to review all regulatory changes made by the White House within the last three months of an administration and reverse such rules without going through the entire rulemaking process.

Witnesses at a February 4 hearing noted, however, that every dollar that goes into defending or rewriting these regulations is money not spent advancing a new agenda, so the extent to which agencies and Congress will take on these regulatory changes remains to be seen.

Democrats press action on climate change

Amid efforts to use green technologies and jobs to stimulate the economy, Congress began work on legislation to cap greenhouse gas emissions that contribute to climate change. At a press conference on February 3, Barbara Boxer (D-CA), chair of the Senate Environment and Public Works Committee, announced a broad set of principles for climate change legislation. They include setting targets that are guided by science and establishing “a level global playing field, by providing incentives for emission reductions and effective deterrents so that countries contribute their fair share to the international effort to combat global warming.” The principles also lay out potential uses for the revenues generated by establishing a carbon market.

Also addressing climate change is the Senate Foreign Relations Committee, which on January 28 heard from former Vice President Al Gore, who pushed for domestic and international action to address climate change. Gore urged Congress to pass the stimulus bill because of its provisions on energy efficiency, renewable energy, clean cars, and a smart grid. He also called for a cap on carbon emissions to be enacted before the next round of international climate negotiations in Copenhagen in December 2009.

In the House, new Energy and Commerce Chair Henry Waxman (D-CA), who ousted longtime chair John Dingell (D-MI) and favors a far more aggressive approach to climate change legislation, said that he wants a bill through his committee by Memorial Day. Speaker Nancy Pelosi (D-CA) would like a bill through the full House by the end of the year.

A hearing of Waxman’s committee on climate change featured testimony from members of the U.S. Climate Action Partnership, a coalition of more than 30 businesses and nongovernmental organizations, which supports a cap-and-trade system with a 42% cut in carbon emissions from 2005 levels by 2030 and reductions of 80% by 2050. Witnesses testified that a recession is a good time to pass this legislation because clarity in the law would illuminate investment opportunities.

Energy and Environment Subcommittee Chair Ed Markey (D-MA) has said that he intends to craft a bill that draws on existing proposals, including one developed at the end of the last Congress by Dingell and former subcommittee chair Rick Boucher (D-VA). Markey’s proposal is also likely to reflect a set of principles for climate change that he announced last year, along with Waxman and Rep. Jay Inslee (D-WA). The principles are based on limiting global temperature rise to 2 degrees Celsius.

President Obama has also taken steps to address greenhouse gas emissions. He directed the EPA to reconsider whether to grant California a waiver to set more stringent automobile standards. California has been fighting the EPA’s December 2007 decision to deny its efforts to set standards that would reduce carbon dioxide emissions from automobiles by 30% by 2016. If the waiver is approved, 13 other states have pledged to adopt the standards. Obama also asked the Department of Transportation to establish higher fuel efficiency standards for carmakers’ 2011 model year.

Biological weapons threat examined

The Senate and the House held hearings in December 2008 and January 2009, respectively, to examine the findings of the report A World at Risk, by the Commission on the Prevention of Weapons of Mass Destruction, Proliferation and Terrorism. At the hearings, former Senators Bob Graham and Jim Talent, the commission chair and vice chair, warned that “a terrorist attack involving a weapon of mass destruction—nuclear, biological, chemical, or radiological—is more likely than not to occur somewhere in the world in the next five years.”

Graham and Talent argued that although the prospect of a nuclear attack is a matter of great concern, the threat of a biological attack poses the more immediate concern because of “the greater availability of the relevant dual-use materials, equipment, and know-how, which are spreading rapidly throughout the world.”

That view was supported by Senate Homeland Security and Governmental Affairs Committee chairman Joe Lieberman (I-CT) and ranking member Susan Collins (R-ME). Both recognized that although biotechnology research and innovation have created the possibility of important medical breakthroughs, the spread of the research and the technological advancements that accompany innovations have also increased the risk that such knowledge could be used to develop weapons.

Graham and Talent acknowledged that weaponizing biological agents is still difficult and stated that “government officials and outside experts believe that no terrorist group has the operational capability to carry out a mass-casualty attack.” The larger risk, they said, comes from rogue biologists, as is believed to have been the case in the 2001 anthrax incidents. Currently, more than 300 research facilities in government, academia, and the private sector in the United States, employing about 14,000 people, are authorized to handle pathogens. The research is conducted in high-containment laboratories.

The commission said it was concerned about the lack of regulation of unregistered BSL-3 research facilities in the private sector. These labs have the necessary tools to handle anthrax or synthetically engineer a more dangerous version of that agent, but whether they have implemented appropriate security measures is often not known.

For this reason, the commission recommended consolidating the regulation of registered and unregistered high-containment laboratories under a single agency, preferably the Department of Homeland Security or the Department of Health and Human Services. Currently, regulatory oversight of research involves the Department of Agriculture and the Centers for Disease Control and Prevention, with security checks performed by the Justice Department.

Collins has repeatedly stated the need for legislation to regulate biological pathogens, expressing deep concern over the “dangerous gaps” in biosecurity and stressing the importance of drafting legislation to close them.

In the last Congress, the Select Agent Program and Biosafety Improvement Act of 2008 was introduced to reauthorize the select agent program but did not pass. The bill aimed at strengthening biosafety and security at high-containment laboratories. It would not have restructured agency oversight. No new bills have been introduced in the new Congress.

Before leaving office, President Bush on January 9 signed an executive order on laboratory biosecurity that established an interagency working group, co-chaired by the Departments of Defense and Health and Human Services, to review the laws and regulations on the select agent program, personnel reliability, and the oversight of high-containment labs.

Multifaceted ocean research bill advances

The Senate on January 15, 2009, approved by a vote of 73 to 21 the Omnibus Public Lands Management Act of 2009, a package that includes five bills authorizing $794 million for expanded ocean research through FY 2015, including $104 million authorized for FY 2009, along with a slew of other wilderness conservation measures. The House is expected to take up the bill.

The first of the five bills, the Ocean Exploration and NOAA Undersea Research Act, authorizes the National Ocean Exploration Program and the National Undersea Research Program. The act prioritizes research on deep ocean areas, calling for study of hydrothermal vent communities and seamounts, documentation of shipwrecks and submerged sites, and development of undersea technology. The bill authorizes $52.8 million for these programs in FY 2009, increasing to $93.5 million in FY 2015.

The Ocean and Coastal Mapping Integration Act authorizes an integrated federal plan to improve knowledge of unmapped maritime territory, which currently comprises 90% of all U.S. waters. Calling for improved coordination, data sharing, and mapping technology development, the act authorizes $26 million for the program along with $11 million specifically for Joint Ocean and Coastal Mapping Centers in FY 2009. These quantities would increase to $45 million and $15 million, respectively, beginning in FY 2012.

The Integrated Coastal and Ocean Observation System Act (S.171) authorizes an integrated national observation system to gather and disseminate data on an array of variables from the coasts, oceans, and Great Lakes. The act promotes basic and applied research to improve observation technologies, as well as modeling systems, data management, analysis, education, and outreach through a network of federal and regional entities. Authorization levels for the program are contingent on the budget developed by the Interagency Ocean Observation Committee.

The Federal Ocean Acidification Research and Monitoring Act establishes a coordinated federal research strategy to better understand ocean acidification. In addition to contributing to climate change, increased emissions of carbon dioxide are making the ocean more acidic, with resulting effects on corals and other marine life. The act authorizes $14 million for FY 2009, increasing to $35 million in FY 2015.

The fifth research bill included in the omnibus package, the Coastal and Estuarine Land Protection Act, creates a competitive state grant program to protect threatened coastal and estuarine areas with significant conservation, ecological, or watershed protection values, or with historical, cultural, or aesthetic significance.


“From the Hill” is prepared by the Center for Science, Technology, and Congress at the American Association for the Advancement of Science (www.aaas.org/spp) in Washington, D.C., and is based on articles from the center’s bulletin Science & Technology in Congress.

Forum – Spring 2009

Low carbon fuels

In “Low Carbon Fuel Standards” (Issues, Winter 2009), Daniel Sperling and Sonia Yeh argue for California’s proposed low carbon fuel standard (LCFS). Improving the sustainability of transportation has long been one of our goals at Shell.

Shell believes that reducing the carbon emissions from the transportation sector is an important part of the overall effort to address climate change. We believe that in order to effectively reduce emissions from the transportation sector, policies need to focus on fuels, vehicles, and consumer choice. Only by addressing all three aspects of the transportation equation will emissions actually be reduced.

Shell has been working on the development of low-carbon fuels, such as biofuels, for many years. Shell is already a global distributor of about 800 million gallons of ethanol from corn and sugar cane each year. But Shell recognizes that this is only a starting place, and we are actively investing in a range of initiatives to develop and transition to second-generation (also called next-generation) biofuels.

For instance, Shell is developing ethanol from straw and is pursuing several promising efforts, such as a process that uses marine algae to create biodiesel. Shell is also working on a process to convert biomass to gasoline, which can then be blended with normal gasoline, stored in the same tanks, pumped through the same pipes, and distributed into the same vehicles that use gasoline today, thereby eliminating the need to build massive new infrastructure or retrofit existing infrastructure.

Shell believes that fuel regulatory programs should create incentives to encourage the use of the most sustainable biofuels with the best well-to-wheel greenhouse gas reduction performance. In the interim, however, petroleum will continue to play an important role in transportation as an easily accessible and affordable fuel for which commercial-scale delivery infrastructure already exists.

We hope that as policymakers move forward, they will fully evaluate the economic impacts of a transportation fuel greenhouse gas performance standard policy and recognize that it will take some time to get the science right. Moreover, we encourage policymakers to consider the significant challenges we face in moving from the lab to commercial-level production of the fuel and vehicle technologies that California seeks to incentivize through the LCFS. In our view, it will be critically important to establish a process to periodically assess progress against goals and to make adjustments as necessary, because the timeline for commercialization of new technologies is difficult to predict and the commercial success of those technologies ultimately depends on consumer acceptance.

With populations, fossil fuel use, and carbon dioxide (CO2) levels continuing to grow rapidly, we have no time to lose to enact policies to reduce CO2 emissions. To be successful, the regulatory requirements must be challenging, yet achievable. We are hopeful that as California continues to debate its low-carbon fuel policy, it will promulgate such requirements.

MARVIN ODUM

President

Shell Oil Company

Houston, Texas


To achieve the deep reductions in global warming pollution necessary to avoid the worst consequences of global warming, the transportation sector must do its fair share to reduce emissions. Daniel Sperling and Sonia Yeh have provided a comprehensive and compelling summary of a groundbreaking policy to reduce global warming pollution from transportation fuels. Low-carbon fuel standards (LCFSs), such as the standard that California is planning to adopt this spring, can help reduce the carbon intensity of transportation and are a perfect complement to vehicle global warming standards and efforts to reduce vehicle miles traveled. President Obama has in the past expressed support for LCFSs, even introducing federal legislation for a national standard.

The article provides a rich discussion of some of the design challenges posed by an LCFS. A major analytical challenge is quantifying emissions from biofuels, particularly emissions associated with indirect changes in land use induced by increased production of biofuels feedstocks. The Sperling and Yeh paper raises key issues associated with quantifying emissions, but does not address the question of how to account for CO2-equivalent emissions (CO2e) over time. Biofuels that directly or indirectly cause land conversion can produce a large initial release of CO2e, because of deforestation and other impacts, while their benefits accrue only over time. Other transportation fuels may have land-use impacts, but the impact from biofuels dwarfs that of most other fuels.

Thus far, analyses of emissions from biofuels have used an arbitrary time period, such as 30 years, and then treated all emissions or avoided emissions within this period as equivalent. This approach is consistent with how regulatory agencies have traditionally evaluated the benefits of reducing criteria pollutant emissions. Because criteria pollutants have a short residence time in the atmosphere, it is appropriate to account for their emissions in tons per day or per year.

But greenhouse gases have a long residence time of decades or even centuries, and the radiative forcing of a ton of carbon emitted today will warm and damage the planet continuously while we wait for promised benefits to accrue in the future. Two quantities are of special importance: the actual amount of climate-forcing gases in the atmosphere at future dates and the cumulative radiative forcing, which is a measure of the relative global warming impact. As a general rule, any policy designed to reduce emissions should reduce the cumulative radiative forcing that the planet experiences by the target date. Compared to the conventional approach of averaging emissions over 30 years, a scientific approach based on the cumulative radiative forcing leads to a higher carbon intensity from fuels that cause land conversion.
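
To make the contrast concrete, here is a back-of-the-envelope sketch in Python. The numbers are purely hypothetical, and the weighting is a simplification that assumes the CO2e does not decay over the period; it is not the methodology used by the authors, by California, or by any agency. It simply shows how a fuel that looks beneficial under 30-year averaging can still impose a net warming burden by the target date once the timing of emissions is taken into account.

PULSE = 900.0          # one-time land-conversion release, g CO2e per MJ of fuel (hypothetical)
ANNUAL_SAVING = 40.0   # yearly avoided fossil emissions, g CO2e per MJ (hypothetical)
HORIZON = 30           # accounting period, years

# Conventional approach: spread the pulse evenly over the horizon and net it
# against the annual savings.
conventional = PULSE / HORIZON - ANNUAL_SAVING
print(f"Conventional annualized intensity: {conventional:+.1f} g CO2e/MJ")

# Cumulative-forcing-style approach: weight each emission by the number of
# years it resides in the atmosphere before the target date.
pulse_burden = PULSE * HORIZON                      # emitted in year 0, present for all 30 years
saving_burden = sum(ANNUAL_SAVING * (HORIZON - t)   # later savings count for fewer years
                    for t in range(1, HORIZON + 1))
net_burden = pulse_burden - saving_burden           # gram-years of excess CO2e burden per MJ
print(f"Net cumulative burden by the target date: {net_burden:+.0f} g-yr CO2e/MJ")

With these illustrative numbers, the conventional calculation comes out slightly negative (an apparent benefit), while the cumulative accounting is strongly positive, because the early pulse accumulates forcing for the entire period and the savings arrive too late to offset it by the target date.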

If implemented well, an LCFS will drive investment in low-carbon, sustainable fuels and help reach global warming emission reduction targets. A key challenge is to make sure that the life-cycle CO2e estimates are based on sound science, including appropriate accounting for indirect land conversion and emissions over time.

PATRICIA MONAHAN

Deputy Director for Clean Vehicles

JEREMY MARTIN

Senior Scientist, Clean Vehicles

Union of Concerned Scientists

Berkeley, California


Unburdening science

I resonate fully with the spirit and content of Shawn Lawrence Otto and Sheril Kirshenbaum’s “Science on the Campaign Trail” (Issues, Winter 2009). Science Debate 2008 was an unprecedented event that garnered substantial public attention and helped the campaigns hone their own policies. Science Debate 2008 also mobilized an unprecedented focus by the scientific community on an election. Have 30,000 scientists ever lined up behind any other social or political effort?

The aftermath of the election has been tremendously encouraging. President Obama has said all the right things about the role of science in his administration and has delivered on a promise to surround himself with first-rate scientists. In my view, no administration has ever assembled such a highly qualified scientific brain trust. This group is clearly equal to the task of tackling the national priorities highlighted by Science Debate 2008, such as climate, energy, and biomedical research. Moreover, the economic stimulus package includes substantial investments in science, science infrastructure, and some, though I would argue not enough, investment in science education. The new administration and congressional leadership do seem to understand the role of science in solving societal problems, including the economy.

I am concerned, however, that amid all the euphoria we could lose sight of the need to attend to a group of science policies not discussed in the Science Debate 2008 list. These relate not to the use of science to inform broader national policies but to the conduct of science itself. The efficiency and effectiveness of scientific research, and its ability to contribute to national needs, are heavily affected by the full array of policies surrounding the conduct of science, and many of them need streamlining and reformulation. At a minimum, they require rationalization across agencies and institutions that set and monitor them.

According to a 2007 survey by the U.S. Federal Demonstration Partnership (A Profile of Federal-Grant Administrative Burden among Federal Demonstration Partnership Faculty), 42% of the time that faculty devote to research is spent on pre- and post-award administrative activities. Much of this burden results from differences in policies and procedures across the federal government. Each agency seems to find it necessary to design its own idiosyncratic forms and rules for reporting on common issues such as research progress, revenue management, and the protection of animal and human subjects. New post-9/11 security concepts such as “dual-use research” or “sensitive but unclassified science” have added substantially to the workload.

Although the need for rules and procedures in these areas is undeniable, the variation among agencies creates an inexcusable and wasteful burden on scientists. Consuming this much of a researcher’s productive time with administrative matters is indefensible. One of the first tasks of the new scientific leadership in Washington should be to review all existing practices and then develop a single set of rules, procedures, and forms that applies to all agencies.

ALAN LESHNER

Chief Executive Officer

American Association for the Advancement of Science

Washington, DC


Manufacturing revival

Susan Helper’s “The High Road for U.S. Manufacturing” (Issues, Winter 2009) is very timely given the meltdown in the auto industry and its impact on small manufacturers. Helper provides a very balanced analysis of the strengths and challenges of the Manufacturing Extension Partnership (MEP) program. There are more than 24,000 manufacturers in Ohio. As you might imagine, manufacturers who are not supplying the auto industry are doing much better than those who are. MAGNET, based in Cleveland, and its sister organization TechSolve in Cincinnati are helping companies implement growth strategies. Small manufacturers have cut waste from their processes, started energy-saving programs, reduced labor, and outsourced work, but as Helper points out, these strategies alone will not lead to jobs or growth. Successful companies recognize that they must be innovative with products, markets, and services or face the inevitable demise of their business. In short, you cannot cost-cut your way to survival or growth.

Ironically, the national MEP is driven by metrics primarily focused on capital investments and cost-savings programs such as lean manufacturing. Lean manufacturing, Six Sigma, and other quality improvement and waste reduction programs are essential for global competitiveness; these efficiency programs have become standard business practice. With assistance from MEP staff and consultants, and in most cases using internal staff, companies have made significant gains in productivity and quality. The results are measurable, and the outcome is a globally competitive manufacturing sector. But, as managers would agree, what gets measured is what gets done. The MEP program must start measuring and auditing outcomes that will drive innovation and job creation.

To maintain our status as an MEP Center in good standing, the outcomes reported by our clients to the national auditor must meet or exceed performance thresholds established by the National Institute of Standards and Technology. That process ensures a return on the taxpayers’ investment and validates the effectiveness of the federal program. However, as with any system of evaluation, the metrics need to be reviewed periodically to be sure the right things are being measured. Priorities have to be adjusted based on current circumstances and desired outcomes. Helper points this out in her article. The national MEP metrics, developed more than a decade ago, don’t reflect the current state of emergency in manufacturing. For example, job creation and retention are not a priority: MEP Centers must capture the data, but that information does not affect a center’s performance standing. Enough said; revamping the evaluation system is long overdue.

Among the 59 centers across the nation, the Ohio MEP ranks in the top four in job creation and retention. We recognize that when the economy rebounds—and it will—it is far easier to emerge from the rubble if you have retained the talent and skills necessary to rebuild your economy. Helping more companies weather the economic storm requires two things: (1) metrics that emphasize growth strategies and job creation or retention, and (2) state and federal dollars to expand the reach of the MEP.

FATIMA WEATHERS

Chief Operating Officer

MEP Director

Manufacturing Advocacy and Growth Network

Cleveland, Ohio


Flood protection

“Restoring and Protecting Coastal Louisiana” by Gerald E. Galloway, Donald F. Boesch, and Robert R. Twilley (Issues, Winter 2009) should be required reading for every member of Congress and every member of the newly appointed presidential administration. It graphically outlines the drastic consequences of the failure of this nation to initiate and implement a system to prioritize the allocation of funds for critical water resource projects.

The tragic history of the continued loss of Louisiana’s coastline—beginning with the construction of a massive levee system after the flood of 1927—and the dire implications of that loss for the region and nation have unfortunately created a poster child for the need for such an initiative. Louisiana does not stand alone: The Chesapeake Bay, upper Mississippi, Great Lakes, Puget Sound, and the Everglades each require federal assistance predicated on policies that define national goals and objectives. Louisiana has restructured its government, reallocated its finances to channel efforts to address this catastrophe, and adopted a comprehensive plan to establish a sustainable coastline based on the best science and engineering. Louisiana fully recognizes that in order to combat its ever deteriorating coast, difficult and far-reaching changes are required. The consequences of failure to respond in that fashion far outweigh the cost and inconvenience of such action.

No state in the Union has the financial capacity to meet such challenges on its own, and in this case vital energy and economic assets are at stake. Unfortunately, we are dealing with a federal system that is functionally inept, with no clarity for defining national goals and objectives. Contradictory laws and policies among federal agencies consistently impede addressing such issues directly and urgently. Funding, when approved, is generally based on Office of Management and Budget guidelines with little or no relationship to the needs of the country as a whole or to the scientific and engineering decisions required to achieve sustainability. Accountants and auditors substitute their myopic views for solid scientific and engineering advice. Federal agencies, including the Army Corps of Engineers, are virtually hamstrung by historic process and inconsistent policies that have little relationship to federally mandated needs assessments. Finally, Congress has historically reviewed these issues from a purely parochial posture, often authorizing funds on the basis of political merit. In the process, the greater needs of the public as a whole are generally forgotten.

The time for action is now. The investment by the nation is critical and urgent. The questions that must be asked are: What is the ultimate cost of the impending loss of vital ports and navigation systems and of the potential inability to deliver hydrocarbon fuel to the nation? How should the loss of strategic and historic cities be judged as well as the implosion of a worldwide ecological and cultural treasure? And although we may not be able to undo what the engineering of the Mississippi River has caused, we must act swiftly to replenish America’s wetlands with the fresh water, nutrients, and sediments they need to survive by letting our great river do what it can do best. It is not a question of if but rather when. The value to the nation of these tangible assets is incalculable.

R. KING MILLING

Chairman

America’s Wetland Foundation

New Orleans, Louisiana


Gerald E. Galloway, Donald F. Boesch, and Robert R. Twilley make several points: (1) At the federal level, we have no system of prioritization for funding critical water resources infrastructure and no clear set of national water resources goals. (2) As a nation, we are underfunding critical water resource infrastructure. (3) The restoration of coastal Louisiana, one of the great deltaic ecosystems of the world, which has lost 2,000 square miles of coastal marshes and swamp forests in the past 100 years, should be a national water resources investment priority. (4) The fact that we are not investing major federal resources in the restoration of this ecosystem, so critical to Mississippi River navigation, the most important oil and gas infrastructure in the nation, Gulf fisheries, and storm buffering of coastal Louisiana urban communities is a manifestation of this lack of prioritization and underfunding. (5) Climate change, the entrapment of Mississippi River sediments behind its tributary dams, and the Gulf dead zone have implications for coastal Louisiana restoration. (6) We need something like a National Investment Corporation to provide sustainable funding for water resources infrastructure. (7) Protection and restoration of coastal Louisiana should have the same status as the Mississippi River & Tributaries (MR&T) flood control and navigation missions that Congress established after the historic floods of 1927.

Most of these are valid points. Certainly, the disintegration of coastal Louisiana, the country’s premier coastal ecosystem, is a national environmental and economic disgrace, and its restoration should be of paramount importance to the nation. Without comprehensive and rapid restoration through the introduction of large amounts of sediment, the lower Mississippi River navigation system, the coastal levee protection system, major components of the Gulf Coast’s oil and gas operations, and Gulf fisheries are in increasing jeopardy. Although the 2007 Water Resources Development Act (WRDA) authorized a coastal Louisiana restoration program, the Army Corps of Engineers has not made it a national priority, and perhaps Congress has not yet made it one either.

Although the authors write about wastewater treatment plant infrastructure needs, as well as flood protection, navigation, agricultural drainage, and other traditional needs, the statutory and funding frameworks for water supply and wastewater treatment infrastructure are very different from those for dams, levees, and other structures that serve flood control and navigation needs. The former are addressed through the Clean Water Act and the Safe Drinking Water Act, which have a set of goals and funding mechanisms and designate the Environmental Protection Agency (EPA) to administer those programs, with the EPA overseeing delegated state programs. Federal funding is far too limited, particularly in terms of older urban water supply and wastewater infrastructure, but statutory frameworks are in place to establish needs and set priorities.

In contrast, the WRDA authorization process and Corps appropriation process do not have a comparable framework for fostering congressional or administrative discussion of national priorities in the context of national water resource goals. This may have been less of a problem in decades past, when national water resources goals encompassed overwhelmingly traditional economic development river management programs. However, the cost of proper maintenance of these projects demands some kind of prioritization system. In addition, the restoration of riverine and coastal ecosystems is now emerging as an increasingly important national priority, and ecosystem restoration, if it is to be effective, requires confronting and making choices about goals and priorities. It would appear that the approach of the Corps, and perhaps also of Congress, has been to add restoration as just one more need on the traditional agenda rather than to rethink water resources priorities in a broader framework that considers how to integrate ecosystem concerns with traditional economic development priorities.

Nowhere is this more apparent than in coastal Louisiana. The long-term sustainability of the Mississippi River navigation system depends on protecting and restoring the deltaic ecosystem. In the 2007 WRDA, Congress approved the Chief of Engineers’ Louisiana Coastal Area (LCA) Ecosystem Restoration report. This LCA authorization contains lofty prose about a plan that considers ways to take maximum feasible advantage of the sediments of the Mississippi and Atchafalaya Rivers for environmental restoration. Yet the Corps’ framework for thinking about this mighty river system and its sediments is constrained by what it still considers to be its primary MR&T navigation and flood control missions. Coastal restoration is peripheral, an add-on responsibility, not integral to its 80-year-old MR&T responsibilities. The state of Louisiana also faces challenges in figuring out its own priorities and finding ways to address the impacts of restoration on, for example, salt-water fisheries. However, given the role of the federal government through the Corps in managing the Mississippi River, the biggest struggle will be at the federal level. We can hope that new leadership at the federal level will allow the rapid creation of a new integrated framework that considers coastal restoration integral to the sustainability of the navigation system, to storm protection levees, and to the world-class oil and gas production operation. This will probably entail a fundamental amendment to the MR&T Act, establishing coastal restoration as co-equal with navigation and flood control; indeed, primus inter pares.

Because of the central importance of the lower Mississippi River to the Corps and the nation, this struggle over the fate and management of Mississippi River deltaic resources will do more than any other single action to facilitate the emergence of a far better national system for assessing water resources needs and priorities. If and when this is done, funding will follow. The authors of this paper have therefore quite appropriately linked making coastal Louisiana wetland restoration a true national water resource priority to fashioning “a prioritization system at the federal level for allocating funds for critical water resources infrastructure.”

JAMES T. B. TRIPP

General Counsel

Environmental Defense Fund

New York, New York


Louisiana’s problems will continue to elude productive solutions without fundamental changes in the institutions that are empowered to develop and implement the measures that can improve the region’s productivity and sustainability. With the present separation of the development of comprehensive plans from access to the financial and other resources necessary for their implementation, little progress will be made. We will continue to read about the ever increasing gap between perceived needs and the measures being taken to satisfy them.

Ever more grandiose protection and restoration schemes for coastal Louisiana are being developed. Major uncertainties concerning the fundamental physical, chemical, and biological relationships that determine the health of ecosystems and the effectiveness of project investments are not widely understood and certainly not acknowledged in the continuing search for funding sources. Louisiana residents and businesses continue to be encouraged to make investments in high-hazard areas, which will remain highly vulnerable under the most optimistic projection of resources for project development and execution. As each major storm event wreaks its havoc, politicians join citizens in the clamor for federal financial assistance. Although funds to patch and restore the status quo may be provided in the short run, long-term funding has proved to be elusive, and the current economic realities make it even less likely that these plans will become compelling national priorities.

What are the necessary institutional changes? Simply put, planning, decisionmaking, and project implementation authorities must rest with an entity that also has the resources to carry out its decisions. This body must have access to the best scientific information and planning capabilities and have taxing authority and/or a dedicated revenue source. Its charter also must be sufficiently broad to require regulatory changes and other hazard mitigation measures to complement the engineering and management measures it takes. Because this entity would have the most complete understanding of the uncertainties and tradeoffs required by its funding capabilities, it would be in the best position to ensure the most productive use of public funds.

Gerald E. Galloway, Donald F. Boesch, and Robert R. Twilley cite the Mississippi River Commission (MRC) as a possible institutional model. Although the MRC has accomplished much, we now recognize that its charter and authorities were too narrowly drawn and consequently responsible in part for the problems of coastal Louisiana today. Because it relied on the federal budget rather than on internally generated resources for the vast majority of its funding, some of its projects persisted as earnestly pursued dreams for decades despite their low payoffs. The Yazoo Backwater Pumping Plant project, an economically unproductive and environmentally damaging scheme conceived in the 1930s that was finally killed by the Environmental Protection Agency in 2008, remains a poster child for the consequences of the separation of beneficiaries from funders.

The creation of new institutions is certainly not easy, but it is essential to a realistic and productive comprehensive plan for coastal Louisiana. Without substantial internalization of both benefit and cost considerations in decisionmaking about its future, the problems of coastal Louisiana will continue to be lamented rather than addressed effectively.

G. EDWARD DICKEY

Affiliate Professor of Economics

Loyola College in Maryland

Baltimore, Maryland


Changing the energy system

Frank N. Laird’s “A Full-Court Press for Renewable Energy” (Issues, Winter 2009) offers a valuable addition to the debate on energy system change, broadening it beyond its too-frequent sole focus on questions of technology development and pricing. Laird rightly points out that energy systems are, in reality, sociotechnical systems deeply interconnected with a wide range of social, political, and economic arrangements in society. As Laird suggests, these broader social and institutional dimensions of energy systems demand as much attention as technology and pricing if societies are going to successfully bring about large-scale system change in the energy sector. Failure to adequately address them could stifle energy system innovation and transformation.

Given this backdrop, the plan of action Laird proposes should, in fact, be even more ambitious in engaging broader societal issues. Laird discusses workforce development for renewable energy, for example, but workforce issues must also address existing energy sector jobs and their transformation or even elimination, as well as possible policy responses, such as retraining programs or regional economic redevelopment policies. Likewise, current energy-producing regions may see revenue declines, with widespread social consequences. In Mexico and Alaska, oil revenues provide an important safety net for poor communities (via government subsidies and payouts). Both already feel the consequences of declining oil revenues. These are just two examples of how energy system change redistributes benefits, risks, rights, responsibilities, and vulnerabilities across society. Energy planning will have to deal with these kinds of challenges.

The indirect societal implications of energy system change may be even greater, if often more subtle. How will societies respond to energy system changes and modify their behaviors, values, relationships, and institutions; and in turn, how will these changes affect the opportunities for and resistance to energy system change? One example: Will home solar electricity systems coupled with solar-powered home hydrogen refueling stations (as imagined by Honda) make it marginally easier to locate homes off the grid, thus potentially exacerbating exurban sprawl or even the construction of homes in semi-wilderness areas? Would this undermine efforts to improve urban sustainability or protect biodiversity?

Although a knee-jerk reaction might be to fear that making these kinds of issues and questions visible could jeopardize public support for energy system change, I disagree. It would be far better to be aware of these implications and factor them into energy planning efforts. Energy technologies and markets are flexible in their design parameters and could be shaped to enhance societal outcomes. Wind projects, for example, could focus on high-wind areas that don’t obviously overlap with tourist destinations. The alternative is to have these issues become apparent mid-project (as with wind projects off Cape Cod) or, worse, only in the aftermath of major investments in new infrastructure that ultimately end up being redone or abandoned (as was the case with nuclear facilities in the 1980s).

It would seem to make sense, therefore, to add a fifth plank to Laird’s proposal focused on anticipating, analyzing, and responding to the societal implications of large-scale change in energy systems. Interest in this area of work is scattered throughout the Department of Energy, but no office currently has responsibility for overall coordination. As a consequence, little systematic investment is currently being made in relevant research or education. We need to do better.

With effort, we might avoid racing headlong into an unknown technological future. The best possible outcome would be an energy planning process that used these insights to shape technological system design and implementation so as not only to address major public values associated with energy (such as reducing greenhouse gas emissions and improving energy security) but also to reduce the risks, vulnerabilities, and injustices that plague existing energy systems.

CLARK A. MILLER

Associate Director

Consortium for Science, Policy and Outcomes

Arizona State University

Tempe, Arizona


Regional climate change

In “Climate Change: Think Globally, Assess Regionally, Act Locally” (Issues, Winter 2009), Charles F. Kennel stresses the importance of empowering local leaders with better information about the climate at the regional level. He provides multiple lines of evidence of the growing consensus that adapting to the consequences of climate change requires local decisions and says that “The world needs a new international framework that encourages and coordinates participatory regional forecasts and links them to the global assessments.”

Fortunately, such a framework already exists. The Group on Earth Observations (GEO), which is implementing the Global Earth Observation System of Systems (GEOSS), provides a firm basis for developing the end-to-end information services that decisionmakers need for adapting to climate change.

Interestingly, out of the nine “societal benefit areas” being served by GEOSS, climate variability and change is the only one that has been focusing primarily on global assessments, and it needs to be rescaled to reflect the regional dimension. Information on the other themes, ranging from disasters and agriculture to water and biodiversity, has instead usually been generated by local-to-regional observations, models, and assessments; the challenge here is to move from the regional to the global perspective.

Take water. Despite its central importance to human well-being, the global water cycle is still poorly understood and has for a long time been addressed, at best, at the catchment level. If we are to understand how water supplies will evolve in a changing climate and how the water cycle will in turn drive the climate, local and national in situ networks need to be linked up with remote-sensing instruments to provide the full global picture. Key variables available today for hydrological assessment include precipitation, soil moisture, snow cover, glacial and ice extent, and atmospheric vapor. In addition, altimetry measurements of river and lake levels and gravity-field fluctuations provide indirect measurements of groundwater variability in time and space. Such integrated, cross-cutting data sets, gathered at the local, regional, and global levels, are required for a full understanding of the water cycle and, in particular, to decipher how climate change is affecting both regional and global water supplies.
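
As a simple illustration of why these cross-cutting data sets matter, the toy water-balance sketch below (written in Python, with purely hypothetical numbers and invented variable names) combines precipitation, evapotranspiration, and runoff estimates to infer the change in water stored in a basin and checks it against an independent gravity-derived estimate. A persistent mismatch between the two would point to gaps in one or more observing systems, which is exactly the kind of consistency check an integrated system of systems makes possible.

P  = 62.0   # monthly basin precipitation, mm (gauges plus satellite products)
ET = 38.0   # evapotranspiration, mm (model or remote-sensing estimate)
R  = 15.0   # river discharge leaving the basin, mm equivalent (gauges, altimetry)

dS_flux = P - ET - R      # storage change implied by the measured fluxes
dS_gravity = 8.0          # independent storage change from gravity-field data, mm

print(f"Flux-based storage change:    {dS_flux:.1f} mm")
print(f"Gravity-based storage change: {dS_gravity:.1f} mm")
print(f"Budget closure error:         {dS_flux - dS_gravity:+.1f} mm")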

Kennel concludes with a series of issues that have to be addressed when ensuring that global climate observations can support regional assessments and decisionmaking: connection, interaction, coordination, standards, calibration, certification, transfer, dissemination, and archiving. They are currently being addressed by GEO in its effort to support global, coordinated, and calibrated assessments in all societal benefit areas. The experience gained can certainly help when developing a coordinated approach to regional climate assessments. In turn, the experience gained in going from global to regional scales with climate information may shed some light on how to build a global system of systems for the other societal areas, in particular water, which Kennel considers a “critical issue” and “a good place to start.”

JOSE ACHACHE

Executive Director

Group on Earth Observations

Geneva, Switzerland


The article by Charles F. Kennel is wonderful and very timely. I say so because it raises pertinent issues and questions that must be squarely addressed for effective mitigation of climate change.

It is no secret today that global trends in climate change, environmental degradation, and economic disparity are a reality and are of increasing concern because poverty, disease, and hunger are still rampant in substantial parts of the world. The core duty of climate change scientists is therefore to monitor these trends in the interest of public safety. Despite trailblazing advances, our society still suffers from an imbalance between citizens’ scientific literacy and the pace of scientific and technological development, an imbalance that has serious implications for public policy formulation, especially in developing countries.

The interconnectedness of human–environmental earth systems points to the fact that no region is independent of the rest of the world, to the extent that processes such as desertification and biomass burning in Africa can have global consequences in the same way as processes in other regions can influence Africa. By definition, global climate/environmental change is a set of changes in the oceans, land, and atmosphere that are usually driven by an interwoven system of both socioeconomic and natural processes. This is aggravated by the increased application and utilization of advanced science, technology, and innovation. Human activities are already exceeding the natural forces that regulate the Earth system, to the extent that the particles emitted by these activities alter the energy balance of the planet, resulting in adverse effects on human health.

To address the above problems, international and regional cooperation and collaboration are needed not only among scientists but also among decisionmakers and the citizenry to ensure acceptability and ownership of the process. I am therefore glad to report here that a group of African scientists has already initiated and produced a document on A Strategy for Global Environmental Change Research: Science Plan and Implementation Strategy. This is a regional project that will indeed assess regionally and act locally; it will also think globally, since the implementation strategy will involve both African scientists and scientists from the other regions of the world. The initiative AFRICANESS (the African Network of Earth System Science) will focus on four top-level issues of concern with respect to global climate and environmental change and their impact in Africa, namely food and nutritional security, water resources, health, and ecosystem integrity.

Evidence-based policy advice is paramount for sustainability, which can be achieved only by providing for current human needs while preserving the environment and natural resources for future generations. It is how a government and the nation can best draw on the knowledge and skills of the science community.

Last but not least, I strongly believe that citizens’ engagement is not just vital but central to the success of any such process and must be fully addressed. For science and technology to improve the well-being of the citizenry, innovations must be rooted in local realities, and this cannot be achieved without the effective involvement of social scientists. Hence, a more participatory approach is needed, one in which innovations are seen as part of a broader system of governance and markets that extends from the local to the national, regional, and international levels, for the sake of sustainability.

I agree with Kennel’s statement that “a good place to start is the critical issue of water. The effects of climate change on water must be understood before turning to agriculture and ecosystems.” The Kenya National Academy of Sciences celebrates the Scientific Revival Day of Africa on June 30 every year by organizing a workshop on topical issues. The June 2009 theme is “water is life.” All are welcome.

JOSEPH O. MALO

President, Kenya National Academy of Sciences

Professor of Physics

University of Nairobi

Nairobi, Kenya


Charles F. Kennel is to be congratulated for his thoughtful article. When we recognize the need to “adapt to the inevitable consequences of climate change,” acting locally is necessary and will result in local benefits.

One of the challenges of local action will be to mainstream climate change factors into the decisionmaking processes of all levels of government and all sectors of society. Local adaptive capacity will be extremely variable across sectors and across the globe, and many modes of making choices exist. Existing regulatory frameworks are often based on the climate of the past, or they ignore or neglect weather-water-climate factors entirely. It is also important to recognize that these decisionmaking processes have their own natural cycles, be they the time until the next election, the normal infrastructure renewal or replacement cycle, or the time required for returns to be realized on investments. In acting locally, each of these will be an important factor.

In most countries, local actions are taken within a national framework, so there are roles for national leaders in providing a framework, regulation, and incentives for action at local levels. These may be needed in order to go beyond local myopia. Also, because we now live in a globalized and competitive world, it is important to know how the climate is affecting other regions and how they are, or are not, adjusting.

A key issue for acting locally on climate change is to get beyond the idea that climate change adaptation is an environmental issue. Climate change adaptation must become an economic issue of concern to managers of transportation systems, agricultural-fishery-forestry sectors, and industry generally. It also matters to health care systems. For example, urban smog is a major local problem that can be addressed locally, at least to some extent. How will climate change affect the characteristics of smoggy days and alter health impacts as more stagnant and hot days occur?

Many, but not all, climate change issues relate to extreme events (floods, droughts, storms, and heat waves) and their changing characteristics. Emergency managers, who are now only peripherally connected to the climate change adaptation community, need to be brought into the local action.

A challenge for those developing tools, techniques, and frameworks for presenting the information from regional assessments will be to find ways to make them meaningful to, and used by, a wide range of decisionmakers across sectors in both developed and developing countries.

Regarding Kennel’s last section on “building a mosaic,” we need to add START (the global change SysTem for Analysis, Research and Training), which has programs on regional change adaptation in developing countries, as well as the new global, multidisciplinary, multihazard, Integrated Research on Disaster Risk program, which works to characterize hazards, vulnerability, and risk; understand decisionmaking in complex and changing risk contexts; and reduce risk and curb losses through knowledge-based actions.

GORDON MCBEAN

Departments of Geography and Political Science

University of Western Ontario

London, Ontario, Canada


Charles F. Kennel’s article carries a very important message. The essence of the argument is that integrated assessments of the impacts of climate change will be most useful if they are done with a regional focus. This is because models using primarily global means miss the essential variability and specificity of impacts on the ground. That disconnect makes the tasks of planning for adaptation more difficult. Furthermore, he argues, adequate planning for adaptation requires understanding both how climate changes at the regional level and how climate change affects key natural systems in specific places. Kennel uses the first California assessment, published in 2006, as the example for his argument.

The points Kennel makes are demonstrated not only in the California example but by all of the teams participating in a small program created by the National Oceanic and Atmospheric Administration in 1995, which later came to be called the Regional Integrated Sciences and Assessments (RISA) program. The support for Kennel’s arguments to be derived from the RISA teams is considerable.

They all demonstrate the power of a linked push/pull strategy. Each regional team must persist for a long time, focusing on the diversity of a specific region; producing useful information for a wide range of stakeholders about the dynamics and impacts of climate variability on their resources, interests, and activities; and projecting scenarios of climate change and its consequences over the course of the next century.

Persistence creates trust over time as it increases stakeholders’ awareness of the role that climate plays in the systems, processes, and activities of major concern to them. This kind of push, conducted in a sustained manner, creates the pull of co-production of knowledge. Because stakeholders don’t always know what they need to know, it is also the responsibility of the team to conduct “use-inspired” fundamental research to expand the scope and utility of decision tools that will be of use in the development of strategies for responding to changes in the regional climate system. But we should expect that as communities become more cognizant of the implications of a changing climate, societal demands for the creation of national climate services will intensify.

No one yet fully understands what the best approach for doing so is. We will have to be deliberately flexible, dynamic, and experimental in choosing which paths to follow. Because regional specificity will be the primary focus, we should not expect that there will be a single optimal design.

Kennel also argues that “the world needs a new international framework that encourages and coordinates participatory regional forecasts and links them to the global assessments.” Without a doubt, we are now at this point in the development of the next Intergovernmental Panel on Climate Change assessment, which is being planned for 2013. Kennel spells out a series of specific questions, all of which bear on the overall issue. This is another valuable contribution of the article, as is his suggestion that a good place to start is the critical issue of water.

EDWARD MILES

University of Washington

Seattle, Washington


Charles F. Kennel addresses a crucial and urgent necessity, namely to create a worldwide mosaic of Regional Climate Change Assessments (RCCAs). His argument is compelling, yet no such activity has ever emerged from the pertinent scientific community, in spite of the existence of an entire zoo of global environmental change programs coordinated by the International Council for Science (ICSU) and other institutions. There are obvious reasons for that deficiency, most notably national self-interest and the highly uneven distribution of investigative capacity across the globe.

Although a country such as the United States has several world-class ocean-atmosphere simulation models for anticipating how North America will be affected by greenhouse gas–induced perturbations of planetary circulation patterns, it is arguably less interested in funding an RCCA for, say, a Sahel area embracing Burkina Faso. Such an assessment would neither add to the understanding of relevant global fluid dynamics nor generate direct hints for domestic adaptation. So the United States assesses California, Oregon, or Louisiana instead. The people of Burkina Faso, on the other hand, would be keen to investigate how their country might be transformed by global warming (caused by others), yet they do not have the means to carry out the proper analysis. This is a case where international solidarity would be imperative to overcome a highly inequitable situation.

Let me emphasize several of Kennel’s statements. Global warming cannot be avoided entirely; confining it to roughly 2°C appears to be the best we can still achieve under near-optimal political circumstances. Planetary climate change will manifest itself in hugely different regional impacts, not least through the changing of tipping elements (such as ocean currents or biomes), thus generating diverse subcontinental-scale repercussions. Adaptation measures will be vital but have to be tailor-made for each climate-sensitive item (such as a river catchment).

Kennel piles up convincing arguments against a one-size-fits-all attitude in this context. Yet even the production of a bespoke suit has to observe the professional principles of tailoring. Optimal results arise from the right blend of generality and specificity, and that is how I would like to interpret Kennel’s intervention: We need internationally concerted action on the national-to-local challenges posed by climate change. Let us offer three c-words that define indispensable aspects of that action: cooperation, credibility, and comparability.

First, as indicated above, a reasonable coverage of the globe by RCCAs will not emerge from a purely autochthonous strategy, where each area is supposed to take care of its own investigation. Mild global coordination, accompanied by adequate fundraising, will be necessary to instigate studies in enough countries in good time to produce small as well as big pictures. Self-organized frontrunner activities are welcome, of course.

Second, the RCCAs need to comply with a short list of scientific and procedural standards. Otherwise, the results generated may do more harm than good if used in local decisionmaking.

Third, the studies have to be designed in a way that allows for easy intercomparison. This will create multiple benefits such as the possibility of constructing global syntheses, of deriving differential vulnerability measures, and of directly exchanging lessons learned and best practices. Actually, a crucial precondition for comparability would be a joint tool kit, especially community impacts models for the relevant sectors. Unfortunately, the successful ensembles approach developed in climate system modeling has not yet been adopted in impacts research.

Who could make all this happen? Well, the ICSU is currently pondering the advancement of an integrated Earth System Science Research Program. Turning Kennel’s vision into reality would be a nice entrée for that program.

HANS JOACHIM SCHELLNHUBER

Potsdam Institute for Climate Impact Research

Potsdam, Germany

Oxford University

Oxford, United Kingdom


Global science policy

Gerald Hane’s astute and forward-thinking analysis of the structural problems facing international science and technology (S&T) policy underscores both the challenges and the great need to increase the role of science in the Obama administration’s foreign policy (“Science, Technology, and Global Reengagement,” Issues, Fall 2008). The new administration is heading in the right direction in restoring science’s critical place, although it falls short of Hane’s forceful recommendations. In his inaugural speech, the president said that he would “restore science to its rightful place.” He has speedily nominated John Holdren as Assistant to the President for Science and Technology, Director of the White House Office of Science and Technology Policy (OSTP), and Co-Chair of the President’s Council of Advisors on Science and Technology (PCAST), indicating his commitment to reinvigorating the role of science advisor. By appointing Holdren Assistant to the President as well as Director of OSTP, he is returning the position to its former cabinet level.

Similarly, the administration is making strong moves toward reengaging the United States with the world. Obama’s campaign promise was to renew America’s leadership in the world with men and women who know how to work within the structures and processes of government. Obama’s initial cadre of appointments at the State Department restores some positions previously eliminated by the Bush administration. As the months progress, it will be interesting to see whom Secretary Clinton appoints as science advisor and whether she will create an undersecretary position, as Hane suggested.

Strengthening U.S. international science policy is not disconnected from helping to solve Obama’s economic and foreign policy challenges. As Hane states, many countries are using numerous science partnerships to their competitive benefit. The United States could encourage more partnerships to do the same. With more cooperative efforts, including allowing agencies (other than the National Institutes of Health and Department of Defense) to fund cross-national science teams, the Obama administration could facilitate R&D projects that increase the United States’ competitive position in the world. Rebuilding U.S. relationships with former adversaries can be aided by undertaking more S&T partnerships. The State Department has sponsored a series of science partnerships with Libya during the past year in order to both increase knowledge of solar eclipses and help build bridges with a country that was once a prime candidate to be included in the Axis of Evil.

In November 2008, six U.S. university presidents toured Iran in an effort to build scientific and educational links to the country’s academic community. Science cooperation gives countries a neutral, mutually beneficial platform from which to build diplomatic links.

In an interview with National Public Radio in January 2009, Speaker of the House Nancy Pelosi forcefully stated that a key part of Congress’s economic recovery plan is spending on “science, science, science.” Hane’s piece gives a roadmap for how to strengthen the S&T infrastructure within the government. It is now up to our new policymakers to use this very valuable tool for the betterment of our country.

AMY HOANG WRONA

Senior Policy Analyst

Strategic Analysis

Arlington, Virginia


Military restructuring

“Restructuring the Military,” by Lawrence J. Korb and Max A. Bergmann (Issues, Fall 2008), makes many excellent points about the need to match our military forces to the current and potential future threat environments. There is, however, a critical issue of threat anticipation and tailoring of the forces that has yet to be faced. The article makes the valid point that forces built for modern conventional warfare are not well suited to the kinds of irregular warfare we face today, and quotes Lt. Col. Paul Yingling to the effect that our military “continued [after Desert Storm] to prepare for the last war while its future enemies prepared for a new kind of war.” It goes on to argue that the United States devotes too many resources to “dealing with threats from a bygone era [rather] than the threats the U.S. confronts today.”

This argument raises a logical paradox: Given the years that it takes to prepare the armed forces, in doctrine, equipment, and training, for any kind of war, if we start now to prepare them for the kinds of warfare we are facing now (usually referred to in the military lexicon as “asymmetric warfare”), by the time they are ready they will have been readied for the “last war.”

Threats to our security will always move against perceived holes in our defenses. Insurgencies take place in local areas outside our borders that we and/or our allies occupy for reasons that reinforce our own and allied national security. Terrorist tactics used by transnational jihadists exploit openings in the civilian elements of our overall national security posture.

If we were to concentrate our military resources on meeting these current threats at the expense of our ability to meet threats by organized armed forces fielding modern weapons, as has been suggested, we could find ourselves again woefully unprepared for a possible resurgence of what is now labeled a bygone era of warfare, as we were in Korea in 1950. Such resurgent threats could include a modernizing Iran or North Korea, a suddenly hostile Pakistan, a China responding to an injudicious breakaway move by Taiwan, a Russia confronting us in nations such as Ukraine or Georgia that we are considering as future members of the NATO Alliance, or others that arise with little warning, as happened on the breakup of Yugoslavia. Indeed, our effective conventional forces must serve to some degree as an “existential deterrent” to the kinds of actions that might involve those forces.

Given the decades-long development times for modern weapons, communications, and transportation systems, if we did not keep our conventional forces at peak capability, the time that it would take to respond to threats in these other directions would be much longer than the time it has taken us to respond to the ones that face us in the field today and have led to the current soul-searching about the orientation of our military forces.

Nor should we forget that it took the modern combat systems—aircraft carriers and their combat aircraft, intercontinental bombers and refueling tankers, intercontinental transport aviation, and modern ground forces—to enable our responses to attacks originating in remote places such as Afghanistan or to stop ongoing genocide in Bosnia.

Thus, the problem isn’t that we have incorrectly oriented the resources that we have devoted to our military forces thus far. It is that we haven’t anticipated weaknesses that should also have been covered by those resources. Although there is much truth in the aphorism that the one who defends everywhere defends nowhere, we must certainly cover the major and obvious holes in our defenses.

The fact that we have had to evolve our armed forces quickly to cover the asymmetric warfare threat should not lead us to shift the balance so much that we open the conventional warfare hole for the next threat to exploit. The unhappy fact is that we have to cover both kinds of threat; indeed, three kinds, if we view the potential for confrontations with nuclear-armed nations as distinct from conventional and asymmetric threats.

How to do that within the limited resources at our disposal is the critical issue. Fortunately, the resources necessary to meet the asymmetric warfare threat are much smaller than those needed to be prepared to meet the others. All involve personnel costs, which represent a significant fraction of our defense expenditures, but the equipment needed for asymmetric warfare is mainly (though not exclusively) a lesser expense that can be derived from preparation for the other kinds of warfare. And the larger parts of our defense budget are devoted to the equipment and advanced systems that receive most of the animus of the critics of that budget in its current form.

The most effective way to approach the issue of priorities and balance in the budget would be to ask the military services how they would strike that balance within budgets of various levels above and below the latest congressional appropriations for defense. That might mitigate the effects of the congressional urge to protect defense work ongoing in specific states or congressional districts, regardless of service requirements; that is, the tendency to use the defense budget as a jobs program.

Beyond that, it will be up to our political leaders, with the advice of the Joint Chiefs of Staff that is required by law, to decide, and to make explicit for the nation, what levels of risk they are willing to undertake for the nation by leaving some aspects, to be specified, of all three threat areas not fully covered by our defense expenditures. Then the public will have been apprised of the risks, the desirability of incurring them will presumably have been argued out, and the usual recriminations induced by future events might be reduced.

S. J. DEITCHMAN

Chevy Chase, Maryland


The Bioterror Threat

World at Risk, a new report by the Commission on the Prevention of Weapons of Mass Destruction Proliferation and Terrorism, concludes that “it is more likely than not that a weapon of mass destruction will be used in a terrorism attack somewhere in the world by the end of 2013.” The commission, chaired by Bob Graham, a former Democratic U.S. senator from Florida, further states that “terrorists are more likely to be able to obtain and use a biological weapon than a nuclear weapon” and that “the U.S. government needs to move more aggressively to limit the proliferation of biological weapons and reduce the prospects of a bioterror attack.”

William R. Clark, professor of immunology at the University of California, Los Angeles, and author of Bracing for Armageddon? The Science and Politics of Bioterrorism in America, is not likely to welcome the commission’s findings. In his book, Clark argues that concerns about bioterrorism in the United States have at times “risen almost to the level of hysteria” and that “bioterrorism is a threat in the twenty-first century, but it is by no means, as we have so often been told over the past decade, the greatest threat we face.” Clark goes on to say that it is time for the United States “to move on now to a more realistic view of bioterrorism, to tone down the rhetoric and see it for what it actually is: one of many difficult and potentially dangerous situations we—and the world—fear in the decades ahead. And it is certainly time to examine closely just how wisely we are spending billions of dollars annually to prepare for a bioterrorism attack.”

It is difficult to disagree with Clark’s conclusion that at times during the past decade some people, including some who should know better, have hyped the bioterrorism problem. He is right that “unrealistic statements about the threat posed by bioterrorism continue to this day, at the highest levels of government.” Elsewhere, too, he might have added.

It is also difficult to dispute Clark’s view that the money spent to address the bioterrorism problem has not all been wisely spent. Clark suggests that the amount is perhaps around $50 billion. Although it is extremely difficult to come up with a figure in which one can have great confidence, other estimates conclude that it could be 50 to 100% more than that, depending on what one counts. Whatever the amount, it is considerable, and U.S. taxpayers have cause to question whether they have received their money’s worth in terms of capabilities to respond effectively to, let alone to prevent, a bioterrorist attack.

Although Clark’s points should always be heeded, what is less clear is why he wrote this book to make them. In his preface, Clark states, “What has been lacking in our approach to the threat of political bioterrorism to date is an assessment of exactly how real it is.” This is just not the case. For the past decade, a number of experts, both self-styled and genuine, have addressed the bioterrorism problem. These observers have fallen into two distinct categories. One is what might be called the “hypers,” to whom Clark points. The other might be called the “calibrators,” who have tried quite self-consciously to provide a clear-eyed, balanced, nuanced, and realistic assessment of bioterrorism. Indeed, several of the people Clark thanks in his acknowledgements—Seth Carus, Milton Leitenberg, Amy Smithson, and Ray Zilinskas, among them—fall into this latter category. Leitenberg’s 2005 book Assessing the Biological Weapons and Bioterrorism Threat and Brad Roberts’s 2001 book Terrorism with Chemical and Biological Weapons: Calibrating Risks and Responses are only two good examples of several efforts that have taken a balanced view of the bioterrorism threat.

This leads to a second question as to why Clark wrote this book: Why did he choose to cover ground that has been extremely well plowed during the past decade? His chapter on agroterrorism highlights an issue that far too often has received short shrift. But filling gaps in understanding or raising tough new questions is not what the rest of the book achieves. The chapter on the history of bioterrorism, for example, primarily provides thumbnail sketches of three cases about which other detailed, sometimes book-length assessments have already been completed. Similarly, a wealth of material has been generated during the past decade about the biological agents that Clark considers the most likely prospects for bioterrorism use: smallpox, anthrax, plague, botulism, and tularemia. Moreover, the current discussion of such issues as synthetic biology, both in the United States and Europe, has carried the issue of the risks associated with genetically modified pathogens well beyond Clark’s consideration of that issue.

But perhaps the greatest failure of Bracing for Armageddon? is that it says next to nothing about how the bioterrorism challenge is evolving and what it might look like in the future. It does not address, for example, the security implications of intriguing developments such as:

  • The speed at which the underlying life sciences are advancing, including areas related to agent delivery such as aerosolization.
  • The remarkable global diffusion of the life sciences and biotechnology, spurred by the perception of biotechnology as a key driver of future economic development.
  • The concomitant shift in bioterrorism from being a materials- and equipment-based threat to a knowledge-based risk.
  • The potential change in the relationship between capabilities and intentions as these trends continue.

Many other issues could be added to this list. In sum, then, the paradigm defining the bioterrorism problem that emerged a decade ago, and which Clark essentially adopts here, is not likely to define the contours of that challenge in the future. But on these issues, Clark is silent.

Other lacunae in the book relate to Clark’s discussion of the U.S. response to the bioterrorism challenge. For one, his discussion provides virtually no information on what the United States has done with respect to prevention; it focuses exclusively on preparedness and response. To be sure, preparedness and consequence management are clearly the areas that have consumed, by far, the greatest government expenditures, and Clark offers some perceptive comments (albeit made by others as well) regarding specific issues and government programs. Dealing successfully with bioterrorism, however, will require action across a wider spectrum than Clark addresses, including deterrence, prevention, and defense. But he makes no mention of what U.S. entities such as the FBI or the Departments of Homeland Security and Defense have done to bolster capabilities to meet these challenges.

Nor does Clark consider the international dimension of responses to bioterrorism. It has become a cliché for analysts to observe that bioterrorism is a challenge that cannot be met by one nation alone and that international cooperation is vital. But it is true for all that, and the United States has initiated or been part of a number of multilateral efforts intended, at least in part, to help the international community manage the bioterrorism risk. These include the revision of the International Health Regulations, the creation of the Global Health Security Action Group, and the launch of Interpol’s bioterrorism program, among others. Despite the perceived necessity of international cooperation and consultation, however, this book has virtually nothing to say about it.

Finally, it is not just what Clark’s analysis does not say that limits the utility of his contribution, but, in a couple of places at least, what it does. For example, one chapter is titled “The Ultimate Bioterrorist: Mother Nature!” Such a portrayal is unhelpful and even potentially detrimental to the making and execution of good policy. To be sure, bioterrorism and naturally occurring infectious diseases are sometimes closely related, especially in terms of medical and other responses that must be mobilized when an outbreak of either occurs. But the two phenomena are, and must be treated as, distinct. Making the two equivalent fundamentally misrepresents the nature of bioterrorism. Terrorism, of any sort, is a product of human agency. Not only is it the consequence of human design, but it is a dynamic in which the human abilities to analyze, adapt, innovate, and choose are critical features to which effective policy must be attuned. Although nature constantly adapts, naturally occurring infectious diseases entail no such dynamic, certainly not with respect to policy options that decisionmakers must consider.

If, as suggested, naturally occurring infectious disease is a more serious problem than bioterrorism, then the case should be argued on its own terms, not offered in some seemingly clever formulation that, in fact, provides an unhelpful framework. It is almost as if Clark falls victim to the very practice he so rightfully criticizes: In order to get attention to his argument, he casts it in terms that are more dramatic and eye-catching than is probably appropriate.


Michael Moodie, a consultant on chemical and biological weapons issues based in Silver Spring, Maryland, is the former head of the Chemical and Biological Arms Control Institute and a former assistant director for multilateral affairs at the U.S. Arms Control and Disarmament Agency.

Archives – Spring 2009

TIM ROLLINS + K.O.S., On the Origin of the Species (after Darwin), India ink, graphite transfer, matte acrylic on book pages.

On the Origin of Species (after Darwin)

This detail is taken from the art exhibition On the Origin of Species (after Darwin) by Tim Rollins + K.O.S., which was sponsored by Cultural Programs of the National Academy of Sciences in celebration of the 150th anniversary of the publication of On the Origin of Species and the 200th anniversary of Darwin’s birth.

Tim Rollins, teacher and conceptual artist, began working with special education teenagers in the South Bronx in the early 1980s. He developed an approach where the students, who named themselves K.O.S. (Kids of Survival), produced works of art based on classic literature.

Rollins observed that when these students were engaged in classroom discussions of literature, they often drew or painted on the pages of their books. He encouraged them to share their drawings with one another and then to work together to create collaborative visual expressions of their response to the literature. Over the course of nearly three decades, they have created artwork based on Franz Kafka’s Amerika, George Orwell’s Animal Farm, and Ralph Ellison’s Invisible Man.

Tim Rollins + K.O.S. have exhibited extensively worldwide and their work is in prestigious collections including the Museum of Modern Art in New York City, the Hirshhorn Museum and Sculpture Garden in Washington, D.C., and the Tate Modern in London.

U.S. Workers in a Global Job Market

Among the many changes that are part of the emergence of a global economy is a radically different relationship between U.S. high-tech companies and their employees. As late as the 1990s, a degree in science, technology, engineering, or mathematics (STEM) was a virtual guarantee of employment. Today, many good STEM jobs are moving to other countries, reducing prospects for current STEM workers and dimming the appeal of STEM studies for young people. U.S. policymakers need to learn more about these developments so that they can make the critical choices about how to nurture a key ingredient in the nation’s future economic health, the STEM workforce.

U.S. corporate leaders are not hiding the fact that globalization has fundamentally changed how they manage their human resources. Craig Barrett, then the chief executive officer (CEO) of Intel Corporation, said that his company can succeed without ever hiring another American. In an article in Foreign Affairs magazine, IBM’s CEO Sam Palmisano gave the eulogy for the multinational corporation (MNC), introducing us to the globally integrated enterprise (GIE): “Many parties to the globalization debate mistakenly project into the future a picture of corporations that is unchanged from that of today or yesterday….But businesses are changing in fundamental ways—structurally, operationally, culturally—in response to the imperatives of globalization and new technology.”

GIEs do not have to locate their high-value jobs in their home country; they can locate research, development, design, or services wherever they like without sacrificing efficiency. Ron Rittenmeyer, then the CEO of EDS, said he “is agnostic specifically about where” EDS locates its workers, choosing the place that reaps the best economic efficiency. EDS, which had virtually no employees in low-cost countries in 2002, had 43% of its workforce in low-cost countries by 2008. IBM, once known for its lifetime employment, now forces its U.S. workers to train foreign replacements as a condition of severance. In an odd twist, IBM is offering U.S. workers the opportunity to apply for jobs in its facilities in low-cost countries such as India and Brazil at local wage rates.

Policy discussions have not kept pace with changes in the job market, and little attention is being paid to the new labor market for U.S. STEM workers. In a time of GIEs, advanced tools and technology can be located anywhere, depriving U.S. workers of an advantage they once had over their counterparts in low-wage countries. And because technology workers not only create new knowledge for existing companies but are also an important source of entrepreneurship and startup firms, the workforce relocation may undermine U.S. world leadership as game-changing new companies and technologies are located in low-cost countries rather than the United States. The new corporate globalism will make innovations less geographically sticky, raising questions about how to make public R&D investments pay off locally or even nationally. Of course, scientists and engineers in other countries can generate new ideas and technologies that U.S. companies can import and put to use, but that too will require adjustments because this is not a strategy with which U.S. companies have much experience. In short, the geographic location of inputs and the flow of technology, knowledge, and people are sure to be significantly altered by these changes in firm behavior.

As Ralph Gomory, a former senior vice president for science and technology at IBM, has noted, the interests of corporations and countries are diverging. Corporate leaders, whose performance is not measured by how many U.S. workers they employ or the long-term health of the U.S. economy, will pursue their private interests with vigor even if their actions harm their U.S. employees or are bad prescriptions for the economy. Simply put, what’s good for IBM may not be good for the United States and vice versa. Although this may seem obvious, the policy and political processes have not fully adjusted to this reality. Policymakers still turn to the CEOs of GIEs for advice on what is best for the U.S. economy. Meanwhile, STEM workers have yet to figure out that they need to get together to identify and promote what is in their interest.

Most STEM workers have not embraced political activism. Consider employees in the information technology (IT) industry, one of the largest concentrations of STEM workers. They have by and large rejected efforts by unions to organize them. One might expect a professional organization such as the Institute of Electrical and Electronics Engineers (IEEE) to represent their interests, but IEEE is an international organization that sees little value in promoting one group of its members over another.

Because STEM workers lack an organized voice, their interests are usually neglected in policy discussions. There was no worker representative on the National Academies committee that drafted the influential report Rising Above the Gathering Storm. And although the Council on Competitiveness, which prepared the National Innovation Initiative, has representatives of labor unions in its leadership, they did not participate in any significant way in the initiative. Both studies had chairs who were CEOs of GIEs. It should come as no surprise, therefore, that neither of these reports includes recommendations that address the root problem of offshoring: the misalignment of corporate and national interests, in which firms compete by substituting foreign for U.S. workers. Instead, the reports diagnosed the problem as a shortage of qualified STEM workers and therefore advocated boosting R&D spending, expanding the pool of STEM workers, and recruiting more K-12 science and math teachers.

Low-cost countries attract R&D

Although everyone recognizes that globalization is remaking the R&D landscape, that U.S.-based companies are moving some of their high-value activities offshore, and that some low-income countries such as China and India are eager to enhance their capabilities, we actually have very little reliable and detailed data on what is happening. In fact, much of what we think we do know is contradictory. For example, in 2006, China was by far the leading exporter of advanced technology products to the United States, surpassing all of the European Union combined. On the other hand, the number of triadic patents—those filed in Europe, the United States, and Japan—awarded to Chinese inventors in 2002 was a mere 177 versus more than 18,000 for American and more than 13,000 for Japanese inventors. A mixed picture also emerges from India. On the one hand, India’s indigenous IT services companies such as Infosys and Wipro have become the market leaders in their sector, forcing U.S.-based competitors such as IBM and HP to adopt their offshore outsourcing business model. But in 2003, India produced only 779 engineering doctorates compared to the 5,265 produced in the United States.

The standard indicators in this area are backward-looking and often out of date by the time they are published. More timely and forward-looking information might be gleaned from surveys of business leaders and corporate announcements. A survey by the United Nations Conference on Trade and Development of the top 300 worldwide R&D spenders found that China was the top destination for future R&D expansion, followed by the United States, India, Japan, the United Kingdom, and Russia. A 2007 Economist magazine survey of 300 executives about R&D site selection found that India was the top choice, followed by the United States and China.

No comprehensive list of R&D investments by U.S. multinational corporations exists, and the firms aren’t required to disclose the location of R&D spending in financial filings. We must rely on the information that companies offer voluntarily. From public announcements we know that eight of the top 10 R&D-spending companies have R&D facilities in China or India (Microsoft, Pfizer, DaimlerChrysler, General Motors, Siemens, Matsushita Electric, IBM, and Johnson & Johnson), and that many of them plan to increase their innovation investments in India and China.

Although early investments were for customizing products for a local market, foreign-based facilities are now beginning to develop products for global markets. General Motors has a research presence in India and China, and in October 2007, it announced that it would build a wholly owned advanced research center to develop hybrid technology and other advanced designs in Shanghai, where it already has a 1,300-employee research center as part of a joint venture with the Shanghai Automotive Industry Corporation. Pfizer, the number two R&D spender, is outsourcing drug development services to India and already has 44 new drugs undergoing clinical trials there. The company has approximately 200 employees at its Shanghai R&D center, supporting global clinical development. Microsoft has a large and expanding R&D presence in India and China. Microsoft’s India Development Center, its largest such center outside the United States, employs 1,500 people. The Microsoft China R&D Group also employs 1,500, and in 2008, Microsoft broke ground on a new $280-million R&D campus in Beijing and announced an additional $1 billion investment for R&D in China. Intel has about 2,500 R&D workers in India and has invested approximately $1.7 billion in its Indian operations. Its Indian engineers designed the first all-India microprocessor, the Xeon 7400, which is used for high-end servers. Intel has been investing in startup companies in China, where it created a $500 million Intel Capital China Technology Fund II to be used for investments in wireless broadband, technology, media, telecommunications, and “clean tech.”

Although General Electric spends less than the above companies on R&D, it has the distinction of having the majority of its R&D personnel in low-cost countries. Jack Welch, GE’s former CEO, was an early and significant evangelizer of offshoring. The firm has four research locations worldwide, in New York, Shanghai, Munich, and Bangalore. Bangalore’s Jack Welch R&D Center employs 3,000 workers, more than the other three locations combined. Since 47% of GE’s revenue in 2008 came from the United States and only 16% from Asia, it is clear that it is not moving R&D to China and India just to be close to its market.

The fact that China and India are able to attract R&D is an indicator that they have improved their ability to attract the mid-skill technology jobs in the design, development, and production stages. The true benefit of attracting R&D activities might be the downstream spillover benefits in the form of startup firms and design, development, and production facilities.

U.S. universities have been a magnet for talented young people interested in acquiring the world’s best STEM education. Many of these productive young people have remained in the United States, become citizens, and made enormous contributions to the productivity of the U.S. economy as well as its social, cultural, and political life. But these universities are beginning to think of themselves as global institutions that can deliver their services anywhere in the world.

Cornell, which already calls itself a transnational institution, operates a medical school in Qatar and sent its president to India in 2007 to explore opportunities to open a branch campus. Representatives of other top engineering schools, such as Rice, Purdue, Georgia Tech, and Virginia Tech, have made similar trips. Carnegie Mellon offers its technology degrees in India in partnership with a small private Indian college. Students take most of their courses in India, because it is less expensive, and then spend six months in Pittsburgh to complete the Carnegie Mellon degree.

If students do not have to come to the United States to receive a first-rate education, they are far less likely to seek work in the United States. More high-quality job opportunities are appearing in low-cost countries, many of them with U.S. companies. This will accelerate the migration of STEM jobs out of the United States. Even the perfectly sensible move by many U.S. engineering programs to provide their students with more international experience through study-abroad courses and other activities could contribute to the migration of STEM jobs by preparing these students to manage R&D activities across the globe.

Most of the information about university globalization is anecdotal. The trend is clearly in its early stages, but there are indications that it could grow quickly. This is another area in which more reliable data is essential. If the nation’s leaders are going to manage university activities in a way that will advance U.S. interests, they will need to know much more about what is happening and what is planned.

Uncertainty and risk

The emerging opportunities for GIEs to take advantage of high-skilled talent in low-cost countries have markedly increased both career uncertainty and risk for the U.S. STEM workforce. Many U.S. STEM workers worry about offshoring’s impact on their career prospects and are altering their career choices accordingly. For instance, according to the Computing Research Association, enrollment in bachelor’s programs in computer science dropped 50% from 2002 to 2007. The rising risk of IT job loss, caused in part by offshoring, was a major factor in students’ shying away from computer science degrees.

Offshoring concerns have been mostly concentrated on IT occupations, but many other STEM occupations may be at risk. Princeton University economist Alan Blinder analyzed all 838 Bureau of Labor Statistics standard occupation categories to estimate their vulnerability to offshoring. He estimates that nearly all (35 of 39) STEM occupations are “offshorable,” and he described many as “highly vulnerable.” By vulnerable, he is not claiming that all, or even a large share, of jobs in those occupations will actually be lost overseas. Instead, he believes that those occupations will face significant new wage competition from low-cost countries. Further, he finds that there is no correlation between vulnerability and education level, so simply increasing U.S. education levels, as many have advocated, will not slow offshoring.

Workers need to know which jobs will be geographically sticky and which are vulnerable to being offshored so that they can make better choices for investing in their skills. But there is a great deal of uncertainty about how globalization will affect the level and mix of domestic STEM labor demand. The response of some workers appears to be to play it safe and opt for occupations, often non-STEM, that are likely to stay in the United States. Further, most employers, because of political sensitivities, are very reluctant to reveal what jobs they are offshoring, sometimes going to great lengths to mask the geographic rebalancing of their workforces. The uncertainty introduced by offshoring aggravates the already volatile job market that is characteristic of the dynamic high-tech sector.

For incumbent workers, especially those in mid-career, labor market volatility creates a special dilemma. The two prior technology recessions, 1991 to 1992 and 2002 to 2004, were especially long, longer even than for the general labor force. At the same time, technology-obsolescence cycles are shortening, which means that unemployed STEM workers can find that their skills quickly become outdated. If unemployment periods are especially long, it will be even more difficult to reenter the STEM workforce when the market rebounds. An enormous amount of human capital is wasted when experienced STEM professionals are forced to move into other professions because of market vagaries.

Policy has done little to reduce risks and uncertainty for STEM workers. The government does not collect data on work that is moving offshore or real-time views of the STEM labor markets, both of which would help to reduce uncertainty. Trade Adjustment Assistance (TAA), the primary safety net for workers who lose their jobs due to international trade, has not been available for services industries, but it has been authorized as part of the recently passed stimulus legislation. This is one part of the stimulus that should be made permanent. In addition, Congress should ensure that the program is adequately funded, because it is often oversubscribed, and the Department of Labor should streamline the eligibility regulations, because bureaucratic rules often hamper the ability of displaced workers to obtain benefits. This will be especially true with services workers whose employers are reluctant to admit that workers are displaced due to offshoring.

Response to competition

One of the most important high-technology stories of the past decade has been the remarkably swift rise of the Indian IT services industry, including firms such as Wipro, Infosys, TCS, and Satyam, as well as U.S.-based firms such as Cognizant and iGate that use the same business model. There is no need to speculate about whether the Indian firms will eventually take the lead in this sector; they already have become market leaders. By introducing an innovative, disruptive business model, the Indian firms have turned the industry upside down in only four years. U.S. IT services firms such as IBM, EDS, CSC, and ACS were caught flat-footed. Not a single one of those firms would have considered Infosys, Wipro, or TCS as direct competitors as recently as 2003, but now they are chasing them by moving as fast as possible to adopt the Indian business model, which is to move as much work as possible to low-cost countries. The speed and size of the shift is breathtaking.

The Indian IT outsourcing firms have extensive U.S. operations, but they prefer to hire temporary guest workers with H-1B or L-1 visas. The companies train these workers in the United States, then send them home where they can be hired to do the same work at a lower salary. These companies rarely sponsor their H-1B and L-1 workers for U.S. legal permanent residence.

The important lesson is how the U.S. IT services firms have responded to the competitive challenge. Instead of investing in their U.S. workers with better tools and technologies, the firms chose to imitate the Indian model by outsourcing jobs to low-cost countries. IBM held a historic meeting with Wall Street analysts in Bangalore in June 2006, where its whole executive team pitched IBM’s strategy to adopt the Indian offshore-outsourcing business model, including an additional $6 billion investment to expand its Indian operations. IBM’s headcount in India has grown from 6,000 in 2003 to 73,000 in 2007, and is projected to be 110,000 by 2010. The U.S. headcount is about 120,000. And IBM is not alone. Accenture passed a historic milestone in August 2007, when its Indian headcount of 35,000 surpassed any of its other country headcounts, including the United States, where it had 30,000 workers. In a 2008 interview, EDS’s Rittenmeyer extolled the profitability of shifting tens of thousands of the company’s workers from the United States to low-cost countries such as India. He said outsourcing is “not just a passing fancy. It is a pretty major change that is going to continue. If you can find high-quality talent at a third of the price, it’s not too hard to see why you’d do this.” ACS, another IT services firm, recently told Wall Street analysts that it plans its largest increase in offshoring for 2009, when it will move many of its more complex and higher-wage jobs overseas so that nearly 35% of its workforce will be in low-cost countries.

As Alan Blinder’s analysis indicates, many other types of STEM jobs could be offshored. The initiative could come from foreign competitors or from U.S.-based GIEs.

Preserving STEM jobs

Private companies will have the final say about the offshoring of jobs, but the federal government can and should play a role in tracking what is happening in the global economy and taking steps that help the country adapt to change. Given the speed at which offshoring is increasing in scale, scope, and job sophistication, a number of immediate steps should be taken.

Collect additional, better, and timelier data. We cannot expect government or business leaders to make sound decisions in the absence of sound data. The National Science Foundation (NSF) should work with the appropriate agencies, such as the Bureaus of Economic Analysis (BEA) and Labor Statistics and the Census, to begin collecting more detailed and timely data on the globalization of innovation and R&D.

Specifically, the NSF Statistical Research Service (SRS) should augment existing data on multinational R&D investments to include annual detailed STEM workforce data, including occupation, level of education, and experience for workers within and outside the United States. These data should track the STEM workforce for multinational companies in the United States versus other countries. The SRS should also collect detailed information on how much and what types of R&D and innovation activities are being done overseas. The NSF Social, Behavioral, and Economic Sciences division should do four things: 1) begin a research program to estimate the number of jobs that have been lost to offshoring and to identify the characteristics of jobs that make them more or less vulnerable to offshoring; 2) assess the extent of U.S. university globalization and then track trends; 3) identify the effects of university globalization on the U.S. STEM workforce and students, and launch a research program to identify and disseminate best practices in university globalization; and 4) conduct a study to identify the amount and types of U.S. government procurement that are being offshored. Finally, the BEA should implement recommendations from prior studies, such as the 2006 study by MIT’s Industrial Performance Center, to improve its collection of services data, especially trade in services.

Establish an independent institute to study the implications of globalization. Blinder has said that the economic transformation caused by offshoring could rival the changes caused by the industrial revolution. In addition to collecting data, government needs to support an independent institute to analyze the social and economic implications of these changes and to consider policy options to address the undesirable effects. A $40 million annual program to fund intramural and extramural research would be a good start.

Facilitate worker representation in the policy process. Imagine if a major trade association, such as the Semiconductor Industry Association, were excluded from having any representative on a federal advisory committee making recommendations on trade and export control policy in the semiconductor industry. It would be unfathomable. But we have precisely this arrangement when it comes to making policies that directly affect the STEM workforce. Professional societies and labor unions should be invited to represent the views of STEM workers on federal advisory panels and in congressional hearings.

Create better career paths for STEM workers. STEM offshoring has created a pessimistic attitude about future career prospects for incumbent workers as well as students. To make STEM career paths more reliable and resilient, the government and industry should work together to create programs for continuing education, establish a sturdier safety net for displaced workers, improve information about labor markets and careers, expand the pool of potential STEM workers by making better use of workers without a college degree, and provide assistance for successful reentry into the STEM labor market after voluntary and involuntary absences. Some specific steps are:

  • The government should encourage the adoption and use of low-cost asynchronous online education targeted at incumbent STEM workers. The program would be coordinated with the appropriate scientific and engineering professional societies. A pilot program should assess the current penetration rates of online education for STEM workers and identify barriers to widespread adoption.
  • The Department of Labor should work with the appropriate scientific and engineering professional societies to create a pilot program for continuous education of STEM workers and retraining of displaced mid-career STEM workers. Unlike prior training programs, these should be targeted at jobs that require at least a bachelor’s degree. Funding could come from the H-1B visa fees that companies pay when they hire foreign workers.
  • The National Academies should form a study panel to identify on-ramps to STEM careers for students who do not go to college, recommend ways to eliminate barriers, and identify effective strategies for STEM workers to reenter the STEM workforce more easily.
  • Congress should reform immigration policy to increase the number of highly skilled people admitted as permanent residents and reduce the number of temporary H-1B and L-1 work visas. Rules for H-1B and L-1 visas should be tightened to ensure that workers receive market wages and do not displace U.S. citizens and permanent resident workers.

Improve the competitiveness of the next generation of STEM workers. As workers in other countries develop more advanced skills, U.S. STEM workers must develop new skills and opportunities to distinguish themselves. They should identify and pursue career paths that are geographically sticky, and they should acquire more entrepreneurship skills that will enable them to create their own opportunities. The National Academies could help by forming a study panel to identify necessary curriculum reforms and best practices in teaching innovation, creativity, and entrepreneurship to STEM students. NSF should encourage and help fund study-abroad programs for STEM students to improve their ability to work in global teams.

Public procurement should favor U.S. workers. The public sector—federal, state, and local government—accounts for 19% of the economy and is an important lever that policymakers should use. There is a long, strong, and positive link between government procurement and technological innovation. The federal government not only funded most of the early research in computers and the Internet but was also a major customer for those new technologies. U.S. taxpayers have a right to know that government expenditures at any level are being used appropriately to boost innovation and help U.S. workers. The first step is to do an accounting of the extent of public procurement that is being offshored. Then the government should modify regulations to keep STEM-intensive work at home.

We are at the beginning of a major structural shift in the global distribution of R&D and STEM-intensive work. Given the critical nature of STEM to economic growth and national security, the United States must begin to adapt to these changes. The responses that have been proposed and adopted so far are based on the belief that nothing has changed. Simply increasing the amount of R&D spending, the pool of STEM workers, and the number of K-12 science and math teachers is not enough. The nation needs to develop a better understanding of the new dynamics of the STEM system and to adopt policies that will advance the interests of the nation and its STEM workers.

In the Zone: Comprehensive Ocean Protection

For too long, humanity’s effects on the oceans have been out of sight and out of mind. Looking at the vast ocean from the shore or a jet’s window, it is hard to imagine that this seemingly limitless area could be vulnerable to human activities. But during the past decade, reports have highlighted the consequences of human activity on our coasts and oceans, including collapsing fisheries, invasive species, unnatural warming and acidification, and ubiquitous “dead zones” induced by nutrient runoff. These changes have been linked not to a single threat but to the combined effects of the many past and present human activities that affect marine ecosystems directly and indirectly.

The declining state of the oceans is not solely a conservation concern. Healthy oceans are vital to everyone, even those who live far from the coast. More than 1 billion people worldwide depend on fish as their primary protein source. The ocean is a key component of the climate system, absorbing solar radiation and exchanging, absorbing, and emitting oxygen and carbon dioxide. Ocean and coastal ecosystems provide water purification and waste treatment, land protection, nutrient cycling, and pharmaceutical, energy, and mineral resources. Further, more than 89 million Americans and millions more around the world participate in marine recreation each year. As coastal populations and demand for ocean resources have grown, more and more human activities now overlap and interact in the marine environment.

Integrated management of these activities and their effects is necessary but is just beginning to emerge. In Boston Harbor, for example, a complicated mesh of navigation channels, offshore dumping sites, outflow pipes, and recreational and commercial vessels crisscrosses the bay. Massachusetts, like other states and regions, has realized the potential for conflict in this situation and is adopting a more integrated framework for managing these and future uses in the harbor and beyond. In 2007, California, Oregon, and Washington also agreed to pursue a new integrated style of ocean management that accounts for ecosystem interactions and multiple human uses.

This shift in thinking is embodied in the principles of ecosystem-based management, an integrated approach to management that considers the entire ecosystem, including humans. The goal of ecosystem-based management is to maintain an ecosystem in a healthy, productive, and resilient condition so that it can provide the services humans want and need, taking into account the cumulative effects and needs of different sectors. New York State has passed legislation aimed at achieving a sustainable balance among multiple uses of coastal ecosystems and the maintenance of ecological health and integrity. Washington State has created a regional public/private partnership to restore Puget Sound, with significant authority for coordinated ecosystem-based management.

These examples reflect a promising and growing movement toward comprehensive ecosystem-based management in the United States and internationally. However, efforts to date remain isolated and relatively small in scale, and U.S. ocean management has largely failed to address the cumulative effects of multiple human stressors.

Falling short

A close look at several policies central to the current system of ocean and coastal management reveals ways in which ecosystem-based management was presaged as well as reasons why these policies have fallen short of a comprehensive and coordinated ocean management system. The 1972 Coastal Zone Management Act (CZMA) requires coastal states, in partnership with the federal government, to protect and preserve coastal wetlands and other ecosystems, provide healthy fishery harvests, ensure recreational use, maintain and improve water quality, and allow oil and gas development in a manner compatible with long-term conservation. Federal agencies must ensure that their activities are consistent with approved state coastal zone management plans, providing for a degree of state/federal coordination. At the state level, however, management of all of these activities is typically fragmented among different agencies, generally not well coordinated, and often reactive. In addition, the CZMA does not address important stressors on the coasts, such as the runoff of fertilizers and pesticides from inland areas that eventually make their way into the ocean.

The 1970 National Environmental Policy Act (NEPA) also recognized the importance of cumulative effects. Under NEPA and related state environmental laws, agencies are required to assess cumulative effects, both direct and indirect, of proposed development projects. In addition, they must assess the cumulative effects of all other past, present, and future developments on the same resources. This is an onerous and ambiguous process, which is seldom completed, in part because cumulative effects are difficult to identify and measure. It is also a reactive process, triggered when a project is proposed, and therefore does not provide a mechanism for comprehensive planning in the marine environment.

Congress in the early 1970s also passed the Endangered Species Act, the Marine Mammal Protection Act, and the National Marine Sanctuaries Act, which require the National Marine Fisheries Service and National Oceanic and Atmospheric Administration (NOAA) to address the cumulative effects of human activities on vulnerable marine species and habitats. NOAA’s National Marine Sanctuary Program is charged with conserving, protecting, and enhancing biodiversity, ecological integrity, and cultural legacy within sanctuary boundaries while allowing uses that are compatible with resource protection. The sanctuaries are explicitly managed as ecosystems, and humans are considered to be a fundamental part of those ecosystems. However, sanctuary management has often been hampered by a lack of funding and limited authority to address many important issues. Although the endangered species and marine mammal laws provide much stronger mandates for action, the single-species approach inherent in these laws has limited their use in dealing with broader-scale ecological degradation.

Existing laws may look good on paper, but in practice they have proven inadequate. Overall, U.S. ocean policy has five major shortcomings. First, it is severely fragmented and poorly matched to the scale of the problem. Second, it lacks an overarching set of guiding principles and an effective framework for coordination and decisionmaking. Third, the tools provided in the various laws lead mostly to reactive planning. Fourth, many policies lack sufficient regulatory teeth or funding to implement their mandates. Finally, scientific information and methods for judging the nature and extent of cumulative effects have been insufficient to support integrated management.

The oceans are still largely managed piecemeal, one species, sector, or issue at a time, despite the emphasis on cumulative effects and integrated management in existing laws. A multitude of agencies with very different mandates have jurisdiction over coastal and ocean activities, each acting at different spatial scales and locations. Local and state governments control most of what happens on land. States generally have authority out to three nautical miles, and federal agencies govern activities from the three-mile limit to the exclusive economic zone (EEZ) boundary, 200 nautical miles offshore. Layered on top of these boundaries are separate jurisdictions for National Marine Sanctuaries, National Estuarine Reserves, the Minerals Management Service, regional fisheries management councils, and many others. There is often no mechanism or mandate for the diverse set of agencies that manage individual sectors to communicate or coordinate their actions, despite the fact that the effects of human activities frequently extend across boundaries (for example, land-based pollutants may be carried far from shore by currents) and the activities of one sector may affect those of another (for example, proposed offshore energy facilities may affect local or regional fisheries). The result is a de facto spatial configuration of overlapping and uncoordinated rules and regulations.

Important interconnections—between human uses and the environment and across ecological and jurisdictional realms—are being ignored. The mandate for meaningful coordination among these different realms is weak at best, with no transparent process for implementing coordinated management and little authority to address effects outside of an agency’s direct jurisdiction. The increasing number and severity of dead zones are examples of this. A dead zone is an area in which coastal waters have been depleted of oxygen, and thus of marine life, because of the effects of fertilizer runoff from land. But the sources of the problem are often so distant from the coast that proving the land/coast connection and then doing something about the problem are major challenges for coastal managers.

Ocean and coastal managers are forced to react to problems as they emerge, often with limited time and resources to address them, rather than being able to plan for them within the context of all ocean uses. For example, states and local communities around the country are grappling with plans for liquefied natural gas facilities. Most have no framework for weighing the pros and cons of multiple potential sites. Instead, they are forced to react to each proposal individually, and their ability to plan ahead for future development is curtailed. Approval of an individual project requires the involvement of multiple federal, state, and local agencies; the makeup of this group varies depending on the prospective location. This reactive and variable process results in ad hoc decisionmaking, missed opportunities, stalemates, and conflicts among uses. Nationwide, similar problems are involved in the evaluation of other emerging ocean uses, such as offshore aquaculture and energy development.

The limited recovery of endangered and threatened salmon species is another example of how our current regulatory framework can fail, while also serving as an example of the potential of the ecosystem-based approach. A suite of land- and ocean-based threats, including overharvesting, habitat degradation, changing ocean temperatures, and aquaculture, threatens the survival of salmon stocks on the West Coast. In Puget Sound, the problem is made more complex by the interaction of salmon with their predators, resident killer whales, which are also listed as endangered. Despite huge amounts of funding and the mandates of the Endangered Species Act, salmon recovery has been hampered by poor coordination among agencies with different mandates, conflicts among users, and management approaches that have failed to account for the important influence of ecosystem interactions. In response, a novel approach was developed called the Shared Strategy for Puget Sound. Based on technical input from scientists about how multiple stressors combine to affect salmon populations, local watershed groups developed creative, feasible salmon recovery plans for their own watersheds. Regional-scale action was also needed, so watershed-level efforts were merged with input from federal, county, and tribal governments as well as stakeholders to create a coordinated Puget Sound Salmon Recovery Plan. That plan is now being implemented by the Puget Sound Partnership, a groundbreaking public/private alliance aimed at coordinated management and recovery of not just salmon but the entire Puget Sound ecosystem.

As in Puget Sound, most areas of the ocean are now used or affected by humans in multiple ways, yet understanding how those various activities and stresses interact to affect marine ecosystems has proven difficult. Science has lagged behind policies aimed at addressing cumulative effects, leaving managers with few tools to weigh the relative importance and combined effects of a multitude of threats. In some cases, the stressors act synergistically, so that the combination of threats is worse than just the sum of their independent effects. Examples abound, such as when polycyclic aromatic hydrocarbon pollutants combine with increases in ultraviolet radiation to increase the mortality of some marine invertebrates. Recent work suggests that these synergistic effects are common, especially as more activities are undertaken in a particular place. The science of multiple stressors is still in its infancy, and only recently has a framework existed for mapping and quantifying the impacts of multiple human activities on a common scale. This scientific gap is a final critical limitation on active and comprehensive management of the marine environment.

Toward ocean zoning

Advocates for changing U.S. ocean policy are increasingly calling for comprehensive ocean zoning, a form of ecosystem-based management in which zones are designated for different uses in order to separate incompatible activities and reduce conflicts, protect vulnerable ecosystems from potential stressors, and plan for future uses. Zoning is already in the planning stages in a variety of places.

Creating a comprehensive, national, ecosystem-based, ocean zoning policy could address many of the ubiquitous problems with current policy by providing a set of overarching guiding principles and a standardized mechanism for the planning of ocean uses that takes into account cumulative effects. In order to be successful, the policy must mandate and streamline interagency coordination and integrated decisionmaking. It should also provide for public accountability and effective stakeholder engagement. Finally, it should be supported by scientific tools and information that allow participants in the process to understand where and why serious cumulative effects occur and how best to address them.

An integrated management approach would be a major shift in ocean policy. Managers will need not only a new governance framework but also new tools for prioritizing and coordinating management actions and measuring success. They must be able to:

  • Understand the spatial distribution of multiple human activities and the direct and indirect stresses on the ecosystem associated with those activities
  • Assess cumulative effects of multiple current and future activities, both inside and outside their jurisdictions, that affect target ecosystems and resources in the management area
  • Identify sets of interacting or overlapping activities that suggest where and when coordination between agencies is critical
  • Prioritize the most important threats to address and/or places to invest limited resources
  • Effectively monitor management performance and changing threats over time

Given the differences among the stressors—in their effects, intensity, and scale—comparing them with a common metric or combining them into a measure of cumulative impact in order to meet these needs has been difficult. Managers have lacked comprehensive data and a systematic framework for measuring cumulative effects, effectively making it impossible for them to implement ecosystem-based management or comprehensive ecosystem-based ocean zoning. Thanks to a new tool developed by a diverse group of scientists and described below, managers are now able to assess, visualize, and monitor cumulative effects. This tool is already being used to address the needs listed above and to guide the implementation of ecosystem-based management in several places. The resulting maps of cumulative human impact produced by the process show an over-taxed ocean, reinforcing the urgent need for careful zoning of multiple uses.

An assessment tool

The first application of this framework for quantifying and mapping cumulative effects evaluated the state of the oceans at a global scale. Maps of 17 different human activities, from pollution to fishing to climate change, were overlaid and combined into a cumulative impact index. The results dramatically contradict the common impression that much of the world’s ocean is too vast and remote to be heavily affected by humans (figure 1). As much as 41% of the ocean has been heavily influenced by human activity (the orange to red areas on the map), less than 4% is relatively unaffected (the blue areas), and no single square mile is unaffected by the 17 activities mapped. In coastal areas, no fewer than 9 and as many as 14 of the 17 activities co-occur in every single square mile. The consequences of this heavy use may be missed if human activities are evaluated and managed in isolation from one another, as they typically have been in the United States and elsewhere. The stunning ubiquity of multiple effects highlights the challenges and opportunities facing the United States in trying to achieve sustainable use and long-term protection of our coasts and oceans.

[Figure 1. Global map of the cumulative human impact of 17 activities on the oceans. Source: Modified from Halpern et al., 2008, Science]

The global map makes visible for the first time the overall impact that humans have had on the oceans. More important, the framework used to create it can help managers move beyond the current ad hoc decisionmaking process to assess the effects of multiple sectors simultaneously and to consider their separate and cumulative effects, filling a key scientific gap. The approach makes the land/sea connection explicit, linking the often segregated management concerns of these two realms. It can be applied to a wide range of management issues at any scale. Local and regional management have the most to gain by designing tailored analyses using fine-scale data on relevant activities and detailed habitat maps. Such analyses have recently been completed for the Northwestern Hawaiian Islands Marine National Monument and the California Current Large Marine Ecosystem, which stretches from the U.S.-Canada border to Mexico. Armed with this tool, policymakers and managers can more easily make complex decisions about how best to design comprehensive spatial management plans that protect vulnerable ecosystems, separate incompatible uses, minimize harmful cumulative effects, and ultimately ensure the greatest overall benefits from marine ecosystem goods and services.

Policy recommendations

Three key recommendations emerge as priorities from this work. First, systematic, repeatable integration of data on multiple combined effects provides a way to take the pulse of ocean conditions over time and evaluate ocean restoration and protection plans, as well as proposed development. Policymakers should support efforts by scientists and agencies to develop robust ways to collect these critical data.

The framework developed to map human effects globally is beginning to play an important role in cumulative-impact assessment and the implementation of ecosystem-based management and ocean zoning. This approach integrates information about a wide variety of human activities and ecosystem types into a single, comparable, and updatable impact index. This index is ecologically grounded in that it accounts for the variable responses of different ecosystems to the same activity (for example, coral reefs are more sensitive to fertilizer runoff than are kelp forests). The intensity of each activity is assessed in each square mile of ocean on a common scale and then weighted by the vulnerability of the ecosystems in that location to each activity. Weighted scores are summed and displayed in a map of cumulative effects.
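The description above can be summarized compactly as a weighted sum. The following formalization is only a sketch consistent with that description (and with the Halpern et al. framework it draws on); the symbols are introduced here purely for illustration and are not the authors’ notation:

I_C(x) = \sum_{i=1}^{n} \sum_{j=1}^{m} D_i(x)\, E_j(x)\, \mu_{i,j}

Here, for each square mile x, D_i(x) is the intensity of human activity i rescaled to a common range, E_j(x) indicates the presence of ecosystem type j, and \mu_{i,j} is the weight expressing the vulnerability of ecosystem j to activity i. Summing the weighted scores over all activities and ecosystems gives the cumulative impact score that is then mapped.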

With this information, managers can answer questions such as: Where are areas of high and low cumulative impact? What are the most important threats to marine systems? And what are the most important data gaps that must be addressed to implement effective integrated management of coastal and marine ecosystems? They can also identify areas in which multiple, potentially incompatible activities overlap, critical information for spatial planning. For example, restoring oyster reefs for fisheries production may be ineffective if nearby activities such as farming and urban development result in pollution that impairs the fishery. Further, managers can use the framework to evaluate cumulative effects under alternative management or policy scenarios, such as the placement of new infrastructure (for example, wind or wave energy farms) or the restriction of particular activities (for example, certain kinds of fishing practices).

This approach highlights important threats to the viability of ocean resources, a key aspect of NOAA’s framework for assessing the health of marine species’ populations. In the future, it might also inform a similar framework for monitoring the condition of coastal and ocean ecosystems. In particular, this work could contribute to the development of a national report card on ocean health to monitor the condition of our oceans and highlight what is needed to sustain the goods and services they provide. NOAA’s developing ecosystem approach to management is fundamentally adaptive and therefore will depend critically on monitoring. This tool is a valuable contribution to such efforts, providing a benchmark for future assessments of ocean conditions and a simple new way to evaluate alternative management strategies.

The second key policy recommendation is that, because of the ubiquity and multitude of human uses of the ocean, cumulative effects on marine ecosystems within the U.S. EEZ can and must be addressed through comprehensive marine spatial planning. Implementing such an effort will require new policies that support ocean zoning and coordinated regional planning and management under an overarching set of ecosystem-based guiding principles.

The vast extent, patchwork pattern, and intensity of stressors on the oceans highlight the critical need for integrated planning of human activities in the coastal and marine environments. However, this patchwork pattern also represents an opportunity, because small changes in the intensity and/or location of different uses through comprehensive ocean zoning can dramatically reduce cumulative effects.

Understanding potential tradeoffs among diverse social and ecological objectives is a key principle of ecosystem-based management and will be critical to effective ocean zoning. Quantification and mapping of cumulative effects can be used to explore alternative management scenarios that seek to balance tradeoffs. By assessing how cumulative effects change as particular human activities are added, removed, or relocated within the management area, managers, policymakers, and stakeholders can compare the potential costs and benefits of different decisions. Decisionmaking by the National Marine Sanctuaries, coastal states, and regional ecosystem initiatives could all potentially benefit from revealing areas of overlap, conflict, and incompatibility among human activities. In the Papahānaumokuākea Marine National Monument in the Northwestern Hawaiian Islands, this approach is already being used to help guide decisions on where different activities should be allowed in this highly sensitive ecosystem.

Other spatial management approaches, particularly marine protected areas (MPAs), can also benefit from this framework and tool. Most MPA regulations currently restrict only fishing, but the widespread overlap of multiple stressors suggests that for MPAs to be successful, managers must either expand the list of activities that are excluded from MPAs, locate them carefully to avoid negative effects from other human activities, or implement complementary regulations to limit the impact of other activities on MPAs. Comprehensive assessments and mapping of cumulative effects can be used at local or regional levels to highlight gaps in protection, select areas to protect, and help locate MPAs where their beneficial effects will be maximized. For example, the Great Barrier Reef Marine Park Authority of Australia embedded its large network of MPAs within other zoning and regulations to help address the many threats to coral reef ecosystems not mitigated by protection in MPAs.

The transition to comprehensive ocean zoning will not happen overnight. The governance transition, coordination with states and neighboring countries, engagement of diverse stakeholder groups, and scientific data collection and integration, among other challenges, will all take significant time, effort, and political will. These efforts would be galvanized by the passage of a U.S. ocean policy act, one that cements the country’s commitment to protecting ocean resources, just as the Clean Air and Clean Water Acts have done for air and fresh water. In addition, continued strengthening and funding of the CZMA would support states’ efforts to reduce the impact of coastal and upstream land use on coastal water quality and protect vulnerable ecosystems from overuse and degradation. Lawmakers could also increase the authority of the National Marine Sanctuary Program as a system of protected regions in which multiple human uses are already being managed. Finally, legislation to codify NOAA and support and increase its efforts to understand and manage marine ecosystems in an integrated way is urgently needed.

The third key policy recommendation is that protection of the few remaining relatively pristine ecosystems in U.S. waters should be a top priority. U.S. waters harbor important intact ecosystems that are currently beyond the reach of most human activities. Unfortunately, less than 2% of the U.S. EEZ is relatively unaffected. These areas are essentially national ocean wilderness areas, offering rich opportunities to understand how healthy systems work and important baselines to inform the restoration of those that have been degraded. These areas deserve immediate protection so that they can be maintained in a healthy condition for the foreseeable future. The fact that these areas are essentially unaffected by human activity means that the cost of protecting them in terms of lost productivity or suspended economic activity is minimal. The opportunity cost is small and the returns likely very large, making a strong case for action. If the nation waits to create robust marine protected areas to permanently conserve these intact places, it risks their degradation and a lost opportunity to protect the small percentage of marine systems that remains intact.

In Defense of Biofuels, Done Right

Biofuels have been getting bad press, not always for good reasons. Certainly important concerns have been raised, but preliminary studies have been misinterpreted as a definitive condemnation of biofuels. One recent magazine article, for example, illustrated what it called “Ethanol USA” with a photo of a car wreck in a corn field. In particular, many criticisms converge around grain-based biofuel, traditional farming practices, and claims of a causal link between U.S. land use and land-use changes elsewhere, including tropical deforestation.

Focusing only on such issues, however, distracts attention from a promising opportunity to invest in domestic energy production using biowastes, fast-growing trees, and grasses. When biofuel crops are grown in appropriate places and under sustainable conditions, they offer a host of benefits: reduced fossil fuel use; diversified fuel supplies; increased employment; decreased greenhouse gas emissions; enhanced habitat for wildlife; improved soil and water quality; and more stable global land use, thereby reducing pressure to clear new land.

Not only have many criticisms of biofuels been alarmist, many have been simply inaccurate. In 2007 and early 2008, for example, a bumper crop of media articles blamed sharply higher food prices worldwide on the production of biofuels, particularly ethanol from corn, in the United States. Subsequent studies, however, have shown that the increases in food prices were primarily due to many other interacting factors: increased demand in emerging economies, soaring energy prices, drought in food-exporting countries, cut-offs in grain exports by major suppliers, market-distorting subsidies, a tumbling U.S. dollar, and speculation in commodities markets.

Although ethanol production indeed contributes to higher corn prices, it is not a major factor in world food costs. The U.S. Department of Agriculture (USDA) calculated that biofuel production contributed only 5% of the 45% increase in global food costs that occurred between April 2007 and April 2008. A Texas A&M University study concluded that energy prices were the primary cause of food price increases, noting that between January 2006 and January 2008, the prices of fuel and fertilizer, both major inputs to agricultural production, increased by 37% and 45%, respectively. And the International Monetary Fund has documented that since their peak in July 2008, oil prices declined by 69% as of December 2008, and global food prices declined by 33% during the same period, while U.S. corn production has remained at about 12 billion bushels a year, one-third of which is still used for ethanol production.

In another line of critique, some argue that the potential benefits of biofuels might be offset by indirect effects. But large uncertainties and untested assumptions underlie the debate about the indirect land-use effects of biofuels on tropical deforestation. The critical claim is that using U.S. farmland for energy crops necessarily causes new land-clearing elsewhere, with concerns particularly strong about the loss of tropical forests and natural grasslands. In short, the argument is that biofuel production in the United States sets in motion an inevitable chain of deforestation.

According to this argument, if U.S. farm production is used for fuel instead of food, food prices rise and farmers in developing countries respond by growing more food. This response requires clearing new land and burning native vegetation and, hence, releasing carbon. This “induced deforestation” hypothesis is based on questionable data and modeling assumptions about available land and yields, rather than on empirical evidence. The argument assumes that the supply of previously cleared land is inelastic (that is, agricultural land for expansion is unavailable without new deforestation). It also assumes that agricultural commodity prices are a major driving force behind deforestation and that yields decline with expansion. The calculations for carbon emissions assume that land in a stable, natural state is suddenly converted to agriculture as a result of biofuels. Finally, the assertions assume that it is possible to measure with some precision the areas that will be cleared in response to these price signals.

A review of the issues reveals, however, that these assumptions about the availability of land, the role of biofuels in causing deforestation, and the ability to relate crop prices to areas of land clearance are unsound. Among our findings:

First, sufficient suitably productive land is available for multiple uses, including the production of biofuels. Assertions that U.S. biofuel production will cause large indirect land-use changes rely on limited data sets and unverified assumptions about global land cover and land use. Calculations of land-use change begin by assuming that global land falls into discrete classes suitable for agriculture—cropland, pastures and grasslands, and forests—and results depend on estimates of the extent, use, and productivity of these lands, as well as presumed future interactions among land-use classes. But several major organizations, including the Food and Agriculture Organization (FAO), a primary data clearinghouse, have documented significant inconsistencies surrounding global land-cover estimates. For example, the three most recent FAO Forest Resource Assessments, for periods ending in 1990, 2000, and 2005, provide estimates of the world’s total forest cover in 1990 that vary by as much as 470 million acres, or 21% of the original estimate.

Cropland data face similar discrepancies, and even more challenging issues arise when pasture areas are considered. Estimates for land used for crop production range from 3.8 billion acres (calculated by the FAO) to 9 billion acres (calculated by the Millennium Ecosystem Assessment, an international effort spearheaded by the United Nations). In a recent study attempting to reconcile cropland use circa 2000, scientists at the University of Wisconsin-Madison and McGill University estimated that there were 3.7 billion acres of cropland, of which 3.2 billion were actively cropped or harvested. Land-use studies consistently acknowledge serious data limitations and uncertainties, noting that a majority of global crop lands are constantly shifting the location of cultivation, leaving at any time large areas fallow or idle that may not be captured in statistics. Estimates of idle croplands, prone to confusion with pasture and grassland, range from 520 million acres to 4.9 billion acres globally. The differences illustrate one of many uncertainties that hamper global land-use change calculations. To put these numbers in perspective, USDA has estimated that in 2007, about 21 million acres were used worldwide to produce biofuel feedstocks, an area that would occupy somewhere between 0.4% and 4% of the world’s estimated idle cropland.
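The percentage range quoted above follows directly from the figures in this paragraph; as a back-of-the-envelope check, dividing the 21 million acres of biofuel feedstocks by the high and low estimates of idle cropland gives

\frac{21\ \text{million acres}}{4{,}900\ \text{million acres}} \approx 0.4\% \qquad \text{and} \qquad \frac{21\ \text{million acres}}{520\ \text{million acres}} \approx 4\%,

which is the 0.4% to 4% range cited.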

Diverse studies of global land cover and potential productivity suggest that anywhere from 600 million to more than 7 billion additional acres of underutilized rural lands are available for expanding rain-fed crop production around the world, after excluding the 4 billion acres of cropland currently in use, as well as the world’s supply of closed forests, nature reserves, and urban lands. Hence, on a global scale, land per se is not an immediate limitation for agriculture and biofuels.

In the United States, the federal government, through the multiagency Biomass Research and Development Initiative (BRDI), has examined the land and market implications of reaching the nation’s biofuel target, which calls for producing 36 billion gallons by 2022. BRDI estimated that a slight net reduction in total U.S. active cropland area would result by 2022 in most scenarios, when compared with a scenario developed from USDA’s so-called “baseline” projections. BRDI also found that growing biofuel crops efficiently in the United States would require shifts in the intensity of use of about 5% of pasture lands to more intensive hay, forage, and bioenergy crops (25 million out of 456 million acres) in order to accommodate dedicated energy crops, along with using a combination of wastes, forest residues, and crop residues. BRDI’s estimate assumes that the total area allocated to USDA’s Conservation Reserve Program (CRP) remains constant at about 33 million acres but allows about 3 million acres of the CRP land on high-quality soils in the Midwest to be offset by new CRP additions in other regions. In practice, additional areas of former cropland that are now in the CRP could be managed for biofuel feedstock production in a way that maintains positive impacts on wildlife, water, and land conservation goals, but this option was not included among the scenarios considered.

Yields are important. They vary widely from place to place within the United States and around the world. USDA projects that corn yields will rise by 20 bushels per acre by 2017; at 2006 yields, that increase in output is equivalent to adding 12.5 million acres of corn land, and at the average yields of many less-developed nations, more than triple that area. And there is the possibility that yields will increase more quickly than projected in the USDA baseline, as seed companies aim to exceed 200 bushels per acre by 2020. The potential to increase yields in developing countries offers tremendous opportunities to improve welfare and expand production while reducing or maintaining the area harvested. These improvements are consistent with U.S. trends during the past half century showing agricultural output growth averaging 2% per year while cropland use fell by an average of 0.7% per year. Even without large yield increases, cropland requirements to meet biofuel production targets may not be nearly as great as assumed.

Concerns over induced deforestation are based on a theory of land displacement that is not supported by data. U.S. ethanol production shot up by more than 3 billion gallons (150%) between 2001 and 2006, and corn production increased 11%, while total U.S. harvested cropland fell by about 2% in the same period. Indeed, the harvested area for “coarse grains” fell by 4% as corn, with an average yield of 150 bushels per acre, replaced other feed grains such as sorghum (averaging 60 bushels per acre). Such statistics defy modeling projections by demonstrating an ability to supply feedstock to a burgeoning ethanol industry while simultaneously maintaining exports and using substantially less land. So although models may assume that increased use of U.S. land for biofuels will lead to more land being cleared for agriculture in other parts of the world, evidence is lacking to support those claims.

Second, there is little evidence that biofuels cause deforestation, and much evidence for alternative causes. Recent scientific papers that blame biofuels for deforestation are based on models that presume that new land conversion can be simulated as a predominantly market-driven choice. The models assume that land is a privately owned asset managed in response to global price signals within a stable rule-based economy—perhaps a reasonable assumption for developed nations.

However, this scenario is far from the reality in the smoke-filled frontier zones of deforestation in less-developed countries, where the models assume biofuel-induced land conversion takes place. The regions of the world that are experiencing first-time land conversion are characterized by market isolation, lawlessness, insecurity, instability, and lack of land tenure. And nearly all of the forests are publicly owned. Indeed, land-clearing is a key step in a long process of trying to stake a claim for eventual tenure. A cycle involving incremental degradation, repeated and extensive fires, and shifting small plots for subsistence tends to occur long before any consideration of crop choices influenced by global market prices.

The causes of deforestation have been extensively studied, and it is clear from the empirical evidence that forces other than biofuel use are responsible for the trends of increasing forest loss in the tropics. Numerous case studies document that the factors driving deforestation are a complex expression of cultural, technological, biophysical, political, economic, and demographic interactions. Solutions and measures to slow deforestation have also been analyzed and tested, and the results show that it is critical to improve governance, land tenure, incomes, and security to slow the pace of new land conversion in these frontier regions.


Selected studies based on interpretations of satellite imagery have been used to support the claims that U.S. biofuels induce deforestation in the Amazon, but satellite images cannot be used to determine causes of land-use change. In practice, deforestation is a site-specific process. How it is perceived will vary greatly by site and also by the temporal and spatial lens through which it is observed. Cause-and-effect relationships are complex, and the many small changes that enable larger future conversion cannot be captured by satellite imagery. Although it is possible to classify an image to show that forest in one period changed to cropland in another, cataloguing changes in discrete classes over time does not explain why these changes occur. Most studies asserting that the production and use of biofuels cause tropical deforestation point to land cover at some point after large-scale forest degradation and clearing have taken place. But the key events leading to the primary conversion of forests often proceed for decades before they can be detected by satellite imagery. The imagery does not show how the forest was used to sustain livelihoods before conversion, nor the degrees of continual degradation that occurred over time before the classification changed. When remote sensing is supported by a ground-truth process, it typically attempts to narrow the uncertainties of land-cover classifications rather than research the history of occupation, prior and current use, and the forces behind the land-use decisions that led to the current land cover.

First-time conversion is enabled by political, as well as physical, access. Southeast Asia provides one example where forest conversion has been facilitated by political access, which can include such diverse things as government-sponsored development and colonization programs in previously undisturbed areas and the distribution of large timber and mineral concessions and land allotments to friends, families, and sponsors of people in power. Critics have raised valid concerns about high rates of deforestation in the region, and they often point an accusing finger at palm oil and biofuels.

Palm oil has been produced in the region since 1911, and plantation expansion boomed in the 1970s with growth rates of more than 20% per year. Biodiesel represents a tiny fraction of palm oil consumption. In 2008, less than 2% of crude palm oil output was processed for biofuel in Indonesia and Malaysia, the world’s largest producers and exporters. Based on land-cover statistics alone, it is impossible to determine the degree of attribution that oil palm may share with other causes of forest conversion in Southeast Asia. What is clear is that oil palm is not the only factor and that palm plantations are established after a process of degradation and deforestation has transpired. Deforestation data may offer a tool for estimating the ceiling for attribution, however. In Indonesia, for example, 28.1 million hectares were deforested between 1990 and 2005, and oil palm expansion in those areas was estimated to be between 1.7 million and 3 million hectares, or between 6% and 10% of the forest loss, during the same period.

Initial clearing in the tropics is often driven more by waves of illegitimate land speculation than by agricultural production. In many Latin American frontier zones, if there is native forest on the land, it is up for grabs, as there is no legal tenure of the land. The majority of land-clearing in the Amazon has been blamed on livestock, in part because there is no alternative for classifying the recent clearings and in part because landholders must keep the land “in production” to maintain claims and avoid invasions. The result has been frequent burning and the creation of extensive cattle ranches. For centuries, disenfranchised groups have been pushed into the forests and marginal lands where they do what they can to survive. This settlement process often includes serving as low-cost labor to clear land for the next wave of better-connected colonists. Unless significant structural changes occur to remove or modify enabling factors, the forest-clearing that was occurring before this decade is expected to continue along predictable paths.

Testing the hypothesis that U.S. biofuel policy causes deforestation elsewhere depends on models that can incorporate the processes underlying initial land-use change. Current models attempt to predict future land-use change based on changes in commodity prices. As conceived thus far, the computable general equilibrium models designed for economic trade do not adequately incorporate the processes of land-use change. Although crop prices may influence short-term land-use decisions, they are not a dominant factor in global patterns of first-time conversion, the land-clearing of chief concern in relating biofuels to deforestation. The highest deforestation rates observed and estimated globally occurred in the 1990s. During that period, there was a surplus of commodities on world markets and consistently depressed prices.

Third, many studies omit the larger problem of widespread global mismanagement of land. The recent arguments focusing on the possible deforestation attributable to biofuels use idealized representations of crop and land markets, omitting what may be larger issues of concern. Clearly, the causes of global deforestation are complex and are not driven merely by a single crop market. Additionally, land mismanagement, involving both initial clearing and maintaining previously cleared land, is widespread and leads to a process of soil degradation and environmental damage that is especially prevalent in the frontier zones. Reports by the FAO and the Millennium Ecosystem Assessment describe the environmental consequences of repeated fires in these areas. Estimates of global burning vary annually, ranging from 490 million to 980 million acres per year between 2000 and 2004. The vast majority of fires in the tropics occur in Africa and the Amazon in what were previously cleared, nonforest lands. In a detailed study, the Amazon Institute of Environmental Research and Woods Hole Research Center found that 73% of burned area in the Amazon was on previously cleared land, and that was during the 1990s, when overall deforestation rates were high.

Fire is the cheapest and easiest tool supporting shifting subsistence cultivation. Repeated and extensive burning is a manifestation of the lack of tenure, lack of access to markets, and severe poverty in these areas. When people or communities have few or no assets to protect from fire and no incentive to invest in more sustainable production, they also have no reason to limit the extent of burning. The repeated fires modify ecosystem structure, penetrate ever deeper into forest margins, affect large areas of understory vegetation (which is not detected by remote sensing), and take an ever greater cumulative toll on soil quality and its ability to sequester carbon. Profitable biofuel markets, by contributing to improved incentives to grow cash crops, could reduce the use of fire and the pressures on the agricultural frontier. Biofuels done right, with attention to best practices for sustained production, can make significant contributions to social and economic development as well as environmental protection.

Furthermore, current literature calculates the impacts from an assumed agricultural expansion by attributing the carbon emissions from clearing intact ecosystems to biofuels. If emission analyses consider empirical data reflecting the progressive degradation that occurs (often over decades) before and independently of agriculture market signals for land use, as well as changes in the frequency and extent of fire in areas that biofuels help bring into more stable market economies, then the resulting carbon emission estimates would be worlds apart.

Brazil provides a good case in point, because it holds the globe’s largest remaining area of tropical forests, is the world’s second-largest producer of biofuel (after the United States), and is the world’s leading supplier of biofuel for global trade. Brazil also has relatively low production costs and a growing focus on environmental stewardship. As a matter of policy, the Brazilian government has supported the development of biofuels since launching a National Ethanol Program called Proálcool in 1975. Brazil’s ethanol industry began its current phase of growth after Proálcool was phased out in 1999 and the government’s role shifted from subsidies and regulations toward increased collaboration with the private sector in R&D. The government helps stabilize markets by supporting variable rates of blending ethanol with gasoline and planning for industry expansion, pipelines, ports, and logistics. The government also facilitates access to global markets; develops improved varieties of sugarcane, harvest equipment, and conversion technologies; and supports improvements in environmental performance.

New sugarcane fields in Brazil nearly always replace pasture land or less valuable crops and are concentrated around production facilities in the developed southeastern region, far from the Amazon. Nearly all production is rain-fed and relies on low input rates of fertilizers and agrochemicals, as compared with other major crops. New projects are reviewed under the Brazilian legal framework of Environmental Impact Assessment and Environmental Licensing. Together, these policies have contributed to the restoration or protection of reserves and riparian areas and increased forest cover, in tandem with an expansion of sugarcane production in the most important producing state, São Paulo.

Yet natural forest in Brazil is being lost, with nearly 37 million acres lost between May 2000 and August 2006, and a total of 150 million acres lost since 1970. Some observers have suggested that the increase in U.S. corn production for biofuel led to reduced soybean output and higher soybean prices, and that these changes led, in turn, to new deforestation in Brazil. However, total deforestation rates in Brazil appear to fall in tandem with rising soybean prices. This co-occurrence illustrates a lack of connection between commodity prices and initial land clearing. This phenomenon has been observed around the globe and suggests an alternate hypothesis: Higher global commodity prices focus production and investment where it can be used most efficiently, in the plentiful previously cleared and underutilized lands around the world. In times of falling prices and incomes, people return to forest frontiers, with all of their characteristic tribulations, for lack of better options.

Biofuels done right

With the right policy framework, cellulosic biofuel crops could offer an alternative that diversifies and boosts rural incomes based on perennials. Such a scenario would create incentives to reduce intentional burning that currently affects millions of acres worldwide each year. Perennial biofuel crops can help stabilize land cover, enhance soil carbon sequestration, provide habitat to support biodiversity, and improve soil and water quality. Furthermore, they can reduce pressure to clear new land via improved incomes and yields. Developing countries have huge opportunities to increase crop yield and thereby grow more food on less land, given that cereal yields in less developed nations are 30% of those in North America. Hence, policies supporting biofuel production may actually help stop the extensive slash-and-burn agricultural cycle that contributes to greenhouse gas emissions, deforestation, land degradation, and a lifestyle that fails to support farmers and their families.

Biofuels alone are not the solution, however. Governments in the United States and elsewhere will have to develop and support a number of programs designed to support sustainable development. The operation and rules of such programs must be transparent, so that everyone can understand them and see that fair play is ensured. Among other attributes, the programs must offer economic incentives for sustainable production, and they must provide for secure land tenure and participatory land-use planning. In this regard, pilot biofuel projects in Africa and Brazil are showing promise in addressing the vexing and difficult challenges of sustainable land use and development. Biofuels also are uniting diverse stakeholders in a global movement to develop sustainability metrics and certification methods applicable to the broader agricultural sector.

Given the priority of protecting biodiversity and ecosystem services, it is important to explore further the drivers of land conversion at the frontier and to consider the effects, positive and negative, that U.S. biofuel policies could have in these areas. This means it is critical to distinguish between valid concerns that call for caution and alarmist criticisms that attribute complex problems solely to biofuels.

Still, based on the analyses that we and others have done, we believe that biofuels, developed in an economically and environmentally sensible way, can contribute significantly to the nation’s—indeed, the world’s—energy security while providing a host of benefits for many people in many regions.

Biomedical Enhancements: Entering a New Era

Recently, the Food and Drug Administration (FDA) approved a drug to lengthen and darken eyelashes. Botox and other wrinkle-reducing injections have joined facelifts, tummy tucks, and vaginal reconstruction to combat the effects of aging. To gain a competitive edge, athletes use everything from steroids and blood transfusions to recombinant-DNA–manufactured hormones, Lasik surgery, and artificial atmospheres. Students supplement caffeine-containing energy drinks with Ritalin and the new alertness drug modafinil. The military spends millions of dollars every year on biological research to increase the warfighting abilities of our soldiers. Parents perform genetic tests on their children to determine whether they have a genetic predisposition to excel at explosive or endurance sports. All of these are examples of biomedical enhancements: interventions that use medical and biological technology to improve performance, appearance, or capability in addition to what is necessary to achieve, sustain, or restore health.

The use of biomedical enhancements, of course, is not new. Amphetamines were doled out to troops during World War II. Athletes at the turn of the 20th century ingested narcotics. The cognitive benefits of caffeine have been known for at least a millennium. Ancient Greek athletes swallowed herbal infusions before competitions. The Egyptians brewed a drink containing a relative of Viagra at least 1,000 years before Christ. But modern drug development and improvements in surgical technique are yielding biomedical enhancements that achieve safer, larger, and more targeted enhancement effects than their predecessors, and more extraordinary technologies are expected to emerge from ongoing discoveries in human genetics. (In addition, there are biomechanical enhancements that involve the use of computer implants and nanotechnology, which are beyond the scope of this article.)

What is also new is that biomedical enhancements have become controversial. Some commentators want to outlaw them altogether. Others are concerned about their use by athletes and children. Still others fret that only the well-off will be able to afford them, thereby exacerbating social inequality.

Banning enhancements, however, is misguided. Still, it is important to try to ensure that they are as safe and effective as possible, that vulnerable populations such as children are not forced into using them, and that they are not available only to the well-off. This will require effective government and private action.

A misguided view

Despite the long history of enhancement use, there recently has emerged a view that it is wrong. The first manifestation of this hostility resulted from the use of performance enhancements in sports in the 1950s, especially steroids and amphetamines. European nations began adopting antidoping laws in the mid-1960s, and the Olympic Games began testing athletes in 1968. In 1988, Congress amended the Federal Food, Drug, and Cosmetic Act (FFDCA) to make it a felony to distribute anabolic steroids for nonmedical purposes. Two years later, Congress made steroids a Schedule III controlled substance and substituted human growth hormone in the steroid provision of the FFDCA. Between 2003 and 2005, Congress held hearings lambasting professional sports for not imposing adequate testing regimens. Drug testing has also been instituted in high-school and collegiate sports.

The antipathy toward biomedical enhancements extends well beyond sports, however. Officially, at least, the National Institutes of Health (NIH) will not fund research to develop genetic technologies for human enhancement purposes, although it has funded studies in animals that the researchers tout as a step toward developing human enhancements. It is a federal crime to use steroids to increase strength even if the user is not an athlete. Human growth hormone is in a unique regulatory category in that it is a felony to prescribe it for any purpose other than a specific use approved by the FDA. (For example, the FDA has not approved it for anti-aging purposes.) There is an ongoing controversy about whether musicians, especially string players, should be allowed to use beta blockers to steady their hands. And who hasn’t heard of objections to the use of mood-altering drugs to make “normal” people happier? There’s even a campaign against caffeine.

If the critics had their way, the government would ban the use of biomedical enhancements. It might seem that this would merely entail extending the War on Drugs to a larger number of drugs. But remember that enhancements include not just drugs, but cosmetic surgery and information technologies, such as genetic testing to identify nondisease traits. So a War on Enhancements would have to extend to a broader range of technologies, and because many are delivered within the patient-physician relationship, the government would have to intrude into that relationship in significant new ways. Moreover, the FDA is likely to have approved many enhancement drugs for legitimate medical purposes, with enhancement use taking place on an “off-label” basis. So there would have to be some way for the enhancement police to identify people for whom the drugs had been legally prescribed to treat illness, but who were misusing them for enhancement purposes.

This leads to a far more profound difficulty. The War on Drugs targets only manufacture, distribution, and possession. There is virtually no effort to punish people merely for using an illegal substance. But a successful ban on biomedical enhancement would have to prevent people from obtaining benefits from enhancements that persisted after they no longer possessed the enhancements themselves, such as the muscles built with the aid of steroids or the cognitive improvement that lasts for several weeks after normal people stop taking a certain medicine that treats memory loss in Alzheimer’s patients. In short, a ban on enhancements would have to aim at use as well as possession and sale.

To imagine what this would be like, think about the campaign against doping in elite sports, where athletes must notify antidoping officials of their whereabouts at all times and are subject to unannounced, intrusive, and often indecent drug tests at any hour of the day or night. Even in the improbable event that regular citizens were willing to endure such an unprecedented loss of privacy, the economic cost of maintaining such a regime, given how widespread the use of highly effective biomedical enhancements might be, would be prohibitive.

A ban on biomedical enhancements would be not only unworkable but unjustifiable. Consider the objections to enhancement in sports. Why are enhancements against the rules? Is it because they are unsafe? Not all of them are: Anti-doping rules in sports go after many substances that pose no significant health risks, such as caffeine and Sudafed. (A Romanian gymnast forfeited her Olympic gold medal after she accidentally took a couple of Sudafed to treat a cold.) Even in the case of vilified products such as steroids, safety concerns stem largely from the fact that athletes are forced to use the drugs covertly, without medical supervision. Do enhancements give athletes an “unfair” advantage? They do so only if the enhancements are hard to obtain, so that only a few competitors obtain the edge. But the opposite seems to be true: Enhancements are everywhere. Besides, athletes are also tested for substances that have no known performance-enhancing effects, such as marijuana. Are the rewards from enhancements “unearned”? Not necessarily. Athletes still need to train hard. Indeed, the benefit from steroids comes chiefly from allowing athletes to train harder without injuring themselves. In any event, success in sports comes from factors that athletes have done nothing to deserve, such as natural talent and the good luck to have been born to encouraging parents or to avoid getting hurt. Would the use of enhancements confound recordkeeping? This doesn’t seem to have stopped the adoption of new equipment that improves performance, such as carbon-fiber vaulting poles, metal skis, and oversized tennis racquets. If one athlete used enhancements, would every athlete have to, so that the benefit would be nullified? No, there would still be the benefit of improved performance across the board—bigger lifts, faster times, higher jumps. In any case, the same thing happens whenever an advance takes place that improves performance.

The final objection to athletic enhancement, in the words of the international Olympic movement, is that it is against the “spirit of sport.” It is hard to know what this means. It certainly can’t mean that enhancements destroy an earlier idyll in which sports were enhancement-free; as we saw before, this never was the case. Nor can it stand for the proposition that a physical competition played with the aid of enhancements necessarily is not a “sport.” There are many sporting events in which the organizers do not bother to test participants, from certain types of “strong-man” and powerlifting meets to your neighborhood pickup basketball game. There are several interesting historical explanations for why athletic enhancement has gained such a bad rap, but ultimately, the objection about “the spirit of sport” boils down to the fact that some people simply don’t like the idea of athletes using enhancements. Well, not exactly. You see, many biomedical enhancements are perfectly permissible, including dietary supplements, sports psychology, carbohydrate loading, electrolyte-containing beverages, and sleeping at altitude (or in artificial environments that simulate it). Despite the labor of innumerable philosophers of sport, no one has ever come up with a rational explanation for why these things are legal and others aren’t. In the end, they are just arbitrary distinctions.

But that’s perfectly okay. Lots of rules in sports are arbitrary, like how many players are on a team or how far the boundary lines stretch. If you don’t like being all alone in the outfield, don’t play baseball. If you are bothered by midnight drug tests, don’t become an Olympian.

The problem comes when the opponents of enhancement use in sports try to impose their arbitrary dislikes on the wider world. We already have observed how intrusive and expensive this would be. Beyond that, there are strong constitutional objections to using the power of the law to enforce arbitrary rules. But most important, a ban on the use of enhancements outside of sports would sacrifice an enormous amount of societal benefit. Wouldn’t we want automobile drivers to use alertness drugs if doing so could prevent accidents? Shouldn’t surgeons be allowed to use beta blockers to steady their hands? Why not let medical researchers take cognitive enhancers if it would lead to faster cures, or let workers take them to be more productive? Why stop soldiers from achieving greater combat effectiveness, rescue workers from lifting heavier objects, and men and women from leading better sex lives? Competent adults who want to use enhancements should be permitted to. In some instances, such as in combat or when performing dangerous jobs, they should even be required to.

Protecting the vulnerable

Rejecting the idea of banning enhancements doesn’t mean that their use should be unregulated. The government has several crucial roles to play in helping to ensure that the benefits from enhancement use outweigh the costs.

In the first place, the government needs to protect people who are incapable of making rational decisions about whether to use enhancements. In the language of biomedical ethics, these are populations that are “vulnerable,” and a number of them are well recognized. One such group, of course, is people with severe mental disabilities. The law requires surrogates to make decisions for these individuals based on what is in their best interests.

Another vulnerable population is children. There can be little disagreement that kids should not be allowed to decide on their own to consume powerful, potentially dangerous enhancement substances. Not only do they lack decisionmaking capacity, but they may be much more susceptible than adults to harm. This is clearly the case with steroids, which can interfere with bone growth in children and adolescents.

The more difficult question is whether parents should be free to give enhancements to their children. Parents face powerful social pressures to help their children excel. Some parents may be willing to improve their children’s academic or athletic performance even at a substantial risk of injury to the child. There are many stories of parents who allow their adolescent daughters to have cosmetic surgery, including breast augmentation. In general, the law gives parents considerable discretion in determining how to raise their children. The basic legal constraint on parental discretion is the prohibition in state law against abuse or neglect, and this generally is interpreted to defer to parental decisionmaking so long as the child does not suffer serious net harm. There are no reported instances in which parents have been sanctioned for giving their children biomedical enhancements, and the authorities might conclude that the benefits conferred by the use of an enhancement outweighed even a fairly significant risk of injury.

Beyond the actions of parents, there remains the question of whether some biomedical enhancements are so benign that children should be allowed to purchase them themselves. At present, for instance, there is no law in the United States against children purchasing coffee, caffeinated soft drinks, and even high-caffeine–containing energy drinks. (Laws prohibiting children from buying energy drinks have been enacted in some other countries.)

At the same time, it may be a mistake to lump youngsters together with older adolescents into one category of children. Older adolescents, although still under the legal age of majority, have greater cognitive and judgmental capacities than younger children. The law recognizes this by allowing certain adolescents, deemed “mature” or “emancipated” minors, to make legally binding decisions, such as decisions to receive medical treatment. Older adolescents similarly may deserve some degree of latitude in making decisions about using biomedical enhancements.

Children may be vulnerable to pressure to use enhancements not only from their parents, but from their educators. Under programs such as No Child Left Behind, public school teachers and administrators are rewarded and punished based on student performance on standardized tests. Private schools compete with one another in terms of where their graduates are accepted for further education. There is also intense competition in school athletics, especially at the collegiate level. Students in these environments may be bulldozed into using enhancements to increase their academic and athletic abilities. Numerous anecdotes, for example, tell of parents who are informed by teachers that their children need medication to “help them focus”; the medication class in question typically is the cognition-enhancing amphetamines, and many of these children do not have diagnoses that would warrant the use of these drugs.

Beyond students, athletes in general are vulnerable to pressure from coaches, sponsors, family, and teammates to use hazardous enhancements. For example, at the 2005 congressional hearings on steroid use in baseball, a father testified that his son committed suicide after using steroids, when in fact he killed himself after his family caught him using steroids, which the boy had turned to in an effort to meet his family’s athletic aspirations.

Another group that could be vulnerable to coercion is workers. Employers might condition employment or promotion on the use of enhancements that increased productivity. For example, an employer might require its nighttime work force to take the alertness drug modafinil, which is now approved for use by sleep-deprived swing-shift workers. Current labor law does not clearly forbid this so long as the drug is relatively safe. From an era in which employees are tested to make sure they aren’t taking drugs, we might see a new approach in which employers test them to make sure they are.

Members of the military may also be forced to use enhancements. The military now conducts the largest known biomedical enhancement research project. Under battlefield conditions, superiors may order the use of enhancements, leaving soldiers no lawful option to refuse. A notorious example is the use of amphetamines by combat pilots. Technically, the pilots are required to give their consent to the use of the pep pills, but if they refuse, they are barred from flying the missions.

The ability of government regulation to protect vulnerable groups varies depending on the group. It is important that educators not be allowed to give students dangerous enhancements without parental permission and that parents not be pressured into making unreasonable decisions by fearful, overzealous, or inadequate educators. The law can mandate the former, but not easily prevent the latter. Coaches and trainers who cause injury to athletes by giving them dangerous enhancements or by unduly encouraging their use should be subject to criminal and civil liability. The same goes for employers. But the realities of military life make it extremely difficult to protect soldiers from the orders of their superiors.

Moreover, individuals may feel pressure to use enhancements not only from outside sources, but from within. Students may be driven to do well in order to satisfy parents, gain admittance to more prestigious schools, or establish better careers. Athletes take all sorts of risks to increase their chances of winning. Workers may be desperate to save their jobs or bring in a bigger paycheck, especially in economically uncertain times. Soldiers better able to complete their missions are likely to live longer.

Surprisingly, while acknowledging the need to protect people from outside pressures, bioethicists generally maintain that we do not need to protect them from harmful decisions motivated by internal pressures. This position stems, it seems, from the recognition that, with the exception of decisions that are purely random, everything we decide to do is dictated at least in part by internal pressures, and in many cases, these pressures can be so strong that the decisions may no longer appear to be voluntary. Take, for example, seriously ill cancer patients contemplating whether or not to undergo harsh chemotherapy regimens. Bioethicists worry that, if we focused on the pressures and lack of options created by the patients’ dire condition, we might not let the patients receive the treatment, or, in the guise of protecting the patients from harm, might create procedural hurdles that would rob them of their decisionmaking autonomy. Similarly, these bioethicists might object to restricting the ability of workers, say, to use biomedical enhancements merely because their choices are highly constrained by their fear of losing their jobs. But even if we accept this argument, that doesn’t mean that we must be indifferent to the dangers posed by overwhelming internal pressure. As we will see, the government still must take steps to minimize the harm that could result.

Individuals may be vulnerable to harm not only from using enhancements, but from participating in experiments to see if an enhancement is safe and effective. Research subjects are protected by a fairly elaborate set of rules, collectively known as the “Common Rule,” that are designed to ensure that the risks of the research are outweighed by the potential benefits and that the subjects have given their informed consent to their participation. But there are many weaknesses in this regulatory scheme. For one thing, these rules apply only to experiments conducted by government-funded institutions or that are submitted to the FDA in support of licensing applications, and therefore they do not cover a great deal of research performed by private industry. Moreover, the rules were written with medically oriented research in mind, and it is not clear how they should be interpreted and applied to enhancement research. For example, the rules permit children to be enrolled as experimental subjects in trials that present “more than minimal risk” if, among other things, the research offers the possibility of “direct benefit” to the subject, but the rules do not say whether an enhancement benefit can count as a direct benefit. Specific research protections extend to other vulnerable populations besides children, such as prisoners and pregnant women, but do not explicitly cover students, workers, or athletes. In reports of a project several colleagues and I recently completed for the NIH, we suggest a number of changes to current regulations that would provide better protection for these populations.

Ensuring safety and effectiveness

Beginning with the enactment of the Pure Food and Drug Act in 1906, we have turned to the government to protect us from unsafe, ineffective, and fraudulent biomedical products and services. Regardless of how much freedom individuals should have to decide whether or not to use biomedical enhancements, they cannot make good decisions without accurate information about how well enhancements work. In regard to enhancements in the form of drugs and medical devices, the FDA has the legal responsibility to make sure that this information exists.

The FDA’s ability to discharge this responsibility, however, is limited. In the first place, the FDA has tended to rely on information from highly stylized clinical trials that do not reflect the conditions under which enhancements would be used by the general public. Moreover, the deficiencies of clinical trials are becoming more apparent as we learn about pharmacogenetics—the degree to which individual responses to medical interventions vary depending on the individual’s genes. The FDA is beginning to revise its rules to require manufacturers to take pharmacogenetics into consideration in studying safety and efficacy, but it will be many years, if ever, before robust pharmacogenetic information is publicly available. The solution is to rely more on data from actual use. Recently the agency has become more adamant about monitoring real-world experience after products reach the market, but this information comes from self-reports by physicians and manufacturers who have little incentive to cooperate. The agency needs to be able to conduct its own surveillance of actual use, with the costs borne by the manufacturers.

Many biomedical enhancements fall outside the scope of FDA authority. They include dietary supplements, many of which are used for enhancement purposes rather than to promote health. You only have to turn on late-night TV to be bombarded with claims for substances to make you stronger or more virile. Occasionally the Federal Trade Commission cracks down on hucksters, but it needs far greater resources to do an effective job. The FDA needs to exert greater authority to regulate dietary supplements, including those used for enhancement.

The FDA also lacks jurisdiction over the “practice of medicine.” Consequently, it has no oversight over cosmetic surgery, except when the surgeon employs a new medical device. This limitation also complicates the agency’s efforts to exert authority over reproductive and genetic practices. This would include the genetic modification of embryos to improve their traits, which promises to be one of the most effective enhancement techniques. Because organized medicine fiercely protects this limit on the FDA, consumers will have to continue to rely on physicians and other health care professionals to provide them with the information they need to make decisions about these types of enhancements. Medical experts need to stay on top of advances in enhancement technology.

Even with regard to drugs and devices that are clearly within the FDA’s jurisdiction, its regulatory oversight only goes so far. Once the agency approves a product for a particular use, physicians are free to use it for any other purpose, subject only to liability for malpractice and, in the case of controlled substances, a requirement that the use must constitute legitimate medical practice. Only a handful of products, such as Botox, have received FDA approval for enhancement use; as noted earlier, enhancements predominantly are unapproved, off-label uses of products approved for health-related purposes. Modafinil, for example, one of the most popular drugs for enhancing cognitive performance, is approved only for the treatment of narcolepsy and sleepiness associated with obstructive sleep apnea/hypopnea syndrome and shift-work sleep disorder. Erythropoietin, which athletes use to improve performance, is approved to treat anemias. The FDA needs to be able to require manufacturers of products such as these to pay for the agency to collect and disseminate data on off-label experience. The agency also has to continue to limit the ability of manufacturers to promote drugs for off-label uses, in order to give them an incentive to obtain FDA approval for enhancement labeling.

An enhancement technology that will increase in use is testing to identify genes that are associated with nondisease characteristics. People can use this information to make lifestyle choices, such as playing sports at which they have the genes to excel, or in reproduction, such as deciding which of a number of embryos fertilized in vitro will be implanted in the uterus. An area of special concern is genetic tests that consumers can use at home without the involvement of physicians or genetic counselors to help them interpret the results. Regulatory authority over genetic testing is widely believed to be inadequate, in part because it is split among the FDA and several other federal agencies, and growing calls to revamp this regulatory scheme need to be heeded.

Any attempt to regulate biomedical enhancement will be undercut by people who obtain enhancements abroad. The best hope for protecting these “enhancement tourists” against unsafe or ineffective products and services lies in international cooperation, but this is costly and subject to varying degrees of compliance.

To make intelligent decisions about enhancement use, consumers need information not only about safety and effectiveness, but about whether they are worth the money. Should they pay for Botox injections, for example, or try to get rid of facial wrinkles with cheaper creams and lotions? When the FDA approved Botox for cosmetic use, it ignored this question of cost-effectiveness because it has no statutory authority to consider it. In the case of medical care, consumers may get some help in making efficient spending decisions from their health insurers, who have an incentive to avoid paying for unnecessarily costly products or services. But insurance does not cover enhancements. The new administration is proposing to create a federal commission to conduct health care cost-effectiveness analyses, among other things, and it is important that such a body pay attention to enhancements as well as other biomedical interventions.

Subsidizing enhancement

In these times of economic distress, when we already question whether the nation can afford to increase spending on health care, infrastructure, and other basic necessities, it may seem foolish to consider whether the government has an obligation to make biomedical enhancements available to all. Yet if enhancements enable people to enjoy a significantly better life, this may not be so outlandish, and if universal access avoids a degree of inequality so great that it undermines our democratic way of life, it may be inescapable.

There is no need for everyone to have access to all available enhancements. Some may add little to an individual’s abilities. Others may be so hazardous that they offer little net benefit to the user. But imagine that a pill is discovered that substantially improves a person’s cognitive facility, not just their memory but abilities such as executive function—the highest form of problem-solving capacity—or creativity. Now imagine if this pill were available only to those who already were well-off and could afford to purchase it with personal funds. If such a pill were sufficiently effective, so that those who took it had a lock on the best schools, careers, and mates, wealth-based access could drive an insurmountable wedge between the haves and have-nots, a gap so wide and deep that we could no longer pretend that there is equality of opportunity in our society. At that point, it is doubtful that a liberal democratic state could survive.

So it may be necessary for the government to regard such a success-determining enhancement as a basic necessity, and, after driving the cost down to the lowest amount possible, subsidize access for those unable to purchase it themselves. Even if this merely maintained preexisting differences in cognitive ability, it would be justified in order to prevent further erosion of equality of opportunity.

The need for effective regulation of biomedical enhancement is only going to increase as we enter an era of increasingly sophisticated technologies. Existing schemes, such as the rules governing human subjects research, must be reviewed to determine whether additions or changes are needed to accommodate this class of interventions. Government agencies and private organizations need to be aware of both the promise and the peril of enhancements and devote an appropriate amount of resources in order to regulate, rather than stop, their use.

Closing the Environmental Data Gap

The compelling evidence that the global climate is changing significantly and will continue to change for the foreseeable future means that we can expect to see similarly significant changes in a wide variety of other environmental conditions such as air and water quality; regional water supply; the health and distribution of plant and animal species; and land-use patterns for food, fiber, and energy production. Unfortunately, we are not adequately monitoring trends in many of these areas and therefore do not have the data necessary to identify emerging problems or to evaluate our efforts to respond. As threats to human health, food production, environmental quality, and ecological well-being emerge, the nation’s leaders will be handicapped by major blind spots in their efforts to design effective policies.

In a world in which global environmental stressors are increasingly interactive and human actions are having ever more powerful effects, detailed, reliable, and timely information is essential. Yet environmental monitoring continues to be undervalued as an investment in environmental protection. We tolerated inadequate data in the past, when problems were relatively simple and geographically limited, such as air or water pollution from a single plant. But such tolerance is unacceptable today, as we try to grapple with far more extensive changes caused by a changing climate.

The effects of climate change will be felt across the globe, and at the regional level they are likely to present unique and hard-to-predict outcomes. For example, a small change in temperature in the Pacific Northwest has allowed bark beetles to survive the winter, breed prolifically, and devastate millions of acres of forest. Although scientists are working to improve forecasts of the future and anticipate such tipping points, observation of what is actually happening remains the cornerstone of an adequate response. Society needs consistent and reliable information to establish baselines, make projections and validate them against observed changes, and identify potential surprises as early as possible.

Fortunately, two developments are helping to facilitate the collection of more and better data. First, new technologies and techniques allow us to capture data more efficiently and effectively. Second, society is demanding greater accountability and the demonstration of true value for environmental investments. The ability to easily share large amounts of information, to combine observations from different programs by linking them to specific geographic locations, to monitor many environmental features from space or by using new microscale devices, and other innovations can greatly extend the reach and richness of our environmental baselines. At the same time, many corporations, foundations, and government entities are working to track the effects of their actions in ways that will demonstrate which approaches work and which do not. In much the same way as the medical community is embracing evidence-based medicine, managers are moving toward evidence-based environmental decisionmaking.
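
To make one of these capabilities concrete, the sketch below shows, in Python, how observations from two different monitoring programs might be combined by linking them to a shared, coarsened geographic location. It is a minimal illustration only: the station coordinates, variable names, and values are invented, and no actual monitoring program’s data format is assumed.

```python
# A minimal, hypothetical sketch of linking observations from two
# monitoring programs to shared geographic locations. All coordinates,
# variable names, and values are invented for illustration.

def grid_key(lat: float, lon: float, cells_per_degree: int = 10) -> tuple:
    """Snap a coordinate to a coarse grid cell so that nearby
    observations from different programs share the same key."""
    return (round(lat * cells_per_degree), round(lon * cells_per_degree))

# Program A: stream nitrate concentrations (mg/L), keyed by location.
nitrogen_obs = {
    grid_key(38.91, -77.04): {"nitrate_mg_per_l": 1.8},
    grid_key(44.98, -93.27): {"nitrate_mg_per_l": 3.2},
}

# Program B: land-cover class near (roughly) the same locations.
land_cover_obs = {
    grid_key(38.93, -77.01): {"land_cover": "urban"},
    grid_key(44.97, -93.29): {"land_cover": "cropland"},
}

# Combine the two programs' records wherever their grid cells overlap.
combined = {
    key: {**nitrogen_obs[key], **land_cover_obs[key]}
    for key in nitrogen_obs.keys() & land_cover_obs.keys()
}

for key, record in sorted(combined.items()):
    print(key, record)
```

In practice, real programs would link records through standardized location identifiers or geographic information systems rather than rounded coordinates, but the underlying idea, a common spatial key shared across programs, is the same.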

Recognition of the scale of environmental problems is also spurring increased collaboration among federal, state, local, and private entities. Wildlife managers recognize that species do not respect state or federal agency boundaries and that an adequate response demands range-wide information. Likewise, addressing the expanding “dead zone” in the Gulf of Mexico demands collaboration and data from across the Mississippi River basin in order to understand how farmers’ actions in Missouri affect shrimpers’ livelihoods in Louisiana. Evidence of this recognition and the collaboration it demands is growing. For example, state water monitoring agencies, the Environmental Protection Agency (EPA), and the U.S. Geological Survey (USGS) have developed a new multistate data-sharing mechanism that greatly expands access to one another’s data. And public and private entities are increasingly working together in efforts such as the Heinz Center’s State of the Nation’s Ecosystems report, as well as in more local efforts such as the integrated monitoring of red-cockaded woodpeckers by private timber companies, the U.S. Fish and Wildlife Service, state agencies, and the Department of Defense.

Despite these efforts, a coherent and well-targeted environmental monitoring system will not appear without concerted action at the national level. The nation’s environmental monitoring efforts grew up in specific agencies to meet specific program needs, and the combination of inadequate funding for integration, fragmented decisionmaking, and institutional inertia cries out for a more strategic and effective approach. Without integrated environmental information, policymakers lack a broad view of how the environment is changing and risk wasting taxpayer dollars.

Since 1997, the Heinz Center’s State of the Nation’s Ecosystems project has examined the breadth of information on the condition and use of ecosystems in the United States and found that the picture is fragmented and incomplete. By publishing a suite of national ecological indicators, this project has provided one-stop access to high-quality, nonpartisan, science-based information on the state of the nation’s lands, waters, and living resources, using national data acceptable to people with widely differing policy perspectives. However, there are data gaps for many geographic areas, important ecological endpoints, and contentious management challenges as well as mismatched datasets that make it difficult to detect trends over time or to make comparisons across geographic scales.

The depth of these gaps can be seen in three case studies, two of which concern chemical elements (nitrogen and carbon) that play vital roles in global ecosystems but can also create havoc at the wrong times, in the wrong places, and in the wrong concentrations. The third case considers the condition of our nation’s wildlife.

Controlling nitrogen pollution

Nitrogen is a crucial nutrient for animals and plants as well as one of the most ubiquitous and problematic pollutants. Nitrogen in runoff from sewage treatment plants, farms, feedlots, and urban lawns is a prime cause of expanding dead zones in many coastal areas. Nitrogen in the air contributes to ozone formation and acidification of lakes and streams, as well as to overfertilization of coastal waters. Several nitrogen compounds are also potent greenhouse gases, and nitrogen in drinking water can cause health problems for children. In the environment, nitrogen moves readily from farmlands and forests to streams and estuaries, shifting across solid, liquid, and gas phases, and from biologically active forms to more inert forms and back again. Thus, any nitrogen release can result in multiple effects in sometimes quite-distant locations.

Controlling nitrogen pollution involves public and private action at the national, state, and local levels. We put air pollution controls on cars and power plants, invest in municipal sewage treatment, educate farmers and suburban residents on the risks of overfertilization, and design greenway strategies to cleanse runoff. Understanding how nitrogen moves through the environment is crucial to designing these controls effectively.

The delivery of nitrogen to streams and rivers, and thus to coastal waters, is highly variable by region, with very high levels originating in the upper Midwest and Northeast, and much less from other areas. However, data on nitrogen delivery from streams to coastal waters are not available in a consistent form for more than half the country—essentially all areas not drained by the Mississippi, Susquehanna, or Columbia Rivers. This includes, for example, much of Texas and North Carolina, where major animal feeding operations, a significant source of nitrogen releases, are located.

Moreover, nationally consistent monitoring is available only for limited areas, precluding more detailed tracking that would allow better understanding of the relationship between on-farm management strategies and nitrogen releases. Nitrogen in precipitation is not measured in coastal areas of the East, where it may contribute as much as one-third of the nitrogen delivered to estuaries such as the Chesapeake Bay.

Without such data, regulators cannot understand what inputs are contributing to the problem, which ones are being effectively addressed, and which ones remain as targets for future reduction. As a result, pollution control agencies are left without comprehensive feedback about baseline conditions and whether control strategies are effective, and thus are unable to fully account to the public for their success or failure.

Carbon storage

Carbon is another element that plays a critical role in ecosystems but, in excess, is now wreaking havoc in the atmosphere. Carbon dioxide and methane (a carbon compound) are the major contributors to global warming, but carbon is also vital to ensuring the productive capacity of ecosystems, including the ability to provide services such as soil fertility, water storage, and resistance to soil erosion.

Carbon dioxide in the atmosphere has increased by more than 30% as compared with preindustrial concentrations, and methane concentrations have increased by more than 150%. Moreover, the data show that, so far, efforts to reverse these increases have been overwhelmed. Through measures designed to increase carbon stored in plants, soils, and sediments, where it does not contribute to the greenhouse effect, it is possible to help offset carbon emissions.
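
As a rough illustration of the arithmetic behind those percentages, using widely cited round values that are assumed here rather than drawn from this article (roughly 280 parts per million of carbon dioxide and 700 parts per billion of methane in preindustrial times, versus roughly 385 ppm and 1,775 ppb in the mid-2000s):

\[
\frac{385 - 280}{280} \approx 0.38 \qquad \text{and} \qquad \frac{1775 - 700}{700} \approx 1.54,
\]

consistent with increases of more than 30% for carbon dioxide and more than 150% for methane.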

Different ecosystem types store carbon differently. For example, forests store more carbon than many other ecosystems and store more of it above ground (in trees) than do grasslands. Data-gathering by the U.S. Forest Service’s Forest Inventory and Analysis program documented an average gain of nearly 150 million metric tons of forest carbon per year in recent years, whereas cropland and grassland soils data show more modest carbon increases. We do not yet have comprehensive data on changes in carbon storage in all U.S. ecosystems and so cannot quantify the total contribution of ecosystems to offsetting the approximately two billion tons of carbon dioxide released in the United States each year. Changing carbon levels are not yet comprehensively monitored in wetlands and peat lands, urban and suburban areas, and aquatic systems. There are also gaps in national-scale data for carbon in forest soils and aboveground carbon in croplands, grasslands, and shrublands.

As we expand our ability to track carbon in the landscape, we will increasingly be able to quantify how different land management practices help or hinder carbon sequestration by ecosystems and to project future changes and set priorities more accurately. Baseline measurements and routine monitoring are also important in determining how changing temperature and moisture conditions as well as disturbances such as invasive weeds, wildfires, and pest outbreaks affect carbon storage. As we expand and improve our carbon-monitoring capability, managers will be able to answer critical questions such as how rising temperatures are affecting northern peat lands and whether invasive weeds and wildfires are causing U.S. rangelands to lose carbon rather than store it.

As policymakers, land managers, and entrepreneurs push the frontiers of biofuel production and develop new institutions such as carbon-offset markets, greater investments will be needed to produce reliable sources of information about changes in carbon storage at relevant geographic scales. The technology exists or is being developed to gather data more rapidly, more efficiently, and at lower cost. Global agreements on mechanisms for including terrestrial carbon storage in the climate change solution can spur additional investment to refine technologies and implement monitoring systems. What is needed is a commitment to providing the necessary information and a strategic view of what data are needed and how they should be gathered and shared.

Tracking wildlife population trends

Most Americans would agree that fish and wildlife are an important part of the nation’s heritage. Each year, millions of Americans spend time hunting, fishing, or just enjoying wildlife for its intrinsic worth and beauty. Native species provide products, including food, fiber, and genetic materials, and are central components of ecosystems, determining their community structure, biomass, and ecological function. From bees that pollinate agricultural crops worth billions of dollars a year to oysters that filter coastal waters, wildlife provides a variety of services of direct benefit to humans.

During past decades, wildlife management often focused on huntable and fishable species. More recently, concern about loss of species and habitat has created a broader agenda that includes reducing the danger of extinction of other species and managing habitat to support several goals.

Simply knowing how many species are at risk of extinction is a crucial starting point. State-based Natural Heritage scientists consider how many individuals and populations exist, how large an area the species occupies (and, when known, whether these numbers are decreasing), and any known threats. The data are compiled at a national scale by NatureServe, a nonprofit organization that also establishes standards for collecting and managing data to ensure that they are updated frequently enough to identify real trends. However, differential funding and sampling frequencies among the states have led to mixed data quality.

Information about extinction risk provides a crucial early warning to identify species in need of attention. In many cases, however, such status information is not backed up with information on how populations have changed over time, making it difficult to determine whether a population’s increased risk levels are due to a historical decline, a recent decline, or natural rarity—scenarios that can require quite different management responses. In 2006, NatureServe reported that information on short-term population trends was available for only about half of the vertebrate species at risk of extinction and only a quarter of invertebrates. The Breeding Bird Survey, managed by the USGS, has proven a consistent long-term source of population data, as have surveys of a number of charismatic species such as monarch butterflies. For many species, however, including many threatened species, population trend data are simply not available.

Our society spends significant amounts to conserve wildlife. In addition, land use and other activities can be disrupted or delayed if endangered or threatened species are present. Understanding which species are declining and which are not is crucial to maximizing the effectiveness of public spending and minimizing the effect of protections on private actions. Many recent conservation challenges have involved species not limited to small regions. As we have noted, no single state or federal agency can address the challenges facing these species alone, and consistent range-wide information is the lingua franca on which collaborative plans can be built.

Species-status information is only one of the keys to good wildlife management. Tracking phenomena such as unusual deaths and deformities provides a glimpse into overall ecosystem conditions. However, collection of these data is limited to certain species, such as marine mammals, while in other cases changes in reporting procedures make data impossible to compare.

In recent years, scientists have become increasingly aware of the threats to ecosystems from invasive species. Weeds cause crop losses, aquatic invasives clog channels and water intake pipes, and plants must be killed or animals trapped when they interfere with native species. Despite these effects, and the fact that federal spending on control and related programs exceeded $1 billion in 2006, little standardized data exists on invasive species, making a broad assessment of the threat and the effectiveness of society’s response difficult. The only group for which data are available at a national scale is fish, and even in this case the data are limited.

Managing the nation’s environment involves keeping track of many more components than nitrogen, carbon, and wildlife. These three central management challenges, however, illustrate the degree to which information limitations constrain society’s ability to understand what issues must be faced, devise interventions to address these issues, and evaluate whether those interventions work. Although the challenge is clear and urgent, and there are some promising signs of increased collaboration and information sharing, more is needed.

Building a coherent system

As the planet warms, we have begun to experience a variety of changes in ecosystems, the first signs of the environment’s own potentially bumpy road ahead. To deal with the changes, policymakers need objective, detailed, big-picture data: the type of data that decisionmakers have long relied on to understand emerging economic trends. Yet, as noted above, data gaps still abound, obscuring our understanding of the condition and use of the nation’s ecosystems. In The State of the Nation’s Ecosystems 2008, only a third of the indicators could be reported with all of the needed data, another third had only partial data, and the remaining 40 indicators were left blank, largely because there were not enough data to present a big-picture view.

No responsible corporation would manage an asset as valuable and complex as the ecosystems of the United States without a better stream of information than can currently be delivered. We certainly do not wish to throw rocks at the dedicated professionals who manage environmental monitoring programs. Unfortunately, however, their work has been accorded low priority when it comes to setting environmental budgets, and independence, rather than collaboration, has been the primary strategy for managing these programs.

Dealing with the types of gaps we have discussed will require additional investment plus a serious commitment to harnessing the resources of existing environmental monitoring programs into a coherent whole. Identifying a small suite of environmental features that need to be tracked, identifying overlapping and incomplete coverage among programs, and establishing standard methods that allow different programs to contribute to a larger whole are the kinds of steps that a nation truly committed both to the power of information and to the value of its environment would take.

Congress should consider establishing a framework by which federal, state, nongovernmental, private, and other interests can jointly decide what information the nation really needs at different geographic scales, identify what pieces already exist, and decide what new activities are needed. This might be part of upcoming climate change legislation (which might also provide a funding source), but the imperative of improving the information system should not necessarily wait for this complex legislation to pass. The Obama administration has the opportunity to build on more than 10 years of experience in identifying environmental indicators and devising ways to integrate them more effectively. Federal and state agencies can radically increase the degree to which information consistency across related programs is treated as a priority. Nascent efforts such as the National Ecological Status and Trends (NEST) effort, begun in the waning days of the Bush administration, should be energized, expanded, and formalized. (This effort is beginning work on what may eventually become a formal system of national environmental indicators.) Oversight entities such as the Office of Management and Budget and congressional appropriators and authorizers can demand answers to questions about why multiple data collection programs exist, who they are serving, and why they cannot be harmonized to meet the larger-scale needs of the 21st century. They can also pay serious attention to requests for funds to support a larger and more integrated system. For example, it might be appropriate to consider one-time infusions of funds to ensure the consistency of state water-quality monitoring, something states are inadequately funded to do and have never been expected to do.

Building such a system is not a federal-only affair but rather should be governed as a collaborative venture among data users and producers to help ensure utility and practicality. Such a system would help distinguish between truly important needs and ones that may serve only minor interests, eliminate duplicative monitoring efforts, and provide incentives for more coordinated monitoring, including increased cooperation between states and federal agencies. Perhaps most important, such a system could ensure continued, consistent, high-quality, nonpartisan reporting, so that decisionmakers from a variety of sectors can rely on the same information as they forge ahead.

Nuclear fears

Are there any big-idea books left to be written about nuclear terrorism? After all, every possible threat assessment, from apocalyptic to anodyne, is well represented in the stacks. Analyses of how to secure nuclear materials in the former Soviet Union and beyond abound. So do prescriptions for blunting the spread of nuclear weapons and materials to new and possibly irresponsible states. Books exploring the links between the nuclear threat and traditional counterterrorism and homeland security are, although fewer, still in sufficient supply.

Yet in Will Terrorists Go Nuclear?, Brian Michael Jenkins manages to provide a fresh perspective on the subject, largely by devoting most of the book not to nuclear terrorism itself but to an important component of the subject: our own fears about nuclear terror.

Jenkins is a natural for this sort of examination. In the mid-1970s, he brought a careful eye for terrorist psychology to what was, at the time, an overly technical academic effort to assess the likelihood of nuclear terrorism and develop appropriate responses. Scholars then (and still too frequently now) focused on what terrorists might be capable of doing, rather than on what they would actually be motivated to do. Jenkins is still interested in psychology. But in this book, he probes the minds of the would-be victims. His conclusion: By inflating our perceptions of the nuclear terrorist threat, we have managed to make al Qaeda “the world’s first nuclear terrorist power without, insofar as we know, possessing a single nuclear weapon.”

Our understanding of nuclear terrorism, Jenkins persuasively demonstrates, is substantially a product of our imaginations. He does not mean this in a flip or dismissive way. Rather, it is a simple factual observation: Because nuclear terrorism has not happened, our understanding of it is necessarily shaped by the speculations and dreams of nuclear experts and policymakers, as well as those of the people who listen to them. Early in his story, Jenkins highlights the 1967 report of the so-called “Lumb Panel,” which flagged the problem of nuclear terrorism before modern international terrorism was even a meaningful concern. In one of many interesting personal anecdotes sprinkled throughout the book, he relates a conversation he once had with the chair of that panel. “Who were the terrorist groups in 1966, when the panel was convened?” Jenkins recalls asking. The response: “They had no particular terrorists in mind.” It is easy to see how such speculation, only thinly anchored in fact, can get out of control.

It is no surprise that as the issues of international terrorism and nuclear proliferation rose to prominence, first in the early 1970s and again in the aftermath of the Cold War, assessments of the threat grew, leaving public terror in their wake. Increasingly sophisticated terrorist operations, along with the expanding global scope of nuclear weapons programs and nuclear commerce, provided analysts with evidence that naturally led them to revisit their previous judgments and inevitably to revise them in ever more pessimistic directions. Cultural influences—the cable news/terrorism expert complex, popular movies, and end-of-days novels that feature nuclear destruction—added fuel to the fire.

That sort of environment is ripe for a speculative bubble, and in many ways that is what has occurred. It is not that there is no underlying threat of nuclear terrorism; there certainly is, and it is one that deserves our strong attention. But that does not change the fact that we have tended to compound worst-case analyses and imaginings to come up with apocalyptic visions of the threat that may not square well with reality.

Only a disciplined effort to test our projections of nuclear terrorism against whatever evidence we can find has any hope of keeping analyses grounded. It is here, in assessing some of the perennial features of the nuclear terrorism litany, that Jenkins’s book is at its finest. He explores two interesting areas of evidence.

The first is exemplified by his analysis of black markets for nuclear explosive materials, which figure prominently in many dire assessments of the nuclear threat. Such markets clearly exist at some level, as is made clear by the occasional apprehension of participants in illicit transactions. Many if not most analyses of nuclear terrorism take things a step further, conjuring robust markets where nuclear explosive materials can consistently be had for the right price. This is a critical link in the story of nuclear terrorism, because if terrorists can acquire nuclear materials, the logic goes, they can build and detonate a bomb. Jenkins, however, after a careful analysis of real black markets, comes to a different, more subtle conclusion: The black market exists, “although not in the form we imagine.” Jenkins makes no definitive judgment, but the upshot is clear: Nuclear terrorism is more complicated than many imagine, and as a result, many of our fears are unfounded.

The second area of evidence for our overestimation of the threat is typified by a fascinating chapter that traces the history of “red mercury.” For decades, stories of red mercury have conjured a lethal substance whose near-magical properties might quickly turn a terrorist or tin-pot dictator into a mini-superpower. To their credit, most mainstream analysts have long dismissed red mercury as a ruse. Still, red mercury doesn’t seem to want to die. Why, Jenkins wants to know, do serious people still refuse to part with a notion that most agree is nonsense? As he traces its history, from the 1960s to today, an important pattern emerges. Given the consequences of underestimating a nuclear threat, analysts tend to err on the side of not ruling anything out, even something as discredited as red mercury. But as Jenkins notes, the consequences of threat inflation can be grave, including unnecessary wars and erosions of our liberties. No analyst of nuclear terrorism has ever been blamed for these.

Most of the book is on solid ground, but it is not without its flaws. Jenkins argues, for example, that many Americans have “sentenced themselves to eternal terror” because of guilt stemming from the atomic bombing of Hiroshima—a stretch at best. And some of the sections drag; the book could be shorter and tighter.

Jenkins also enters tricky territory when he moves from his exposition of nuclear terror to a policy-focused assessment of what the United States should do in the aftermath of a nuclear attack, an analysis that occupies much of the final two chapters of the book. He asks an extraordinarily important set of questions: Should the United States retaliate against states that might have been complicit in an attack, either through deliberate action or through negligence? Should it respond more broadly? Should it exercise restraint?

Unfortunately, the analysis backing up his answers is thin. Torn between counseling restraint and an overwhelming response, he falls back on the old Cold War standby of strategic ambiguity. This may have been the right way to deal with the Soviet Union. Stopping short of a firm threat to retaliate overwhelmingly against any aggression allowed the United States to avoid a commitment trap, while explicitly keeping all options open preserved deterrence. But it is not clear that strategic ambiguity makes sense for dealing with potential state sources of nuclear terror. In the aftermath of a nuclear attack, the first priority of the United States will be to prevent another strike. That will, in turn, require cooperation. Certainty that states will be spared the brunt of U.S. force—the opposite of ambiguity—may be essential for securing the sort of cooperation that is needed.

Ultimately, these are small flaws in a book that is engaging and illuminating and adds an important new dimension to our understanding of nuclear terrorism—and of nuclear terror. Our imaginations, as Jenkins surely knows, are essential for confronting the real threats of the future, including the threat of nuclear terrorism. But our imaginations can also get in our way. Writing about black markets, Jenkins notes that “Theoretically, there are several ways for terrorists to obtain [nuclear materials].” Wisely, he adds, “Theoretically, just about anything is possible.” The job of analysts is to keep our imaginations at once active and in check, to help policymakers and the public understand the nuclear threat without becoming overwhelmed and paralyzed by it. Will Terrorists Go Nuclear? shows how important that task is and how hard it is to do.