Forum – Fall 2010
University futures
In “Science and the Entrepreneurial University” (Issues, Summer 2010), Richard C. Atkinson and Patricia A. Pelfrey remind us of the extent to which the U.S. economy is increasingly driven by science and technology and the central role the U.S. research university plays in producing both new knowledge and human capital. Although policymakers should already be aware that federal support for academic research is critical to economic prosperity, academic leaders would do well to recall that the movement of ideas, products, and processes from universities into application requires diligent guidance.
Atkinson and Pelfrey underscore the imperative for the ongoing evolution of our research universities as well as the continued development of new initiatives to enhance the capacity of these institutions to carry out advanced teaching and high-intensity discovery. The transdisciplinary future technologies development institutes in the University of California system that the authors describe serve as prototypes. The calls for more robust funding, expansion of areas of federal investment, and immigration policies that welcome the best and brightest from around the world equally merit attention.
We can maintain America’s competitive success by working on several fronts simultaneously:
First, advance the integration of universities into coordinated networks of differentiated enterprises, thus expanding our potential to exert impact across a broader swathe of technological areas. Organize research to mount adequate responses at scale and in real time to the challenges that confront us. The need for transdisciplinary organization of teaching and research is obvious, but so too is the need for transinstitutional collaboration among universities, industry, and government, which both aggregates knowledge and prevents duplication.
Second, accelerate the evolution of institutional and organizational frameworks that facilitate innovation. Whether or not a slow feedback loop among the economy, Congress, and academia is to blame, the pace of scientific understanding and technological adaptation in areas as critical as climate change and renewable energy is lagging. Rigid organizational structures leave us insufficiently adaptive.
Third, rethink the criteria by which we evaluate the contributions of our institutions. Simplistic methodologies that pretend to establish precise rankings abound, but an alternative scheme might evaluate institutions according to their contributions to selected national objectives. We might seek to determine what an institution has done to help build a more sustainable planet, advance the nation’s position in nanotechnology, or gain a fundamental understanding of the origins of the universe. We might even evaluate the impact of universities in aggregate in their capacity to achieve outcomes we desire in our national innovation system.
Finally, we must come to terms with the concept of outcomes. We may be working toward economic security, national security, and positive health outcomes, but we do so in such a generic way that the nature of the entrepreneurial impact of the university remains fuzzy. We need to define its role, measure its impact, and assess its returns on everything from the general stock of knowledge, to specific technological solutions, to our fundamental understanding of who we are and what it means to be human.
Therefore, I want to focus here, perhaps ironically, on the absence of wisdom: the shortsightedness of our national and state leaders in treating support of these world-class institutions as “discretionary” spending. It is no more discretionary, in fact, than spending on national defense. Indeed, investment in research at our great universities is a form of investment in productive growth and national defense.
In his April 27, 2009, speech to the National Academy of Sciences, President Obama pledged to devote more than 3% of our gross domestic product to R&D. This commitment was followed by one-time spending linked to the U.S. economic stimulus package, but that cannot be mistaken for long-term investment in research at our universities. A year later, we have not witnessed great expansion of federal and state investment in fundamental research; we have, however, witnessed wholesale disinvestment in higher education in virtually every state.
California is the quintessential example of this impulse to disinvest, leaving fewer dollars for its universities than for its prisons. Let me suggest what is at stake in this backsliding. Having built the world’s greatest system of public higher education, California now seems determined to destroy it. Californians’ lives would be diminished today without the discoveries born at these great universities. Isolation of the gene for insulin; transplants of infant corneas; the Richter scale; hybrid plants and sturdy vineyards resistant to a variety of viruses, pests, and adverse weather conditions; recombinant DNA techniques that led to the biotechnology industry; the UNIX operating system for computers; initial work leading to the use of stem cells; the discovery of prions as causes of neurodegenerative diseases; and even the nicotine patch are among the thousands of advances.
Perhaps these discoveries would have been made elsewhere, but the thousands of startup companies and new jobs for the skilled and well-educated workforce of California would have been lost, along with hundreds of billions of dollars pumped into the state’s economy from the worldwide sales of these companies.
If California disinvests in its great universities, a downward spiral in quality will follow. It will be difficult for them to hold on to their more creative faculty members when wealthier competitors are ready to pick them off. If those faculty leave, the best students will not come. Hundreds of millions of dollars of the roughly $3.5 billion annual federal grants and contracts will disappear. New discoveries will not be made; new industries will not be born. Californians will have killed the goose that laid the golden egg.
The California legislature and its voters must recognize that it is far more difficult and costly to rebuild greatness than it is to maintain it.
Science’s influence
Through his years of service as director of the Office of Science and Technology Policy, within the Executive Office of President George W. Bush, John Marburger had more than the customary opportunity to test the authority of science to govern political decisions. “… [In] my conversations with scientists and science policymakers,” he writes in “Science’s Uncertain Authority in Policy” (Issues, Summer 2010), “there is all too often an assumption that somehow science must rule, must trump all other sources of authority.” Indeed, he cites three examples in which good science advice was (1) misinterpreted (anthrax in the mail), (2) overruled (the launch of the space shuttle Challenger), or (3) deliberately corrupted in the interest of making “the administration’s case for war” in Iraq (aluminum tubes).
What is the source of the naïve idea that science must surely triumph over all other sources of conviction when making public policy? It surely comes from the discipline of research processes, an extraordinarily successful track record for science, and a dash of idealism and wishful thinking. Add to that the tendency of scientists to take little interest in how other sectors of society actually make decisions.
Fortunately, most scientists are not passive when their science is ignored, distorted, or used inappropriately by politicians. The best proof: the 15,000 U.S. scientists, including 52 Nobel Laureates, who supported the strenuous efforts of the Union of Concerned Scientists (UCS) in the past decade to defend scientific integrity in public decisions. They confronted the reality that people in positions of political, economic, or managerial power will select technical experts to justify their decisions but nevertheless will base those decisions, as most citizens do, on their own values, objectives, and self-interest. The UCS, together with other scientific institutions such as the National Academies, the American Association for the Advancement of Science, and professional societies, refused to accept inappropriate uses of science and science advice by government officials. Knowing that, as Marburger says, “Science has no firm authority in government and public policy,” scientists also realize that the legitimacy of a democratic government depends on rational decisions, arrived at transparently and with accountability for the consequences.
Science has no “right” to dictate public decisions, but we do have an obligation to try to be heard. Fortunately, in the 2008 election campaign President Obama listened. He appointed John Holdren Assistant to the President for Science and Technology and asked him to “develop recommendations for Presidential action designed to guarantee scientific integrity throughout the executive branch.” When this is accomplished, science’s authority in policy may not be unchallenged, but it need no longer be uncertain.
Success in our democracy depends not only on consistent efforts, both in and out of government, to make government more accountable, transparent in its decisions, and worthy of the public trust. It also requires a commitment by science to provide information to the public in a form it wants and can understand and use.
When a former presidential science adviser speaks about the place of science in policy, people listen. This is why John Marburger’s concluding recommendation is particularly troublesome. Half right and half deeply misinformed, it is apt to sow confusion instead of promoting clarity.
Marburger says, “Science must continually justify itself, explain itself, and proselytize through its charismatic practitioners to gain influence on social events.” This prescription is based on his observation that people apparently trust science because they endow its practitioners with exceptional qualities, similar to what the early 20th-century German sociologist Max Weber termed “charismatic” authority. Since law and policy, in Marburger’s view, so often operate “beyond the domain of science,” an important way for scientists to ensure their influence is to rule by charisma—or, to co-opt a phrase from another setting, by shock and awe.
Not only is this prescription at odds with all notions of democratic legitimation, it is also empirically wrong and represents dangerous misconceptions of the actual relationship between law, science, and public policy in a democracy. Marburger claims that “no nation has an official policy that requires its laws or actions to be based on the methods of science.” Later, he adds, “science is not sanctioned by law.” But it is not law’s role to pay lip service to the methods of science, nor to order people to take their marching orders from scientists. Rather, a core function of the law is to ensure that power is not despotically exercised. Acting without or against the evidence, as the George W. Bush administration was often accused of doing, is one form of abuse of power that the law firmly rejects. It does so not by mindlessly endorsing the scientific method but by requiring those in power to make sure their decisions are evidence-based and well-reasoned.
U.S. law can justly take pride in having led the world in this direction. The Administrative Procedure Act of 1946 helped put in place a framework of increasing transparency in governmental decisionmaking. As a result, U.S. citizens enjoy unparalleled access to the documents and analyses of public authorities, opportunities to express contrary views and present counterevidence, and the right to take agencies to court if they fail to act in accordance with science and reason. In the 2007 case of Massachusetts v. EPA, for example, the Supreme Court held that the Bush-era EPA had an obligation to explain why, despite mounting evidence of anthropogenic climate change, it had refused to regulate greenhouse gases as air pollutants under the Clean Air Act. It was not charisma but respect for good reasoning that swayed the judicial majority.
Marburger’s anecdotal examples, each of which could stand detailed critique, conceal an important truth. Science serves democratic legitimacy by promoting certain civic virtues that are equally cherished by the law: openness, peer criticism, rational argument, and above all readiness to admit error in the face of persuasive counter-evidence. So Marburger is right to say that science, like any form of entrenched authority, “must continually justify itself, [and] explain itself.” He takes a giant step backward when he advocates proselytizing by science’s charismatic representatives.
John Marburger discusses instances where it was foolish to ignore specific scientific knowledge claims (e.g., pertaining to Iraqi nuclear capabilities). More generally, however, he makes a thoughtful case that science “has no special standing when it comes to the laws of nations,” and that scientists therefore must enter the political fray in order “to gain influence on social events.”
I agree with the basic argument, as would most science policy scholars and practitioners. But I wonder if Marburger’s readers might wish to go a step farther; might wish to reflect on whether democratic society actually facilitates a “political fray” that is capable of melding expert knowledge, public values, and political prudence. Or do shortcomings in democratic design prevent experts from providing all the help they potentially could?
Marburger tactfully skirts the fact that contemporary democratic practices do not structure expert/lay interactions appropriately, failing, for example, to select and train lay participants to be capable of playing their roles. Ask yourself: Do most legislators on the Science and Budget committees in Congress have either the capacity or the interest to determine how the National Science Foundation spends the chemistry budget, much less the ability to decide whether green chemistry ought to get greater attention? Do you know of systematic mechanisms for ascertaining the competence of candidates for electoral office, or are we governing a technological civilization with jury-rigged methods from previous eras? Requiring actual qualifications for nominees obviously would be contentious, and even the best system would have nontrivial shortcomings. But if it is irresponsible to mint incompetent Ph.D.s in physics or political science, isn’t it all the more important to select and train competent elected officials?
Nor are scientists prepared to play their roles in a commendable deliberative process. Although research practices do a remarkable job of arriving at certified knowledge claims, the contemporary scientific community is riven with systematic biases regarding who becomes a scientist. Gender, ethnic, and class inequalities are obvious in U.S. science, but equally dubious is the dominance of world science by a handful of countries. Certainly there are some harmonies of purpose worldwide, and of course some scientists strive to speak for humanity at large. Yet glaring inequalities remain, with neither the shaping of cutting-edge research nor the advice given to governments being remotely in accord with the spirit of democratic representation of multiple standpoints. Many scientists doubt the importance of this, I know. But they are mistaken: A basic finding of the social sciences is that people’s ideas and priorities are heavily influenced by role and context. Hence, disproportionately white/Asian, upper-middle-class, young, male scientists are not legitimate spokespersons for a global civilization. The long delay in attention to malaria is one manifestation; tens of billions of dollars poured into climate modeling instead of remedial action is another. For 22nd-century science to become radically more representative of humanity, a first step would be simply to acknowledge that a great many perspectives now are being shortchanged in science policy conversations.
For scientists to make the best possible contribution to global governance, we need relatively thoroughgoing political innovation, both within government and within science.
From my own experience, I agree with Marburger that perhaps the most important lesson scientists can learn about policymaking is that the scientific method is not the only way to arrive at a decision. The scientific method is an incredibly valuable tool, but in many policy decisions it can only assist; it cannot determine. This understanding is especially important because scientists who disregard it can undermine the “charisma” that Marburger deems so important for the authority of scientists.
In addition to the good track record that Marburger credits, social scientists argue that a key source of the charisma of scientists is that they are often seen as free from the “contamination” of politics. Sociologist Robert K. Merton argued that one of the main reasons why science is a unique form of knowledge is that its practitioners adhere to the norm of disinterestedness. This idea resonates with the public. Unlike politicians, scientists aren’t supposed to have an agenda and therefore can be trusted. Scientists simply want to better understand the world and refuse to let prejudice or personal gain distract from that goal. Political scientist Yaron Ezrahi has written extensively about how useful it can be for politicians to cite the objectivity of science to justify a policy choice, rather than arguing one subjective value over another.
There are times, however, when citizens do not see scientists as objective. When scientific consensus does not support a potential policy, those promoting the policy sometimes question the disinterestedness of scientists. But the perception of bias can also occur when scientists make arguments that extend beyond scientific knowledge. The scientific method cannot be used to determine what types of stem cell research are ethical or how international climate change agreements should be organized. Scientists as citizens certainly should have a say in such matters, but when the public sees scientists as an interest group, the charisma that stems from the ideals of disinterestedness is reduced. Scientists who understand the nuances of the policy process develop ways of balancing these roles. They can speak to what science knows and to what they think is best for the country without conflating the two.
Can geoengineering be green?
In their provocative article, “Pursuing Geoengineering for Atmospheric Restoration,” Robert B. Jackson and James Salzman put forth a new objective for the management of Earth. Atmospheric restoration would return the atmosphere “ultimately to its preindustrial condition.” The authors are persuaded that the only responses to climate change are compensation and restoration, and they deeply dislike compensating for a changed atmosphere with other forms of planetary manipulation, notably injecting aerosols into the upper atmosphere.
For the foreseeable future, however, the active pursuit of atmospheric restoration would be a misallocation of resources. It is inappropriate to undertake removal of carbon dioxide (CO2) from the atmosphere with chemicals at the same time as the world’s power plants are pouring CO2 into the atmosphere through their smokestacks, where, in the case of coal plants, the gas is roughly 300 times more concentrated than in ambient air (on the order of 12% versus about 0.04%). First things first. Priority must be given to capture of CO2 emissions at all fossil fuel power plants that the world is not prepared to shut down. As for biological strategies for CO2 removal from the atmosphere, early deployment is appropriate in limited instances, especially where forests can be restored and land and soil reclaimed. But biological strategies quickly confront land constraints.
CO2 capture from the atmosphere with chemicals may become a significant activity several decades from now. The cost of CO2 capture from the atmosphere is highly likely to be lower at that time than it is today. This will be a side benefit of R&D that is urgently needed now to lower the costs of capture from power plants.
Even at some future time when CO2 capture from the atmosphere with chemicals becomes feasible, restoration of the atmosphere is a flawed objective. Imagine success. For every carbon atom extracted as coal, oil, or gas during the fossil fuel era, an extra carbon atom would be found either in the planet’s biomass, in inorganic form on land or in the ocean, or tucked back into the earth deep below ground via geological sequestration. But unless all the carbon atoms were in underground formations, the world’s lands and oceans would differ from their preindustrial predecessors. Why not restore the lands and oceans as well? Why privilege the atmosphere?
Robert Solow, in a famous 1991 talk at the Woods Hole Oceanographic Institution, provided a vocabulary for dealing with such objectives, invoking strong and weak sustainability. Strong sustainability demands that nothing change. Weak sustainability allows only change that is accompanied by new knowledge that enables our species to function as well in a changed world as in the world before the changes.
Strong sustainability everywhere is impossible. Strong sustainability in selective areas of life while other areas are allowed to change fundamentally is myopic and self-defeating. But the embrace of weak sustainability has its own perils: It readily leads to complacency and self-indulgence. We should not even aspire to atmospheric restoration. This single task could well commandeer all our creativity and all our wealth. A much more diverse response to the threat of climate change is required. It would be more productive for us to acknowledge that we cannot leave this small planet unchanged, but also that we are obligated to invent the policies, technologies, behaviors, and values that will enable our successors to prosper.
Jackson and Salzman thus convey some level of acceptance of geoengineering, and yet they pick delicately from the menu of geoengineering options. Their selections (combined with reducing emissions) focus on “atmospheric restoration” with technologies that meet the criteria of treating the causes of climate change rather than the symptoms, minimizing the chance of harm, and having what they believe to be the highest probability of public acceptance. Previous analysts have looked for technologies that could be implemented incrementally and could be halted promptly if the results were unacceptable.
The first choice of Jackson and Salzman is forest protection and restoration. This seems to me a no-brainer, with multiple benefits, and it is widely agreed that we should pursue it for many reasons; but it hardly qualifies as geoengineering, and it does not fully confront the problem of burgeoning greenhouse gas emissions from fossil fuel combustion.
Jackson and Salzman then give limited endorsement to research on the industrial removal of CO2 from the atmosphere and the development of bioenergy combined with carbon capture and storage. It is impossible to provide a balanced analysis of these two technologies in the space available here, but they give reason for both optimism and serious concern. Surely either would have to operate at massive scale to make a difference, would have broad potential for unintentional harm, would have unevenly distributed costs and benefits, and would rely on the collection and disposal somewhere of huge quantities of CO2. And, as the authors note, the time and cost for implementation are much in excess of those for some of the proposals for managing the radiative balance of Earth, short time and small cost being two of the most beguiling characteristics of some geoengineering proposals.
Early in their essay Jackson and Salzman pose the question: “Is geoengineering more dangerous than climate change?” They do not provide a convincing yes-or-no answer, and this is why the discussion of geoengineering is likely to continue. Do we have the knowledge, the resolve, and the wisdom to address one major problem without creating more? Can we envision sustainable paths, or do we simply step forward toward the next limiting constraint?
The article’s opening claim that, until recently, geoengineering was primarily in the realm of science fiction is true only if you ignore the long and checkered history of climate control. To give a few brief nonfictional examples: In 1901, the Swedish scientist Nils Ekholm suggested that, should the return of an ice age threaten, atmospheric CO2 might be increased artificially by opening up and burning shallow coal seams, a process that would warm the climate. He also wrote that the climate could be cooled “by protecting the weathering layers of silicates from the influence of the air and by ruling the growth of plants.”
Five decades later, Harrison Brown, the Caltech geochemist, eugenicist, and futurist, imagined feeding a hungry world by increasing the CO2 concentration of the atmosphere to stimulate plant growth: “If, in some manner, the carbon-dioxide content of the atmosphere could be increased threefold, world food production might be doubled. One can visualize, on a world scale, huge carbon-dioxide generators pouring the gas into the atmosphere.”
In 1955, the famous mathematician John von Neumann asked “Can We Survive Technology?” He issued a strong warning against tinkering with Earth’s heat budget. Climate control, in his opinion, like the proliferation of nuclear weapons, could lend itself to unprecedented destruction and to forms of warfare as yet unimagined.
Jackson and Salzman are right to distance themselves from solar radiation management schemes (turning the blue sky milky white), ocean iron fertilization (turning the blue oceans soupy green), and fantasies of a green Sahara or Australian Outback. Forest protection and restoration are fine, but they will not cool the planet significantly. Carbon capture, removal, and long-term storage, meanwhile, face daunting thermodynamic, economic, and infrastructural hurdles, not to mention potentially high risks to future generations if the CO2 does not stay put.
We need to be avoiding, not pursuing, geoengineering, and “curing climate change outright” is a chimera.
With respect to climate restoration, a forest (more precisely, a forest ecosystem) has much greater importance than is commonly recognized. Through the interplay of evapotranspiration and condensation processes, forest ecosystems are known to control the air temperature within their own boundaries. Moreover, forest ecosystems affect temperature and climate at the planetary scale by influencing the global hydrological cycle and thus the partial pressure of the other strong greenhouse gas: water vapor. Calculations by Makarieva, Gorshkov, and Li in 2009 led to the conclusion that forest ecosystems have a strong effect on the water vapor partial pressure in the atmosphere above the canopy and beyond. Investigation of the spatial course of rainfall events perpendicular to the coastline revealed a steady increase of the precipitation rate over distances of up to 2,500 kilometers, whereas the precipitation rate sharply decreases in areas where the extent of forests has been significantly diminished. It appears that without a stabilizing biotic impact, desertification and an increase of the surface temperature are unavoidable.
At this point we should remember that a moderate mean temperature of about 15°C and the availability of liquid water are the most important preconditions for the existence of life on Earth. If Lovelock’s Gaia theory holds, the concept of preservation and restoration of large tropical and boreal forests becomes a crucial issue, of higher importance than the carbon capture and storage paradigm suggests.
It appears that forest ecosystems constitute our most important life-supporting backbone. Human society is well advised to stop sacrificing them. The preservation and restoration of forest ecosystems should be treated as the most promising instruments of sustainable geoengineering.
Mineral reserves
I wholeheartedly agree with Roderick G. Eggert’s comments in “Critical Minerals and Emerging Technologies” (Issues, Summer 2010), even though he sugarcoats the impact of our nation’s current short-sighted mining regulations.
The discussion of improving regulatory approval for domestic resource development is a key issue that needs elaboration. The current morass of regulations and environmental zealotry in the United States has created, in the eyes of investors, a psychological and economic image of the country somewhat akin to that of the worst of the third-world nations. The 2009/2010 version of the report produced by the Fraser Institute, a leading free-market think tank, cites the United States as becoming even less favorable to mining investment, and hence less attractive as a target for development. The introductory letter in the report carries the headline “California Ranks with Bolivia, Lags Behind Kyrgyzstan,” and states: “The worst-performing state was California, which placed 63rd, ranking among the bottom 10 jurisdictions worldwide, alongside regimes such as Bolivia, Mongolia, and Guatemala.” Fred McMahon, coordinator of the survey and the institute’s vice president of International Policy Research, goes on to comment: “California is staring at bankruptcy yet the state’s policies on mining are so confused, difficult, and uncertain that mining investment, which could create much-needed jobs, economic growth, and tax revenue, is being driven away.”
Is it time to ask ourselves whether we are interested in the global environment or just that of our locality? By driving mining away from countries such as the United States, where it can be monitored and held to a reasonable standard of accountability, we are forcing it into areas of the world with few rules and little control over the practices employed.
Imposing an environmental tariff on materials from such countries, as some have proposed, does not appear to be a practical solution, because it will lead to still further job losses in the United States. More industries will move offshore to take advantage of lower costs and the higher availability of raw materials. The magnesium-casting industry is a good example of what happens when a single-country tariff, in this case a protective tariff, is levied.
With such movement go not only jobs and tax base but critical technology developed in the United States. Today, offshore production bases are saying, “Don’t worry about the shortages of critical raw materials in the West. Just send us your orders and your best technology and we will build them for you.” What they don’t mention is the loss of high-paying jobs in areas such as the alternative energy industry, a key component of the current administration’s recovery program. In the defense industry the stakes are even higher, putting our ability to ensure our way of life at risk.
I believe it is time we stepped back and took a realistic look at what we want from the mining industry and what the economic impact of a crippled domestic mining industry will be. Only then will we be in a position to determine our economic and political future.
Transforming conservation
As Alejandro Camacho, Holly Doremus, Jason S. McLachlan, and Ben A. Minteer point out in “Reassessing Conservation Goals in a Changing Climate” (Issues, Summer 2010), a challenge now is how to continue to save species, ecosystem services, and “wild” ecosystems under current and anticipated global warming. Business-as-usual conservation biology, based on setting aside tracts of land to preserve nature as it was found at some past point in time, will not meet its goals when the climatic rug is pulled out from under our preserves.
We agree that the challenge of restructuring conservation biology is daunting, but it is tractable. A committee to “develop … a broad policy framework under the auspices of the National Academy of Sciences,” as Camacho et al. advocate, is an essential step. Focusing such a committee’s mandate on unifying the conservation targets of U.S. governmental agencies can effectively jump-start a new era in conserving nature.
It can also provide a global model because (1) the United States is large and geographically diverse, providing test cases for many biomes; (2) different land-management agencies encompass a wide range of sometimes conflicting goals, but are under one national jurisdiction; (3) America has long valued nature and has been a leader in global conservation; and (4) copious historic and prehistoric data document the natural ecological variability of vast tracts of our continent at time scales that range from tens to thousands of years or longer.
It is no longer appropriate or feasible to set the benchmark for successful conservation as managing for single species or holding an ecosystem to a historical condition. We know from the past that the normal ecological response to climate change is for species to dramatically change geographic distributions and abundances and to assemble into new communities. Some species thrive when environments change; others suffer. A more realistic and indeed ecologically more sound overall philosophy is to ensure that species can traverse the landscape as needed in order to track their climate space, and where that is not possible, to help species move using sound science.
This overall philosophy requires developing new standards for land managers—standards based on ecosystem properties rather than the presence of individual species. As an example, in most western American terrestrial ecosystems, the rank-order abundance of individuals within genera of small mammals did not change much during the past several hundred thousand years of dramatically fluctuating climate, but the species within those genera did. Thus, it may not be of much concern if one species replaces another in the same genus, but it may be of great concern if the genus disappears. Likewise, it is already possible to model, using biogeographic principles, what overall species richness in a given climatic and physiographic setting should be. With changed climates, some reserves should see an increase in the number of species, and others should show a loss. Deviations from such expectations would indicate the need for management action.
It may be inevitable that managed relocation be implemented in such cases, and also where it is clear that endangered species simply cannot survive under the climatic regime in their existing preserves. That is a risky business, which has the potential of turning what are now reasonably natural ecosystems into elaborate, human-managed gardens and zoos. That is, saving species could destroy the wild part of nature that many regard as its key value.
For that reason, we suggest that the new conservation mandate needs to incorporate the explicit recognition of two separate-but-equal kinds of nature reserves. One—species reserves—would have the primary goal of saving species. Receiving endangered species brought in through managed relocation would be an integral part of the management arsenal. Such reserves would be most logical in places that already have many human impacts. The other—wildland reserves—would have the main goal of mimicking the ecological processes (not necessarily with today’s species) that prevail in times or places where humans are not the landscape architects. Managed relocation simply to save a species would be less desirable there. Prioritization of ecosystem services would be the focus of other government lands.
Whatever strategies eventually are adopted to make conservation biology more compatible with the future, it is essential to initiate action now, given the rapid rate and probable magnitudes of human-caused global climate change.
Contrary to what Camacho et al. seem to imply, the idea that conservation measures must go well beyond traditional reserves is not new; it was urged long before climate change posed its current problems. For over a decade now, systematic conservation planning has focused on the creation of “conservation area” networks, where conservation areas are any habitat parcels that are at least partly managed for the persistence of biodiversity. A dominant theme has been that these networks must be suitably interwoven into the landscape or seascape matrix. The motivation has been partly to prevent conservation areas from becoming islands surrounded by such inhospitable habitat that species’ populations within the areas become isolated and unviable, and partly to incorporate conservation plans into the local cultural geography.
What the specter of climate change has done is to underscore this integrative approach. For those species that are capable of adapting to climate change by dispersal, conservation-oriented management of habitats must include units that ensure the required connectivity at the appropriate time. As Camacho et al. note, the time for static conservation planning is over. Luckily, many recently developed decision support software tools (such as ConsNet or Zonation) enable dynamism by incorporating spatial coherence and other criteria of concern to conservation planners.
Turning to managed relocation (or assisted colonization), its proponents fully acknowledge most of the difficulties raised by Camacho et al., although there has not been the kind of attention to ethical issues that the authors are correct to highlight. The problem here is of social justice: the possibility that relocation will be targeted to regions where human stakeholders have the least power to prevent the reallocation of land to uses stipulated by those in power.
However, even this problem is not new and is related to the problem of creating reserves through enforced human exclusion. As many authors have documented, the creation of national parks in the United States and elsewhere has routinely involved the expulsion of or denial of traditional rights to resident peoples; for instance, the First Nations in North America. In recent years, the shift in focus from traditional reserves to conservation areas has somewhat mitigated this problem. Managed relocation may well reintroduce this problem, although it is unlikely that the scale will be as large as that of the creation of the original national parks.
As Camacho et al. suggest, the only ethically responsible policy is to insist on environmental justice analyses of every proposed relocation. However, there is no rationale for restricting these to relocation policies: They should form part of every environmental policy that we consider.
For the past century and a half, the centerpieces of conservation have been parks, wilderness areas, and refuges. Many of these are now public lands, but more and more such conservation efforts have been private initiatives, most notably those of the Nature Conservancy. The premise of both the public and private initiatives has been that large protected areas would be self-replicating: examples of what was once presumed to be an orderly, recurring cycle of disturbance and recovery. To be sure, there were compelling reasons to doubt this premise long before we began to recognize climate change–induced departures from what had long been regarded as the norm. But until recently, the model of orderly succession held sway, and conservation efforts were directed to setting aside wilderness areas, expanding parks, and expanding refugia. In the face of climate change, areas with fixed boundaries may come to resemble prisons more than refuges. The ensembles of species contained in our parks, refuges, and designated wilderness areas are almost certainly going to change in ways that many will find deplorable.
Translocation may be, as the authors suggest, a reasonable response to the prospect that protected areas may become dead ends, but there are serious problems with putting all our eggs in that basket. First, we can’t possibly move everything (any more than we can save every endangered species). It seems almost inevitable that our efforts, should we opt for translocation, will focus on charismatic species, which may or may not qualify as biotically significant.
There is also the distinct risk that transplants will, if they take to their new home (no small “if”), become invasive nuisances or worse. There are no sure bets, but the authors are to be congratulated for calling for the urgent initiation of a serious discussion of how conservation needs to be reinvented in a warming world.
Intelligent transportation systems
I was pleased to read another article by Stephen Ezell on the need for action in deploying intelligent transportation systems (ITS) in the United States (“Bringing U.S. Roads into the 21st Century,” Issues, Summer 2010). I have regarded him for some time as the most erudite researcher and meticulous analyst, bar none, in the area of ITS, and the article is true to that reputation: a concise and powerful statement of the case for action.
As in his other works, Ezell carefully blends the optimal mix of facts, statistics, and real-world case studies to give a complete view of the growing gap between the United States and other nations in the ITS field, citing earlier successes such as the interstate highway system and the Global Positioning System, now being supplanted by far more robust development elsewhere. Yet despite his straightforward style, the crispness of his presentation serves to inspire rather than alarm, and throughout there is an air of measured but clear optimism that the United States has all the resources needed to emerge as a leader in the field. Having laid out and analyzed the key elements of the problem, Ezell concludes with clear, actionable recommendations, addressing what the ITS community agrees are now the two most pressing issues: the transition of the U.S. Department of Transportation’s role from R&D to implementation, and the application of performance-based metrics.
I endorse these recommendations in the strongest possible terms. Based on my own 15-plus years of involvement with the ITS community, I would add that the Federal Advisory Committee Act, with its ability to bring public and private stakeholders together to directly influence U.S. policymaking, supported by working groups of experts from their respective organizations, could and should be leveraged to drive these transitions.
The author also successfully demonstrates how the United States lags behind other countries, notably Japan and South Korea, in ITS research and deployment. Indeed, as a percentage of gross domestic product, both Japan and South Korea spend twice as much as the United States. In Japan, drivers can use their mobile phones to access comprehensive real-time traffic information. South Koreans regularly use their mobile phones to access a Web site that lists available public transportation options. Improved information of this nature allows drivers not only to choose alternate routes when traffic is bad on their preferred route, but also to consider public transit as an option.
The author urges the federal government to increase spending on ITS by up to $3 billion annually and to speed up research and deployment efforts. For example, he is critical of the fact “that it will take five years simply to research and to make determinations about the feasibility and value of IntelliDrive.” Yes, ITS holds immense promise, and in an ideal world it would be nice to accelerate deployment. However, some caution is in order.
The United States is more geographically diverse than Japan or South Korea. Within urban areas, the level of congestion varies greatly. Congestion has increased substantially in higher-growth areas such as Dallas, but much less in lower-growth areas such as Cleveland. Even within a given urban area, some corridors are more congested than others. Given scarce resources, it is prudent to consider carefully where and how to spend dollars on ITS most effectively, prioritizing the more highly congested areas and corridors that can yield the greatest benefits. Expediting spending solely for the sake of speed can lead to far less effective results.
We also need to link ITS investments to transit, alternate routes, and even telecommuting. Once ITS alerts them to a traffic situation, drivers need frequent and reliable service if transit is to be a viable option. In other cases, opting to telecommute may be better. We also need to link ITS investments to performance-based outcome measures. This is especially so if funding increases. Just as it is prudent to focus investment on where it makes the most sense, it is also prudent to see if those investments actually made a difference. In this respect, the author notes the need for better performance measurement and evaluation.
He writes that nationwide deployment and a single national standard have been benefits in Japan, but I must honestly say that a unified system is more easily adopted nationwide in a country such as Japan, which is geographically isolated from other countries. The private companies that provide vehicles, onboard devices, and communication devices, however, seek potential markets globally in order to be successful. To maximize their opportunities in global business, we should actively promote international collaboration; creating a standardized global market will help those globally competitive companies. Reading the article has also led me to reflect on whether sufficiently detailed and comprehensive preparations were made in Japan before the current deployment of ITS, in areas such as mid- and long-term roadmaps for nationwide deployment, management and maintenance after nationwide deployment, and the preservation of flexibility for future trends in technology.
Ezell also writes that the strong leadership role of the national government has been a benefit in Japan, but I think that a public/private partnership, rather than the leadership of the national government alone, has been an important key to the successful deployment of ITS. In Japan, a public/private partnership has been ongoing, but it is important to strengthen this partnership even further, a view that I believe many countries share.
I agree with his view that implementing performance measurement and further improving accountability for results are both important for the effective allocation of surface transportation funding. As is often mentioned in this arena, there are major challenges in implementing performance measurement, such as deciding which measures to adopt, how to continuously monitor performance data, and how to link performance measurement to funding allocations. In spite of these challenges, the direction the article points to sounds right, and ITS technology itself can help solve many of these challenges in order to realize performance measurement and better accountability.
Ezell also states that investments in ITS deployment deliver superior benefit/cost returns. I think this view is exactly right for certain aspects of ITS, but for other aspects, such as safety improvements, the results are still being debated. In order to increase investments in ITS deployment, it will be important to evaluate all benefits accurately and to make those benefits visible and easy to understand for road users and taxpayers.
His article helps us realize the differences among the countries that have interests in ITS and encourages me to think afresh about my own country’s ITS. I will share this beneficial article with my co-workers in Japan, so that they also can use the information for our future activities at MLIT.
Personal health records
In “Personal Health Records: Why Good Ideas Sometimes Languish” (Issues, Summer 2010), Amitai Etzioni suggests that a “Freudian macroanalysis” (FMA) of proposed policy remedies, seeking “subterranean forces,” can aid our understanding of why some solutions aren’t adopted as quickly as one might expect. To illustrate, he examines the question, “Why aren’t more people using personal health records (PHRs)?” As interesting as his answers are, however, he doesn’t go deep enough, nor does he psychoanalyze the most important party to the decision about whether to use a PHR. I’d like to suggest an expanded analysis.
First, Etzioni is correct in his observations about how much PHRs cost for doctors to use, doctors’ fear of patients using their PHRs to pursue lawsuits, and doctors’ concerns about entering data into PHRs that could confuse or scare patients. But among doctors, those aren’t subterranean concerns at all. In fact, the great majority of doctors readily admit to all of them. An FMA, as I understand it, should seek to uncover hidden causes of actions or emotions—causes that one might be loath to admit play a role (Freud’s hypothesized “Oedipal complex” and “penis envy” constructs are well-known examples).
More important, though, is that Etzioni’s original question wasn’t, “Why don’t doctors make it easier for patients to use PHRs?” That’s an interesting question, and one we have been pursuing in our survey research at the American Medical Association, in collaboration with the Markle Foundation, and it’s clearly related to the general adoption of PHRs by patients. But, to be blunt, if PHRs were really appealing to patients, they would be purchasing and using them with or without their doctors’ help. Yet they are not.
So, what subterranean concerns of patients might be hindering the adoption of PHRs? Here a Freudian analysis seems even more apropos. Terms like denial, avoidance, repression, regression, narcissism, and reaction formation come to mind as potential hidden reasons why most people don’t spend much time obsessing over their health data. The fact is, most people don’t enjoy spending time pondering illness, infirmity, and mortality, even when doing so might be beneficial to them. Or, as one commentator put it, PHRs are not like Quicken (a model for PHRs in the eyes of many, since financial data are also complex and confidential), because tracking one’s blood pressure will simply never be as much fun as tracking one’s investment portfolio.
How might we use such insights to generate greater uptake of PHRs? Data from population surveys show that the individuals most interested in using PHRs are those with chronic conditions or an ill relative. These people have been thrust, most unwillingly, into tracking medications, lab results, and symptoms. For them, PHRs might be a way to help regain a sense of control over their fate and to get better-quality care. But for the rest of us, those who are relatively healthy today, how might we surmount denial, procrastination, and avoidance to convince people that they should be spending time tracking their personal health statistics? And in doing so, is there a risk of moving from denial to obsession? These are combined policy and psychology questions, which deserve much further study.