Forum – Winter 2013
Bruce Everett (“Back to Basics on Energy Policy,” Issues, Fall 2012) reviews the history of government energy policy with discomforting accuracy. One can only hope that the article will persuade more of us that Walt Disney’s first law, “Wishing will make it so,” is fine for feel-good cartoons, but a very poor guide for policy. As the article makes clear, government can take rightful pride in its support of basic research on energy-related science and early-stage engineering. But grandiose schemes to remake our energy economy have not worked.
It is hard to quibble with the main ideas of “Back to Basics,” but successful technologies don’t always follow “the three distinct stages” of basic research, technical development, and commercialization that Everett mentions. For example, James Watt had only a rudimentary understanding of basic thermodynamics as he developed the high-efficiency steam engines that accelerated the Industrial Revolution. And in many of his most important inventions, Thomas Edison was not particularly concerned with having “a solid understanding of the science involved.” Neither Watt nor Edison became bogged down by government interactions. In fact, they hardly interacted with their governments at all, except in the important area of intellectual property.
Discussing externalities, Everett mentions that “Climate change scientists argue that increasing CO2 will have a catastrophic impact on humanity.” Quite a few distinguished scientists, including climate scientists, do not agree with this apocalyptic assessment. There are even credible arguments that more CO2, with its modest warming, will be a net benefit to humankind, for example, because of increased agricultural yields. The movement to demonize CO2 has many of the trappings of religious zealotry. One wonders what future historians will make of the cult-like hectoring of our citizens to reduce their “carbon footprints” or the uncritical promotion of “renewables,” including wind, solar, and ethanol, soon to be available in abundance from cellulose, in accordance with Walt Disney’s first law and congressional legislation.
“Back to Basics” has some of the flavor of Thomas Paine’s Common Sense, published in 1776, which reviewed the British government’s colonial policies in North America. Everett’s equally commonsensical review makes a persuasive case that our government’s policy on energy is in equal need of reform.
The author served as the director of the Department of Energy’s Office of Energy Research (now the Office of Science) from 1990 to 1993.
I find myself in agreement with Bruce Everett that some energy policies have been misguided, but find that he misses the overall reality. I would argue that there is little evidence of an efficient energy “market” that is effectively choosing among fuels and technologies or that through some mysterious process solves our national security and environmental problems.
First, his characterization of the policies that have promoted nuclear power and biofuels is quite compelling. In fact, he even shortchanges his argument by leaving out the market failure of insurance for nuclear power and the liability limitations of the Price-Anderson Act. He also underplays just how flawed biofuels policy is in the United States and Europe, where by default, burning biofuels is assumed to be as carbon-neutral as wind, solar, and geothermal. Carbon credits are provided to low-efficiency woody biomass electricity plants despite their emitting 30% more CO2 per megawatt-hour than a coal steam turbine. Massachusetts may be the only political entity to have established its criteria for biomass emission credits on a scientifically sound basis.
However, Everett argues that even well-designed and targeted policies to induce changes in the national energy mix are misguided. He argues that cost and the market alone should decide what our energy supply mix should be, and that government should retreat to supporting only basic research. He correctly acknowledges that there are circumstances in which government intervention may be needed to correct market externalities such as national security and pollution, but he does not consider climate change to be in that category, despite the extent and irreversibility of its impacts. He also dismisses the Pentagon’s interest in efficiency, renewables, and alternative fuels as unnecessary to the military mission of defending our country, a mission that in fact requires a diverse and secure fuel supply. It makes military sense to replace diesel generators in combat with solar panels and lightweight batteries rather than sacrifice troops protecting fuel convoys in Afghanistan. He also ignores the past century of cash subsidies and policy advantages that the fossil fuel industry has enjoyed.
The International Energy Agency reports that globally, fossil fuel subsidies grew to more than $400 billion in 2010. Subsidies and tax breaks for oil, gas, and coal have averaged $3 billion to $5 billion per year in the United States. In addition, the United States provides indirect subsidies whose costs are not internalized: below-market drilling and mining leases on federal lands, mountaintop removal for coal extraction without restoration requirements, lax enforcement of environmental rules protecting water and land from gas and oil fracking, and the pollution still permitted under air and water laws. Coal is cheap simply because it does not pay the full cost of the damage suffered by others in its extraction and combustion. Wild spikes in world oil prices, to which our economy remains vulnerable, have preceded most of the recessions of the past half-century. Our heavy military investment to keep sea lanes open for oil transport is paid out of general revenues and not through an energy tax. Is it not in the national interest to reduce our vulnerability to oil supply disruptions and to spend a bit to reduce oil demand by making homes in the Northeast more efficient and increasing fuel economy standards?
In short, the article contains some useful analysis of where policies in the United States have been more beholden to the corporate interests that pushed them than to the national interest. However, looking at other countries, we see that Denmark is now producing more than 25% of its electricity from wind, and Germany is producing more than 15% of its electricity from renewables. Contrary to Everett’s statements, these efforts have paid off with major job increases in renewable energy production for wind, solar photovoltaic, and solar hot water.
The market and market decisions are not necessarily well matched to global politics, national security, environmental threats, or the long time frame associated with climate change. But if we are to rely on them at all, we should eliminate the many financial and indirect subsidies and externalities enjoyed by the fossil fuel industry.
Each year, the author and Bruce Everett debate energy and climate issues at the Fletcher School.
Bruce Everett details several important misconceptions and failures that undermine our energy innovation system. But his conclusion that the military should retreat from this endeavor ignores technologies that have the potential both to enhance military capability and to thrive in the commercial market. In these instances, we can and should leverage the unique institutional attributes of our military, which, as Everett points out, catalyzed technological change in the 20th century. Biofuels and batteries demonstrate the need for a more nuanced perspective. Because alternative fuels do not make ships go faster or planes fly higher, the Navy has been criticized for its efforts to link biofuels R&D with national security priorities. Battery technologies, on the other hand, offer clear tactical and operational benefits to the Army and Marines.
The challenge then is to design programs that effectively move engineering knowledge between carefully aligned military and commercial applications. It is not to dissolve government/industry partnerships. The three phases Everett describes—conceptual, technical, and commercial—are eerily linear, and it is a contradiction to cite the successes of military innovation while exalting the role of basic research. The military is not Lewis and Clark–like, looking for nothing in particular. The military is looking for something it can buy, operate, and maintain. And it is precisely this end-to-end, incremental approach that fostered so much innovation in the 20th century. Faith in another approach, whether for defense or energy or health, is like faith in the Seven Cities of Gold. It is not enough to understand the essence of research. We must strive to cultivate the essence of innovation.
There is one serious and indeed fatal flaw in the article by Bruce Everett. It leaves out completely the great success story of energy-efficiency policies and technologies pursued over nearly 40 years, which have reduced the cost of many energy services to the consumer, often to the point of paying back the investment in a period much shorter than the lifetime of the technology purchased. In fact, the reduction in energy costs from only a few technologies supported by the Department of Energy (DOE) in concert with the private sector has repaid the entire government R&D budget for energy efficiency from the beginning of DOE, as measured by dollars saved by the public. [See the 2001 National Research Council (NRC) report Energy Research at DOE: Was It Worth It?] Examples are low-e windows and high-efficiency ballasts for fluorescent lights. The same was true for fossil energy R&D. One particular innovation, the diamond-incorporated drill bit, has contributed significantly to the directional drilling so important to the current natural gas bonanza. A more recent study of the outlook for energy efficiency’s continuing role is given in the 2009 NRC report Real Prospects for Energy Efficiency in the United States. It estimated that cost-effective efficiency improvements could reduce U.S. energy use by 19 to 36 quads by 2030.
Another important point ignored by Everett is that overall energy use per unit of gross domestic product has steadily decreased. This reduction has been influenced by the offshoring of much of our manufacturing and the structural shift to a more service-based economy, but the biggest influence has been the use of more energy-efficient technologies and practices. From this, one can deduce that energy efficiency has contributed more to energy supply than have any of the supply-side technologies.
To evaluate the history of energy policies, one needs to look at the whole picture. Everett’s paper is fatally flawed, even if it had been titled “Back to Basics on Energy Supply Policy.”
From 1975 through 2002, the U.S. government poured more than $10 billion in tax credits, research funds, and technology into a long-shot idea that commercial volumes of natural gas could be extracted economically from source rock. The effort was superlatively successful, triggering today’s shale gas and shale oil bonanza, which has transformed energy, balance of payments, and geopolitics for the United States. It has also especially benefitted one private player, a sole wildcatter with the patience and stubbornness to endure the years of uncertainty. That would be George Mitchell and his company, Mitchell Energy. In 2002, having absorbed and concentrated the lessons of the quarter-century of federally funded research into work in the Barnett Shale near Fort Worth, Texas, Mitchell sold his company to Devon Energy for $3.1 billion.
In his impressive article on how energy is best developed, Bruce Everett describes succinctly why private players almost always are best at divining the technologies that will actually work commercially. Advocates of robust government research efforts usually cite the Internet, GPS, and semiconductors as examples of what can be done. But, Everett writes, other high-priced government-funded triumphs—the Moon landing and supersonic flight among them—have yet to result in direct commercial application. The difference is that government funding often gets distorted by politics, whereas private players only have the bottom line in mind. Because of this history, Everett argues, the government should finance basic research and avoid joint efforts with industry, not to mention tax credits and renewable mandates. Above all, do not “pick winners” by supporting individual companies.
In his conclusions, Everett aligns with a broad swath of thinking. He is not alone in his advice. But this philosophy would hold up better if it defended itself in the context of the shale boom, which some call the greatest development in energy in a century.
Had Everett’s recommendations been strictly observed, the global economy, and the United States in particular, would be in much worse shape, because oil and gas supplies would be much tighter. As for Mitchell, he would still be a billionaire from his previous ventures, but he would be at least a couple of billion dollars poorer. Some might say that, given the scale of the triumph when the government did step in effectively on behalf of a single player, picking winners is not necessarily a bad bet.
Bruce Everett has once again with clarity of vision and hard-hitting facts delivered a critique of U.S. energy R&D policy over the past 40 years that should be read by every high-ranking energy official in the administration, members of Congress, and the head of every energy trade association in Washington. With devastating detail, he shows how government financial support for the nuclear and renewable energy industries has cost Americans billions of dollars while doing little to change the country’s overwhelming dependence on fossil fuels. His assessments of the state of the U.S. solar, ethanol, and wind (especially offshore) industries are particularly stark, given the minimalist role that they play in the nation’s energy balance. Given this disappointing record, one has to ask why serious energy analysts continue to believe that we can have an energy future based predominantly on renewables, demand-side management, and energy efficiency, when the facts clearly say otherwise. How long are we going to continue to hear the mantra that if the price of all fuels just reflected their “real” social and “environmental externalities,” we could make the conversion to a carbon-free future in the next 25 years? If this were so, why does every forecast for the next 25 years project global increases in fossil fuel consumption and rising CO2 emissions?
Where one has to quibble a bit with Everett, however, is in his singular lack of critique of the subsidies received by the fossil fuel industry, where he spent a large portion of his career. There is virtually no mention of the various tax loopholes or advantages for domestically produced oil or the favorable treatment of foreign-produced oil. Although this author has in the past written on how important some of these tax provisions are for the independent oil and gas companies that find most of the oil produced in the United States each year, or how certain tax breaks on foreign-produced oil are necessary to level the playing field for U.S.-based international oil companies against their foreign competitors, these provisions are still subsidies little different from those received by the nuclear and renewable energy industries against which Everett rails. Likewise, he is singularly silent about the newest financial engineering tool (master limited partnerships) used increasingly to finance a host of energy projects, particularly natural gas and petroleum product pipelines, owing to their favorable tax treatment.
As a free-market economist, Everett obviously loathes interference in the market geared to forcing the commercialization of new technologies before they are able to compete in the marketplace on their own footing. Although this philosophical stance is respectable, why doesn’t he suggest that it would be equally viable to get rid of all special tax advantages on all energy forms, as well as all Clean Energy and Renewable Portfolio Standards, impose a serious carbon tax, and let the fight begin to see who can compete in the marketplace? At least we would get a reduction in CO2 and other greenhouse gas emissions, while keeping our offshore areas pristine and our food prices perhaps lower both at home and abroad.
Changing science education
Carl Wieman’s “Applying New Research to Improve Science Education” (Issues, Fall 2012) brings a welcome and refreshing focus on science learning. Creating learning environments that support all students in developing expert-like thinking in science is essential, whether they will join the science and engineering workforce or rely on scientific approaches and findings in making health care choices or voting on land-use issues. As Wieman notes, we have the evidence in hand to improve K-16 learning, synthesized in the National Research Council reports Taking Science to School and Discipline-Based Education Research. Yet the evidence also reveals a striking lack of widespread implementation of teaching practices that take into account students’ prior knowledge and structure their learning with challenging but doable practice in solving problems like experts.
Gateway science, math, and engineering undergraduate courses are key to improving K-16 learning. These courses model science learning and teaching for future teachers, yet they also create impediments: more than half of the students who enter college intent on a science or engineering major leave science, discouraged by poor teaching. The new framework and standards for science education are firmly rooted in the evidence on science learning, but they will be difficult to implement fully if future teachers do not experience these evidence-based practices in their own undergraduate years. Further, all imaginable success in improving K-16 science learning will be for naught if high-school graduates enter traditional lecture-based college courses. The new standards can motivate change in the gateway courses, but this alone is not sufficient.
For the many reasons outlined in Wieman’s article, university culture works against improving teaching. Changing the incentive and reward systems to create departmental and college-wide cultures that value and recognize effective teaching has been a Sisyphean task. Certainly policy levers focused on accountability are one way to push on the interconnected system of teaching practice, curriculum development, and assessment. Bottom-up efforts from faculty alone cannot leverage the scale of change required. Yet it is faculty behavior that must change. The typical professional development approaches of sharing “what works” and offering evidence have been quite successful in raising awareness of instructional strategies that support science learning, but a relatively small percentage of faculty persist in using effective strategies. Lack of understanding of the principles behind a practice, or of how to adapt it to one’s specific context, is a common barrier to persistence. We need to find ways to both help and provide incentives to faculty to put effective practices into place.
Lasting change needs to be at the level of departments and institutions, achievable only by applying multiple levers, including policy aimed at the incentive system and new forms of professional development. National efforts, including programs by the National Science Foundation and the National Institutes of Health and the Howard Hughes Medical Institute’s Partnership in Undergraduate Life Science Education, model creative ways stakeholders can partner to push change at scale. However, as Wieman indicates, implementation must be anchored in the research findings on science learning.
The author chaired the National Research Council’s Discipline-Based Education Research committee.
The excellent article “Qualitative Metrics in Science Policy: What Can’t Be Counted, Counts” by Rahul Rekhi and Neal Lane (Issues, Fall 2012) reminds me of a situation that Neal Lane and I lived through during the time he was director of the National Science Foundation (NSF) and I was chair of the National Science Board. We had heard that an Ohio congressman was likely to vote against the NSF budget. Neal had the opportunity to meet with him in his office. I do not remember for sure whether I was along. The congressman asked whether it might be a fair estimate to believe that only one-third of the funds NSF gave led to research having societal impact. One-third might be an overgenerous estimate; maybe it was more like one-tenth. The congressman said that, granting him this hypothesis, he believed the NSF budget should be cut by two-thirds! Some moments of unease followed. It was agreed that to spend taxpayer dollars having no societal consequences was a waste. The problem was determining which projects had societal benefits.
For example, in the late 1930s, a few people such as Rabi at Columbia University were trying to measure the energy-level structure of atoms having nuclear spin in a magnetic field. These energy spacings were minuscule, much smaller than the energy of thermal fluctuations at room temperature. No one could imagine at the time what practical consequences these radiofrequency experiments on atomic beams could possibly have. Later, Bloch at Stanford and Purcell at Harvard would independently discover that different atomic nuclei within a molecule resonate at different radiofrequencies for the same magnetic field strength, allowing us to learn chemical and structural information about the molecule. Still later, in the 1970s people such as Lauterbur at the University of Illinois and Mansfield at the University of Nottingham would show that a magnetic field whose strength changes over space allowed images to be made. This ushered in what is called magnetic resonance imaging, which is used to distinguish pathologic tissue (such as bone breaks or tumors) from normal tissue. It is why today doctors know how to fix various bone fractures without first cutting you up to see what is wrong.
The congressman understood and did vote for the NSF budget. The real point of this story, however, is to emphasize what Rekhi and Lane have already stressed: A smart science policy is not simply based on enumeration.
In reading the article by Rahul Reki and Neal Lane, I was reminded of Lord Kelvin’s famous remarks of a century ago that have had an enormous impact on thinking about what a strong field of study ought to be like. “When you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind.” However, this proposition implicitly denies the fact that much of solid human knowledge about the world is not expressed in numbers. Much of scientific knowledge is not quantitative. You and I agree that the tree we see whose leaves are turning red is a maple tree. Only a very small portion of Darwin’s description of species in his great book, or of the description of phenomena addressed by geology, is expressed quantitatively. These scientific fields could be even stronger if some of the phenomena that presently are described qualitatively had a strong quantitative characterization. But we still know a lot about many phenomena that don’t. Kelvin’s argument is greatly exaggerated.
And thinking of this kind has led in a number of fields to a messianic effort to construct measures or indicators of the things being studied, together with a position that the science ought only to be about the phenomena that are characterized quantitatively. This raises two kinds of questions. One is how good particular quantitative indicators are in characterizing what we are really interested in understanding. The other is whether the range of good quantitative indicators we have covers what we want to know.
These issues are presently playing out dramatically regarding the evaluation of what students are learning in school and the quality of the teachers who are trying to help those students learn. Nobody is arguing that it would not be convenient if we had good quantitative measures of both. The argument is about whether the indicators we have and use can adequately characterize what we want to know about.
This is exactly the issue regarding the insistence that a rate-of-return calculation be made regarding aspects of academic science. Many years ago, in my first testimony before a congressional committee, I was asked what the rate of return was on university research. I answered that we did not know, that calculations of that could be all over the map, and that all the computational algorithms that I knew about were focused on only a small portion of how scientific research influences the human condition. That still strikes me as right. A good portion of what is important is not readily quantitative, indicators that are developed are likely to miss much of what we are interested in, and at the same time a considerable amount can be learned about those things by qualitative observation and analysis.
My argument, of course, is not that we shouldn’t try to develop quantitative indicators. But we should resist becoming their servants.
The article by Rahul Rekhi and Neal Lane (no relation) was very thoughtful and made a number of important points about science measurement. Indeed, they are almost exactly the same points that the Office of Science and Technology Policy Science of Science Policy interagency group made in our 2008 Roadmap, in which we concluded that “the data infrastructure is inadequate for decisionmaking.”
Rekhi and Lane are exactly right: There is increased pressure to document the results of federal investments regardless of whether those investments are in education, welfare, foreign aid, health, workforce, or science. The scientific community should take this pressure seriously; the American Recovery and Reinvestment Act (ARRA) reporting requirements are a likely harbinger of future requests. And scientists who landed men on the Moon and mapped the human genome should be the last group to argue that measurement is too hard to do or should not be done, particularly when there is ample literature that spells out how to do such measurement in a scientific manner in many other contexts. I will illustrate with comments on four of the key points made by Rekhi and Lane.
(1) Much of the benefit associated with science is in training students. Rekhi and Lane make the argument that it is “intuitive” that research opportunities provided to undergraduates have value, and cite the National Science Foundation’s (NSF’s) Research Experiences for Undergraduates program. The 2012 budget for this program was $68 million. Intuition might well be supported by evidence. That evidence does not need to be in higher test scores or salaries. The point is made that it is “a critical aspect of the educational process of becoming a truly 21st-century-ready scientist or engineer.” How might that be determined? Here are some obvious and, I hope, noncontroversial steps, fully in the spirit of Louis Pasteur’s swan-neck flask experiment.
Define the population. NSF and other science agencies should be able to tell the public exactly how many students are supported by science funding at all levels. Our experience with ARRA reporting was that universities and science agencies were unable to provide that information in a consistent, reliable manner.
Write down the outcome measures. What does it mean to be a truly 21st-century-ready scientist and engineer? These can be qualitative or quantitative, but we should be able to enunciate them clearly.
Write down a theory of change. What do scientists think is the causal effect of the program on the outcome of interest?
Write down a counterfactual. What would the results be if the program did not exist? This then leads to the next point.
(2) It is difficult to quantitatively measure the results of science. True. It is difficult to measure almost everything (including the human genome). But scientists use both quantitative and qualitative data routinely to make judgments about other scientists: in tenure and promotion decisions, whom to include in conferences, who is “good” and not good. Write down what those data are.
(3) The benefits are not just economic. This argument has not been made by the science of science policy community. In fact, the concern of that community is that if scientists do not develop their own scientifically grounded measures, the only measures that will be used will be economic, as we saw with ARRA reporting.
(4) The benefits take a long time to accrue. That is clearly the case, but it is no less true of investments in education, health, transportation, and many other areas of federal investment. The processes by which scientific ideas are created, transmitted, and adopted are as well understood and as well studied as those in any other field of human endeavor. Numerous trace studies have described the network processes in great detail, and the fact that the processes are not deterministic does not mean that there are no covariates (see points 1, 2, and 3 above).
Einstein was right; not everything can be counted. But as scientists we know better than to take that argument to its full reductio ad absurdum extent. Scientists can, and do, attempt to measure almost all aspects of the universe. Science itself should be no exception.
Water is perhaps the world’s most vital natural resource. All humans need clean fresh water for drinking, cooking, and washing. Modern society depends on adequate water supplies for agriculture and industry, fisheries and forestry, to generate power, and to eliminate wastes. As a result, the possible forms and motivations of potential social tensions and political frictions over water management are as varied as the societal benefits that are supplied by water.
Examining the links between water and social conflict, Ken Conca (“Decoupling Water and Violent Conflict,” Issues, Fall 2012) rightly emphasizes that the greatest risks of large-scale violence stem not from actual scarcity of water but from ineffective, illegitimate, ill-adapted, or even absent institutional arrangements to govern it. Organizational structures and policy mechanisms that narrowly define the problem as one of physical water supplies, he argues, leave decisionmakers ill-prepared to manage water resources sustainably.
Establishing and empowering effective and accepted water institutions, however, can be a herculean task. Freshwater sources and flows, river basins, groundwater aquifers, and lakes ignore political and bureaucratic boundaries. Yet responsibilities for managing water are typically divided internationally between riparian countries and fragmented domestically between rival agencies, often representing different interests. For the past two decades, much of the global water policy community has striven to tackle these challenges by developing strategies for integrated water resources management (IWRM). IWRM seeks to balance supply and demand dynamics, coordinating between multiple uses, constituencies, and ecosystem needs, as well as across geographic areas. IWRM recognizes water as both a social and an economic good, and so promotes participatory policy approaches engaging stakeholders at all levels in an effort to manage water resources both efficiently and equitably.
Despite this conceptual promise, however, many observers fear that IWRM has proven more successful as an incantation than in implementation. Yemen, one of the world’s most water-stressed countries, provides a cautionary case in point. As recently as 2006, the fourth World Water Forum lauded Yemen for incorporating IWRM tenets into the letter of its national water policy. Behind this notional commitment, though, relevant agencies lack the human and technical capacities to administer and enforce water policy. Powerful vested interests in Yemeni state and society often oppose IWRM approaches. Perverse subsidies, for example on diesel fuel for well pumps, have abetted the rampant expansion of groundwater irrigation. Critically, groundwater furnishes 70% of total water withdrawals in Yemen, but the country is now depleting its aquifers two to four times faster than nature can replenish them. A 2010 World Bank assessment concluded that, at that rate, Yemen will essentially exhaust its groundwater reserves by 2025–2030. Although Yemen embraces IWRM in principle, in practice the country stands on the brink of water crisis.
Around the world, recent regional and global status reports reveal that many envisaged IWRM reforms have yet to be fully implemented, while progress in essential areas such as environmental monitoring and integration, climate change adaptation, stakeholder participation, knowledge-sharing programs, and sustainable financing is lagging. Meeting the world’s growing water needs will require rectifying these shortcomings. To ensure sustainable supplies of water, policymakers must ensure the supply of capable institutions.
Ken Conca’s essay provides a sweeping overview of many issues pertinent to the global water situation. Unfortunately, it contains a number of internal inconsistencies and contradictions that leave the reader wondering what solutions Conca may actually be offering, or whether he believes that water and conflict are, in fact, coupled.
For instance, Conca cites a recent African study that found that “deviations from normal rainfall patterns led to an increase in social conflict, including demonstrations, riots, strikes, antigovernment violence, and communal conflict between different social groups.” He cites results from another study suggesting that water-related conflicts occur in the Middle East/North Africa region every 2.5 days on average. Those studies seem to provide considerable evidence that water shortages can result in conflict. Yet Conca concludes that “efforts to correlate water scarcities or drought with the onset of civil war have for the most part not found a statistically significant link” and “If there is a risk of large-scale violence, it stems not from physical water scarcity but from the institutionalized arrangements that shape how actors govern water.” If Conca is suggesting that inadequate governance and not physical scarcity is at the root of water conflict, then he has inappropriately placed the governance cart before the water horse.
Some clarification and reinterpretation are warranted here. It is painfully clear that physical water scarcity—a result of water consumption approaching or exceeding the limits of water availability—can be a potent ignition source for conflict among those sharing the water resource. When a watershed community is collectively using more water than the watershed can bear, trouble and conflict are close at hand. It is also clear, given great disparities in water availability and water demands across the borderlines of local watersheds, that water conflicts are intensely localized in their origins. At the exact same moment, one watershed can be experiencing severe scarcity while an adjacent watershed has sufficient water to satisfy all demands. The primary issue with governance, then, is a rather personal one: Are we managing our demands within the limits imposed by local water availability?
With my son beginning his first semester of college, I am finding it exceedingly difficult not to draw parallels between the basic skills of managing a personal checking account and managing a local water resource. I have advised my son that he will experience all sorts of temptations to spend more than his monthly budget will allow, and that there will be plenty of surprises like parking tickets and library fines to pay. But ultimately, if my son or my local watershed community ends up in overdraft, they should not lay blame on the bank or their government or other external parties for causing the problem. We must look first to our own internal conflicts over our individual or collective lack of discipline for managing ourselves in ways that avoid scarcity and enable us to be highly functioning citizens of the world.
We’re not dopes
I share much of the communitarian emphasis and values that Amitai Etzioni has capably expressed in numerous writings. I also share Etzioni’s critical stance toward rational choice theory. That said, I find his article “The Limits of Knowledge: Personal and Public” (Issues, Fall 2012) disappointing.
The article begins by noting that one of the basic assumptions underlying much of Western thinking is that individuals are rational beings. Etzioni then introduces the relatively new field of behavioral economics, arguing that this field “has demonstrated beyond reasonable doubt that people are unable to act rationally and are hardwired to make erroneous judgments.”
I think Etzioni is too eager to throw the baby out with the bathwater (although imperialist rational choice theorists might need to take a bath of bounded rationality). The many laboratory experiments he cites from Kahneman, Thaler, and Ariely of course indicate limitations on the pure tenets of rational choice theory. Findings from “real-life” studies are harder to interpret. The case on page 53 of Israeli parents arriving late to pick up their children at a daycare center that charges a 10-shekel fine might well reflect a rational decision that their time was worth more than the price of the fine; similarly for the next example, in which the majority of participants turned down an annuity (depending on the costs of the annuity, a lump sum payment might be more advantageous). Too many of Etzioni’s arguments seem to view actors as cognitively limited cultural dopes with “limited capacity to digest data” (p. 54). I doubt that any behavioral economist would argue that humans only act irrationally.
I don’t see how, given this rather bleak perspective, his benevolent communitarianism can hope for an intellectual shift “of a Copernican magnitude” (p. 55). I would highly recommend that Etzioni consider, as a needed supplement, the vigorous new economic thinking of 1998 Nobel Prize winner Amartya Sen and especially of 2009 Nobel Prize winner Elinor Ostrom, honored for her emphasis on developing “a more general theory of individual choice that recognized the central role of trust in coping with social dilemmas … The frameworks and empirical work that many scholars have undertaken in recent decades provide a better foundation for policy analysis.” That, it seems to me, is a more promising macro-oriented thrust to give new vigor to communitarian thinking.
Energy and human behavior
In “What Makes U.S. Energy Consumers Tick?”, Kelly Gallagher and John Randell (Issues, Summer 2012) succinctly summarize a wide range of fundamental and applied research questions to which policymakers, consumers, governments, and energy producers/distributors will need clear answers, if the nation is to break from unsustainable and environmentally detrimental energy consumption habits.
The authors articulate key research questions and through their overview (1) highlight the immense and complex role of human behavior in energy production and consumption; (2) identify the critical need to better understand the most fundamental influences on individual, group, and societal behavior; (3) underscore science findings showing that behavior is inextricably integrated with economic prosperity, technology, and the health of civil society; and (4) emphasize the behavioral and economic implications for the shelf life of our standard of living and prospects for improving that of future generations.
Gallagher and Randell identify a monumental agenda for industry, policymakers, and science, generally. I say science, generally, because grand challenges such as these require “convergent science,” integrated responses across the full range of sciences; the challenges cannot be solved within one disciplinary framework. Diverse applied and interdisciplinary domains of physical, behavioral, and social research are necessary to ensure that we understand how we can take collective control of our energy practices.
Gallagher and Randell, however, do not address one critical issue affecting our ability to tackle this research agenda. Specifically, the community of behavioral and social scientists who are trained and interested in tackling these research questions is quite small. Our talent reservoir is shallow. A colleague at the November 2012 workshop on “Integrating Social and Behavioral Energy Research Activities,” organized by the American Academy of Arts and Sciences, lamented facetiously that the number of research topics alone almost outstrips the number of current researchers. But the National Science Foundation (NSF) is helping expand this community through a number of innovative interdisciplinary programs. For example, the cross-agency SEES (Science, Engineering, and Education for Sustainability) initiative draws on every part of NSF’s research and education portfolio and engages physical, social, and behavioral sciences. NSF programs such as SEES Fellows, Dynamics of Coupled Natural and Human Systems, Sustainability Research Networks, and Interdisciplinary Research in Hazards and Disasters are developing the critically important interdisciplinary scientific talent pool.
NSF is also engaging the President’s Council of Advisors on Science and Technology, the Department of Energy, and the National Oceanic and Atmospheric Administration in discussions on reducing technical and behavioral barriers to energy efficiency, while simultaneously helping to build convergent research communities and agendas through its support of SEES Research Coordination Networks and workshops. Numerous NSF programs support interdisciplinary energy-related research, training, and team development through graduate student support, Research Experiences for Undergraduates, INSPIRE (Integrated NSF Support Promoting Interdisciplinary Research and Education), the Science of Organizations, Decision Making Under Uncertainty, and many other programs that play a critical role in addressing the human resource shortfall.
In this era of “big data” (for example, from electricity “smart meters”), a growing community of interdisciplinary social and behavioral scientists can develop a sound understanding of effective levers by which we can control our energy use and costs (personal and environmental), providing robust market-governed options that enable Americans to create their own energy future.
The energy-related behavioral and social science research agenda outlined by Gallagher and Randell could well pave the way for a 21st-century convergent science knowledge base that can inform effective policies to address other grand challenges (for example, waste generation and management and crime prevention). In addition, this agenda could serve as a model for the many domains in which an understanding of behavioral drivers, modulators, and influences is critical to the functional health, sustainability, and advancement of modern living standards.