Environmental Apparitions

Brandon Ballengée, SOS Humpback Black Seadevil Anglerfish, 2023, salvaged latex house paint on thrifted cotton bed sheets, 77 x 96 inches. Courtesy of the artist and Various Small Fires, Los Angeles, CA. 

Since the 2010 Deepwater Horizon oil spill, much of my art and research as a biologist has focused on the Gulf of Mexico. The spill’s long-term impact on fishes, along with other biota and Gulf ecosystems, is still not well understood. Additionally, there have been thousands of smaller spills, including the Taylor Energy oil spill—the longest running in US history—which began in 2004 and continues to leak today. Through my art, I want to give visual form to these environmental insults and inspire individual actions toward positive socioecological change.

In Ghosts of the Gulf, I depict fish species collected in the Gulf after the 2010 oil spill. Many marine species that were once common have declined in recent decades, and these ghosts express that loss. The process of preparing the fish for exhibit involves preserving specimens by placing them in an acid bath with a blue stain that adheres to cartilage. They are then macerated in a digestive enzyme called trypsin, which clears away other tissues. Following this, specimens are placed in an alkaline solution bath with red dye that bonds with bone, and then into a series of washes ranging from potassium hydroxide to glycerin, in which the fish tissues become transparent. As a result, photographic images show bones and cartilage vividly revealed in red and blue. I think of these works as a kind of environmental apparition.

In 2016, I was part of an interdisciplinary research team at Louisiana State University that published research showing that 14 fish species endemic to the Gulf of Mexico had not been reported following the Deepwater Horizon spill. But even before the spill, several Gulf fish species were already elusive, with no recorded sightings between 1950 and 2005. Little is known about these species; the only records we have of their existence are a handful of preserved specimens scattered among natural history collections.

My Crude Oil Paintings are a way to give form to these lost species. I created portraits using historic specimens in the Tulane University Biodiversity Research Institute’s Royal D. Suttkus Fish Collection, the largest preserved fish collection in the world, located in several converted World War II bunkers in Belle Chasse, Louisiana. Other species I photographed and radiographed as a 2017 artist-in-residence at the Smithsonian National Museum of Natural History, home to the second largest fish collection in the world. The works in this series are painted with oil from Deepwater Horizon; “fresh” crude oil from the ongoing Taylor Energy spill; marshland sediment that has been contaminated by oil; and anaerobic bacteria and iron oxide mixed with a chemical dispersant used in oil spill cleanup efforts.

Brandon Ballengée, SOS Slender Snipe Eel, 2023, salvaged latex house paint on thrifted cotton bed sheets, 88 x 90 inches. Courtesy of the artist and Various Small Fires, Los Angeles, CA.

The SOS Paintings are interpretations of deep-sea species put at risk by new deep-water mining in the Gulf of Mexico. Almost nothing is known about the natural history of these species or their environment, the abyssal zone. In this deep, dark region, many fish use bioluminescence and other adaptations to survive. Mining is likely to alter the ocean floor, create subsea sounds, and cloud the water—but how these actions will affect fish and other creatures is not well understood. These new paintings, created using thrifted bed sheets and latex house paint, are the largest I have made in decades. I see them as part of an “aesthetic of loss,” where I am giving visual form to absent species.

Through my art, I speculate on future outcomes, question behaviors, express concerns, and mourn. As a biologist, however, I must remain analytical and report unbiased information. For me, art and science are complementary methods of trying to understand the world and ourselves, along with the complex socioecological challenges we face.

Pricing Unknowable Risks

In addressing efforts to estimate the benefits of combating climate change, David Simpson’s article asks in its title, “How Do We Price an Unknowable Risk?” (Issues, Winter 2022). To be fair, many dimensions of climate change risk, including economic damages, are understood, albeit subject to uncertainty. For evidence, see the Intergovernmental Panel on Climate Change’s recent 3,600-plus-page report.

The impressive advances in the damages literature over the past decade enable a rigorous updating of the social cost of carbon (SCC)—the monetized damages associated with another ton of carbon dioxide emissions—used to inform assessments of federal regulations. These assessments show whether the benefits of a regulatory action justify its costs. This is analogous to a business deciding whether a new investment will yield returns in excess of its costs, or to a household weighing the pros and cons of a major decision.
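To make that arithmetic concrete, here is a minimal sketch in Python of an SCC-based benefit-cost test; every number is hypothetical, chosen for illustration rather than drawn from any actual regulatory analysis:

    # Hypothetical SCC-based benefit-cost test for a regulation.
    scc_per_ton = 51.0             # assumed social cost of carbon, dollars per ton of CO2
    tons_avoided = 2_000_000       # assumed CO2 reduction delivered by the rule, in tons
    compliance_cost = 75_000_000   # assumed total compliance cost, in dollars

    # Monetized climate benefits: avoided tons valued at the SCC.
    climate_benefits = tons_avoided * scc_per_ton
    net_benefits = climate_benefits - compliance_cost

    # A positive net benefit indicates the action's benefits justify its costs.
    print(f"benefits ${climate_benefits:,.0f}; costs ${compliance_cost:,.0f}; "
          f"net ${net_benefits:,.0f}")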

Despite a rich understanding of climate impacts, Simpson claims that the “unknown and unknowable risks of climate change argue for caution” as an alternative to SCC-informed policies. But what is “caution”-based policy? The precautionary principle may sound appealing, but that’s because it can have different meanings for different people. Operationalizing the concept quickly becomes ambiguous and political, resulting in different applications and outcomes in different contexts. Under the precautionary principle, an opaque political benefit-cost analysis often substitutes for a transparent regulatory analysis that draws from tools and insights among multiple disciplines’ peer-reviewed literatures.

Another alternative that Simpson discusses—estimating a target-consistent price trajectory that represents the least-cost attainment of a specified long-term emissions target—likewise suffers from political shortcomings. First, it focuses exclusively on costs in how it frames the environmental objective. By making costs so transparent in policy implementation without any accounting or presentation of benefits, this framing could reduce public support for climate policy.

Second, the starting point of this approach is the political decision of an emissions target level and year. Then, this approach requires an assumed set of policies that deliver the carbon price trajectory through its target year. In effect, the target-consistent price relies on political decisions about goals and the means to achieve them, not science. This stands in sharp contrast to the approach of the SCC, which starts by integrating scientific understanding of the impacts of adding more greenhouse gases to the atmosphere. Indeed, real-world experience with such a target-consistent price shows how arbitrary political decisions can influence the price path. In 2008, the United Kingdom employed this approach for an 80% reduction target and assumed that the target would be met by purchasing lower-cost emission reductions from other countries; that assumption, a political choice rather than a scientific finding, shaped the resulting price path.

The decarbonization of the modern global economy will represent one of the most profound transformations of economic activity in history. Policymakers driving such a change will make better decisions (getting the job done through a wise use of resources) and tell a more compelling story (the benefits of these ambitious actions justify their costs) through the use of the social cost of carbon.

Professor of the Practice of Public Policy

Harvard Kennedy School

States as Laboratories for Science Policy Innovation

In 1932 Associate Justice of the Supreme Court Louis Brandeis first popularized the idea of states as laboratories for policy innovation and experimentation. In his dissent in the case of New State Ice Co. v. Liebmann, Brandeis wrote, “It is one of the happy incidents of the federal system that a single courageous state may, if its citizens choose, serve as a laboratory; and try novel social and economic experiments without risk to the rest of the country.” This policy experimentation can generate effects that extend far beyond state borders, and in the case of science policy, it can deliver tangible results that are purposely customized to fit local needs.

Today the urgency of climate change, combined with intensified partisanship and gridlock in US federal policymaking, elevates the importance of states as laboratories of democracy. Through policy experimentation and investments in research and development, states complement the federal role in generating science-informed policies that benefit the nation and the world, meeting needs for public services that national governments typically cannot address, and providing visible evidence of the value of public institutions in the daily lives of their residents. In addition, as federal science priorities and funding levels have waxed and waned, states have taken more prominent roles in setting research agendas that generate long-term social benefits.

As leaders of the California Council on Science and Technology, a state organization that provides scientific advice to policymakers, we relate here how scientists and policymakers have worked together in our state to create the type of civic and political environment from which innovative science-based programs can take root and spread. Our track record in this endeavor also demonstrates why building science policy at state as well as federal levels increases the chances for future success.

States as pathfinders

California has long been a leader in developing science-based policies with environmental aims. Although other states, territorial and local governments, and tribal nations have also been pathfinders in addressing issues such as air pollution, energy use, and climate change, and in building resilience into public services and policies, none has matched California’s impact. As the world’s fifth-largest economy and twelfth-largest emitter of greenhouse gases, California is comparable to a nation-state, and thus its actions have far-reaching consequences.

Through policy experimentation and investments in research and development, states complement the federal role in generating science-informed policies that benefit the nation and the world.

What’s more, California’s long history of effective action on the environment has built a reservoir of public trust in science-based solutions. In particular, this enabled state leaders to mobilize the political will needed to pass pioneering climate legislation in 2006, which has been followed by other ambitious legislation.

Among the state’s early environmental problems were vehicle emissions and tailpipe pollution. Smog in mid-twentieth-century Los Angeles was so bad that schoolchildren were not allowed to play outside at recess during the frequent smog alerts. The combination of the federal Clean Air Act of 1970, state emissions regulations that exceeded federal standards in their stringency, and regional air quality monitoring led to a significant reduction of LA’s smog while generating new commercial opportunities within an expanding green economy. The fact that the state was able to deliver noticeably cleaner air without hurting the economy helped to build a sense of trust among the general populace that environmental initiatives could deliver multiple benefits. As of 2021, 14 other states and the District of Columbia had adopted California’s emissions standards, which remain more stringent than federal standards.

Similarly, California adopted energy efficiency measures in the 1970s that kept its per capita energy use flat for more than four decades while per capita consumption rose steadily across the nation. As Art Rosenfeld, a physicist at Lawrence Berkeley National Laboratory, explained to then-Governor Jerry Brown, the state could significantly reduce per capita energy use if it could find a way to make refrigerators and other appliances more energy efficient. He also suggested changing specific utility incentives. Once his suggestions proved successful, the so-called Rosenfeld Effect became part of the state regulatory effort, which has vastly improved the energy efficiency of homes, appliances, vehicles, and other energy-consuming products in California over the past 50 years.

Building on these successes, in 2006 California became the first state in the United States to adopt a comprehensive climate program. Assembly Bill (AB) 32, the California Global Warming Solutions Act of 2006, required the state to reduce its greenhouse gas emissions to 1990 levels by 2020. Most notably, this legislation and its attendant regulations were built on research done by scientists, economists, and sociologists at California universities. Indeed, the carefully designed local effort helped California meet the goal of AB 32 four years early, in 2016. Lawmakers followed up that success with Senate Bill 100, the California 100 Percent Clean Energy Act of 2018, which mandates that all of the state’s electricity production be carbon neutral by 2045. While popular support has been mixed and largely divided along political lines, several statewide referendums have supported the moves, reflecting a broad public perception that these efforts are good for both the economy and communities.

California adopted energy efficiency measures in the 1970s that kept its per capita energy use flat for more than four decades while per capita consumption rose steadily across the nation.

These legislative successes did not happen in a vacuum. California has been the vanguard for effective climate action in large measure because of its deliberate focus on connecting science and policy with the investments to match. In addition to taking action on climate, for example, California voters have twice authorized major investments in stem cell research, in 2004 and 2020.

More broadly, the state has invested in building a science and technology infrastructure that has enabled it to be a global leader in innovation and productivity. These achievements would not have been possible without the confluence of multiple factors. Among them: the creation of public university systems that have produced a well-educated workforce by increasing access to higher education for every resident regardless of economic status; the implementation of research and development funding that exceeds that of most of the world’s advanced economies; and a population of extraordinary diversity in race, ethnicity, culture, sexual orientation, gender identity, socioeconomic status, and lived experience. The result: California’s policies have enabled industries in key sectors—including aerospace, biotechnology, energy, and software—to move quickly, generating tremendous revenue and social mobility.

Building bridges

Of course, having the necessary infrastructure does not, by itself, guarantee the meaningful adoption and implementation of science-based policies. Deliberate efforts to ensure substantive communication and collaboration between the scientific community and government officials are also required. Recognizing this need, a coalition of policymakers and leaders of scientific research institutions came together in 1988 to create the California Council on Science and Technology (CCST).

California has been the vanguard for effective climate action in large measure because of its deliberate focus on connecting science and policy with the investments to match.

A state-level organization, CCST was established to provide scientific advice on public policy issues to the governor, the legislature, and other civic entities. Each year CCST embeds 15 PhD-level scientists and engineers as fellows in legislative and executive branch offices. The CCST science and technology policy fellows support policymaking while gaining experience in policy and leadership. The fellowship is a public-private partnership supported by the government of California, the Gordon and Betty Moore Foundation, and other philanthropists.

Part of what makes CCST so effective is that it acts as a “boundary organization,” a term coined by political scientist David Guston. Boundary organizations convene and draw expertise from universities and nonprofit research institutions, the private sector, and government agencies to solve problems in ways that none of these organizations is capable of doing on its own. Distinct from lobbying or policymaking organizations in character, boundary organizations avoid advocating for specific political positions, agendas, or outcomes. Policymakers have many routes for accessing scientific advice, including experts on staff, science advisors, investments in research and development, science-based fellowships, and partnerships with universities and national research laboratories that enable access to the state’s deep bench of technical experts.

As an example of the benefits of embedding scientific expertise in government institutions, consider Tony Marino’s work to reduce the risk of public utility accidents. As a CCST science fellow assigned to the California State Legislature, Marino led an analysis of the horrific 2010 San Bruno natural gas pipeline explosion that killed 8 people, injured 58, and destroyed 38 homes in a fire that burned for more than 17 hours. Following his fellowship, Marino remained on the legislative staff to continue work that uncovered gaps in public utility operating procedures, including poor construction and inspection practices as well as shortfalls in recordkeeping. His work led to new legislation that, unlike previous laws, strengthened the accountability of utility companies for safety procedures. This more careful approach is likely to improve public safety and disaster response by increasing transparency and accountability in public utility operations and infrastructure maintenance.

Boundary organizations convene and draw expertise from universities and nonprofit research institutions, the private sector, and government agencies to solve problems in ways that none of these organizations is capable of doing on its own.

Marino’s fellowship experience reflects one way that boundary organizations such as CCST can deliver societal value by training professionals to work at the nexus of policy and science, leading to enhanced communication between policymakers and technical experts. His subsequent impact demonstrates how these advantages are not limited to the fellowship year. Most of CCST’s 130-plus alumni fellows continue to work in roles related to state policy, drawing on their experience in government to develop solutions that are not just science-based but also politically feasible. CCST is also working with the Gordon and Betty Moore Foundation and other philanthropic partners to export the CCST fellowship program model to other states, eight of which have created similar programs. An additional 12 states have programs in various stages of development.

Planning ahead for a crisis

When crises strike, the activation of existing partnerships, together with engagement by boundary organizations, can facilitate collaboration at the speed of relevance. To enable this, governments, civil society, and the private sector need to build partnerships before disaster strikes. Partners should train together in tabletop exercises. These discussion-based scenarios can identify and address in advance any cultural, regulatory, or other constraints that could hamper rapid activation of a collaborative response. Early in the COVID-19 pandemic in 2020, for example, many California-based colleges, universities, and federal laboratories transformed their facilities to support diagnostic testing and to manufacture personal protective equipment (PPE). Despite urgent, widespread needs and critical shortfalls in testing and PPE, however, these same institutions struggled to secure required permissions to deliver support to local and state authorities. These potential roadblocks could have largely been foreseen.

Climate change, pandemics, and—in California—earthquakes are well-known risks that allow for somewhat straightforward planning, even if preparation is increasingly complicated by their simultaneous intersection with other crises such as financial meltdowns, acts of terrorism, and war. But what about the problems with which society has only limited experience or that have not yet emerged? Emerging and disruptive technologies, such as cyberattacks that disable critical infrastructure or spread disinformation, increasingly present threats and vulnerabilities for the government, defense, and private sectors that should be considered in resilience planning.

To this end, in 2020 CCST began a new partnership with the California government, philanthropists, and academic research institutions to strengthen the state’s disaster resilience. Among the goals are developing new mechanisms for rapidly delivering independent, evidence-based advice and framing transdisciplinary solutions to emergent and over-the-horizon policy issues related to disasters. This work is intended to strengthen science and policy linkages before there is a need, thus enabling effective and inclusive resilience planning and timely collaboration in support of crisis response.

The biggest barrier is not lack of knowledge

Long-term policy planning that drives transdisciplinary and multisectoral solutions, targets actionable early interventions, and generates equitable societal impacts is crucial to driving and sustaining complex policy agendas. Rather than a lack of science and technical knowledge, however, the greatest barriers to implementing effective solutions to complex policy problems have often proved to be competing political objectives, economic disincentives, cognitive biases, and cultural values.

Emerging and disruptive technologies increasingly present threats and vulnerabilities for the government, defense, and private sectors that should be considered in resilience planning.

California’s experience with wildfires illustrates the profound influence cultural values can have on environmental policies. Prior to European settlement, the land management practices of California’s Indigenous peoples included the routine, deliberate application of fire to steward the land and maintain ecosystem processes. In contrast to Indigenous communities that had coevolved with fire, European settlers viewed fire as a threat and instituted fire suppression policies. While effective in the short term, fire suppression policies are ecologically unsustainable.

California’s recent catastrophic wildfires are in part the direct result of conditions created by 130 years of fire suppression policies. These policies have remained in place in spite of decades of calls by Indigenous communities, forest managers, and ecologists for changes in forest management and land use. These needed changes include vast increases in deliberate and targeted burning to restore lower-intensity fire regimes in California wildlands. To date, political will has been insufficient to invest in and deploy these critical interventions at the scale required, in large part because of mainstream cultural perceptions of fire as inherently harmful. Today cultural perceptions of fire are changing, in part because megafires have negatively affected every Californian and raised awareness of the shortfalls of fire suppression policies. Smoke exposure from wildfires is now a statewide and regional issue, as well as the primary source of wildfire-related mortality. Today wildfires—through smoke exposure—kill more people in cities than in areas that actually burn.

Although the full costs of wildfires to human health cannot be calculated, we know enough as a society to make changes in policy that could save lives and taxpayer dollars. CCST’s 2020 report The Costs of Wildfire in California showed that many costs of wildfires (including impacts to human health and ecosystem services) are not fully counted. Yet even the subset of wildfire costs that are known has exceeded tens of billions of dollars annually in recent years. A growing body of research finds prevention and mitigation to be cost-effective, strengthening the case for investing more in holistic wildfire strategies that allow ecologically beneficial fires back on the land. As policymakers grapple with how much to invest in prevention and mitigation, this kind of independent advice, synthesized from multiple disciplines, is key to informing policy discussions.

Unmet opportunities

For all the robustness of California’s economy, its benefits have eluded many who live there. Trends in technology and automation, together with the COVID-19 pandemic, have disrupted the workforce and widened the gap between those who have access to the higher-wage jobs that technological innovation delivers and those who do not. Meeting society’s most pressing challenges in ways that broaden economic opportunity will require engaging the full range of talent in our society. California and other states should continue to prioritize building a workforce that is more diverse—one that resembles the general population—in science, technology, engineering, mathematics, and medicine.

A growing body of research finds prevention and mitigation to be cost-effective, strengthening the case for investing more in holistic wildfire strategies that allow ecologically beneficial fires back on the land.

States are particularly well-placed to develop a more diverse workforce and to seed investments in specific regions that lag in growth. California could, for example, increase investments to accelerate the transformation of Southern California’s Imperial Valley into the “Lithium Valley.” Such an initiative would promote growth in the renewable energy sector, creating jobs in an area with a majority Latino population that has historically experienced high unemployment rates. Focused investments would also build the foundation of a market that could increase US competitiveness in a battery industry that is currently dominated by China.

Of course, mining lithium deposits—found beneath the Imperial Valley’s Salton Sea—raises questions of social equity and environmental justice tied to the health and well-being of local residents and workers. Integrating such questions of social equity in the development of public policies has become routine in many states, including California. Engaging local communities in addressing these questions can have the added benefit of enhancing the social power of historically marginalized populations who are most affected by climate change and other environmental stressors.

Looking ahead to the next 75 years

The next 75 years will challenge humankind in ways impossible to predict today. Regardless of how the coming decades unfold, global challenges—including climate change, pandemics, and other complex shocks—are likely to manifest more frequently and acutely, requiring national and subnational governments to build ever greater resilience in public policies and services. At the same time, geopolitical power shifts and a hyperconnected and increasingly polluted information environment are likely to magnify ongoing social and environmental challenges.

Will the coming decades usher in a resurgence of open democracies or the expansion of authoritarianism? Against a range of potential future scenarios, emerging technologies are likely to magnify tensions, disruptions, and global competition for technological superiority.

Meeting society’s most pressing challenges in ways that broaden economic opportunity will require engaging the full range of talent in our society.

The future success of humankind requires embracing global interconnectedness and harnessing the best social, technological, and policy innovations, regardless of where they are created. Global society’s well-being relies on the generation of innovative and effective policy solutions. California’s highly experimental approach to policymaking and rulemaking, coupled with its flexible and adaptive implementation, has enabled state leadership to respond by making improvements, such as “greening the grid.” Future policymaking and rulemaking in an increasingly uncertain world are likely to require even greater experimentation, flexibility, and adaptation.

Against this backdrop, the effects of state-level actions in democracies provide strong counterpoints to arguments for autocratic models. California is investing heavily in building climate resilience, including with a $15 billion package approved in 2021 to build resilience and protect communities from climate risks such as catastrophic wildfire, extreme heat, and sea level rise. As of 2021, 30 states in the United States, together with the District of Columbia and Puerto Rico, had set goals of at least a 75% reduction in greenhouse gas emissions or the generation of at least 75% of electricity production from renewable or combined renewable and clean energy sources. Additionally, more than 50 tribal nations in the United States have completed climate assessments and action plans.

Generating solutions to society’s most complex problems will require expanded collaboration among state, territorial, local, and tribal governments; philanthropy; other segments of civil society; and the private sector. It will require greater investments in the boundary organizations that catalyze these collaborations. Progress in science-informed policy will also be contingent on repairing trust in science and in public institutions—a vast topic but one we recognize and highlight as vital to the preservation of democracy.

States have served as the laboratories of democracy for the first 246 years of the political experiment known as the United States of America. As the country looks to an increasingly uncertain future, states’ bold policy innovations and experimentation will play a vital role in meeting the needs of their residents, the nation, and the world.

Talking Past Each Other

Among the many mostly awful consequences of 2020’s COVID-19 pandemic, one should be regarded as a gift: the revelation of the degree to which we order our society around the assumption that tomorrow will be essentially the same as today. Most of us have never recognized this assumption as a fallacy and have never needed to, although the results of the 2016 election might have given us pause had we stopped to consider them. We’ve simply assumed the status quo was more or less permanent, and many of us have gotten away with it for our entire lives.

But the political chaos of Donald Trump’s election and the social devastation of the pandemic have presented the possibility of real change. As Trump demolished established norms of politics, Black Lives Matter activists have reminded us that some long-established conventions—such as structural racism—can and should be demolished; the pandemic has upended our notions of what the future could hold, possibly motivating an overdue reconsideration of efforts to confront a future of climate change. Policymakers’ current inability to move past incrementalism on an issue of planetary scale is absurd, even morally offensive. We simply can’t do what we’ve done for three decades and expect different results. A carbon tax won’t save the planet.

The “climate change debate” in the United States has devolved into sadly familiar patterns of dehumanization and retrenchment, covered by both the media and academia as two camps fighting over who is smart and reasonable and who is dumb and naive. Calling this soundbite ping-pong a debate is just as intellectually lazy as the arguments passing for policy proposals on Twitter. This is nothing new. Cognitive biases and motivated reasoning run the show on all sides of the “debate”: we’re all lawyers for our own causes. The inability to create emotional distance between ourselves and our tribal identities—which seem increasingly to consist entirely of either Team Red or Team Blue—raises the stakes with every new report from the Intergovernmental Panel on Climate Change. (For a much more eloquent and thorough exploration of this, do yourself a favor and read the columns by the former Vox writer David Roberts on tribal epistemology.)

To quote Strother Martin’s character in Cool Hand Luke, “What we’ve got here is failure to communicate.” On the left, leaders gain status by acting as evidence gatekeepers, creating panels of experts and presenting the experts’ findings to The People. They “listen to the scientists” and “take climate change seriously.” The right has constructed its own knowledge-production machine, built on a distrust of those same experts and scientists, and fueled by often legitimate wariness about schemes to improve the human condition. These parallel institutions have few self-correction mechanisms, creating hermetically sealed worldviews that demarcate acceptable opinion and banish heterodox views.

These parallel institutions have few self-correction mechanisms, creating hermetically sealed worldviews that demarcate acceptable opinion and banish heterodox views.

This dichotomy was on full display in the final presidential debate of the 2020 election. For the first time during such a debate, the moderator, Kristen Welker, dedicated a whole section to each candidate’s climate policies or proposals. In a turn of events that shocked no one, Joe Biden emphasized the dangers of global warming and presented a few heavily vetted, policy-by-committee recommendations to curb pollution and transition the United States to renewable energy. Donald Trump, amid diatribes about tiny windows and malfunctioning wind turbines, expressed concern over the economic costs of moving the country away from fossil fuels and the potential harm to those who work in those industries. Elevating this to a “debate” suggests that they engaged with each other’s ideas, when in fact the candidates simply presented entirely different, incompatible worldviews and priorities. What happens when political parties and their constituents can’t agree on the scope and causes of, let alone remedies for, a crisis of this magnitude? To use language that feels perfectly appropriate during a global pandemic: can a democracy, and maybe even our civilization, survive when its inhabitants have radically different epistemologies?

Bjorn Lomborg, FALSE ALARM (2020)

Michael Shellenberger’s Apocalypse Never and Bjorn Lomborg’s False Alarm, both published earlier in 2020, attempt to explain and address this crisis of epistemology from the perspective of what I’m calling the “panic skeptic.” The panic skeptic has found a third option in the dualistic climate debate: that of a detached, “rational” observer who argues that the “real” threat to the planet is panic about climate change—more so than climate change itself.

Both authors head knowledge production and advocacy organizations (Environmental Progress and the Copenhagen Consensus Center, respectively). They both operate from the assertion that things really aren’t as bad as “environmental extremists” make them seem. They argue that the people panicking about climate are trying to sell you something.

Shellenberger and Lomborg both rightly emphasize that these climate activists have their own dogma (for example, that we have only 12 years left to solve climate change before humanity’s time on Earth is prematurely cut short). The authors are also correct to point out that climate change is only one of many existential threats facing the world in the twenty-first century—a fact that the pandemic has made even more abundantly clear. (Lomborg goes so far as to suggest that society should determine which threats get which resources through some cost-benefit schema by which we reduce each threat to how “bad” it is and how much it would cost to solve it. I find this idea patently absurd.) I also agree with them that it’s misleading and disingenuous to invoke “the poor” in climate policy debates without acknowledging that many, if not most, of the past decade’s international climate policies (such as actual and proposed carbon taxes) depend on a neoliberal free-market system that has helped immiserate millions in the first place.

The panic skeptic has found a third option in the dualistic climate debate: that of a detached, “rational” observer who argues that the “real” threat to the planet is panic about climate change—more so than climate change itself.

Panic skeptics such as Shellenberger and Lomborg build their arguments on the rationality of scientific inquiry. Instead of presenting new or controversial findings, they claim a more impartial, rational interpretation of the same facts climate extremists build their arguments around. Stop making such a fuss, they say. And then they do precisely what the climate extremists they rail against do: use the mantle of purportedly disinterested science to support their political aims. For Shellenberger, this involves a crusade to move the whole species to a reliance on zero-emissions nuclear energy. For Lomborg, it’s a kind of utilitarianism that downplays the climate problem in an effort to increase global prosperity.

Both authors claim to have the cure to what ails climate discourse. They both seek to engage with what they perceive as conservative priorities. Lomborg, who has the easier task of dealing with conservatives outside the United States who acknowledge that anthropogenic climate change is a real thing, has a whole section of cost-benefit-analyzed policies to combat global warming—and global poverty while we’re at it. These include carbon taxes, green research and development, solar geoengineering research, and “making people richer” through various international aid and economic growth mechanisms. As if you could reduce the wicked problem of unequal and disproportionate burdens of climate change and climate “solutions” to a simple tabulation of profits and losses!

Instead of presenting new or controversial findings, they claim a more impartial, rational interpretation of the same facts climate extremists build their arguments around. Stop making such a fuss, they say.

Unsurprisingly, Lomborg sidesteps the issues of how to quantify the “appropriate” price for carbon for taxation purposes, and who ought to make such a determination. He likewise elides the question of how best to structure and carry out these green R&D projects. If you can get past the white-savior overtones of his international aid arguments, you see very little original contribution to the debate about the role that international aid plays in the economies of developing countries. This is not to say that these are bad ideas; they just don’t go very far. Lomborg’s role is not to provide detailed proposals, but rather to highlight the many ways he would prefer policymakers make the world a better place for more people. In his view, climate alarmism not only clouds the issue but makes these other efforts less visible and less successful.

Michael Shellenberger, APOCALYPSE NEVER (2020)

In stark contrast to Lomborg’s purposefully matter-of-fact, detached tone in False Alarm, Shellenberger argues passionately in Apocalypse Never that “the conversation about climate change and the environment has spiraled out of control.” He makes a moral, rather than an economic, case for promoting human well-being against what he calls the “anti-humanism” of apocalyptic environmentalism. Like Lomborg (but in a higher register) he cites as evidence climate policies that, whether by accident or design, limit economic development in poor countries—actively making worse the lives of people who always seem to get shortchanged and overlooked. He unsurprisingly spends a considerable number of pages advocating for his light-on-the-details global nuclear agenda, the main advocacy goal of his foundation.

Shellenberger pairs this agenda with environmental “good news” to combat climate alarmism. He argues that journalism is much to blame for all the doom and gloom dominating the climate discourse, and he’s right that humanity has made some progress on dealing with climate. But the way he touts these successes downplays the moral and logistical complexities of the systems that brought them about. Shellenberger tries to make a greatest-hits list of climate issues and gives us his hot takes: he covers plastics, livestock production, and deforestation, but barely even mentions artificial meat substitutes, biofuels, precision agriculture, decarbonization, and electric vehicles, to name a scant few. Apocalypse Never suffers from its pedantic tone and stale structure, and it’s hard to stomach all the self-aggrandizement: “I published two long articles criticizing climate alarmism,” he writes as one instance. “Both articles were widely read, and I made sure the scientists and activists saw my article. Not a single person requested a correction. Instead, I received many emails from scientists and activists alike, thanking me for clarifying the science.”

Shellenberger’s book suffers from its pedantic tone and stale structure, and it’s hard to stomach all the self-aggrandizement.

Despite their many drawbacks, both Apocalypse Never and False Alarm do a service by shining a light on the unhelpfully extremist tone adopted by many climate change activists, journalists, and academics. Shellenberger and Lomborg remind us that it doesn’t have to be this way when it comes to talking about climate change. Our polarization is a product of our discourse, cognitive biases, tribalism, and profoundly entrenched politics. But overcoming the seeming intractability of formulating an appropriate response to climate change isn’t a matter of doubling down on fear tactics, building a stronger case with scientific facts and evidence, or gaslighting people into believing that their panic is unfounded and is making the problem worse.

I mentioned earlier that one benefit the pandemic has provided is destroying our illusion that the future will necessarily look like the past. This should prompt fresh thinking about how to deal with our many inherited and ongoing problems, to envision a brighter and more hopeful future. But there’s another positive lesson to be found amid the wreckage of 2020: with the end of the pandemic in sight, humans and our institutions are capable of confronting and overcoming enormous challenges. The climate crisis, which we’ve seen coming for a long time, could prove to be another of those challenges.

The Tragedy of the Climate Wars

Michael E. Mann, THE NEW CLIMATE WAR (2020)

Michael Mann has been in the climate wars for well over a decade now. As he reminds us frequently in this new book, he has been in the crosshairs of his enemies, has fought off the attack dogs, and carries the scars of battle. Even the environmentalist Bill McKibben’s promotional puff for the book valorizes Mann in terms of his “scars from the climate wars.” The military framing of climate change long predates Mann’s involvement, but it certainly is a framing he has done much to promote through his blogs, tweets, and general persona-at-large in public discourse.

And so it is not surprising that Mann’s new book continues his characterization of the politics of climate change through a series of complex military tropes and metaphors. Wars, battles, attacks, fights, and enemies litter its 260 pages. Much of what I said about Mann’s combative militancy in my review of his 2012 book, The Hockey Stick and the Climate Wars: Dispatches from the Front Lines, can be equally applied to this new one. Now, his central argument is that there is a new war afoot. The old war—fought mostly around the claims of climate scientists—has been (largely) won. But a new war has been ignited; Mann and his allies are now having to fight against the forces of inaction.

Mann is half right in his diagnosis. The main axes of public dispute and argumentation about climate change have changed. The politics of climate change manifest differently now than they did a decade ago. More centrally in focus—and this is a good thing—are the substantive and pressing questions about the sorts of actions, policies, and interventions that are needed, appropriate, and effective to attenuate the risks of a changing climate. What are their respective costs and benefits? How do different options interact with diverse cultural values and collide with vested interests? How do they complicate international geopolitics?

So in this observation Mann is correct. The focus of the issue has moved from “is there a problem?” to “what should be done about it?”

A new war has been ignited; Mann and his allies are now having to fight against the forces of inaction.

The tragedy, however, of Mann and people who think like him is that they view arguments about these questions through a Manichean lens: the source of all opposition to the “correct” view—Mann’s view—of what should be done about climate change is traced back to an orchestrated evil empire. The basic doctrine of Manicheanism is that of a structural conflict between good and evil. For Mann, the source of this evil is the fossil fuel industry representing, as he puts it, “the eye of Sauron,” that omnipotent dark power in The Lord of the Rings.

There is no doubting the need for an accelerating transition away from fossil fuels. And there is also no doubt that vested political interests have obstructed its progress. But Mann is so conditioned by his Manichean worldview that wherever he looks in the public, scientific, and political debates around climate change he sees the shadows of the Koch brothers (52 name checks in the book), Exxon Mobil (23), and the Heartland Institute (15). The nefarious hand of the fossil-fuel lobby is everywhere. This worldview leads him to some ludicrous contentions that, taken together, result in The New Climate War: The Fight to Take Back Our Planet offering an incoherent and distinctly unhelpful narrative on climate change. Let me give some examples of what I mean.

Take Mann’s assessment of an assortment of “solutions” to climate change that he ends up labeling as “non-solutions.” These include nuclear energy; solar climate engineering; various technologies of carbon dioxide removal, including carbon capture and storage (CCS), direct air capture (DAC), bioenergy with CCS, and afforestation; and enhancing adaptation and societal resilience. In Mann’s bipolar world, all these technologies and policy goals are weapons of inactivism, part of the insidious strategy being waged by the evil empire. Really? Afforestation? Nuclear energy? Are these technologies and policy goals all to be dismissed out of hand because they don’t conform to the preferences of the enlightened? (And this is where the incoherence of Mann’s position becomes evident: he himself recognizes the value of DAC, equivocates about the merits of CCS and nuclear energy, and elsewhere in the book urges societies to adapt.)

Mann unhelpfully identifies eight alliterative groups of enemies who muster together under the battle flag of inactivism: dissemblers, deceivers, downplayers, dividers, deflectors, doomers, delayers, distractors. (Simple denial, it seems, is no longer enough.) His circle of enemies has grown, mutated, and, perhaps most sinister of all, infiltrated “the climate movement” itself.

Indeed, he finds it necessary to create enemies out of a variety of scientists, scholars, writers, filmmakers, and think tanks that are actually engaged in the serious search for solutions to climate change—just not his solutions. People with whom Michael Mann disagrees—a long list that includes even such progressive stalwarts as Michael Moore and Bill Gates—become enemies: agents of the dark forces of inactivism, or contrarians, or “soft denialists,” or deflectors, or apologists, or defeatists. Mann’s playbook here is reminiscent of 1950s McCarthyism or the ideological purification pursued by the Communist International during the 1930s Spanish Civil War.

Are these technologies and policy goals all to be dismissed out of hand because they don’t conform to the preferences of the enlightened?

If one looks beyond the battle-posturing, the calling out of enemies, and the settling of Twitter disputes, what do we learn from The New Climate War about how to frame, enact, and deliver changes in the world that might ameliorate the risks of climate change? The strategy offered—it is of course a “battle plan”—has four elements: resist the doomists; learn from children; educate the uneducated; and focus on systemic change, not individual lifestyle choices. I certainly have a lot of sympathy for the first of these goals, having been arguing for the last 15 years that warnings of imminent global catastrophe are neither scientifically warranted nor politically constructive (although this does not prevent Mann from putting me on “the wrong side,” a contrarian).

But the most intriguing of his four points is the final one: changing the system requires systemic change. Now, “systemic change” can mean different things for different people, but for Mann it means pricing carbon and promoting 100% renewables for meeting the world’s energy needs (other technological innovations seem to be ruled out by Mann). This is certainly not what some climate activists—such as the anticapitalist Naomi Klein or the young Swedish environmentalist Greta Thunberg—would mean by systemic change, and it is notable that while he is willing to challenge Klein’s position, he works hard in the book to keep Thunberg inside his circle of the virtuous.

I am left wondering who will be impressed by this book. It certainly will help those who are looking for a tidy checklist of the good guys and bad guys in (Mann’s view of) the climate debates. And it may gather some recruits to his battle plan who believe that pricing carbon combined with the technofix of renewable energies will “take back our planet,” presumably from the dark forces of the fossil fuel industry.

The art of politics is not to get everyone to agree with you, but rather to find allies with whom you can find joint ways forward, even if sometimes compromised. Consistently demonizing those who think differently than you makes it harder, if not impossible, to forge alliances. And this is a shame, because in terms of practical climate policies Mann is in fact a centrist. A relentlessly pragmatic approach to tackling climate change would hold to this faith: that political left and right can find agreement about carbon pricing and market instruments; that ecomodernists and environmentalists can recognize that innovation is essential; that reformists and radicals can agree that the path ahead lies somewhere between nudge and revolution; that evangelicals and atheists can be equally motivated by an ethics of care; that nationalists and cosmopolitans can find common cause to “level-up” in terms of social welfare. If Mann could only disarm his discursive weapons of war he might actually find that he is surrounded by potential allies.

Consistently demonizing those who think differently than you makes it harder, if not impossible, to forge alliances.

But Mann seems uninterested in building the alliances necessary for political change. Above all, the book offers little for those seeking a guide to the complex global politics of climate change. This is an America-first book. It perpetuates the fallacy that the global politics of climate change can be read through the peculiar lens of American political partisanship. The other climate superpowers—the European Union (6 mentions), China (8), Brazil (3), and India (0)—seem bit players for Mann. There is no analysis about the political economy of the global energy transition, and he is dismissive of the global challenge of alleviating energy poverty (“a contrived concept”). And Mann uses a trick he accuses his enemies of using—trivialization—when the concerns of those arguing for a just transition for the world’s poor are swept aside with his disdainful comment “there are always winners and losers.”

The Prussian military theorist Carl von Clausewitz characterized war as “an act of violence intended to compel our opponent to fulfil our will.” This is not a good way to think about climate politics in a democracy. “In wars we have winners and losers. We take sides, and the solution is conquering and defeating your enemy,” observes John Besley, a professor of public relations at Michigan State University. “Do we want people to see scientists as angry, frustrated people or people who are doing [their] best to solve problems to make the world better?” The danger with Mann’s combative militancy is that it ends up being a destructive form of advocacy.

[Editor’s note: The headline of this book review has been changed. It was previously “The Manichean Mann.“]

Can Infrastructure Keep Up With a Rapidly Changing World?

Any investment in infrastructure must anticipate a rapidly changing world, where future climate, technologies, politics, and basic needs will be very different from today. The Biden administration’s $2 trillion infrastructure plan includes improving access to high-speed internet and support for a growing caregiving community as well as tackling transportation inequities—elements that strike some observers as not being infrastructure at all, but instead a way of delivering social aspirations.

But to constrain our definition of infrastructure to a narrow historical understanding that emphasized hardware (bridges and roads) over software (the people served by infrastructure and the institutions that govern it) is to miss a critical opportunity to invest in the services and systems that will ensure that future generations can thrive.

Much as telephone wires have been replaced by wireless cell networks, and banking is increasingly shifting from brick-and-mortar buildings to the internet, what constitutes “infrastructure” is evolving rapidly. Any policy proposals aimed at improving infrastructure must recognize that the systems that deliver basic and critical services and support economic strength must be able to respond to a rapidly changing world. Biden’s infrastructure team should aim to invest in both the hardware and services that help Americans thrive in a future that by all indications will be very different from today.

It’s helpful to examine the history of American infrastructure to understand why it is poorly positioned to adapt to today’s rapidly changing world. The core technologies and bureaucratic structures that define US infrastructure have persisted for decades. Some infrastructure has been around for a century or more.

Today’s infrastructure was built around technologies and goals that are quickly becoming outdated. Twentieth-century engineers and planners operated in an environment in which the Earth’s climate was considered relatively stable and information technologies were in their nascent stages. They managed the financial risk of large capital investments by building infrastructure assets with very long lifetimes, based on the assumption that long-term demand and growth would produce sound investments.

Any policy proposals aimed at improving infrastructure must recognize that the systems that deliver basic and critical services and support economic strength must be able to respond to a rapidly changing world. 

Because of this historical set of preferences, most people today think of infrastructure as large, rigid systems comprising purely physical assets: roads, rail networks, pipes, powerlines, pumps, and bridges. But these components cannot be separated from the “software” that influences and manages our infrastructure. The bureaucratic structures that define the governance of infrastructure today still reflect traditional assumptions about stable operating conditions. The bureaucracy that emerged to manage infrastructure was excellent for standardized goals (e.g., meeting performance metrics for roadway maintenance). But today, the governance models that steer infrastructure decisions are struggling to navigate the growing complexity of the world. Preparing for the deep uncertainty associated with climate change, protecting systems and users from cyberattacks, managing demand amid growing numbers of services that influence behavior (such as ride sharing, electric vehicles, smartphone navigation, and drone delivery), and building consensus among increasingly polarized voices all require new expertise and new ways of thinking about what infrastructure is and how infrastructure systems operate.

As new climatic, economic, and technological volatility hits our aging infrastructure, a gap is emerging between what our systems need to do and what they can do. The country’s infrastructure and its governance systems simply can’t keep up. Transformational changes are coming on fast and combining with other shifts, such as the integration of information technologies and artificial intelligence. The rigidity inherent in legacy infrastructure is at odds with these changing conditions, and a vicious feedback loop is emerging.

The inability of infrastructure to keep pace with these changes can give new actors the opportunity to assert control. Amazon and Google are creating a new infrastructure of drone delivery that allows them to circumvent the risks of aging roadways. Tesla is increasingly managing energy use and storage. Google and Apple direct drivers using a real-time computational picture of cities that neither travelers nor transportation agencies can match. Whereas city planners once determined which streets would be major thoroughfares and supplied those roadways with resources for control and maintenance, smartphones now route drivers down formerly sleepy side streets in the name of efficiency.

Traditionally, individuals have decided how to incorporate infrastructure services (e.g., transportation or energy) into their lives, but software informed by massive data streams is now increasingly making those decisions for them. Thus both how infrastructure is defined and who controls it are changing. As we experience the earliest stages of this shift, we (engineers, city planners, technology developers, government officials, and the public) must modernize our understanding of infrastructure, what we think it should encompass, and how it can best be managed.

In the future, we need to do a better job of aligning the systems that provide basic and critical services with the complexity of the changing world we’re learning to live in. We must focus on “loose fit” infrastructure solutions—assets that are modular, multifunctional, and scalable, with the ability to flex and change as society and the environment place new and unanticipated demands on them. In order to build these new infrastructures, we also need new ways of understanding them.

First, we should recognize that the scale and scope of human activities have grown so large that the dichotomy between infrastructure and the environment is dissolving. Human activities enabled by infrastructure increasingly have planetary effects, which in turn create complex operating conditions for that infrastructure. Rather than imagining that we are building one road or one power line, we need to recognize that we are building complex, integrated systems that require the negotiation of trade-offs across social, environmental, and infrastructural dimensions. Gone are the days when a transportation agency could consider its purview to be simply roads and traffic. The infrastructure portfolio now contains a complex network of private, public, and shared technologies, operating across negotiated physical and virtual access, increasingly steered by cloud-based software, and subject to mounting disturbances such as extreme weather events and cyberattacks. And, of course, the transportation sector is increasingly viewed as a potential design space for mitigating climate change.

Second, we should transform our notion of infrastructure from one focused exclusively on hardware to one that reckons with the wicked, complex processes that require negotiation and adaptive management: grand problems such as sustainability, resilience, and equity, coupled with growing technical and social complexity. Technical complexity arises from decades of infrastructure build-outs layered on top of one another, combined with newly integrated information technologies. Social complexity derives from the decentralization of control, a growing number of players who decide how services are used, and the polarization that increasingly influences infrastructure decision-making. Understood in their full complexity, these challenges show that updated hardware alone cannot be the sum total of tomorrow’s infrastructure.

Finally, we can no longer ignore the fact that infrastructure is increasingly connected and is itself an information conduit. The rapid integration of cybertechnologies into infrastructure creates new capabilities and efficiencies, but it also produces vulnerabilities: the same smart, connected technologies that enable safety and cost improvements give foreign adversaries and criminal hackers opportunities to interrupt daily life in surprising ways. War is shifting: the whole of society can now be targeted through our increasingly connected systems. Infrastructure that once seemed quite safe has become a potential battlespace.

Infrastructure must be able to adapt and transform to keep up with society’s demands as well as shifts in the physical environment. Simply making legacy infrastructure do more work or work faster is not the solution. Nor is installing more rigid infrastructure. Instead, we must recognize the shifting nature of what infrastructure is, what society needs it to do, and who controls it. These new concerns necessitate new ways of approaching infrastructure.

Infrastructure investment at any level, whether it’s Biden’s trillions of dollars of federal spending or municipal budget allocations for local systems, must let go of the notion that the infrastructure of the past is appropriate for the accelerating and increasingly uncertain conditions of the future. Investing in the future means creating the conditions for infrastructure agencies to adapt and transform, integrate cybertechnologies, actively anticipate climate change and other extreme events, and restructure themselves to meet the challenges and needs of the next century. This will require imagining what we as a society will need in the future and how services will differ from today. Fundamentally, we must question whether today’s infrastructure as it’s currently structured and governed is adequate for the future, and if it is not, determine what new technologies and governance models are needed.

How to Start Governing R&D to Mitigate Solar Radiation

The world is not making sufficient progress in reducing greenhouse gas emissions to avoid debilitating climate damage. As a result, interest has grown among decisionmakers and scientists in geoengineering, that is, deliberate human intervention in the global climate system, as a climate control mechanism. Solar radiation mitigation (SRM) receives the greatest attention because it could likely be deployed relatively cheaply and rapidly. One approach to SRM would lower global average temperature by lofting millions of tons of sulfur into the upper atmosphere, where it would form sulfate aerosols that reflect a small portion of incoming solar radiation back into space.
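
To convey a rough sense of the physical scales involved, consider a zero-dimensional energy-balance sketch; the sensitivity and forcing values below are illustrative assumptions drawn from the general climate literature, not figures from this essay:

\[
\Delta T \;\approx\; \lambda\,\Delta F,
\qquad
\lambda \approx 0.5\text{--}1.0\ \mathrm{K\ per\ W/m^2},
\]

so offsetting roughly 1 °C of warming would require a sustained negative forcing on the order of 1–2 W/m². For comparison, the 1991 Mount Pinatubo eruption, which injected on the order of 10 million tons of sulfur into the stratosphere, produced a peak forcing of roughly −3 W/m² and about 0.5 °C of transient global cooling, the transient response being smaller than the equilibrium value because of the ocean’s thermal inertia.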

There is no deployed SRM system, and scant research and development has been done to understand the very considerable uncertainties about the magnitude, duration, and reversibility of its impacts. Little is known about many of SRM’s potential climate effects, such as ocean acidification, ozone depletion, effects on precipitation, regional climate variability, and the consequences of rapid termination of the effort. Thus far, essentially all technical SRM R&D has been done in laboratories or on physical models, and experts agree that substantial controlled atmospheric experiments are needed to validate reflective particle designs and efficient mechanisms for injecting particles into the atmosphere, to characterize the duration and distribution of cooling, and to surface unanticipated effects, not to mention costs. Collecting and interpreting SRM experimental data will be a slow process because it requires attributing climate effects in the presence of natural variability and feedbacks.

One view is that there is an urgent need to undertake R&D to obtain as much information as possible to inform an assessment of whether the benefits of deployment outweigh the risks, a judgment that depends on the severity of climate change and on the benefits and costs of alternative climate-control mechanisms. The opposing view is that any R&D on SRM presents a moral hazard: cheap but risky SRM could crowd out less risky, but more expensive, emission-reduction efforts. Opponents stress that there is no governance system to control who will have access to planning and participation in SRM R&D, or in decisionmaking on possible deployment. Up to the present, the understandable reluctance of public and environmental leaders to encourage human intervention in the climate has kept federal support for SRM (and other solar geoengineering options) very low. The federal budget does not have a line item for SRM R&D, but we estimate that total expenditures are less than $10 million per year.

We believe the United States should launch an SRM R&D program (one could imagine including other solar geoengineering projects as well) at an initial funding level of $50 million per year. Such funding could support 10 to 15 university-based SRM research centers, selected on a competitive basis, at a level of $2–$3 million per year each, augmented by conferences and studies to engage a variety of stakeholders. These funds should be provided by the National Oceanic and Atmospheric Administration, the National Aeronautics and Space Administration, and the Department of Energy, and they should be newly appropriated, not reprogrammed from existing research efforts. The SRM R&D technical program should be formulated within the context of broad climate change research. Formal international collaboration should be considered after the United States’ SRM R&D effort has been established and has two or three years of operating experience.
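
As a back-of-envelope check on these figures (our arithmetic, not part of the proposal itself), the research centers would account for

\[
10 \times \$2\ \text{million} = \$20\ \text{million}
\quad\text{to}\quad
15 \times \$3\ \text{million} = \$45\ \text{million}
\]

per year, leaving roughly $5 million to $30 million of the $50 million annual appropriation for the accompanying conferences, studies, and stakeholder engagement.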

Congress should meanwhile establish an independent, blue-ribbon SRM Oversight Committee to accompany this R&D effort, empowered to monitor all aspects of federally supported activities and mandated to submit an annual report to Congress on progress in narrowing key uncertainties and quantifying risks. This independent committee (administratively housed in NOAA, NASA, or DOE) would not have its agenda set by the agencies funding geoengineering R&D. To ensure transparency, it would, like all other federal advisory committees, be subject to the Federal Advisory Committee Act of 1972. It would not have the authority to cancel or modify the congressionally approved R&D program. Members would be nominated by the president and confirmed by the Senate.

The purpose of this oversight committee would be to communicate to Congress, and therefore to the public, about progress being made in federal R&D programs to validate the cost and performance of SRM, and narrow uncertainty about potential adverse impacts. The SRM Oversight Committee would serve as an initial, “soft,” transparent technology governance step.

The proposed oversight committee should not be expected to resolve the differences between geoengineering proponents and opponents. But similar initiatives related to nuclear power suggest that substantive progress is possible. For decades the public has been concerned that the government’s commercial nuclear power R&D and licensing programs paid insufficient attention to the issues of reactor safety, radioactive waste management, and nuclear proliferation. Congress’s establishment of the Nuclear Regulatory Commission’s Advisory Committee on Reactor Safeguards and DOE’s Nuclear Energy Research Advisory Committee has reduced public unease by increasing transparency and providing legitimate avenues for raising and deliberating on technical and policy issues.

The present level of knowledge about SRM climate risks and the design, operation, and cost of a deployable SRM system is inadequate for even beginning to assess any possible future role for SRM. Current uncertainties, combined with the strong and well-defended opinions both for and against SRM as a potential tool for combatting climate change, dictate that all decisions about the potential deployment of SRM technologies, and even the future course of SRM R&D itself, would necessarily be provisional and subject to change. An independent blue-ribbon oversight committee, operating transparently and with accountability to Congress, can provide a soft approach that provides a mechanism for discussion between SRM advocates and skeptics, while researchers accumulate sufficient knowledge to justify hard decisions about if, how, and when to pursue this technology.