New Voices in the Future of Science Policy

The latest issue of the Journal of Science Policy & Governance invited early career authors to reimagine the next 75 years of science policy. Supported by The Kavli Foundation and published in collaboration with the American Association for the Advancement of Science, the special collection offers bold, innovative, and actionable ideas for how the US scientific enterprise can become more equitable and inclusive, contributing to a brighter future for all Americans.

These articles seek to broaden the view of how scientists can participate in achieving positive social impact. Authors focused on such issues as citizen involvement in science, promoting trust in science, embracing democratic principles, and addressing the needs of the American people.

More specifically, in the issue’s three winning essays, the authors argue for making rural regions a priority in US science policy so that the benefits of research and innovation are distributed more broadly and equitably; improving global scientific communication and collaboration by translating STEM papers into languages other than English; and reframing science policy and funding to emphasize social benefits as much as knowledge generation in scientific research.

Thinking Like a Citizen

In “Need Public Policy for Human Gene Editing, Heatwaves, or Asteroids? Try Thinking Like a Citizen” (Issues, Spring 2021), Nicholas Weller, Michelle Sullivan Govani, and Mahmud Farooque call on President Biden to invest in opportunities for Americans to meaningfully participate in science and technology decisionmaking. They argue this can be done by institutionalizing participatory technology assessment (pTA) at the federal level.

I’ve worked for many years as a pTA facilitator and researcher, and have spent the last three years researching the factors that make pTA successful (or not) in federal agencies. Like the authors, I would underscore the value pTA brings to decisionmaking. Participants in pTA forums—including those who normally think of “politics,” “government,” or even “public engagement” as dirty words—learn through their participation that they can engage with their fellow citizens on topics of importance in productive and generative ways. The authors suggest that, given the tremendous potential pTA has for improving democratic discourse and decisionmaking, now is the time for federal investment in and institutionalization of these approaches. 

But such investments should be made thoughtfully. My research team’s work on pTA in federal agencies underscores three important realities. First, pTA efforts are vulnerable to shifting political and administrative priorities. Seeking ways to institutionalize engagement efforts within agencies—and not just as one-off “experiments” or as part of more ephemeral challenges, prizes, or grant-making—is more likely to lead to lasting change. The creation of Engagement Innovation Offices within agencies, for example, would increase long-term capacity and organizational learning. Such offices should be separate from communications offices, whose remit is different, and should focus on experimenting with and developing dialogic forms of public engagement. 

Second, our research shows that successful public engagement requires skilled engagement professionals who understand the importance of deliberative approaches. These are agency personnel who are technically literate but also formally trained in public engagement theory and practice. They understand agency culture, are good at collaborating across departments and directorates, know how to navigate administrative rules, get how leadership and power function in the agency, and have enough technical and political knowledge and agency clout to innovate in the face of tradition and resistance.

Third, academic programs in science and technology studies (STS) have an important role to play, and agencies should develop partnerships with them to create pipelines for these skilled professionals to move into the federal government. Academic programs should prioritize training in pTA and other public engagement tools, and provide experiences for STS students with technical training, literacy, or both to be placed in agencies as a form of experiential learning. Agencies should facilitate such placements, perhaps via the proposed Engagement Innovation Offices.

The federal Office of Science and Technology Policy will be an important player in these efforts. It can provide training, organization, funding, and influence. But running pTA efforts out of a centralized office may prove to be less robust and sustainable than embedding Engagement Innovation Offices and professionals within agencies, making them more resistant to, though not totally insulated against, political headwinds and shifting budget priorities.

Professor of Public Policy and Administration

School of Public Service

Boise State University

Like Nicholas Weller, Michelle Sullivan Govani, and Mahmud Farooque, I am encouraged by the current surge of interest in how science and technology policy couples with social policy. But I am somewhat concerned by the authors’ narrative of participatory technology assessment (pTA) as a new breakthrough of “postnormal” participatory engagement into heretofore isolated scientific-technological domains. Enthusiasm for such participatory breakthroughs has a more than 50-year history, which offers some warnings. I submit two examples from the work of historians of technology.

Jennifer Light’s book From Warfare to Welfare (2003) follows the migration of defense analysts into urban policy in the 1960s and ’70s. During that period, these analysts and their liberal allies in municipal politics readily embraced citizen participation as a means of quelling urban “alienation,” leading them to champion cable television as a way to create new avenues for local engagement. Those efforts floundered; cable flourished only later, in the hands of media companies uninterested in such matters.

In his book Rescuing Prometheus (1998, note the title’s implied narrative), Thomas Hughes prefigured the current authors’ interest in postnormal policy by announcing the advent of “postmodern” technological systems characterized by stakeholder engagement. Unhappily, his exemplar of such efforts was Boston’s then-in-progress Central Artery / Tunnel project, which did eventually reform the cityscape but also became notorious as the project management boondoggle known as the “Big Dig.”

Emphatically, my point is not to imply that participatory decisionmaking is a fatally flawed concept. Rather, it is to encourage a healthy awareness that, instead of displacing a perceived plague of scientific-technical solutionism, initiatives such as pTA could end up writing new chapters in the checkered history of social scientific solutionism if they are not thought through. Solutionism is no less solutionism and an expert is no less an expert simply because the innovation being proffered is social in nature rather than a gadget or an algorithm.

By portraying their pTA approach as a breakthrough social intervention, the authors arrive quickly and confidently at their recommendation of instantiating it as a new box on the federal org chart. I would challenge them to grapple more openly with the lessons of participatory decisionmaking’s long history and the difficulties facing their particular branded method.

Critical questions include: At what points in the decisionmaking process should such exercises be conducted? How much additional burden and complexity should they impose on decisionmaking processes? What sorts of decisions should be subject to alteration by public opinion, how should issues be presented to public representatives, and how should contradictory values be reconciled?

More fundamentally, we may ask if this is a true method of participatory decisionmaking, or is it an elaborate salutary regimen for cloistered program managers? With a public deluged with requests for feedback on everything from their purchases to their politicians, is feedback into the multitude of technical decisions within the federal government what the public wants, or is that notion itself a creation of the expert imagination? 

Senior Science Policy Analyst

American Institute of Physics

Author of Rational Action: The Sciences of Policy in Britain and America, 1940–1960 (MIT Press, 2015)

Is Science Philanthropy Too Cautious?

It was a delight to see Robert W. Conn’s coherent, synthetic history of the role of philanthropy in support of US science and technology, presented in “Why Philanthropy Is America’s Unique Research Advantage” (Issues, August 11, 2021). The field I was trained in, molecular biology, originated in large part through the vision of Warren Weaver at the Rockefeller Foundation, and early practitioners were supported by the Carnegie Corporation. Now philanthropies in biomedical research are hugely important complements to the National Institutes of Health and other government funders.

I have been puzzled for over two decades by a simple observation, and I would welcome Conn’s thoughts. Why has it taken so long, and relied entirely on initiative within government, to develop a DARPA-like component in biomedical research, when this was an obvious niche to fill? Conn describes how major R&D initiatives draw on a very broad array of investigator-initiated research projects—the vaunted R01 NIH grant and its equivalents. But now the convergence of science and information technology has naturally led to larger teams that require management, direction, and vision: examples being CERN, the Human Genome Project, and the BRAIN Initiative. And now there is serious talk of cloning and adapting the DARPA framework to address problem-oriented research—that is, to systematically pursue grand challenges that will necessarily entail many teams pulling in harness.

Yet most philanthropies have mainly cherry-picked successful science from the ranks of stars in the NIH- and National Science Foundation-funded constellations. It is an effective but very conservative strategy—witness the amazing contributions of Howard Hughes Medical Institute investigators or the outsize influence of the Bill & Melinda Gates Foundation in global health. But serious efforts to pool resources toward social goals that will not otherwise be achieved seem to remain outside the box. Consider the amazing story of developing vaccines but then failing to distribute them equitably or even effectively because the incentives of the companies that control the products do not align with public health goals.

It seems there must be some incentives within philanthropy that thwart thinking at scale or hinder cooperation among philanthropies, or that cleave to existing frameworks of intellectual property and control that nonprofit funding might be able to work around. Could the Science Philanthropy Alliance that Conn cites do better? What would that look like?

Professor, Arizona State University

Nuclear Waste Storage

In “Deep Time: The End of an Engagement” (Issues, Spring 2021), Başak Saraç-Lesavre describes in succinct and painful detail the flawed US policy for managing nuclear waste. She weaves through a series of missteps, false starts, and dead-ends that have stymied steady progress and helped to engender our present state—which she describes as “deadlocked.”

Her description and critique are not meant to showcase political blunders, but to caution that the present stasis is, in effect, a potentially treacherous policy decision. The acceptance of essentially doing nothing and consigning the waste to a decentralized or centralized storage configuration is in fact a decision and a de facto policy. To make the situation worse, this status quo was not reached mindfully, but is the result of mangled planning, political reboots, and the present lack of a viable end-state option.

Although there may be some merit to accepting a truly interim phase of storing nuclear waste prior to an enduring disposal solution, the interim plan must be tied to a final solution. As decreed in the Nuclear Waste Policy Act, and reinforced by the Blue Ribbon Commission on America’s Nuclear Future, centralized interim storage was to be the bridge to somewhere. But the bridge is now looking like the destination, and it would be naive not to view it as another disincentive to an already anemic will to live up to the initial intent.

Saraç-Lesavre seems to believe this current impasse constitutes the end of an earlier era in which shaping decisions and outcomes was once driven by a “moral obligation to present and future generations.” She sees the current unfolding scenario as a reversal of a once-prevailing ethos.

I have been involved for the past 20-plus years in just about every sector dealing with nuclear waste disposal. Beginning with the formation of a nongovernmental organization opposed to incineration of waste, I have conducted work on stakeholder engagement with the Blue Ribbon Commission, the Nuclear Regulatory Commission, the Bipartisan Policy Center, and the Department of Energy, as well as with a private utility, a private nuclear waste disposal company, and an emerging advanced reactor company. From these perspectives, my impression is that the issue of nuclear waste management is continually given consideration, but rarely commitment. Lip service is the native language and nearly everyone speaks it.

For US policymakers—and indeed all stakeholders involved with nuclear waste—solving the problem will require a steely and coordinated commitment. This has always been the case, but now the problems are becoming more complex, the politics more partisan, and a path that once appeared negotiable is now nearly unnavigable. The reason for this is less about a lack of resolve to comprehend the “deep time” in which we need to consider the implications of nuclear waste, and more about the impediments and cheap workarounds wrought by the short cycles of “political time.”

Until we can take the politics out of nuclear waste disposal, it will not be the most sound decisions that prevail, but those with the prevailing wind in their sails.

Mary Woollen Consulting

Başak Saraç-Lesavre raises some fundamental and important issues. Spent nuclear fuel (SNF) was created over the past 40-plus years in return for massive amounts of clean nuclear-generated electricity. Are we going to begin to provide a collective answer for managing and disposing of SNF or are we going to shirk our clear moral responsibility and leave a punishing legacy to our children and future generations? More than 50 reactor sites across the United States continue to store SNF on site with no place to send it.

Saraç-Lesavre appears to support the recommendation of important but selective parties whose advice is that “spent nuclear fuel should be stored where it is used.” However, while championing consent-based siting, she does not include the views of those communities and states that now house this stranded SNF, nor the views of much of the science community that is working to provide a viable solution. When those nuclear power plants were originally sited, it was with the understanding that the federal government would take responsibility for removing SNF and disposing of it, allowing the sites to be decommissioned and returned to productive use. Siting and opening a repository for permanent disposal will take many decades even under the most optimistic scenarios; the nation needs to develop one or more interim storage sites that can be licensed, built, and opened for SNF acceptance decades earlier.

Saraç-Lesavre mentions the Obama administration’s Blue Ribbon Commission on America’s Nuclear Future conclusion that siting a nuclear waste repository or interim storage facility should be consent-based. However, the commission made eight fundamental recommendations while also making it clear that it was not a matter of picking just one or several of them; rather, they were all required to resurrect an integrated US program and strategy that had the best chances for success. Two of the commission’s recommendations called for “prompt” actions to develop both a repository for the permanent disposal of SNF (and other high-level radioactive wastes) and centralized interim storage to consolidate SNF in the meantime. The reasoning was detailed and sound, and those recommendations remain highly relevant today.

A healthy, enduring federal repository program is needed and needed promptly. Whether through government, private industry, or a private/public partnership, consent-based centralized interim storage remains needed as well.

Former Lead Advisor, Blue Ribbon Commission on America’s Nuclear Future

Başak Saraç-Lesavre’s commentary on the interim storage of commercially generated spent nuclear fuel raises a variety of important issues that have yet to be addressed. Here I would like to expand on some of her most salient points.

One of the consistent characteristics of the US approach to the back end of the nuclear fuel cycle has been the absence of a systematic understanding of the issues and a failure to develop an encompassing strategy. One of the most important findings of Reset of America’s Nuclear Waste Management Strategy and Policies, a report on a two-year study by an international team of nuclear waste management experts sponsored by Stanford University and George Washington University, was that the present US situation is the product of disconnected decisions that are not consistently aligned toward the final goal—permanent geologic disposal. The isolated decision to consolidate spent fuel inventories at just a few sites is another example of this same failed approach. Any decision to go forward with interim storage needs to be part of a broader series of decisions that will guarantee the final disposal of spent fuel in a geologic repository.

Another critical issue is the meaning of “interim” storage, as interim may well become permanent in the absence of a larger strategy. The present proposal is for interim storage for some 40 years, but it will almost certainly be longer if for no other reason than it will take the United States some 40 to 50 years to site, design, construct, and finally emplace spent nuclear fuel and high-level waste at a geologic repository. The siting of an interim storage facility and the transportation of waste to that facility will be a major undertaking that will take decades. One can hardly imagine that once the waste is moved, there will be an appetite for another campaign to move the waste again to a geologic repository, particularly as time passes, funding decreases, and administrations change. One must expect that as 34 states move their nuclear waste to just a few locations, such as Texas and New Mexico, the national political will to solve the nuclear waste problem will evaporate.

What is the alternative to interim storage? There is an obvious need to secure the present sites by moving all the spent fuel into dry storage containers that should then be situated below grade or in more secure berms. There may be good reason to move the casks from closed reactor sites to those that are still operating. As reactor sites shut down and are decommissioned, there may be value in retaining pools and waste handling facilities so that spent fuel casks can be opened, examined, and repackaged as needed. As my colleagues and I have reported, even this short list of requirements reveals that the selection between alternatives will be a difficult mix of technical issues that will have to be coordinated with the need to obtain consent from local communities, tribes, and states.

Interim storage, by itself, will not solve the United States’ nuclear waste problem. In this case, today’s solution is certain to become tomorrow’s problem.

Center for International Security and Cooperation

Stanford University

Başak Saraç-Lesavre’s article addresses the important topic of our moral obligations to future generations, but its focus only on nuclear waste is too narrow. The most important questions for our long-term obligations involve the long-term problems of all wastes generated by all energy technologies.

The focus on nuclear wastes is logical, in the same sense that it’s logical to look for lost keys under a streetlight. One of the major advantages of nuclear waste, compared with wastes that other energy technologies produce, is that it’s plausible to plan for and implement reasonable approaches to safely manage the waste for decades, centuries, and millennia into the future.

We need to shine a brighter light on the question of the very-long-term environmental and public health consequences of all the wastes produced by energy technologies, whether it be the rare-earth mill tailings in Baogang, China, thousands of coal-ash impoundments worldwide, fracking waste waters reinjected into wells, or most importantly, the thousands of gigatons of carbon dioxide and methane released from our current use of fossil fuels.

It sounds deeply dissatisfying, but our current de facto policy to use interim storage for spent nuclear fuel makes sense. Today’s lack of consensus about the permanent disposal of spent fuel is logical, because we do not now know whether it is actually waste, or is a valuable resource that should be recycled in the future to recover additional energy. We cannot predict this today, any more than in the 1970s one could predict whether shale oil could be a resource or should be left in its existing geologic isolation. Certainly with the shale technology of the 1970s, which involved mining, retorting, and generating large volumes of tailings, shale oil was not a resource. But technology has changed, and based upon statistics from the Department of Energy’s Energy Information Administration, the advent of shale fracking gets most of the credit for recent reductions in US carbon dioxide emissions from electricity generation.

Regardless of whether existing spent fuel is recycled in the future, there will still be residuals that will require deep geologic disposal. The successful development of the Waste Isolation Pilot Plant in the United States, for the deep geologic disposal of transuranic wastes from US defense programs, provides evidence that geologic disposal can be developed when societal consensus exists that materials really are wastes, and where clear benefits exist in placing these materials into permanent disposal. Today the United States needs to rethink its approach to managing nuclear wastes. Deep geologic disposal will be an essential tool, and development of multiple options, focused on disposal of materials that are unambiguously wastes, makes sense as the path forward.

William and Jean McCallum Floyd Endowed Chair

Department of Nuclear Engineering

University of California, Berkeley

He is also the Chief Nuclear Officer for Kairos Power and an early investor in and advisor to the start-up company Deep Isolation.

Başak Saraç-Lesavre provides an excellent account of the failed US government effort to implement a long-term solution to manage and dispose of the country’s growing backlog of spent nuclear fuel, and makes a compelling case that the prolonged absence of a national nuclear waste strategy has led to a lack of coordination and direction that could lead to inequitable and unjust outcomes. In particular, the development of consolidated interim storage facilities—if made economically attractive for nuclear plant owners—would likely undermine any political consensus for pursuing the challenging goal of siting and licensing a geologic repository.

For this reason and others, the Union of Concerned Scientists does not support consolidated interim storage facilities, and has consistently opposed legislation that would weaken current law by allowing the government to fund such facilities without closely coupling them to demonstrated progress in establishing a geologic repository. However, Saraç-Lesavre references our organization’s position in a way that could give the misleading impression that we support surface storage of spent nuclear fuel at reactor sites for an indefinite period. Although we strongly prefer on-site interim spent fuel storage to consolidated interim storage, provided it is stringently regulated and protected against natural disasters and terrorist attacks, maintaining a network of dispersed surface storage facilities forever is in no way an adequate substitute for a deep underground repository.

The challenge, of course, is how to revive the defunct repository siting program and execute it in a manner that addresses both environmental justice and intergenerational equity concerns. This is not straightforward. Locating a repository site in a region far from the reactors where spent fuel was generated and the areas where the electricity was consumed may seem unjust to the host community, but it could well be the best approach for minimizing long-term public health risks. In that case, one means of redress for the affected community would be fair compensation. The likely need to provide such compensation must be factored into the total cost of the repository program.

An alternative path is offered by the company Deep Isolation, which has proposed to bury spent fuel at or near reactor sites in moderately deep boreholes. While this concept raises significant technical and safety issues and would require major changes to current law and regulations, it does have the political advantage of obviating the need to find a centralized repository location. Whether it is a more just solution, however, is an open question.

Director of Nuclear Power Safety

Union of Concerned Scientists

Başak Saraç-Lesavre’s article on nuclear waste storage offers valuable insights, as do the other two articles on nuclear energy, but none addresses two glaring energy issues.

While not directly related to nuclear power, increasing renewable power will affect the electricity market. Sophisticated market mechanisms balance power output and distribution against demand to ensure fair electricity pricing. The temporal, seasonal, and weather-dependent variations in renewable production, along with the current inability to store large amounts of surplus energy, can upset that market. Backup resources, currently provided mostly by fossil-fuel generation, are critical to electricity marketplace stability.

Excess energy from renewables can force prices to plummet, discouraging investment and exacerbating price uncertainty. A lack of backup forced price spikes in Texas in February 2021 when the industry could not produce or obtain sufficient energy to meet needs driven by an unusual cold spell.

This ties into the three nuclear articles in that they did not acknowledge the advances in energy technology that could solve two issues: what to do with nuclear waste and how to back up renewables. There are at least 50 groups developing new concepts in nuclear fission and fusion energy. Few will survive, but those that do will affect nuclear fuel storage policy, grid reliability, and uranium mining.

A nuclear power plant concept that one of us developed—the molten uranium thermal breeder reactor (MUTBR)—could reduce nuclear waste, provide backup to renewables, and reduce uranium mining.

The MUTBR is fueled mostly by molten uranium and plutonium metals reduced from nuclear waste; as liquids, they are pumped through a heat exchanger to give up their energy. It has large fuel tubes to facilitate uranium-238 fission and is a “breed-and-burn” reactor. In operation, it “breeds” enough plutonium from the plentiful isotope of uranium (uranium-238) to fully replace both the easily fissionable but scarce isotope of uranium (uranium-235, the primary fuel in conventional reactors) and the plutonium that has fissioned (burned). The MUTBR may be a way to deal with used nuclear fuel while producing copious amounts of carbon-free power on demand.

The MUTBR could provide flexible backup power in much the way hydroelectric dams do to smooth out changes in production and demand: a dam’s reservoir storage capacity facilitates production management. The MUTBR can facilitate backup in three ways.

First, it can send excess heat (beyond what is profitable to sell) to thermal reservoirs. Its high operating temperature would enable cheap heat reservoirs using molten sodium chloride salt. That energy can be released to the grid when demand and prices are higher. Second, if this salt heat storage is depleted but electricity demand is high, biofuels could be burned to supply heat to its electric generators for backup. The generators would be sized to convert substantially more heat to electricity than the maximum available from its reactor. Third, when the thermal salt reservoirs are fully charged and power demand is low, the MUTBR’s patented control mechanism provides flexibility to reduce its fission rate. This system would also improve safety by automatically throttling fission if there is a system failure.
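To give a rough sense of scale for the first of these modes, consider a purely illustrative sensible-heat calculation; the salt mass, temperature swing, and conversion efficiency used here are assumed values for illustration only, not figures from the MUTBR design or from the letter. The stored thermal energy follows the standard relation

$$Q = m \, c_p \, \Delta T.$$

Assuming, say, $m = 5\times10^{6}$ kg of molten sodium chloride ($c_p \approx 1.1$ kJ/kg·K) cycled through a temperature swing of $\Delta T = 150$ K gives

$$Q \approx (5\times10^{6})(1.1\times10^{3})(150)\ \text{J} \approx 8\times10^{11}\ \text{J} \approx 230\ \text{MWh (thermal)},$$

or roughly 90 MWh of electricity at an assumed 40 percent heat-to-electricity conversion—on the order of 15 MW delivered over six hours.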

These features of the MUTBR design could provide an excellent non-carbon-producing complement to renewable resources, filling in when renewable production is low and reducing output when renewables are near maximum output. These economic considerations are basic to having a reliable electric power industry.

Washington, DC

Washington, DC

The Next 75 Years of Science Policy

In this special section, we will be publishing dozens of ambitious, challenging, and innovative proposals on how to structure the resources of science to enable the best possible future. Contributors will include everyone from recognized global leaders to early career researchers, policymakers, businesspeople, and our readers, creating a forum for the exchange of ideas about reinvigorating the scientific enterprise.

The Greatest Show on Earth

Long before Times Square blinked to light, New York City had the shimmering Crystal Palace, the centerpiece of the Exhibition of the Industry of All Nations, a World’s Fair that began in the summer of 1853. For a 25-cent ticket, throngs of people marveled at the multitiered structure of iron and glass. Poet Walt Whitman called it “Earth’s modern wonder.” Inside the palace were the technological wonders of the age. An English whaling gun was said to look as if it could “do some execution upon the monsters of the deep.” An automated tobacco roller cut and wound 18 cigars a minute, superseding—ominously, in hindsight—hand labor.

But the wonder that still resonates today began with a May 1854 demonstration by Elisha Graves Otis. The 42-year-old engineer was a bedframe maker and a tinkerer with a passion for fixing faults and frailties. In his trim Victorian suit, lush beard, and silk stovepipe hat, Otis mounted a wooden platform secured by notched guide rails. His assistant then hoisted the platform some 50 feet above the ground, grabbing the crowd’s attention.

Otis was there to correct a fault of his own making. He had developed an elegant solution to the problem of cable failure in platform elevators that made use of a hoist with a passive automatic braking system—but none had sold. It wasn’t because people didn’t need them; elevators often catastrophically broke down in granaries and warehouses, killing and maiming their passengers. Otis realized that his design, though superior and straightforward, needed showmanship. The World’s Fair was his moment to flaunt his vertical flight of fancy and function.

When the assistant dramatically used an ax to cut the suspension cable holding the platform, the crowd gasped in shock. It appeared to be an act of lunacy—and suicide for Otis, who stood on the platform. However, the platform stopped with a jerk just a couple of feet lower as the braking system arrested the freefall. “All safe,” Otis reassured the audience, “all safe.”

And thus, the crucial safety innovation that led to the launch of the modern vertical city was enabled by a now-legendary stunt. It’s impossible to imagine urban life without it.

Otis’s demonstration exemplifies a time-honored formula that mixes technology and design with entertainment. In some fields, “demo or die” has come to supplant “publish or perish”—highlighting the fact that products or people, no matter how deserving, will not advance unless they are first noticed. From Thomas Edison’s electric theatrics to Steve Jobs’s turtlenecked stage flair, the demo culture has thrived on symbolism, spotlight, and special effects in which pomp is the essence of persuasion.

Still, magicians will tell you that a trick will fail if it lacks meaning, no matter how incredible. There must be a link between the magic and its purpose. Showmanship “brings out the meaning of a performance and gives it an importance that it might otherwise lack,” writer and magician Henning Nelms observed. “When showmanship is carried far enough, it can even create an illusion of meaning where none exists.” And, of course, meaning is something that technology often needs desperately when it has not yet attained a place in our lives.

When someone asked Phineas Taylor Barnum to describe the qualifications of a showman, the brisk ballyhooer said that the person “must have a decided taste for catering for the public; prominent perceptive faculties; tact; a thorough knowledge of human nature; great suavity; and plenty of ‘soft soap.’” When asked just what “soft soap” was, he clarified: “Getting into the good graces of people.” These human factors are relevant to engineers as well.

Although showmanship is frowned upon when it is pursued too overtly, it is sometimes unavoidable. Consider the rousing words of President Kennedy in 1962: “We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard.” The showmanship in his words was apparent, and it got us to the moon. But what of “the other things”? If we are to take up Kennedy’s bold challenge, it’s time to elevate showmanship to do the other things he alluded to—perhaps the necessary things that are less sexy and more vexy.

Showmanship for prosocial needs could move people to action if the emphasis is on mindful mending rather than blank boosterism. Just imagine a prime-time commercial for roads and public works that inspires infrastructure improvements rather than promoting the latest new feature or flavor. Or a promo for ventilation, sanitation, and disease surveillance systems, all significant public health achievements made possible by invisible engineering. Or a modern-day Elisha Otis demo that draws the public’s attention to the power of safety standards, quality management, and preventive maintenance in elevators. All of these are actions and technologies that our lives—literally—depend upon.

Maintenance may seem too pedestrian to be a candidate for showmanship. As scholars Daniel Kammen and Michael Dove suggest: “Academic definitions of ‘cutting edge’ research topics exclude many of the issues that affect the largest number of people and have the greatest impact on the environment: everyday life is rarely the subject of research.” For this reason, innovation and the nonstop narrative around it have become a cultural default, as ambient as elevator music.

But when the dazzling prominence of innovation overshadows the subtler, kinder, and attentive acts that characterize maintenance, it leads to the collapse of everyday expectations. And these little maintenance misfortunes may ultimately put a stop to legitimate big-picture innovations. Why, after all, build a system if there is no ethic to maintain it well? Maintenance is not a static process; it builds on change, and just like innovation, it fuels change. Innovators often claim to make history, but maintainers start from and sustain the necessary continuities of history. There can be no useful innovation without a vast, invisible infrastructure of maintenance activity that keeps civilization running. 

Nestled between the duties of innovation and maintenance is a responsibility for cultural engineering that does not end when a commission or contract comes to completion. It is a perpetual effort to be attentive to future neglect and decay in our shared dependencies. Very few subjects are as relevant, and as neglected, as care and maintenance—acts integral to our survival and progress and as crucial as creation itself. Indeed, maintenance over a system’s life cycle may consume more resources than it took to build the system in the first place. But the result is often a catastrophe avoided. Engineers are full of such half-jokes: today’s innovations are tomorrow’s vulnerabilities. Without maintenance, failures flourish.

Moonshots and their like may inspire us to attempt the impossible. Still, far more practical value has come from suitcase wheels than Ferris wheels, no matter how flashy the latter are. Maintenance is the unsung partner that enables innovation. It is both life and—in its connection to history, present, and the future—larger than any single life. And it needs showmanship to attract the attention it requires to assume its proper place in our civic priorities.

Otis never thought he would become a showman at the Crystal Palace, but P. T. Barnum did. History records that Otis received a hundred dollars for his stunt from the man. There was no need for an elevator pitch.

A Viable Nuclear Industry

In “Reimagining Nuclear Engineering” (Issues, Spring 2021), Aditi Verma and Denia Djokić self-define as intellectual anomalies. There is an unwritten rule in the nuclear sector that only people with nuclear engineering degrees are legitimized to have a valuable opinion or knowledge about anything nuclear. Thus, it is very unusual for someone from within the nuclear sector to recognize the major intellectual shortcomings of a discipline that is increasingly insular and siloed, and that treats any knowledge coming from outside its own ranks as a threat. Verma and Djokić, as nuclear-trained people, are legitimized in the eyes of the sector, but they are also breaking the second major rule of the nuclear sector: militancy. Indeed, they are exceptional among nuclear engineers. Both are a new and most-needed type: the humanist nuclear engineer.

Having researched and written about nuclear economic history for over a decade, I have come across these two unwritten rules far more often than I would like to acknowledge. Yet I reckon that nuclear engineers (and nuclear institutions such as the International Atomic Energy Agency and the European Atomic Energy Community) are, for the most part, the victims of their own training and traditions. As Verma and Djokić expose, in the academic curricula of nuclear engineers across the globe there is little room for self-critical reflection on a sector with over half a century of history to ponder. For sure, reflection upon nuclear incidents and accidents exists, but wider introspection about nuclear impacts on society rarely occurs within nuclear training.

Those who do not understand that some scientific advances cause concerns in society express their surprise by arguing that “technology has no ideology.” Even if one could accept this premise, it is impossible to ignore that the groups, institutions, and people who promote a particular technological option do have implicit and explicit ways of understanding what society should be. Technologies are not installed in a vacuum, but rather are inserted into a specific place and time. This social context determines whether a technology receives a similar reception (or not) in different societies. Alternative technologies have different potentials to alter the social fabric and ways of life, generate interest or anxiety, and promote businesses (or cause the disappearance of others). In short, technology and society interact and transform each other.

When asked about these issues, many of the nuclear engineers we interviewed for our research claim that such matters do not concern engineers. After all, they are concerned with the design, building, and use of nuclear reactors and infrastructures. The impacts of those systems and the associated societal complexities are for the politicians to solve, according to most of the engineers we interviewed.

Verma and Djokić aim to build bridges that close the gap between the nuclear and social sciences. By introducing these other aspects into the academic curricula of nuclear engineering, nuclear engineers may become more aware of how their decisions have long-lasting impacts beyond the technology itself, and that awareness may help to address some of the blind spots that are likely to prove problematic down the line. This is a wake-up call for creating a new curriculum for the humanist nuclear engineer of the future.

Full Professor of Economic History

Director, Institute for Advanced Research in Business and Economics

Universidad Pública de Navarra (Spain)

Aditi Verma and Denia Djokić call for rethinking our collective approach to the benefits and risks of nuclear technology—a call that is crucial and timely. As humanity confronts the catastrophic consequences of climate change, questions related to the viability of nuclear energy to achieve a decarbonized world abound. The authors, however, push the boundaries of the current conversation by arguing that what is required to make nuclear energy “viable” for the twenty-first century is much more than just an exercise in technological development.

Nuclear energy has a role to play if investments in this technology are informed and driven by a human-centered approach. This requires nations to act to mitigate the risks that the nuclear technology enterprise generates and unevenly distributes across societies. It also demands that engineers become more self-aware of their role as “servants of societies” so that in their design of complex nuclear technological systems, they also account for critical social issues including equality, environmental sustainability, and intergenerational justice.

Two arguments emerge as central in the authors’ essay.

First, nuclear technological decisionmaking ought to be embedded into broader multidimensional societal processes. Throughout history, technological advancements have shaped societies, cultures, and nations. Almost always, new technologies have brought about significant benefits but equally altered social norms and environmental habitats. The acceleration and disruption of technological innovation, especially in the past century, have too often taken place in the absence of strong national mitigation strategies. Nuclear power plants, for example, while contributing to economic opportunities in the communities where they operate, have also heightened local safety risks and led to the production of nuclear waste, which remains today one of the most serious intergenerational environmental issues our societies have proved incapable of solving.

Verma and Djokić explain how the calculation of risks in the nuclear field all too often remains the purview of a small and often homogenous group of decisionmakers (whom the authors of a related Issues article call the gatekeepers). To make nuclear energy viable for the future, nuclear technological investments must be pondered and assessed based on broader factors including intergenerational justice, environmental sustainability, and community needs for economic equity and safety.

Second, to achieve a human-centered approach to nuclear technology, future generations of nuclear engineers must be educated in both the arts and the sciences. While Verma and Djokić praise their scientific training, they also acknowledge how their exposure to other disciplines, including the social sciences, has helped them become more conscious of their social responsibility as engineers.

In redesigning and rethinking how future nuclear engineers ought to be trained, the authors point to a radical rethink of the probabilistic risk assessment approach that dominates the field. While probabilistic risk assessment relies on the rules of logic and plausible reasoning, it also severely limits out-of-the-box thinking, experimentation, and creativity. An interdisciplinary education will provide nuclear engineers with a full toolbox of strategies and approaches, and make them more socially aware and therefore more effective in their own work as engineers.

Ultimately, the authors’ argument is powerful and reaches beyond the nuclear field. In a time of social and racial reckoning in the United States and around the world, they call for engineers to contribute to this historical moment by embracing a broader and deeper meaning of their role for the good of their communities, nations, and the world.

Executive Director, Project on Managing the Atom, Belfer Center for Science and International Affairs

Harvard Kennedy School

At a time when addressing climate change has refocused the world on the possibilities of nuclear energy and when commercial developers envision a new wave of twenty-first century nuclear products, Aditi Verma and Denia Djokić wisely ask the nuclear energy community to pause, reflect, and reconsider their approach to deploying nuclear technology.

Deploying nuclear technology is a socio-technical challenge whose success is far less likely if it is treated solely as a technology development challenge. The authors wisely describe the task in terms of their personal stories, recognizing that acceptance of the technology is the sum of many personal stories. Their article should stand up over time as a critical contribution to the philosophy of nuclear energy development.

Professor and Chair

Department of Nuclear Engineering & Radiological Sciences

University of Michigan

Data for the People!

In “The Path to Better Health: Give People Their Data” (Issues, Winter 2021), Jason Cohen makes an important contribution to the discussion of data privacy.

Data privacy in the midst of data integration, data organization, interoperability, and advanced analytics is table stakes for health care organizations—but challenging. The rush to commercialize personal health data represents a particular risk for underserved populations, who already suffer poor outcomes due to lack of access to health care. Any approach must be thoughtful about these populations and include critical review for algorithmic bias. Creating a framework for ownership of health data that empowers these populations is essential to ensuring that they receive the benefits of the data science revolution.

Principal and Founder, JDB Strategies

Chief Clinical Product Officer and Medical Director, Medical Home Network

Jason Cohen presents some interesting perspectives. Several points in particular jumped out at me.

It is very true that patients making poor decisions is at the center of chronic health problems. With the application of artificial intelligence and data, health tech companies are well positioned to make a difference and deliver personalized patient engagement programs. These engagement platforms can educate patients and drive behavior change to help them adopt healthy habits.

Patients owning their own data and being able to control who gets to use them is a great concept. If such tools are developed and adopted, patients certainly will have a lot more control and power. Some of this is already happening with Apple’s iPhone, Microsoft’s Office 365, and Google’s search queries, where the phone or device is keeping track of communications happening between individuals and the world around them. Big data analysis of the tone of the messages and the time spent on various apps or the content that is consumed can provide leading indicators of a person’s mental state.

Future use of such technology seems positioned to expand.

Founder & CEO

RediMinds Inc.

Jason Cohen makes several excellent points, but he does not mention the practical importance of data context. For example, radiologic images require skilled interpretation, and even the image characteristics, or “findings,” may then support only a probabilistic measure of the health or prognosis of the patient. Clinicians who use such information to guide patient management are well aware of the reasons the imaging was requested, the context in which such measurements are acquired, the ways findings might be interpreted by the local radiologist, and confounding factors specific to the patient, but the future data user doesn’t have that advantage. The data mining algorithms used by a third party years later may not be sufficiently sophisticated, or the information in the training data sets may not be available, to provide accurate support to the caregiver.

The article by Ben Shneiderman in the same issue, “Human-Centered AI,” discusses these challenges. It is not made clear how care will be better for everyone if each patient owns his or her data, but it seems obvious that countries that have nationalized patient data repositories, such as Norway, offer their citizens a better foundation for clinical practice.

Assistant Professor of Radiology, Retired

Harvard Medical School, Brigham and Women’s Hospital

A Higher Ed Maelstrom

Kumble R. Subbaswamy has provided a useful guided tour of American public universities in the wake of the pandemic wreckage. His narrative, not surprisingly titled “Public Universities,” part of the postpandemic special section (Issues, Winter 2021), reminds me of Edgar Allan Poe’s classic 1841 short story, “A Descent into the Maelström.” For Poe’s narrator, the only way to survive a furious ocean hurricane and sinking ship was to tread water, keep calm, and thoughtfully observe one’s own predicament. It’s a fitting metaphor for university presidents whose academic ships have been battered since the pandemic’s beginning last March. All constituents in American higher education would do well to read and to heed this remarkable profile of what public universities are facing.

Subbaswamy’s account is enduring because he avoids polemics, opting instead to provide thoughtful analysis about the endangered residential campus model for public higher education. Even before the COVID-19 crisis exposed and increased the liabilities of the traditional residential campus, we have had new models for innovative higher education. For example, I have been intrigued by the Universities at Shady Grove, launched in 2000. Located in Rockville, Maryland, near Washington, DC, this campus brings together nine of the state’s public universities, through the University System of Maryland, to cooperate in offering upper-division and graduate-level degree programs, most of which are attuned to the changing national economy and demand for educated professionals. It provides an alternative to the model of the rural state land grant university campus that started to flourish in the early 1900s.

Elsewhere there are comparable signs of innovation. But what happens to public universities that are mortgaged into a traditional residential campus? The problem is pronounced because a decade ago numerous presidents, boards, and donors pursued massive building campaigns, often marked by grand structures. The price tag often was construction debt between $1 billion and $2 billion, much of which will be paid by future generations of students who are charged mandatory fees. By 2015 some ambitious universities’ expansion projects were featured in national media, a publicity meteor that was difficult to sustain—and now is difficult to afford. The high-stakes gamble by some aspiring public university presidents was that this was a way to transform a provincial institution into a prestigious architectural showcase. Less evident is whether these projects provided the right infrastructure for scientific research. So, even though the traditional grand campus may no longer be necessary or effective, the nation is stuck with these monuments that perpetuate American higher education’s “edifice complex.”

Furthermore, in communities ranging from small towns to major cities, a college or university often is the largest landowner and employer. That powerful presence brings responsibility to institutional leaders in renegotiating “town and gown” relations. If all that campus real estate and those magnificent new buildings are no longer necessary, how ought they be reconfigured for appropriate new uses? What do we now identify as the essentials of a college education and degree? Thanks to Chancellor Subbaswamy’s thoughtful essay, we have an invitation to a great conversation that can generate light as well as heat in revitalizing public universities in the postpandemic era.

University Research Professor

University of Kentucky

Author of American Higher Education: Issues and Institutions and A History of American Higher Education

Making Roads Safer for Everyone

While I might quibble with a few of the details of “New Rules for Old Roads” (Issues, Winter 2021), by Megan S. Ryerson, Carrie S. Long, Joshua H. Davidson, and Camille M. Boggan, I agree with the basic premise: the way we measure safety for pedestrians and bicyclists is inadequate and ineffective compared with a proactive approach.

For example, pedestrian safety research has consistently found that people walking are more likely to be killed on higher-speed, multilane roadways than in other environments. High Injury Networks tend to show that these roadway types are also problematic for bicyclists and motorists. Yet instead of proactively addressing known risky road types, in many cases transportation professionals wait for, as the authors note, a certain number of injuries a year or overwhelming demand in order to justify inconveniencing drivers with countermeasures that result in delay. Even when changes are made, they often occur at spot locations, rather than throughout a system.

Yet, making spot changes to a system without addressing the root cause of the problem only kicks the can down the road. Additionally, from an outside perspective, prioritizing the people already protected in climate-controlled metal boxes over those who are unprotected—particularly when the former disproportionately cause harm via air and noise pollution and injury, and the latter may be unable to drive, whether due to age, ability, income, or choice—seems questionable at best. The premise of prioritizing the driver is thick with inequity, yet it is the backbone of our current system.

The authors argue that part of the problem is a lack of consistent metrics to adequately measure the experiences of people walking and bicycling, and I welcome their data-driven examination of stress measures for bicyclists in various environments. This kind of research can augment crash data analysis and guide the design of user-responsive roadway environments and countermeasures, such as the protected bike lanes measured in the study, before additional crashes occur. At the same time, we should avoid creating rigorous requirements for research to change standards when that rigor was not met when creating the initial standard. There is power in simply asking people about the types of facilities they want for walking and bicycling and where they feel safe and unsafe, and then believing and prioritizing those perspectives, which are often consistent between studies. People inherently want safe, comfortable, and convenient access, and are clear about where those needs are met or not.

Additionally, more recent efforts to examine safety systemically, promoted by the Federal Highway Administration and aided by research from the National Cooperative Highway Research Program, have developed methods to analyze patterns in crash data that can allow for more holistic safety solutions. These efforts identify combinations of features that tend to be associated with crashes, allowing cities to proactively address them with retrofits or countermeasures before additional crashes occur.
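To make the systemic approach concrete, the following is a minimal sketch, in Python, of the kind of pattern screen these efforts rely on: grouping past crashes by roadway characteristics to surface over-represented feature combinations. The file name and column names are hypothetical placeholders, not any agency’s actual schema.

    # Minimal sketch: screen crash records for over-represented combinations of
    # roadway features, so those road types can be treated proactively.
    import pandas as pd

    crashes = pd.read_csv("crashes.csv")  # hypothetical file, one row per crash

    pattern_counts = (
        crashes
        .groupby(["road_type", "speed_limit_mph", "num_lanes", "has_bike_lane"])
        .size()
        .reset_index(name="crash_count")
        .sort_values("crash_count", ascending=False)
    )

    # The top rows flag feature combinations that may warrant retrofits or
    # countermeasures before additional crashes occur.
    print(pattern_counts.head(10))

A fuller screen would normalize by exposure, such as miles of each road type and traffic or pedestrian volumes, rather than rely on raw counts, but the grouping logic above is the core of the method.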

Ultimately, the nation needs new design standards that reduce the need for studies for each city or roadway. Through incorporating biometric, stated preference, near miss, and crash studies into a systemic effort, we can identify high-risk road types and create metrics and design standards to ensure that high-risk roadways are transformed to be safe and comfortable for all users over time.

Assistant Research Professor, School of Geographical Sciences and Urban Planning

Arizona State University

Owner, Safe Streets Research & Consulting

Missing Millions

Reflecting on William E. Spriggs’s article, titled “Economics,” part of the postpandemic special section (Issues, Winter 2021), led me to focus on his main point that “modern economics … greatly rests on a host of assumptions.” In the context of the novel coronavirus pandemic, Spriggs argues that the crisis revealed several shortcomings in the assumptions and models that economists traditionally use for decisionmaking—assumptions that led to missed opportunities and perhaps negative impacts on the health and well-being of the nation’s workforce.

Yet there are many extensions to traditional models that include the interdisciplinary work between economists and psychologists (neuroeconomics), economists and political scientists (political economy of digital media), economists and computer scientists (data science), and economists and medical practitioners (health economics and analytics). Research in these areas has led to breakthroughs that get us closer to solutions to the problems related to the human condition. However, even with these tools there is one major shortcoming beyond assumptions and models: the paucity of data representing all residents in America.

There is one major shortcoming beyond assumptions and models: the paucity of data representing all residents in America.

The “missing millions” is a concept that has emerged in the discussion about the need for greater diversity, equity, and inclusion in science, technology, engineering, and mathematics—the STEM fields. An extension of this missing millions concept in the COVID-19 pandemic era relates to the lack of access to health and communications services for millions of marginalized residents. A recent New York Times article titled “Pandemic’s Racial Disparities Persist in Vaccine Rollout” stated that “communities of color, which have borne the brunt of the Covid-19 pandemic in the United States, have also received a smaller share of available vaccines.” More importantly, the article stated that the data were inconsistent and that the full accounting of individuals of various ethnicities was unknown, noting that “in some states as much as a third of vaccinations are missing race and ethnicity data.”

Whatever the mea culpa Spriggs’s article offers on behalf of economists for the gaps in economic analysis rooted in false assumptions, the more important point is that our empirical analyses, policies, and implementation of those policies are grossly inadequate because of gaps in data collection and accountability. The nation can do much better at protecting all members of the workforce if we can deploy the vaccine, as well as clean water, energy-saving technologies, job opening announcements, and other public goods, all of which rely on knowing the magnitude of these problems in underserved communities. Models and algorithms that decisionmakers rely on have limited efficacy because of the missing millions problem. The invisible people, not the invisible hand, are the problem to be solved. How can we make better economic policy if everyone isn’t counted?

Dean, Ivan Allen College of Liberal Arts

Georgia Institute of Technology

Think About Water

An Ecological Artist Collective

Think About Water is a collective of ecological artists and activists who got together to use art to elevate the awareness and discussion of water issues. Created by the painter and photographer Fredericka Foster in early 2020, the collective was intended to celebrate, as the organizers describe it, “our connection to water over a range of mediums and innovative projects that honor this precious element.” Think About Water is a call to action that invites viewers to a deeper engagement with the artwork. 

Lisa Reindorf, Tsunami City, 2020. Oil and acrylic gel on panel, 40 x 60 inches.

In her work, Lisa Reindorf combines knowledge from architecture and environmental science. Her paintings examine the environmental impact of climate change on water. In aerial-view landscapes, she creates interpretations of coastal areas, in particular rising seas.

The collective’s first group exhibition is titled Think About Water. Curated by collective member Doug Fogelson, the exhibit was presented in virtual space through an interactive virtual reality gallery. Artists included in the exhibit were Diane Burko, Charlotte Coté, Betsy Damon, Leila Daw, Rosalyn Driscoll, Doug Fogelson, Fredericka Foster, Giana Pilar González, Rachel Havrelock, Susan Hoffman Fishman, Fritz Horstman, Basia Irland, Sant Khalsa, Ellen Kozak, Stacy Levy, Anna Macleod, Ilana Manolson, Lauren Rosenthal McManus, Randal Nichols, Dixie Peaslee, Jaanika Peerna, Aviva Rahmani, Lisa Reindorf, Meridel Rubenstein, Naoe Suzuki, Linda Troeller, and Adam Wolpert.

Fredericka Foster, River Revisited, 2017. Oil on canvas, 40 x 60 inches.

Fredericka Foster has been painting the surfaces of moving water in their infinite variety for years. She believes that painting, using tools of color and composition, can be an aid to societal change: “Art accesses another way of knowing, and it takes both rationality and emotional connection to create lasting change.”
Ilana Manolson, Current, 2019. Acrylic on Yupo paper, 69 x 75 inches.

Artist and naturalist Ilana Manolson finds herself drawn to the edges of swamps, ponds, rivers, and oceans. “As water changes, it changes its environment whether through erosion, flooding, nutrition, or drought. And what we as humans do upstream, will, through the water, affect what happens downstream.”
Linda Troeller, Radon Waterfall, Bad Gastein, Austria, 2015. Photograph, 16 x 20 inches.

Linda Troeller is interested in water as a healing power. Bad Gastein, Austria’s thermal waterfall, was first referred to in writing in 1327 as “medicinal drinking water.” According to Troeller, “It is very fresh, crystal-clear—the droplets contain radon that can be absorbed by the skin or through inhalation or from drinking from fountains around the town.”
Rosalyn Driscoll, River of Fire, 2011.

Rosalyn Driscoll writes of her work, “I explore the terrain of the body and the Earth by making sculptures, installations, collages and photographs that connect people to their senses, the elements, and the natural world. My interest in bodily experience and sensory perception led to making sculptures that integrate the sense of touch into their creation and exhibition.”

For more information about the collective and the show, visit www.thinkaboutwater.com. Images courtesy of Think About Water and the individual artists. 

COVID and Disability

In her article, “Time,” part of the postpandemic special section (Issues, Winter 2021), Elizabeth Freeman observes that the COVID-19 pandemic has drawn us all into the alternate temporality that the disability community names as “crip time.” Perhaps the most relevant framework is that of chronic illness, whose very nomenclature encodes temporality, as in the Twitter hashtag coined by the activist Brianne Benness, #NEISvoid (No End In Sight Void), an apt motif for this pandemic year.

Yet some return will arrive, if not to normal, then to a world beyond the crisis stage of the pandemic. What will this new world look like? It will be profoundly shaped by disability alongside other social categories such as race, gender, and class. Disability is not a mere matter of medical defect or rehabilitative target, but a complex of cultural, economic, and biopsychosocial factors in which “disability” materializes at the point of interaction between individuals and environments. Thus, for example, a wheelchair user is perfectly able so long as the built environment includes ramps and elevators and the social environment is inclusive. This crucial truth, so often overlooked in narrowly medical understandings of disablement, must inform us moving forward.

Disability is not a mere matter of medical defect or rehabilitative target, but a complex of cultural, economic, and biopsychosocial factors in which “disability” materializes at the point of interaction between individuals and environments.

We must at last reckon with the full range of disability’s social and cultural meanings. COVID-19 has been devastating to disabled people. In early 2021, the United Kingdom’s Office for National Statistics reported that 60% of the country’s COVID-19 deaths thus far were of people with disabilities. Yet the UK’s disabled population has not been prioritized for vaccination, and disabled people were long excluded from vaccine priorities in the United States. Clearly forces are at work beyond the logics of science, as the weight of the cultural stigma of disability means that our lives are quite literally seen as less valuable.

Meanwhile, we are on the cusp of a vast explosion in the disabled population in the United States. “Long COVID,” as it is termed, is already producing a range of disabling chronic illnesses, causing such diverse disorders as cardiac damage, neurological dysfunction, chronic pain, and brain fog, often affecting previously healthy young people. As reported by JAMA Cardiology, a stunning 30% of Ohio State University football players who had mild or asymptomatic cases of COVID-19 were found to have significant heart damage afterward. And already people with long COVID in the United States are contending with the medical doubt and struggle for basic survival that typifies the chronically ill experience.

As after each of the nation’s major wars, a rapid expansion in the disabled population offers both challenge and opportunity to advance new technologies, reimagined infrastructure, and cultural recognition of the range of human abilities. Such innovations benefit disabled and nondisabled people alike. Will we allow our deeply inadequate disability support structures to totally collapse under the weight of long COVID? Or will we seize this opportunity to remake those structures for everyone’s benefit? Disabled people must be at the table making these decisions about our lives, but it is crucial that all who seek a more equitable and sustainable society join us there.

Associate Professor of Disability Studies, English, and Gender and Women’s Studies

University of Wisconsin-Madison

Innovating Nurses

In “Innovating ‘In the Here and Now’” (Issues, Winter 2021), Lisa De Bode shares several accounts of nurses in the United States who leveraged their own innovative behaviors to solve problems for the benefit of their patients’ health during the COVID-19 pandemic. Nurses developed innovative workarounds at scale as a result of significant unmet needs due to a lack of sufficient and available resources for their hospitalized patients and themselves.

While workarounds are not new to nurses, as Debra Brandon, Jacqueline M. McGrath, and I reported in a 2018 article in Advances in Neonatal Care, the circumstances of the pandemic are new to us all. We have not seen a health crisis of this magnitude in over 100 years. The COVID-19 pandemic revealed the many systemic weaknesses in the nation’s health care delivery system. De Bode shares a few aspects of how those systemic weaknesses revealed unmet needs affecting nurses’ ability to provide quality care. Left to their own devices in response to these pervasive unmet needs, nurses amplified their own innovative behaviors to create workarounds at scale.

Nurses developed innovative workarounds at scale as a result of significant unmet needs due to a lack of sufficient and available resources for their hospitalized patients and themselves.

I am delighted to see nurses’ innovative behaviors highlighted and shared with the world. I am also grateful for how De Bode so eloquently integrates the historic role of nurses in the 1918 Spanish flu pandemic: “nursing care might have been the single most effective treatment to improve a patient’s chances of survival.” This year, 103 years later, nurses were voted the most trusted profession for the 19th year in a row. Thus, the value of nurses to the public’s health has been sustained for over a century. Yet we continue to expect nurses to work around system-level limitations within health care organizations instead of recognizing how these workarounds place nurses, patients, and their families at risk for suboptimal care and the potential for medical errors.

To innovate is to address the unmet needs of a population in ways that bring positive change through new products, processes, and services. De Bode’s article reveals a population of people, the nursing workforce, who have endured significant unmet needs for an extended period with no visible end in sight. As a profession, an industry, and a society, we cannot ignore that nurses are human beings, too, also in need of care and resources.

If nurses do not have what they need to provide quality care for patients, then time is unnecessarily spent first solving for the unmet need before caring for the patient. Researchers have found that time spent on workarounds can be upward of 10% of each nurse’s shift, a factor likely contributing to symptoms of burnout. Months before the pandemic, the National Academy of Medicine reported in Taking Action Against Clinician Burnout that 34% to 54% of nurses were experiencing symptoms of burnout.

This empirical data combined with the enduring COVID-19 pandemic should be more than enough for our profession and the health care industry to recognize the need to reevaluate how we invest in our nurses and the environment in which they deliver care. We may be able to work around a lack of equipment and supplies, but we cannot risk working around a lack of nurses in the workforce.

DeLuca Foundation Visiting Professor for Innovation and New Knowledge

Director, Healthcare Innovation Online Graduate Certificate Program

University of Connecticut School of Nursing

Founder & CEO, Nightingale Apps & iCare Nursing Solutions

Nursing has long been a poorly respected, poorly paid, and high-risk profession. Historically in Europe and North America, nurses were volunteers from religious denominations; in other societies, nurses typically were family or community caregivers. Even as nursing professionalized and added requirements for classroom education and clinical training, it remained lower status than other medical disciplines. Numerous studies have tracked the detrimental impacts of this dynamic on patient outcomes; in extreme but strikingly frequent cases, intimidation by surgeons has prevented nurses from speaking out to prevent avoidable medical errors.

As Lisa De Bode describes, nurses nevertheless have played a central role as innovators throughout history. She cites Florence Nightingale’s new guidelines on patient care and the efficacy of nursing during the 1918 flu pandemic before noting that nursing generally is considered a field of “soft” care that enables physicians and surgeons to invent “hard” tools, therapeutics, and other biomedical machinery. Yet as Jose Gomez-Marquez, Anna Young, and others in the Maker Nurse and broader nurse innovation communities have identified in recent years, nurses have been “stealth innovators” throughout history. Interestingly, this work was recognized within the profession at times. From 1900 to 1947 the American Journal of Nursing ran an “improvising” column of nurse innovations that met criteria of efficacy, practicality, and not creating new risks to patients or attendants. After 1947, the journal ran a regular series to share innovations, “The Trading Post,” which included sketches, lists of materials, and recipes. Ironically, as nursing professionalized, recognition of the tinkering mindset and peer-to-peer sharing of ideas declined.

De Bode’s article provides diverse examples of rapid response, nurse-originated innovations during the ongoing COVID pandemic. She also observes and subtly pushes against definitions of innovation that are based solely on “things,” such as pharmaceuticals and medical devices. Innovations—and inventions—that originate from nurses typically fall into vaguely classified categories of “services” and “care.” They aren’t patentable, reducible to products that can be licensed to other clinics, or the basis for making a pitch deck to present to venture capitalists. Like the invention of hip-hop, the creation of new clothing styles by individuals in the Black community, and the work of thousands of inventors who are Black, Indigenous, or people of color in low-status professions, these advances are not treated as property of the inventor and often are not archived and celebrated as breakthroughs.

Just as roughly 85% of the matter in the universe is made up of unobserved dark matter, we ignore the majority of the innovations that ensure that hospitals function or that myriad other aspects of our daily lives actually improve year on year. Ironically, even as the United States celebrates itself as an innovation-based economy and advocates for stronger intellectual property systems worldwide, it ignores the majority of its domestic innovations. A reset in how we define “inventor” and which innovators we resource with funding and recognition is overdue.

Director, Lemelson Center for the Study of Invention and Innovation

Smithsonian Institution

Lisa De Bode has cast a critical spotlight on the role of innovation undertaken by nurses, particularly within the crisis of the COVID pandemic. Many nurses would not consider themselves inventors or entrepreneurs, nor do many others in the health system—but in fact they often are. Nurses are commonly considered the doers, executing the plans of others, and for the most part this is true. As De Bode explains, nurses often engage in “workarounds,” tailoring approaches designed for them, not designed with them or by them.

Many nurses would not consider themselves inventors or entrepreneurs, nor do many others in the health system—but in fact they often are.

But the fact is that many nurses devise innovative approaches and designs. As innovators, nurses can drive changes in systems and processes that impact care delivery and patient outcomes and improve the working life of nurses and other health professionals. Increasing collaborations with patients, their families, health providers, and members of other disciplines, such as engineers, demonstrate significant promise. De Bode has created a window into the working lives of nurses. Listening to their views and opinions and leveraging their expertise is vital to solving the complex problems of our health systems.

For decades nurses have been voted the most trusted profession. Clearly, our patients value us. So it is important that those who design and fund our institutions and models of care listen to the voices of nurses and their advocacy for patients. The impacts are potentially transformational.

Dean

Johns Hopkins School of Nursing

The Importance of a Computer Science Education

In “A Plan to Offer Computer Science Classes in All North Carolina High Schools” (Issues, Winter 2021), Lena Abu-El-Haija and Fay Cobb Payton make a compelling case for how to improve the way computer science (CS) is taught in high schools. Here we want to extend their insightful plans by focusing on how the authors’ well-stated goals can be achieved through a culturally responsive computing lens. We suggest four points of consideration when implementing their CS education plans for Black, brown, and other underserved students.

First, culturally responsive computing, as a frame for developing learners’ CS competencies through asset building, reflection, and connectedness, should inform barometers of success for CS programs. Thus, the proposed high school CS education initiatives should not mimic college programs—that is, they should not measure their effectiveness based on where students go (e.g., employment at top tech companies) but on what students do once they arrive there. Achievement markers should shift to focus on how students use their computing knowledge as a tool to solve challenges affecting them and their communities. We know from developing and implementing our own culturally responsive CS programs (e.g., COMPUGIRLS) that success comes only when participants have space and resources to use their newly acquired technology skills in culturally responsive and sustaining ways.

Second, the culturally responsive frame can further inform the curriculum of the proposed high school CS programs. As an early-career Black woman in computing, I (Stewart) can confirm that current CS education focuses on domain knowledge and how to build systems. Beyond an ethics course late in my undergraduate program, there was little emphasis in my training on the social context surrounding technology, including questions of when and why we build systems. Questions of who the technology might affect and whether the technology yields acceptable justice-oriented outcomes are rarely posed in CS programs. Although the human-computer interaction community pursues answers to these questions, all aspects of the technology-creation pipeline, and by extension CS education, need to critically reflect on these and other contextualized interrogatives.

Achievement markers should shift to focus on how students use their computing knowledge as a tool to solve challenges affecting them and their communities.

Third, culturally responsive computing needs to be embedded in all aspects of the proposed plans. To achieve this, teacher training must include far more than increasing educators’ CS competencies. Computer scientists, educators, and social scientists should collaboratively design preparation programs (and ongoing professional development) that equip teachers with the knowledge of culturally responsive pedagogy. Computer scientists alone cannot ignite the necessary “evolution” the authors describe.

Fourth, and finally, to achieve a culturally responsive vision of CS education, equitable allocation of funding is crucial. Resources must be distributed in a way that considers the sociohistorical realities of racist policies that led to the marginalization of Black and brown students in education, in general, and in computer science, in particular. For example, districts that are the result of redlining should receive more resources to implement CS education programs than districts that benefited from this practice.

In sum, we, too, call on policymakers to apply a culturally responsive, justice-focused perspective to these CS education programs in order to empower the voices of Black and brown innovators. To do anything else will ensure the nation remains limited by its innovations.

Postdoctoral Fellow, Human-Computer Interaction Institute

Carnegie Mellon University

Professor, School of Social Transformation

Executive Director, Center for Gender Equity in Science and Technology

Arizona State University

Lena Abu-El-Haija and Fay Cobb Payton lay out both a strong argument and solid steps for why and how computer science (CS) can be a part of every student’s educational pathway. The authors share research describing the lack of quality CS education—especially for Black and Brown students—that poses problems for both the future of North Carolina’s children and the state’s economy (which depends heavily on tech companies in Research Triangle Park). Building on the momentum of recent efforts to address these issues, the authors call for actions that move beyond “episodic intervention” toward “comprehensive change” with a designated CS ambassador to oversee regional implementation, CS accountability measures for rating school success, and more.

Their suggestions are brilliant, much needed, and could set a valuable example for other states nationwide. They also inspired the following questions. First, how can statewide plans ensure buy-in across the educational landscape? I wondered if their plan might allow space for a multistakeholder committee, consisting of students, teachers, administrators, counselors, researchers, policymakers, and industry professionals, to work with the CS ambassadors. My own state, California, has formed the CSforCA multistakeholder coalition, ensuring that diverse perspectives can inform what decisions get made, for whom, and for what purpose toward sustaining long-term local implementation.

California has formed the CSforCA multistakeholder coalition, ensuring that diverse perspectives can inform what decisions get made, for whom, and for what purpose toward sustaining long-term local implementation.

Relatedly, how can we elevate students’ voices—and particularly those of populations underrepresented in computing—toward shaping more meaningful CS education experiences? Students know best about what motivates their engagement, yet rarely are they invited to shape the direction of schooling. As the authors astutely note, “cultural context, competency, and relevancy in the teaching of the subject are key.” How can youth help drive the movement toward exactly this kind of CS education?

I also believe including diverse stakeholders would ensure that the plan’s school rating system adequately accounts for the different kinds of hurdles that low-resource schools face and wealthier schools don’t, and for how those hurdles affect the implementation of CS education.

Additionally, economic drivers for CS education are valuable for gathering diverse communities behind computing education; almost everyone agrees that all people deserve to thrive professionally and financially. However, our research focused on students’ perspectives in CS education reveals that youth are thinking about more than their future careers. They are asking how computing can solve challenging problems that negatively impact communities. CS is a form of power; technology shapes how we communicate, think, purchase goods, and so on, while social media and newsfeeds influence our mental health, ethical convictions, voting habits, and more. Yet CS continues to be controlled by a population that does not reflect the diversity of experiences, values, and perspectives of the nation’s low-income, Black, Brown, Indigenous, female, LGBTQ, disabled, and minoritized communities.

The authors emphasize that a CS education plan is needed to ensure greater diversity in computer science fields. But we also have a moral imperative to question the ethical implications of our increasingly computerized world and prepare our youth to do the same, regardless of whether they pursue computing careers.

Director of Research

UCLA Computer Science Equity Project

Lena Abu-El-Haija and Fay Cobb Payton offer a compelling and comprehensive plan for moving forward. Too few girls, too few Black and Latino students, have access to and participate in computer science courses and pursue college majors and careers in the field. Increasing piecemeal access to computer science one school or district at a time won’t reverse these trends, and the authors are right to call for more comprehensive action. To extend their important argument, I offer one additional rationale for this course of action.

Abu-El-Haija and Cobb Payton rightly observe that there is substantial economic opportunity available to young people with computer science skills. This argument fits within the dominant conception of schools within American public discourse: schools as sites of workforce training and labor market preparation. But the earliest arguments from public school advocates such as Thomas Jefferson and Horace Mann were not fundamentally economic, but civic. Communities fund public schools because a common preparation for all young citizens offers the brightest possible future for our shared democracy.

US democracy is strongest when it most comprehensively represents its citizenry, and right now, the field of computer science is woefully out of sync with the broader population. Multiple studies of the demographic makeup of the largest technology companies in the United States reveal that these companies are overpopulated with white and Asian men, and these concentrations are more pronounced when analyses focus on engineering jobs. When the digital infrastructure of society is developed by people who fail to represent the full breadth and diversity of the nation, we cannot be surprised when problems and disasters ensue.

A rapidly growing body of scholarship reveals a wide variety of ways that new technologies fail to serve all users: facial recognition tools that can’t identify dark-skinned faces, language technologies trained on datasets filled with bias and prejudice, pregnancy tracking apps with no mechanism for responding meaningfully and compassionately to miscarriages. The list goes on and on.

When the digital infrastructure of society is developed by people who fail to represent the full breadth and diversity of the nation, we cannot be surprised when problems and disasters ensue.

It isn’t the case that women or people of color are not interested in opportunities in computing. Indeed, in their earliest days, computing and programming were seen as “women’s work,” requiring attention to detail, organization, and persistence. But then, throughout the 1980s, especially as personal computers entered the marketplace, computing was deliberately marketed to white boys, the composition of graduate programs changed dramatically, and the conditions for our current industry became locked in place: women and minoritized people who tried to enter the computing field faced the twin challenges of learning a complex and demanding field while simultaneously overcoming their outsider status.

We will live in a better society when our computational tools—increasingly essential to our markets, democracy, and social lives—are built by people from all backgrounds and all walks of life. The most promising pathway to that better future, as Abu-El-Haija and Cobb Payton suggest, involves giving all young people an early introduction to computer science and supporting diverse students beyond that introduction.

Associate Professor of Digital Media

Massachusetts Institute of Technology

Director, MIT Teaching Systems Lab

Why Buy Electric?

It is true that the United States, once the global leader in electric vehicles, is falling behind China and Europe, as John Paul Helveston writes in “Why the US Trails the World in Electric Vehicles” (Issues, Winter 2021). The policies the author references in China and Norway have an underlying theme: they make gasoline vehicles more expensive and less convenient to own compared with electric vehicles. In the United States, where vehicle purchase taxes and gas prices are very low, buyers have no reason not to purchase a gasoline car. Every time a household purchases a new gasoline vehicle, that vehicle is more comfortable, more efficient, cheaper to run, safer, and better equipped than ever before. There is nothing pushing car buyers away from gasoline vehicles, and therefore consumers do not seek alternatives such as electric vehicles.

On the issue of US car dealerships not selling or promoting electric vehicles, we should look to automakers, not dealerships, to get to the source of this issue. Dealerships sell the vehicles that automakers produce; if automakers don’t produce electric vehicles in large numbers, dealerships cannot sell them in large numbers, and therefore won’t be motivated to train salespeople on selling electric vehicles.

There is nothing pushing car buyers away from gasoline vehicles, and therefore consumers do not seek alternatives such as electric vehicles.

On the issue of government regulation, the increase in electric vehicle sales in Europe is largely attributed to European emissions standards, which are difficult to comply with without selling electric vehicles. In the United States, federal fuel economy standards may not be sufficient to do this, and the zero emission vehicle (ZEV) sales mandate, which is often credited with the commercialization of electric vehicle technology, needs to be updated to encourage more electric vehicle sales.

Without more progressive fuel economy standards and more ambitious ZEV sales targets, coupled with higher gasoline prices and higher vehicle taxes, the United States may continue to lag behind Europe and China. As Helveston notes, a more aggressive approach is certainly needed.

Plug-in Hybrid and Electric Vehicle Research Center

Institute of Transportation Studies

University of California, Davis

Maintaining Control Over AI

A pioneer in the field of human-computer interaction, Ben Shneiderman continues to make a compelling case that humans must always maintain control over the technologies they create. In “Human-Centered AI” (Issues, Winter 2021), he argues for AI that will “amplify, rather than erode, human agency.” And he calls for “AI empiricism” over “AI rationalism,” by which he means we should gather evidence and engage in constant assessment.

In many respects, the current efforts to develop the field of AI policy reflect Shneiderman’s intuition. “Human-centric” is a core goal in the OECD AI Principles and the G20 AI Guidelines, the two foremost global frameworks for AI policy. At present, more than 50 countries have endorsed these guidelines. Related policy goals seek to “keep a human in the loop,” particularly in such crucial areas as criminal justice and weapons. And the call for “algorithmic transparency” is simultaneously an effort to ensure human accountability for automated decisionmaking.

There is also growing awareness of the need to assess the implementation of AI policies. While countries are moving quickly to adopt national strategies for AI, there has been little focus on how to measure success in the AI field, particularly in the areas of accountability, fairness, privacy, and transparency. In my organization’s report Artificial Intelligence and Democratic Values, we undertook the first formal assessment of AI policies taking the characteristics associated with democratic societies as key metrics. Our methodology provided a basis to compare national AI policies and practices in the present day. It will provide an opportunity to evaluate progress, as well as setbacks, in the years ahead.

The current efforts to develop the field of AI policy reflect Shneiderman’s intuition.

Information should also be gathered at the organization level. Algorithmic Impact Assessments, similar to data protection impact assessments, require organizations to conduct a formal review prior to deployment of new systems, particularly those that have direct consequences for the opportunities of individuals, such as hiring, education, and the administration of public services. These assessments should be considered best practices, and they should be supplemented with public reporting that makes possible meaningful independent assessment.

In the early days of law and technology, when the US Congress first authorized the use of electronic surveillance for criminal investigations, it also required the production of detailed annual reports by law enforcement agencies to assess the effectiveness of those new techniques. Fifty years later, those reports continue to provide useful information to law enforcement agencies, congressional oversight committees, and the public as new issues arise.

AI policy is still in the early days, but the deployment of AI techniques is accelerating rapidly. Governments and the people they represent are facing extraordinary challenges as they seek to maximize the benefits for economic growth and minimize the risks to public safety and fundamental rights during this period of rapid technological transformation.

Socio-technical imaginaries have never been more important. We concur with Ben Shneiderman in his future vision for artificial intelligence (AI): humans first. But we would go further with a vision of socio-technical systems: human values first. This includes appreciating the role of users in design processes, followed by the identification and involvement of additional stakeholders in a given, evolving socio-technical ecosystem. Technological considerations can then ensue. This allows us to design and build meaningful technological innovations that support human hopes, aspirations, and causes, whereby our goal is the pursuit of human empowerment (as opposed to the diminishment of self-determination) and using technology to create the material conditions for human flourishing in the Digital Society.

We must set our sights on the creation of those infrastructures that bridge the gap between the social, technical, and environmental dimensions that support human safety, protection, and constitutive human capacities, while maintaining justice, human rights, civic dignity, civic participation, legitimacy, equity, access, trust, privacy, and security. The aim should be human-centered value-sensitive socio-technical systems, offered in response to local community-based challenges that are designed, through participatory and co-design processes, for reliability, safety, and trustworthiness. The ultimate hope of the designer is to leave the outward physical world a better place, but also to ensure that multiple digital worlds and the inner selves can be freely explored together.

With these ideas in mind, we declare the following statements, affirming shared commitments to meeting common standards of behavior, decency, and social justice in the process of systems design, development, and implementation:

As a designer:

  1. I will acknowledge the importance of approaching design from a user-centered perspective.
  2. I will recognize the significance of lived experience as complementary to my technical expertise as a designer, engineer, technologist, or solutions architect.
  3. I will endeavor to incorporate user values and aspirations and appropriately engage and empower all stakeholders through inclusive, consultative, participatory practices.
  4. I will incorporate design elements that accept the role of individuals and groups as existing within complex socio-technical networks, and are sensitive to the relative importance of community.
  5. I will appreciate and design for evolving scenarios, life-long learning and intelligence, and wicked social problems that do not necessarily have a terminating condition (e.g., sustainability).
  6. I will contribute to the development of a culture of safety to ensure the physical, mental, emotional, and spiritual well-being of the end user, and in recognition of the societal and environmental implications of my designs.
  7. I will seek to implement designs that maintain human agency and oversight, promote the conditions for human flourishing, and support empowerment of individuals as opposed to replacement.
  8. I will grant human users ultimate control and decisionmaking capabilities, allowing for meaningful consent and providing redress.
  9. I will seek continuous improvement and refinement of the given socio-technical system using accountability (i.e., auditability, answerability, enforceability) as a crucial mechanism for systemic improvement.
  10. I will build responsibly with empathy, humility, integrity, honor, and probity and will not shame my profession by bringing it into disrepute.

As a stakeholder:

  1. You will have an active role and responsibility in engaging in the design of socio-technical systems and contributing to future developments in this space.
  2. You will collaborate and respect the diverse opinions of others in your community and those involved in the design process.
  3. You will acknowledge that your perspectives and beliefs are continually evolving and refined over time in response to changing realities and real-world contexts.
  4. You will be responsible for your individual actions and interactions throughout the design process, and beyond, with respect to socio-technical systems.
  5. You will aspire to be curious, creative, and open to developing and refining your experience and expertise as applied to socio-technical systems design.
  6. You will appreciate the potentially supportive role of technology in society.

As a regulator:

  1. You will recognize the strengths and limitations of both machines and people.
  2. You will consider the public interest and the environment in all your interactions.
  3. You will recognize that good design requires diverse voices to reach consensus and compromise through dialogue and deliberation over the lifetime of a project.
  4. You will strive to curate knowledge, and to distinguish between truth and meaning; and will not deliberately propagate false narratives.
  5. You will act with care to anticipate new requirements based on changing circumstances.
  6. You will be objective and reflexive in your practice, examining your own beliefs, and acting on the knowledge available to you.
  7. You will acknowledge the need for human oversight and provide mechanisms by which to satisfy this requirement.
  8. You will not collude with designers to install bias or to avoid accountability and responsibility.
  9. You will introduce appropriate enforceable technical standards, codes of conduct and practice, policies, regulations, and laws to encourage a culture of safety.
  10. You will take into account stakeholders who have little or no voice of their own.

Professor, School for the Future of Innovation in Society and the School of Computing and Decision Systems Engineering

Arizona State University

Director of the Society Policy Engineering Collective and the founding Editor in Chief of the IEEE Transactions on Technology and Society

Lecturer, School of Business, Faculty of Business and Law

University of Wollongong, Australia

Coeditor of IEEE Transactions on Technology and Society

Professor of Intelligent and Self-Organising Systems, Department of Electrical & Electronic Engineering

Imperial College London

Editor in Chief of IEEE Technology and Society Magazine

Reaping the Benefits of Agricultural R&D

In “Rekindling the Slow Magic of Agricultural R&D” (Issues, May 3, 2021), Julian M. Alston, Philip G. Pardey, and Xudong Rao focus on a critical issue: the decline in funding of agricultural research and development for the developing world. I believe, however, that they give too much credit to the public response to COVID-19. An equally proactive response to the climate crisis or the crisis of agricultural production/food security would in the end save more lives. That said, there are two further issues to note.

First, despite the scientific and long-term human importance of the Green Revolution, the experience taught us a great deal about the potential negative social consequences of new technologies. It taught us to distinguish development ideology from development technology; to apply the latter carefully in light of local power relations; and to think of rural development as more than just raising farm production, but also increasing rural populations’ quality of life. In many countries, for example, wealthy farmers or absentee landowners took advantage of labor-reducing, better-yielding technologies to increase productivity and production, but also to push smallholders and tenants off the land. (Although in part overcome over time, this lament was often heard in India and Africa.) We need to bear these lessons in mind as we go at it again so that new technologies do not have the same disruptive, inequality-increasing impact today as Green Revolution technologies had in earlier decades.

By the same token, a key weakness in the entire system has been its continued (and continuing) dependence on local governments. In a great many cases, rural problems—e.g., low farm-gate prices, lack of access to technology and knowledge, food insecurity itself—are the direct result of government policies. Today, countries are paying the price of such policies, as rural areas empty and onetime farmers give up in the face of increasing personal food insecurity. The loss of these farmers only increases national and international food insecurity in a world where food reserves are shrinking.

Governments and intergovernmental organizations deal with governments, so this may be beyond reach. But to the extent that external research designs can focus on the rural poor majority, constrain governments to put investment where it will help those in real need rather than just those in power, or both, it would be wonderful.

Cofounder and Codirector

Warm Heart Foundation

Phrao District, Thailand

Reliable Infrastructure

There’s much to applaud in Mikhail Chester’s “Can Infrastructure Keep Up With a Rapidly Changing World?” (Issues, April 29, 2021). I’d like to offer two reservations and one alternative from a different perspective—that of real-time operators in the control rooms of large critical infrastructures such as those for water and energy.

My first reservation is that the premature introduction of so-called innovative software has plagued real-time systemwide operations of key infrastructures for decades. Indeed, as long as there are calls for more and better software and hardware, there will be the need for control operators to come up with just-in-time workarounds for the inevitable glitches. System reliability, at least in large systemwide critical infrastructures, requires managing beyond design and technology.

Second, talk about trade-offs when it comes to the design and operation of these large systems is ubiquitous. Control operators and their wraparound support staff see real-time system demands differently.

As long as there are calls for more and better software and hardware, there will be the need for control operators to come up with just-in-time workarounds for the inevitable glitches.

Reliability in real time is nonfungible: it can’t be traded off against cost or efficiency or whatever when the safe and continuous provision of the critical service matters, right now. No number of economists and engineers insisting that reliability is actually a probability estimate will change the real-time mandate that some systemwide disasters must be prevented from ever happening. That disasters do happen only reinforces the public’s and the operators’ commitment to the precluded event standard of systemwide reliability.

What do these reservations (and others for that matter) add up to? Remember the proposed congressional legislation—introduced in 2007 and reintroduced in 2020—for the creation of a National Infrastructure Reinvestment Bank to fund major renovations of the nation’s infrastructure sectors? What was needed then and now is something closer to a National Academy for Reliable Infrastructure Management to ensure the tasks and demands of the rapidly changing infrastructures match the skills available to manage them in real time.

Coauthor of High Reliability Management (Stanford University Press, 2008) and Reliability and Risk (Stanford University Press, 2016)

AI and Jobs

During my tenure as program manager at the Defense Advanced Research Projects Agency, I watched with admiration the efforts of John Paschkewitz and Dan Patt to explore human-AI teaming, and I applaud the estimable vision they set forth in “Can AI Make Your Job More Interesting?” (Issues, Fall 2020). My intent here is not to challenge the vision or potential of AI, but to question whether the tools at hand are up to the task, and whether the current AI trajectory will get us there without substantial reimagining.

The promise of AI lies in its ability to learn mappings from high dimensional data and transform them into a more compact representation or abstraction space. It does surprisingly well in well-conditioned domains, as long as the questions are simple and the input data don’t stray far from the training data. Early successes in several AI showpieces have brought, if not complacency, a lowering of the guard, a sense that deep learning has solved most of the hard problems in AI and that all that’s left is domain adaptation and some robustness engineering.

But a fundamental question remains: whether AI can learn compact, semantically grounded representations that capture the degrees of freedom we care about. Ask a slightly different question than the one AI was trained on, and one quickly observes how brittle its internal representations are. If perturbing a handful of pixels can cause a deep network to misclassify a stop sign as a yield sign, it is clear that the AI has failed to learn the semantically relevant letters “STOP” or the shape “octagon.” AI both overfits and underfits its training data, but despite exhaustive training on massive datasets, few deep image networks learn topology, perspective, rotations, projections, or any of the compact operators that give rise to the apparent degrees of freedom in pixel space.
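As one concrete illustration of this brittleness, the fast gradient sign method perturbs every pixel a small step in the direction that increases the classifier’s loss, often flipping the prediction. The sketch below, in Python with PyTorch, uses a hypothetical model and inputs; it is illustrative, not a recipe tied to any particular system.

    # Minimal sketch of a fast-gradient-sign perturbation, showing how small
    # pixel changes can flip a brittle model's prediction.
    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, images, labels, epsilon=0.01):
        """Return adversarially perturbed copies of `images`."""
        images = images.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(images), labels)
        loss.backward()
        # Step each pixel slightly in the direction that increases the loss.
        return (images + epsilon * images.grad.sign()).detach()

    # Hypothetical usage: a classifier that reads a stop sign correctly may
    # label the perturbed copy as a yield sign.
    # perturbed = fgsm_perturb(model, stop_sign_batch, stop_sign_labels)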

Ask a slightly different question than the one AI was trained on, and one quickly observes how brittle its internal representations are.

To its credit, the AI community is beginning to address problems of data efficiency, robustness, reliability, verifiability, interpretability, and trust. But the community has not fully internalized that these are not simply matters of better engineering. Is this because AI is fundamentally limited? No, biology offers an existence proof. But we have failed our AI offspring in being the responsible parents it needs to learn how to navigate in the real world.

Paschkewitz and Patt’s article poses a fundamental question: how does one scale intelligence? Except for easily composable problems, this is a persistent challenge for humans. And this, despite millions of years of evolution under the harsh reward function of survivability in which teaming was essential. Could an AI teammate help us to do better?

Astonishingly, despite the stated and unstated challenges of AI, I believe that the answer could be yes! But we are still a few groundbreaking ideas short of a phase transition. This article can be taken as a call to action to the AI community to address the still-to-be-invented AI fundamentals necessary for AI to become a truly symbiotic partner, for AI to accept the outreached human hand and together step into the vision painted by the authors.

Former Program Manager

Defense Sciences Office

Defense Advanced Research Projects Agency

Technologists—as John Paschkewitz and Dan Patt describe themselves—are to be applauded for their ever-hopeful vision of a “human-machine symbiosis” that will “create more dynamic and rewarding places for both people and robots to work,” and even become the “future machinery of democracy.” Their single-minded focus on technological possibilities is inspiring for those working in the field and arguably necessary to garner the support of policymakers and funders. Yet their vision of a bright, harmonious future that solves the historically intractable problems of the industrial workplace fails to consider the reality seen from the office cubicle or the warehouse floor, and the authors’ wanderings into history, politics, and policy warrant some caution.

While it is heartening to read about a future where humans have the opportunity to use their “unique talents” alongside robots that also benefit from this “true symbiosis,” contemplating that vision through the lens of the past technology-driven decades is a head-scratcher. This was an era that brought endless wars facilitated by the one-sided safety of remote-control battlefields, and though there was an increase in democratic participation, it came in reaction to flagrant, technology-facilitated abuses that provoked outrage about political corruption (real and imagined) and motivated citizens to go to the ballot box—thought to be secure only when unplugged from the latest technology. We should also consider the technology promises of Facebook to unite the global community in harmony, or the Obama administration’s e-government technology initiative expanding access and participation to “restore public faith in political institutions and reinvigorate democracy.”

Their vision of a bright, harmonious future that solves the historically intractable problems of the industrial workplace fails to consider the reality seen from the office cubicle or the warehouse floor.

As to the advances in the workplace, they did produce the marvel of near-instant home delivery of everything imaginable. But those employing the technology also chose to expand the size of the workforce that drew low pay and few benefits, and relied on putting in longer hours and working multiple jobs to pay the rent—all while transferring ever-greater wealth to the captains of industry, enabling them to go beyond merely acquiring yachts to purchasing rockets for space travel.

Of course, it might be different this time. But it will take more than the efforts of well-meaning technologists to transform the current trajectory of AI-mediated workplaces into a harmonious community. Instead, the future now emerging tilts to the dystopian robotic symbiosis that the Czech author Karel Čapek envisioned a century ago. Evidence tempering our hopeful technologists’ vision is in the analyses of the two articles between which theirs is sandwiched—one about robotic trucks intensifying the sweatshops of long-haul drivers, and the other about how political and corporate corruption flourished under the cover of the “abstract and unrealizable notions” of Vannevar Bush’s Endless Frontier for science and innovation.

For technologists in the labs, symbiotic robots may be a hopeful and inspirational vision, but before we abandon development of effective policy in favor of AI optimization, let us consider the reality of Facebook democracy, Amazonian sweatshops, and Uber wages that barely rise above the minimum. We’d be on the wrong road if we pursue a technologist’s solution to the problems of power and conflict in the workplace and the subversion of democracy.

Professor of Planning and Public Policy, Edward J. Bloustein School

Senior Faculty Fellow, John J. Heldrich Center for Workforce Development

Rutgers University

John Paschkewitz and Dan Patt provide a counterpoint to those who warn of the coming AIpocalypse, which happens, as we all know, when Skynet becomes self-aware. The authors make two points.

First, attention has focused on the ways that artificial intelligence will substitute for human activities; overlooked is that it may complement them as well. If AI is a substitute for humans, the challenge becomes one of identifying what AI can do better and vice versa. While this may lead to increases in efficiency and productivity, perhaps the greater gains are to be had when AI complements human activity as an intermediary in coordinating groups to tackle large-scale problems.

The degree to which AI will be a substitute or complement will depend upon the activity as well as the new kinds of activities that AI may make possible. Whether the authors are correct, time will judge. Nevertheless, the role of AI as intermediary is worth thinking about, particularly in the context of the economist Ronald Coase’s classic question: what is a firm? One answer is that it is a coordinating device. Might AI supplant this role? It would mean the transformation of the firm from employer to intermediary, as has happened with ride-sharing platforms.

Greater gains are to be had when AI complements human activity as an intermediary in coordinating groups to tackle large-scale problems.

The second point is more provocative. AI-assisted governance anyone? Paschkewitz and Patt are not suggesting that Plato’s philosopher king be transformed into an AI-assisted monarch. Rather, they posit that AI has a role in improving the quality of regulation and government interventions. They provide the following as illustration: “An alternative would be to write desired outcomes into law (an acceptable unemployment threshold) accompanied by a supporting mechanism (such as flowing federal dollars to state unemployment agencies and tax-incentivization of business hiring) that could be automatically regulated according to an algorithm until an acceptable level of unemployment is again reached.”

This proposal is in the vein of the Taylor rule in economics, which aims to remove discretion over how interest rates are set. Under the rule, rates are pegged to the gap between the actual inflation rate and its target. AI would allow one to implement rules far more complex than this one, contingent on far more factors.
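
To make the flavor of such an outcome-triggered rule concrete, here is a minimal sketch in Python. It is purely illustrative: the function name, the unemployment threshold, the baseline transfer, and the sensitivity coefficient are all assumptions made for this example, not figures proposed by Paschkewitz and Patt or implied by the Taylor rule itself.

# Hypothetical outcome-based policy rule in the spirit of the Taylor rule.
# All names, thresholds, and coefficients below are illustrative assumptions.

def policy_adjustment(unemployment_rate: float,
                      acceptable_rate: float = 4.0,
                      baseline_transfer: float = 100.0,
                      sensitivity: float = 50.0) -> float:
    """Return the transfer (in arbitrary units) flowing to state unemployment
    agencies, scaled to the gap between measured unemployment and the
    legislated 'acceptable' threshold."""
    gap = unemployment_rate - acceptable_rate
    # Below the threshold, only the baseline flows; above it, funding rises
    # linearly with the gap. A real rule could depend on many more measured
    # factors, which is where AI-assisted implementation would come in.
    return baseline_transfer + sensitivity * max(gap, 0.0)

# Example: at 7.5% unemployment against a 4% threshold, the rule returns 275.0.
print(policy_adjustment(7.5))

The point of the sketch is not the particular numbers but the structure: the legislature fixes the outcome target and the response mechanism, and the measured gap, rather than fresh political negotiation, drives the adjustment.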

We have examples of such things “in the small”—for example, deciding whose income tax returns should be audited and how public housing should be allocated. Although these applications have had problems—say, with bias—the problems are not fundamental, in that one knows how to correct for them. However, for things “in the large,” I see three fundamental barriers.

First, as the authors acknowledge, it does not eliminate political debate but shifts it from the ex post (what should we do now) to the ex ante (what should we do if). It is unclear that we are any better at resolving the second kind of debate than the first. Second, who is accountable for outcomes with AI-assisted policy? Even for the “small” things, this issue is unresolved. Third, the greater the sensitivity of regulation to the environment, the greater the need for accurate measurements of the environment and the greater the incentive to corrupt it.

George A. Weiss and Lydia Bravo Weiss University Professor

Department of Economics & Department of Electrical and Systems Engineering

University of Pennsylvania

Building a Better Railroad

Carl E. Nash’s article, “A Better Approach to Railroad Safety and Operation” (Issues, Fall 2020), reflects an incomplete understanding of positive train control (PTC) technology, leading to misstatements about the PTC systems that have been put in place. Importantly, Nash’s assertion that full implementation of PTC is in doubt is simply false. The railroad industry met Congress’s December 31, 2020, deadline for implementing PTC systems as mandated by the Rail Safety Improvement Act of 2008.

The act requires that PTC systems be able to safely bring a train to a stop before certain human-error-caused incidents can occur. Recognizing that trains often operate across multiple railroads, the law also requires that each railroad’s PTC system be fully interoperable with the systems of other railroads across which a train might travel.

Nash believes the reason PTC was not completed earlier was money. It was not: the nation’s largest railroads have invested about $11 billion in private capital to develop this first-of-its-kind technology. PTC had to be designed from scratch to be a failsafe technology capable of operating seamlessly and reliably. This task was unprecedented. It took as long as it did to implement PTC because of the complexity of delivering on the promise of its safety benefits.

Nash also falsely equates rail operations with highways, implying that a system similar to Waze or Google Maps could work for rail operations. The two modes are not the same, and the level of precision necessary for a fully functioning PTC system is far more exacting than what helps you find the fastest route home.

PTC had to be designed from scratch to be a failsafe technology capable of operating seamlessly and reliably. This task was unprecedented.

Contrary to what Nash would have you believe, the predominant PTC system used by freight railroads and passenger railroads outside the Northeast Corridor does use GPS. Also contrary to what he stated, locomotives that travel across the nation are equipped with nationwide maps of PTC routes. The transponder system that Nash referred to is a legacy system limited to Amtrak’s Northeast Corridor and some commuter railroads operating in the Northeast, and it is used because the transponders were already in place.

Nash asserts that each railroad has its own PTC system. In fact, the freight railroads have collaborated on PTC, with the Association of American Railroads adopting PTC standards to ensure that there is no incompatibility as locomotives move across the railroad network.

Railroads are proud of their work to make PTC a reality and know that it will make this already safe industry even safer. What Nash does get right, though, is that PTC systems must be dynamic. They will continue to require maintenance and evolve to fulfill additional needs. Meeting the congressional deadline was not the end for PTC; it marked the beginning of a new, disciplined phase that promises to further enhance operations and improve efficiency. Armed with PTC and other cutting-edge technologies, the rail industry is poised to operate safely and efficiently, delivering for us all.

Senior Vice President-Safety and Operations

Association of American Railroads

On September 12, 2008, a Union Pacific Railroad freight train and a Metrolink commuter train collided in Chatsworth, California, resulting in 135 injuries and 25 fatalities. In response, Congress passed the Rail Safety Improvement Act of 2008, which mandated that each Class I railroad (comprising the nation’s largest railroads) and each entity providing regularly scheduled intercity or commuter rail passenger transportation must implement a positive train control (PTC) system certified by the Federal Railroad Administration (FRA). Each railroad was to install a PTC system on: (1) its main line over which 5 million or more gross tons of annual traffic and poison- or toxic-by-inhalation hazardous materials are transported; (2) its main line over which intercity or commuter rail passenger transportation is regularly provided; and (3) any other tracks the secretary of transportation prescribes by regulation or order.

On January 15, 2010, FRA issued regulations that require PTC systems to prevent train-to-train collisions, over-speed derailments, incursions into established work zones, and movements of trains through switches left in the wrong position, in accordance with prescribed technical specifications. The statutory mandate and FRA’s implementing regulations also require a PTC system to be interoperable, meaning the locomotives of any host railroad and tenant railroad operating on the same main line will communicate with and respond to the PTC system, including uninterrupted movements over property boundaries.

FRA has worked with all stakeholders, including host and tenant railroads, railroad associations, and PTC system vendors and suppliers, to help ensure railroads fully implement PTC systems on the required main lines as quickly and safely as possible. Since 2008, the Department of Transportation has awarded $3.4 billion in grant funding and loan financing to support railroads’ implementation of PTC systems.

Currently, 41 railroads are subject to the statutory mandate, including seven Class I railroads, Amtrak, 28 commuter railroads, and five other freight railroads that host regularly scheduled intercity or commuter rail passenger service. Congress set a deadline of December 31, 2020, by which an FRA-certified and interoperable PTC system must govern operations on all main lines subject to the statutory mandate.

As of December 29, 2020, PTC systems govern operations on all 57,536 route miles subject to the statutory mandate. In addition, as required, FRA has certified that each host railroad’s PTC system complies with the technical requirements for PTC systems. Furthermore, railroads have reported that interoperability has been achieved between each applicable host and tenant railroad that operates on PTC-governed main lines. The Federal Railroad Administration congratulates the railroads, particularly their frontline workers, as well as PTC system suppliers/vendors and industry associations, on this transformative accomplishment.

Director, Office of Railroad Systems and Technology

Federal Railroad Administration

COVID-19 and Prisons

As progressive prosecutors, we read “COVID-19 Exposes a Broken Prison System,” by Justin Berk, Alexandria Macmadu, Eliana Kaplowitz, and Josiah Rich (Issues, Fall 2020), with great interest. COVID-19’s rapid spread in prisons and jails across the nation has created epidemics within the pandemic. As the authors note, as of August, 44 of the 50 largest outbreaks nationwide had been in jails or prisons. At San Quentin State Prison in California, for example, the virus went from no cases to a 59% infection rate among those incarcerated there in less than two months.

In San Francisco, our office has worked hard with our justice partners to prevent a similar outbreak from occurring in our county jails. We listened to the advice of public health experts early in the pandemic and worked quickly to decarcerate to allow for the necessary social distancing inside our jails. We carefully reviewed every person in custody to determine if we could safely release them. Consistent with our office’s policy of ending money bail in San Francisco, we did not seek incarceration for nonviolent offenses. We granted early release to people who were close to completing their sentences, and we identified cases where we could offer plea bargains without jail time. We worked with probation to avoid jailing people for technical violations of supervision. We coordinated with the public defender’s office to help secure safe housing and reentry support for people leaving custody. And we delayed the filing of charges when there were no public safety concerns necessitating immediate action. On March 17, the day San Francisco’s mayor announced a shelter-in-place order, the city’s local jail population was around 1,100 people; by late April our jail population had dropped below 700—the lowest number in recent history.

Now more than ever prosecutors across the nation have a duty to work toward reducing the population in local jails and state prisons.

But, unfortunately, the story isn’t over. Our numbers have been creeping up again. It has become all too easy to feel complacent about the virus. With courts reopening and many businesses resuming some semblance of normalcy, the sense of urgency many of us felt back in March and April has dissipated. That is dangerous.

With the recent, rapid surge in cases, now more than ever prosecutors across the nation have a duty to work toward reducing the population not only in local jails but also in state prisons. It is easy for local prosecutors to shirk responsibility for the populations in prisons outside our counties. We should not; we must take responsibility for preventing the virus’s spread in jails, but we must also work to avoid prison sentences that contribute to mass incarceration without serving a public safety purpose. Prosecutors have a duty to promote public safety—that includes the safety, health, and well-being of those who live and work in jails and prisons. We commend the authors for emphasizing the ties between mass incarceration and public health.

District Attorney

Director of Communications/Policy Advisor

San Francisco District Attorney’s Office

As Justin Berk and coauthors illustrate, correctional facilities are at the center of the unprecedented COVID-19 public health crisis in the United States. They are hotspots of infection, especially where dormitory living cannot be avoided. In one large urban jail, during the peak of an outbreak, every infected person transmitted the virus, on average, to eight others. Because many of the conditions of correctional systems are not quickly fixed during a pandemic, decarceration is an essential and urgent strategy to mitigate transmission of the virus.

We recently cochaired a National Academies of Sciences, Engineering, and Medicine committee that found that while some jurisdictions have taken steps to reduce prison and jail populations since the onset of the pandemic, the extent of decarceration has been insufficient to reduce the risk of virus transmission in correctional facilities. Reductions in incarceration have occurred mainly as a result of declines in arrests for minor infractions, jail bookings, and prison admissions because of temporary closures of state and local courts, rather than from proactive efforts to decarcerate prisons and jails. There is little scope in current law for accelerating releases for public health reasons. Indeed, medical or health criteria for release, even in pandemic emergencies, are largely nonexistent at the state level, and highly circumscribed in the federal system.

As of November 2020, despite a 9% drop in the correctional population in the first half of the year, prison and jail populations were growing again, expanding the risks beyond the 9% of incarcerated people already infected and the more than 1,400 incarcerated people and correctional staff who had died. Because of the large racial and ethnic disparity in incarceration, prisons and jails have likely fueled inequality in infections. And because correctional facilities are connected to surrounding communities—staff move in and out, and detained individuals move between facilities—the outbreaks in correctional facilities are associated with community infection rates and have especially affected health care systems in rural communities.

Further, efforts to decarcerate have not been accompanied by large-scale support of community-based housing and health care needs that are critical to decarcerating in a way that promotes public health and safety. Correctional officials, collaborating with community programs, should develop individualized reentry plans including COVID-19 testing prior to release and assistance for housing, health care, and income support. To facilitate reentry planning, obstacles to public benefits faced by formerly incarcerated people should be removed. Improving the accessibility of Medicaid, food stamps, and rapid housing programs is particularly urgent.

Decarceration in the service of public health will require sustained effort by elected officials; correctional and health leaders at the federal, state, and local levels; and community health and social services providers. Conditions created by high incarceration rates in combination with the pandemic have disproportionately harmed low-income communities of color. Answering the challenge of the pandemic in prisons and jails by decarcerating would reduce community-level threats to public health, improve health equity, and provide a safer environment for public health emergencies of the future.

Associate Professor of Medicine (General Medicine)

Director, SEICHE Center for Health and Justice

Yale School of Medicine

Bryce Professor of Sociology and Social Justice

Codirector, Justice Lab

Columbia University

Any way you read the statistics or take in the graphics that Justin Berk and coauthors provide, the conclusion is the same: prisons, jails, and other institutions of incarceration in the United States are exploding with COVID-19 cases. With the entire system of incarceration and its related health care realities grounded in legacies of institutionalized racism, the numerical caseload phenomenon is both a driver and reflection of the racialized disparities in COVID-19 across the nation. The authors aptly note that the pandemic behind bars exposes many failures of our carceral system.

Some jurisdictions have responded by reducing the number of people who are confined in these crowded spaces that are ill-designed and ill-equipped to contain a contagion and care for people infected with a pathogen as virulent as the novel coronavirus. The authors, and many others, call this effort decarceration, which they define as “the policy of reducing either the number of persons imprisoned or the rate of imprisonment.”

Hinging radical reform of our carceral system on a pandemic appears opportunistic, and we should distinguish the numerical efforts from the philosophical ones.

But calling this numerical effort decarceration is misleading, and it potentially undermines the decades of work of prison abolitionists who have called for, independent of any infection, reformulating the way society approaches the notion of criminal behavior. Reducing the number of people behind bars in response to an infectious disease is depopulation, and is a sound public health measure. Decarceration would involve a complete rethinking of how society relies on punitive confinement as a means of social control, of managing poverty in the absence of a robust safety net, and of sustaining white supremacy. Hinging radical reform of our carceral system on a pandemic appears opportunistic, and we should distinguish the numerical efforts from the philosophical ones.

Make no mistake, however: depopulation as an urgent public health crisis response lays the groundwork for transforming the nation’s distinctly punitive and racist system of mass incarceration—sparked by the poignant questions Berk and colleagues pose. Many state prison systems have indeed reduced their populations, though modestly, and a number of counties have reduced jail admissions by ceasing arrests for some minor charges. These depopulation efforts have not led to increases in crime. If we can depopulate, then we can decarcerate.

The unfolding of the COVID-19 crisis behind bars has, as the authors note, “exposed numerous flaws” in our society. It has also exposed the porous connections between institutions of incarceration and surrounding communities. Most people think of jails and prisons as elsewhere, as cordoned off from society, and therefore find it easy not to think about what happens behind their thick walls. But this has never been the case. What happens behind those walls is happening within our communities, and what happens in our communities comes to bear on what happens in prisons and jails. The coronavirus travels freely between communities and institutions of incarceration—in the breath of workers and incarcerated people who come and go every day. The virus’s carceral travels provide a tragic but exemplary metaphor for why we all must care—and act—to reformulate the US criminal legal system.

Assistant Professor of Gynecology and Obstetrics

Johns Hopkins University School of Medicine

It is common to think of jails and prisons as islands, isolated and walled, distinct from the community, with real and imagined threats to the community sealed off and contained. In the public mind, there is something reassuring about that image.

The reality, though, is quite different. Jails and prisons are not islands. In fact, they have much more in common with bus stations. With inmate bookings, releases, and three shifts of staff, people are constantly coming and going. The vast majority of people who have been jailed within the past year are walking among us in the community every day. They are with us at work and play, and for a growing number of us they live in our homes and neighborhoods. As such, jails and prisons are an intimate part of the community.

We often justify society’s indulgence in mass incarceration by claiming that we are protecting public safety. But our thinking about public safety—and the public good—is too narrow. We often fail to consider the damage mass incarceration inflicts upon individuals, families, communities, the economy, and the public health. The COVID-19 pandemic has exposed the narrowness of our thinking. When the very structure and function of incarceration contribute to a public health threat, can we continue to justify our system in terms of public safety?

A system must be justified not by what we think or hope it does, but by what it actually does. At tremendous financial and personal cost, the US police, judicial, and prison system incarcerates more people per capita than any other country in the world. While the effectiveness of incarceration in deterring crime is debated, the evidence of its impact in other areas is much clearer. For example, the judicial system is especially effective in confining Black men, a textbook example of institutional racism. It is effective in confining people with mental illness and addiction within the walls of institutions ill-suited to treating these conditions. And we can now add a new adverse outcome. With per capita infection rates over five times higher than in the community, this institution-based congregate living system is a highly effective contributor to the spread of COVID-19 within the walls. And with the constant comings and goings of staff and inmates, jails and prisons have contributed to the spread of the virus within the surrounding communities. In other words, the very structure and function of the system have caused harm to public safety by accelerating a deadly pandemic.

The COVID-19 pandemic has revealed a narrowness to our thinking about a costly and inhumane system. A desire for a punitive approach has corrupted clearer thinking and lulled us into believing in the myth of the prison as an island. If we truly want to protect the health and safety of our communities, it is time to back away from the mythical thinking underlying our addiction to mass incarceration in favor of more effective approaches to societal ills.

Professor Emeritus of Clinical Medicine

University of California, Riverside

School of Medicine

“You Have to Begin by Imagining the Worst”

Janet Napolitano has held many distinguished leadership positions, most recently as the president of the University of California and before that as secretary of the Department of Homeland Security (DHS) and as two-term governor of Arizona. In a conversation just days before reports of a massive cyberattack on the Pentagon, intelligence agencies, national nuclear laboratories, and Fortune 500 companies, Issues in Science and Technology editor William Kearney asked Napolitano about the pandemic and how threats to the homeland have evolved in the 20 years since 9/11.

Janet Napolitano, professor of public policy at the University of California, Berkeley’s Goldman School of Public Policy, former governor of Arizona from 2003 to 2009, former US secretary of homeland security from 2009 to 2013, former president of the University of California system from 2013 to 2020.
Illustration by Shonagh Rae

You were probably better prepared than most university presidents to manage a crisis on the scale of a pandemic given your experience leading DHS, but can you describe the shock to the system at the University of California when COVID-19 hit?

Napolitano: It really affected us in two major ways. First, we are a large health care provider as well as a health care research enterprise, so we had to transform our hospitals to be basically COVID hospitals, not knowing how many patients we would be getting. We postponed a number of procedures in order to do that, and like all health care providers in the country, we were in a scramble for masks and PPE [personal protective equipment] and other things necessary to safely care for COVID patients. Our research laboratories also all basically converted to being COVID labs. We took quite a financial hit to our hospitals, and it’s going to take a while to catch up.

The second major impact was on the academic side where we had to turn on a dime and depopulate the campuses and convert to online remote learning. Faculty at all our campuses did a terrific job at that, so students could continue taking classes, making progress toward their degrees. Again, there was a financial implication in that we had to immediately refund more than $300 million in housing and dining fees, which was the right thing to do, but nonetheless that’s money out the door. I served as president until August 1, and throughout the summer we were working through various iterations to determine whether we could open the campuses in the fall. Could we return to in-person instruction? Could we put students back in the dorms? What kind of testing regimen would we need? How would we pay for that?

But as time went on, it became more and more clear that returning to in-person instruction was just not a viable option for the fall—and it looks like it won’t be in the spring either given that California just went back into shelter-in-place restrictions. So campuses have all adjusted, and classes continue to be taught, and students continue to make progress toward their degrees.

How can public research universities persevere through, and eventually recover from, the pandemic given the financial stress they were already under?

I think public research universities are part of the secret sauce of America. There are whole swaths of the American economy that derive from basic research that originated at these universities.

Napolitano: Well, first, I think the pandemic has illustrated to the American populace the value of science, most clearly through the rapid development of vaccines using mRNA technology, which is a relatively new technology. And it won’t surprise you to learn the number one thing we can do is provide more resources to public research universities. I think public research universities are part of the secret sauce of America. There are whole swaths of the American economy that derive from basic research that originated at these universities. Plus we’re training and educating the next generation of scientists. President-elect Biden is already indicating that he wants to put some serious money back into basic research, and I think a large part of that will go to public research universities, and that will be to everyone’s advantage.

In your 2019 book, How Safe Are We? Homeland Security Since 9/11, you emphasized the need for the United States to confront our real risks, not perceived ones. Pandemics were on your real list. You warned that the magnitude of a pandemic could be immense and that we remained ill prepared, which has tragically proven true. You also wrote that learning from mistakes is all too rare in government. So when it comes to lessons learned, where would you start?

Napolitano: I would start with evaluating how previous pandemics were handled; what went well, and what didn’t. At the beginning of the Obama administration, we had the H1N1 virus. We were lucky it turned out that it didn’t have a particularly high mortality rate. We were also lucky that it was a form of flu, not a new coronavirus, and therefore development of a vaccine went that much more quickly—although it still took a while to get the vaccine manufactured and begin mass distribution, which focused on children ages one to five, who were most susceptible. It became apparent then how much of the process for flu vaccine had been offshored. So recognizing that led to a lesson learned—the need to retain domestic research and production capacity for vaccines. 

I think when we go back and unpack what has happened with COVID, there will be volumes written about the US response, the very obvious mistakes that were made, and the deficiencies in our response. One can only hope that we get enough of the American population vaccinated in 2021 so that we can return to something approaching normal, but boy, we lost a lot of time and many, many lives unnecessarily.

How could the federal role in a pandemic be improved?

I think climate change is probably our number one national security risk. It affects us and affects the world in terms of persistent weather changes, increased extreme weather events, sea-level rise.

Napolitano: The federal government has a role in leading a national response and coordinating amongst federal agencies, obviously, but also with states and cities as well as with the private sector. It’s both a leadership role and a coordination role. Take, for example, the unseemly scramble for masks and other PPE—that should have been coordinated by the federal government. There should have been clear direction on how to obtain material from the Strategic National Stockpile. The federal government should have served basically as the lead procurement agency for the country. There should have been clear execution of a plan for how hospitals gained access to those materials. The Defense Production Act should have been used earlier and much more vigorously. There are things the federal government can do that states simply don’t have the wherewithal to do, and those capabilities in the federal government were never fully utilized in COVID.

Beyond pandemics, what are the other real threat priorities?

Napolitano: I think climate change is probably our number one national security risk. It affects us and affects the world in terms of persistent weather changes, increased extreme weather events, sea-level rise. From a national defense perspective, for example, there are more than a dozen military installations located on the coasts of the United States that are at immediate risk of sea-level rise, in places such as Norfolk, the site of our largest naval installation. And this is all related to the warming of the planet. We can anticipate effects on our forests, effects on our agriculture and food security. We can anticipate the relationship between climate change and the development of new disease vectors. There are any number of domino effects that come from the warming of the planet. I think we need to look at it in two ways. One is how do we mitigate the warming? How do we stop the pace of global warming? And the second is how do we adapt, including in the near term? Adaptation is probably the top feature where DHS is concerned.

You said that by the time you left office at DHS, you were spending 40% of your time dealing with cybersecurity.

Napolitano: That’s right. In the world of cybersecurity you have lots of potential bad actors—nation states, including Russia, Iran, and China; groups that may or may not be affiliated with nation states; and individual malefactors. So the threat environment is very large and quite complicated. Attribution is always a problem in cybersecurity events. I think we’re really just at the beginning of dealing with cybersecurity as a threat and having a real national cybersecurity strategy. Again, I think it takes leadership from the White House and a unity of effort amongst all the federal agencies that have a role to play here; it’s DHS, the Department of Defense, FBI, the Department of Commerce, and others. One of the things I found when I was secretary was that we needed a clarification of roles—who has responsibility for what in cybersecurity?

We’re really just at the beginning of dealing with cybersecurity as a threat and having a real national cybersecurity strategy.

Understanding risks such as climate change and cybersecurity of course means understanding advances in science and technology. How does S&T fit into DHS?

Napolitano: There are two areas of DHS where science and technology are particularly relevant. One, we have a Science and Technology Directorate, led by an undersecretary. I think that has been an underutilized aspect of DHS, and I hope that in the next administration some attention is paid to that. A second area is what was formerly known as the National Protection and Programs Directorate, and is now the Cybersecurity and Infrastructure Security Agency, which does a lot of collaboration with the private sector that owns and operates much of our critical infrastructure. 

You refer to the other major threat we face as Terrorism 3.0. What do you mean by that?

Napolitano: Terrorism 1.0 was al-Qaeda as evidenced by the attack of 9/11, which was the precipitant for the creation of DHS. Terrorism 2.0 is all of the other terrorist groups like AQAP [al-Qaeda in the Arabian Peninsula] and al-Shabab that have similar beliefs to al-Qaeda. When I was secretary, we continued to get threats against aviation. In a way, I think aviation was viewed as the gold standard for terrorism given the success of the attacks on 9/11. But slowly but surely, I think the United States got control over that situation.

I think aviation was viewed as the gold standard for terrorism given the success of the attacks on 9/11. But slowly but surely, I think the United States got control over that situation.

Terrorism 3.0 is domestic. It’s the rise of domestic militia groups. It’s the rise of the so-called lone wolf. It’s primarily on the far right, if you use that kind of political spectrum, but there’s some on the far left as well. And here, you have a complication, because as you know the Constitution governs and limits what you can do as a law enforcement agency, and you can have real difficulties tracking a lone wolf, the individual who gets radicalized and decides to commit an act of violence. That’s almost impossible to prevent. We certainly don’t have good predictors for that. And we really don’t have good prevention methodologies.

At DHS you tried to proactively anticipate scenarios and said it’s important to have a good imagination, even a dark one. Why?

Terrorism 3.0 is domestic. It’s the rise of domestic militia groups. It’s the rise of the so-called lone wolf.

Napolitano: A key critique in the 9/11 Commission’s report was that we suffered from a failure of imagination. All the data were there, but we simply couldn’t imagine a complicated plot to take over aircraft and fly them into places like the World Trade Center. We couldn’t imagine how that could occur. That’s a challenge to leaders. When I say scenario-planning or scenario-thinking, it’s the what-if questions: What if the mortality rate for COVID was even higher? What if extreme weather events take out Miami, take out all of our energy production facilities in the Gulf Coast? What if a malefactor is able to infiltrate the cyber systems of 10 major American cities at the same time, and threatens to shut down their 911 systems, unless a huge ransom were paid? And so once you pose those kinds of problems, you can begin reverse engineering them. How would the federal government respond? How would you advise the White House? You have to begin by imagining the worst and then thinking, “Okay, what would you do?”

Given your experience in the realms of both national security and academia, how do you believe we can balance the security risks, particularly with China, against the need for international scientific openness and for researchers to collaborate across borders to solve global challenges such as the pandemic and climate change?

You have to begin by imagining the worst and then thinking, “Okay, what would you do?” 

Napolitano: Well, I don’t necessarily see a tension between the global research enterprise and national security. The fact that science advances by the sharing of information means that the more we share, the more we advance. For example, China’s sharing of the genetic code for the coronavirus last winter enabled our scientists to get to work on vaccines and therapeutics. That kind of sharing of information is beneficial to everyone. Where we have tensions is in the intellectual property area. I think universities should have processes and policies in place that hold their scientists accountable so as not to allow the misappropriation of their research.

You wrote that “we must restore our sense of common purpose” so the nation can unite as it did in the aftermath of 9/11. How can leaders help us do that?

Napolitano: Well, it helps when the effort to reach across the aisle starts at the White House, and there’s a search for some common ground. I hope, for example, in the Biden-Harris administration, as they get started, that not only do we find some common ground in terms of the economic recovery that we need, but also in something like an infrastructure package, which would create jobs—and which is sorely needed. People from both sides of the aisle have spoken about the need for infrastructure, and I think undertaking some work that is successful might create a pathway to dealing with more difficult questions.

Any other advice for the new Biden administration?

Napolitano: Far be it from me to give Joe Biden advice; that would be quite presumptuous. He’s been around the block a few times. But one thing I hope he says, and says often, is that science is back!

“A Viable Path Toward Responsible Use”

Jennifer Doudna, a professor of chemistry and molecular and cell biology at the University of California, Berkeley, and codiscoverer of the CRISPR/Cas9 gene-editing technology, served on the organizing committee of the Second International Summit on Human Genome Editing, held in Hong Kong in late 2018. The editor of Issues in Science and Technology, William Kearney, was there too, managing communications for the US National Academy of Sciences and the US National Academy of Medicine, which cohosted the summit with the Royal Society of the United Kingdom and the Academy of Sciences of Hong Kong.

The summit made global headlines when the Chinese scientist He Jiankui stunned the organizers and the world by presenting how he had used CRISPR to edit the early embryos of two recently born twin girls in what he said was an effort to prevent them from contracting HIV. A little over a year after the summit, Kearney interviewed Doudna to ask her to reflect on the dramatic events that unfolded there, and how she hopes the clinical promise of genome editing is pursued responsibly—with proper consideration by society of its ethical implications—going forward.

Jennifer Doudna speaks at the Second International Summit on Human Genome Editing

Our Second International Summit on Human Genome Editing was a memorable event for obvious reasons, but I have one striking memory in particular, of helping you escape a gaggle of reporters in the aftermath of He Jiankui’s presentation. We snuck out a side door of the university auditorium, and you turned to me in the hallway and said, “Bill, I feel sick to my stomach.” “Because this is the day you feared?” I asked. “It’s exactly the day I feared,” you replied. Can you recall how you were feeling and what you were thinking after just hearing He describe how he used CRISPR/Cas9 to edit the embryonic genomes of newborn twins?

I felt stunned and sickened. I knew it was a possibility that someone might cross what we thought was a clear ethical red line by going against scientific consensus and applying CRISPR in human germline cells. What I didn’t anticipate is that it would happen so soon and that we would find out about it only after the birth of the infants.

Do you feel any different about it a year later?

No, I am still shocked and disgusted by this news. The fallout for everyone concerned continues—the health and future of the children, the fate of the scientist, and the public perception of CRISPR technology. However, I am encouraged by the broad global rejection of the clinical process used in this case and by the calls for CRISPR’s ethical use supported by stronger regulations and consequences that will not stifle the potential of the technology.

You recently wrote in Science that “although human embryo editing is relatively easy to achieve, it is difficult to do well and with responsibility for lifelong health outcomes.” Do you worry that there are false assumptions that genome editing is more precise than it really is? What are the scientific and medical unknowns that need to be better addressed before ever considering embryo editing in clinical applications?

It is vital that safety concerns, including off-target effects and unexpected complications, are fully understood and resolved before these tools are widely used. I am encouraged by the careful studies currently underway in a US patient with sickle cell disease, and another with beta thalassemia. This is the sort of deliberate, medically necessary work that is aligned with current Food and Drug Administration regulations and will likely result in outcomes that are safe and effective.

The main challenge in embryo editing is not scientific—although scientific advances are still needed before this technology can be used safely—but rather ethical. What does it mean to give medical consent to a procedure that will impact not only your child but future generations? Which modifications should be considered as medical treatment, and which should be viewed as enhancements? Does eradicating a genetic condition create a stigma for those who continue to live with it? These are profound questions that require a broad public conversation.

What do you think about the notion of a moratorium on human germline editing? You haven’t signed on to calls for a moratorium, although you have been a member of the summit-organizing committees that stated it would be irresponsible to proceed now.

My colleagues and I effectively called for a moratorium (although we avoided using that term) in spring 2015 in a Perspective in Science. And yet four years later that “moratorium” was ignored by He Jiankui and his enablers. One bad actor decided to act out of self-interest, and it may be the beginning of a wave of unethical experimentation. I believe that moratoria are no longer strong enough countermeasures, and instead stakeholders must engage in thoughtfully crafting regulations of the technology without stifling it.

There have been reports that a number of US scientists may have known, or at least were growing increasingly concerned, that He Jiankui intended to implant edited embryos to establish a pregnancy. In hindsight, do you think alarms should have been sounded earlier? Does the scientific community need new mechanisms to report concerns if scientists become aware of potentially rogue behavior in the future—even if in another country?

While I believe that it would have been preferable if alarm bells had been sounded earlier, the reality of confidential conversations in science, and the preeminence of abiding by the rules of confidentiality, put the US scientists who might have had concerns about his research in a complicated position. In my opinion, one mechanism to prevent this issue from happening again would be a whistleblower line to report concerns anonymously to an organization such as the World Health Organization. Additionally, a statement from an organization such as WHO could help clarify that nondisclosure agreements should not be considered binding in the case of severe ethical concerns.

Do you worry that the He incident was a setback for public understanding or acceptance of genome-editing technology, and of its potentially revolutionary use in treating disease?

Yes, public awareness of CRISPR’s positive potential was growing steadily, but He’s actions spiked concerns and dented confidence that the scientific community can deploy it safely and appropriately. We encourage public debate and want to show the public that we can apply the correct guardrails to ensure the technology delivers groundbreaking somatic treatments for millions.

What do you believe are some of the most promising potential uses of CRISPR for treating disease?

There is vast potential for CRISPR to become a standard of care for treating disease. Clinical trials using CRISPR are already underway for patients with cancer, blood disorders, and eye disease. In the next few years we may see CRISPR-derived medical breakthroughs for people suffering from liver disease, muscular dystrophy, and more.

Do you worry about policy-makers overreacting to the He case, and possibly overregulating the use of CRISPR and other gene-editing technologies in a way that may stifle their potential?

No, policy-makers can strike the right balance as they have with other disruptive technologies that have helped move society forward. Currently, under the federal Dickey-Wicker Amendment, making permanent edits to the human germline is illegal in the United States, and the National Institutes of Health is forbidden from funding this type of research.

What do you wish policy-makers would focus on when it comes to CRISPR?

Policy-makers, with the counsel of scientists and bioethicists, have the opportunity to establish an enforceable framework for responsible and accountable management of CRISPR technology.

The World Health Organization has an expert advisory committee looking at governance and oversight of human genome editing, and the US National Academy of Sciences, the US National Academy of Medicine, and the Royal Society of the United Kingdom are leading an international commission to develop a framework for assessing potential clinical applications of human germline genome editing. What do you hope will emerge from these efforts?

Ultimately, we need an enforceable framework, and these organizations are critical in bolstering the effort by pushing government regulators to engage, lead, and act.

We are seeing progress, which is heartening. In July 2019, WHO issued a statement requesting that countries end any human germline editing experiments in the clinic for the time being, and in August 2019, announced the first steps in establishing a registry for such future studies. These directives from a global health authority now make it difficult for anyone to claim that they did not know or were somehow operating within published guidelines.

Do the reports of a Russian scientist’s pursuit of embryonic gene editing make you nervous?

Yes, the scientist Denis Rebrikov’s approach is concerning. He has publicly said that he is waiting for regulatory approval before implanting any edited embryos. A remaining concern is the edit he is planning to make that would enable deaf couples to produce hearing babies. This is not a modification where there is consensus that it is medically necessary.

Human germline editing has global implications. How should the scientific community think about governing human germline editing on a global scale if regulations are always country specific?

Scientists around the globe are constantly collaborating and learning from one another. The self-governance approach failed in the case of He Jiankui, but the vast majority of researchers are acting ethically and many are engaged in a deeper public conversation about how to establish strong safeguards, encourage a more deliberate global approach, and build a viable path toward responsible use.

You have actively participated in discussions about the scientific, medical, ethical, and policy implications of CRISPR. What role do individual scientists, or the wider scientific community, have in helping to ensure that new discoveries are applied responsibly for the benefit of society?

Scientists need to play their part by making time in their already busy schedules for conversations with the public. These ethical concerns are among the most important considerations for every researcher.

Scientists are equipped to not only advance ongoing scientific research but also guide the public conversation. Individuals and the scientific community alike have a responsibility and opportunity to help shape future research in an ethical manner. Likewise, the public has a role to play in ensuring that discussion of CRISPR technologies, and scientific methodology and discovery in general, takes place.

Anything else you would like people to know about the current state of genome-editing science or its implications for public policy?

Curiosity-driven research funded by taxpayers and nonprofit organizations produced the CRISPR/Cas9 technology and continues to drive the field forward. This work has spawned numerous commercial ventures, creating jobs that focus on applying genome editing technology to advance human health, agriculture, and industrial biotechnology. At a time when people and the planet need CRISPR-derived solutions, we must ensure the technology’s long-term viability by applying it responsibly and allowing it to be fairly assessed by those in need.

Shaping Our Genetic Futures

Paul Vanouse, America Project, 2016, spittoon and video projection.
Photo by Molly Renda, courtesy of the artist.

“There is a strange pleasure in the performance of DNA extraction, live in an art museum. To take materials from human volunteers who do so without the fear of identification, policing, or a potentially devastating medical diagnosis. It turns out that genetics are fun when we all do it together, sipping, swirling, and spitting into a weird and beautiful hybrid object. Through the America Project, the purpose of DNA extraction is suborned both by the intentional act of ‘promiscuous’ fluid mixing and the recontextualization of machine and scientific process into artistic expression.”

­—Helen J. Burgess, associate professor of English at North Carolina State University


Art’s Work in the Age of Biotechnology: Shaping Our Genetic Futures, organized by the NC State University Libraries, the Genetic Engineering and Society Center, and the Gregg Museum of Art & Design, elicited discussion about genetics in society through the lens of contemporary art and offered viewers new ways to think about their role in the genetic revolution.

More details about the project and viewer responses can be found at the Genetic Engineering and Society Center at NC State University.

Sougwen Chung

Multimedia artist Sougwen Chung has been collaborating with robots since 2015, exploring the connections between handmade and machine-made designs as a way to understand the relationship between humans and computers. Her multifaceted artistic practice also includes filmmaking, painting, sculpture, installation, and performance.

Chung’s 2018 piece Omnia per Omnia reimagines the tradition of landscape painting as a collaboration between herself, a team of robots, and the dynamic flow of New York City. The work explores the poetics of various modes of sensing: human and machine, organic and synthetic, and improvisational and computational. In another series, Drawing with DOUG (Drawing Operations Unit Generation), she engages in improvisational drawing performances with a robotic arm she named DOUG. In its first iteration, DOUG could move, see, and follow her gestures. In subsequent versions, DOUG could also remember and reflect on what it and others had drawn. Through her collaborative drawing performances with custom-designed robots, Chung is exploring the potential of machines to be artistic collaborators and ways for artists to participate in the rapidly developing field of machine learning.

Chung, a Chinese Canadian artist and researcher based in New York City, is an artist-in-residence at Google and the New Museum’s cultural incubator, NEW INC, and a former research fellow at the MIT Media Lab. In 2017, she was one of three artists selected to participate in a new partnership between Nokia Bell Labs and NEW INC to support artists working with emerging technologies such as robotics, machine learning, and biometrics.

Visit Sougwen Chung’s website: http://sougwen.com/

Cassini Mission to Saturn

The Cassini mission to Saturn is a joint endeavor of the US National Aeronautics and Space Administration, the European Space Agency, and the Agenzia Spaziale Italiana. Cassini is a sophisticated robotic spacecraft orbiting the ringed planet and studying the Saturnian system in detail. Cassini also carried a probe called Huygens, which parachuted to the surface of Saturn’s largest moon, Titan, in January 2005 to collect additional data. Cassini completed its initial four-year mission to explore the Saturn system in June 2008, and the first extension, called the Cassini Equinox Mission, in September 2010. Now, the healthy spacecraft is making new discoveries in a second extension called the Cassini Solstice Mission.

In late 2016, the Cassini spacecraft will begin a set of orbits called the Grand Finale, which will be in some ways a whole new mission. The spacecraft will repeatedly climb high above Saturn’s poles, flying just outside its narrow F ring 20 times. After a last targeted Titan flyby, the spacecraft will then dive between Saturn’s uppermost atmosphere and its innermost ring 22 times. As Cassini plunges past Saturn, the spacecraft will collect information far beyond the mission’s original plan, including measuring Saturn’s gravitational and magnetic fields, determining ring mass, sampling the atmosphere and ionosphere, and recording the last views of Enceladus. 

For more information about the Cassini-Huygens mission, visit www.saturn.jpl.nasa.gov and www.nasa.gov/cassini. Photos: NASA/JPL-Caltech/Space Science Institute.

From the Hill – Summer 2018

The Senate Appropriations Committee’s Energy & Water Development spending bill, approved in late May on a 30-1 vote, continues congressional pushback against the sharp Department of Energy (DOE) research and development (R&D) decreases recommended by the White House, with Senate report language referring to several of these cuts or eliminations as “short-sighted.”

Basic research programs fare particularly well in the bill, as they did in the House version adopted in mid-May. The Office of Science, DOE’s basic research arm, would receive $6.7 billion in Fiscal Year 2019, a 6.2% or $390 million increase above FY 2018. The figure is $1.3 billion above the White House request and would represent an all-time funding high if ultimately adopted. Most programs would receive at least moderate increases.

The biggest winner is the Advanced Scientific Computing Research program with a 21% increase above FY 2018. This includes a 13.5% increase for the Exascale Computing Project, as well as sizable increases for the Leadership Computing Facilities at the Argonne and Oak Ridge National Laboratories and for the National Energy Research Scientific Computing Center at the Lawrence Berkeley National Laboratory. These figures are similar to those in the House bill, but even more generous.

The High Energy Physics (HEP) program is also a winner in the Senate bill, but mostly due to a major increase for the Deep Underground Neutrino Experiment (which grew out of the Long Baseline Neutrino Experiment). Funding for the project would rise to $145 million, compared with $95 million in FY 2018. The House appropriators had provided $175 million. Even so, HEP research funding would increase by 4.2%.

Nuclear Physics research, operations, and maintenance would be increased by 8.2% or $48.2 million above FY 2018, and $110 million above the White House request. This includes a 15% increase for the Stable Isotope Production Facility. The Gamma-Ray Energy Tracking Array project would receive $6.6 million, matching the House figure.

Basic Energy Sciences user facilities would receive varying increases, as would construction or upgrade projects for the Spallation Neutron Source at Oak Ridge, the Advanced Light Source at Lawrence Berkeley, and others. The Senate bill would provide continuing funding, as requested, for the energy storage and artificial photosynthesis innovation hubs and the Energy Frontier Research Centers. Again, these figures are similar to, but more generous than, those in the House bill.

Biological and Environmental Research funding includes increases of around 4% for the Environmental Molecular Sciences Laboratory at the Pacific Northwest National Laboratory and for the Atmospheric Radiation Measurement facility serving multiple laboratories. The Senate bill also includes $10 million for establishing a national microbiome database, which was encouraged in the House bill as well.

The exception to these increases is the Fusion Energy Sciences program. Whereas the international fusion project ITER would be held flat at $122 million in the Senate bill, domestic research activities would be reduced by $107 million or 26% below FY 2018 levels.

Compared with how it treats DOE’s basic science activities, the Senate approach to the department’s technology R&D programs is much more targeted. Overall funding for fossil, nuclear, and renewable energy R&D, as well as for energy efficiency R&D, would be held generally flat, though there would be a few notable increases (and some decreases). This contrasts with the House approach, which was more favorable to fossil and nuclear energy, though appropriators in both chambers clearly resisted White House-proposed cuts.

As in the House, the Senate committee rejected the administration’s effort to eliminate the Advanced Research Projects Agency-Energy. Indeed, the Senate appropriators, unlike those in the House, would grant it a funding increase. It would receive $375 million in the Senate bill, a 6.1% increase to an all-time high.

Many programs and projects within the Office of Energy Efficiency and Renewable Energy would remain flat or see limited change in the Senate bill, though wind energy R&D would be reduced by 13%. Senate appropriators adopted sizable increases for materials research for vehicles, for solar manufacturing innovation programs, and for R&D for residential and commercial building efficiency. The Critical Materials Hub and the Energy-Water Desalination Hub were both protected from elimination. Senate appropriators also protected DOE’s four manufacturing innovation institutes from termination and directed DOE to move forward in establishing the fifth and sixth institutes.

On the Fossil Energy front, the Senate bill includes very modest increases for most advanced coal and carbon capture R&D programs. It does, however, include an 18% increase to $55 million total for carbon storage infrastructure, including continuation of the CarbonSAFE program and the Regional Carbon Sequestration Partnership program. The Senate bill also includes $30 million for a design study of two commercial-scale carbon capture projects, but leaves out funding for a pair of transformational coal technology pilots. This funding was initiated in last year’s omnibus bill and would be continued in this year’s House bill.

Although the Office of Nuclear Energy would receive flat funding in the House bill, the Senate bill would give its R&D programs a $57 million increase, though offset by a decrease for the Idaho National Laboratory’s operations and infrastructure. The Senate bill includes a sizable 27% increase above FY 2018 levels for reactor concepts research, development, and demonstration, as well as a modest increase for fuel cycle R&D programs. Appropriators also protected the nuclear modeling and simulation innovation hub from elimination.

The House Appropriations Committee approved its FY 2019 Interior and Environment spending bill on June 6. Under the bill, the Environmental Protection Agency’s (EPA) budget for science and technology would decrease by 8.9% rather than by the 40% called for by the administration.

Meanwhile, the US Geological Survey escaped the administration’s proposed 25% cut, instead seeing a 1.6% overall increase in the House bill. The committee rejected the administration’s proposal to downsize the survey’s Climate Science Centers and instead provided full funding for all eight existing centers. The appropriators would also continue to support the earthquake and volcano early warning systems, which were slated for elimination by the administration.

On June 8, the House adopted a so-called minibus—a package combining three separate FY 2019 spending measures covering energy and water development, military construction and veterans affairs, and the legislative branch—on a largely party-line 235-179 vote. Republicans opted for the minibus strategy in an attempt to accelerate the appropriations process, which is supposed to be completed by October 1. The package rejects nearly all of the toughest provisions of the administration’s FY 2019 budget proposal. Elsewhere in the package, Veterans Affairs medical and prosthetic research would receive a 1.4% increase in FY 2019.

Separately, in a closed markup on June 7, the House Appropriations defense subcommittee approved the FY 2019 Defense appropriations bill. According to the subcommittee’s summary, the bill would increase Department of Defense research, development, testing, and evaluation funding by roughly 3%, to $91.2 billion, in FY 2019.

On May 23, the administrator of the National Aeronautics and Space Administration (NASA), James Bridenstine, testified before the Senate Appropriations Committee on the president’s FY 2019 budget request for the agency. Notably, Bridenstine said NASA was reconsidering two Earth Science missions slated for elimination by the administration: the Climate Absolute Radiance and Refractivity Observatory Pathfinder and the Plankton, Aerosol, Cloud, ocean Ecosystem (PACE) satellite. Bridenstine voiced support for climate change research and reassured appropriators that NASA would closely evaluate the priorities of the recent decadal survey for Earth Science. When questioned about the delays and possible cost overruns of the James Webb Space Telescope, Bridenstine noted that any financial effects of those delays should be minimal in 2019 and that he was committed to completing and launching the telescope even if it exceeds the $8 billion cost cap set by Congress. He warned that NASA should avoid similar problems with the next astrophysics flagship mission, the Wide-Field Infrared Survey Telescope, and should shift focus from flagship-class missions to smaller spacecraft in the future.

On May 16, the Senate Commerce, Science, and Transportation Subcommittee on Space, Science, and Competitiveness held a hearing on the “Future of the International Space Station.” According to a news report in The Hill, “The administration has proposed ending funding for the space station in seven years, by 2025.” However, according to a NASA official, “the station is viable until at least 2028.”

Resource Prospector, the only moon rover currently in development at NASA, has been canceled, despite President Trump’s December 2017 directive to the agency to return humans to the moon. NASA provided no reason for the cancellation, but said that it is soliciting input as it develops a series of progressively larger lunar landers that will eventually culminate in a crewed mission, and NASA’s administrator said that instruments that were being developed for Resource Prospector will be used in the agency’s “expanded lunar surface campaign.” The canceled rover would have surveyed one of the moon’s poles in search of volatile compounds such as hydrogen, oxygen, and water that could be mined to support future human explorers; would have been the first mission to mine another world; and was seen as a stepping-stone toward long-term crewed missions beyond Earth.

In early June, the Senate Judiciary Subcommittee on Border Security and Immigration held a hearing to discuss US visa policies regarding nonimmigrant Chinese graduate students coming to the United States to study science and engineering. The impetus behind the hearing was concern that China is exploiting US universities in order to obtain sensitive information. At the hearing, which included witnesses from the FBI, intelligence offices, and the Departments of Homeland Security and State, speakers confirmed that the administration would issue a new policy, effective June 11, to put a one-year time limit on visas issued to Chinese graduate students studying in specific fields such as robotics, aviation, and high-tech manufacturing. Furthermore, targeted students would be subject to undefined “screening measures” before being issued a visa. A group of higher-education associations also submitted a joint statement to the subcommittee highlighting concerns regarding any new visa policies.

On June 8, the U.S.-China Economic and Security Review Commission held a hearing to address Chinese market distortions stemming from intellectual property theft, patent infringement, and forced technology transfer, among other practices. At the hearing, Willy Shih, a professor at Harvard Business School, argued that the United States must double down on basic research funding to stay ahead, and he urged the president to revive the President’s Council of Advisors on Science and Technology. “We need a channel for more ideas and advice on how to secure our lead in science and technology, which ultimately drives our economic leadership,” Shih noted in written testimony. Graham Webster, a senior fellow at Yale Law School, warned against dubious national security justifications for limiting US-China scientific collaboration, arguing that the United States should remain an attractive place to study and conduct research.

In late May, the EPA’s Science Advisory Board voted to conduct reviews of five of administrator Scott Pruitt’s biggest deregulatory moves as well as his science “transparency” proposal. The decision by the influential board, to which Pruitt appointed several energy industry and state Republican officials last year, is highly unusual and came after the EPA declined to answer in any detail the initial questions from board members about how the agency ensured that its proposals to undo Obama-era environmental rules were based on sound science.

In May, President Trump pressed for a quick regulatory bailout for struggling coal power plants in a move that would buoy the mining industry. The White House called on the head of DOE, Rick Perry, to take immediate steps to keep both coal and nuclear power plants running, backing Perry’s claim that plant closures threaten national security. An administration strategy to do that, laid out in a memo to the National Security Council, circulated widely among industry groups, but it was not clear that such an intervention could survive the inevitable political and legal challenges.

The Food and Drug Administration (FDA) has asked federal courts in Florida and California to issue injunctions against two stem cell therapy companies, following reports of patients being blinded by their treatment. If the injunctions are granted, the companies would have to stop operating. The FDA had previously issued warnings to the two companies, but the warnings were ignored. In a statement announcing the injunction requests, the FDA said the clinics are “exploiting patients desperate for cures” and, instead, in some cases “causing them serious and permanent harm.” Hundreds of stem cell clinics are operating in the United States, none of them with FDA-approved procedures, and many researchers have urged the agency to take stronger action against them. The cofounder of one of the companies being sued says that the FDA has no authority over the clinics because the treatments they provide do not involve drugs, but rather use stem cells from patients’ own bodies, and that patients have a right to harness these cells. He has vowed to fight the injunction requests all the way to the Supreme Court, if necessary.

NIH Opens Enrollment in All of Us

On May 6, the National Institutes of Health launched enrollment in the All of Us Research Program, an enormous precision medicine initiative that has been in the planning stages for years. The aim of this ambitious program is to advance individualized prevention and treatment for people of all backgrounds, and it is especially trying to connect with populations that have been historically underrepresented in biomedical research. NIH had already enrolled 25,000 individuals during a yearlong beta test, with about three-quarters coming from these target groups, and it aims to enroll at least one million participants. Participants are asked to provide an array of information about their health and lifestyles, including data from online surveys and electronic health records. NIH has ongoing efforts to protect participants’ privacy so that people feel comfortable sharing their health information, and it says that data from the program will be available only for research purposes.

In April, the director of the National Institutes of Health, Francis Collins, announced that NIH would not accept funds from pharmaceutical firms for opioid research. For nearly a year, Collins had touted an opioid research partnership in which industry and taxpayers would each contribute half of the cost of an initiative to conduct research on substance abuse and pain treatment. Collins took this action upon the advice of an NIH advisory panel. Rep. Tom Cole (R-OK), chair of the House Appropriations subcommittee on health, agreed with Collins’s decision, saying that accepting money from industry would be too “dangerous.” In the recent spending agreement, NIH was allocated $500 million for opioid research alone, although Collins said that this was not a factor in his decision, which was based solely on the advisory panel’s recommendation.

Can the Public Be Trusted?

In the age of MEGO (my eyes glazed over) and TMI (too much information), scientists who communicate with the public must tread a fine line between full disclosure and information overload. With current scientific output topping two million papers per year, nobody can keep up with everything. Of course, the public’s need to know does not extend to every corner of science. But for some policy-relevant fields—climate science is one—the public and policy-makers do need critical scientific information to inform important policy choices. We can’t all be climate experts, so the experts must decide what information is most useful to the public and how to frame what they share so the public can interpret it in the light of the decisions to be made.

The climate change debate has deservedly generated a public thirst for knowledge, and the climate science community has mounted an extraordinary effort through the United Nations Intergovernmental Panel on Climate Change (IPCC) to consolidate a tsunami of scientific and technical research and to try to capture the consensus of expert opinion when there is one—and to highlight areas of uncertainty when there is not. The IPCC’s Fifth Assessment Report, released in 2014, was the work of more than 800 experts divided into three topical working groups: the physical science basis; impacts, adaptation, and vulnerability; and mitigation of climate change. The physical science report alone involved more than 250 experts and ran to more than 2,000 pages in its unedited original release.

IPCC reports have long undergirded international efforts to craft a global response to the threat of climate disruption. National leaders and diplomats relied on the IPCC analyses—or at least the synthesis reports—in the United Nations Framework Convention on Climate Change (UNFCCC), which produced the Kyoto Protocol in 1997 and the Paris Agreement in 2015.

As comprehensive and authoritative as the IPCC reports are, they necessarily (and appropriately) simplify the science and cannot transfer the nuanced judgment and implicit knowledge that scientists acquire. For example, understanding the limitations of computer modeling, a core discipline in climate science, requires appreciating the importance of subjective assumptions about some future social and technological trends. Alluding to that unavoidable subjectivity, Andrea Saltelli, Philip B. Stark, William Becker, and Pawel Stano pointed out in “Climate Models as Economic Guides: Scientific Challenge or Quixotic Quest” (Issues, Spring 2015) that “models can be valuable guides to scientific inquiry, but they should not be used to guide climate policy decisions.”

The public’s inevitably limited scientific sophistication creates the potential for those who don’t like the implications of the IPCC consensus to selectively twist bits of the scientific literature to undermine the mainstream position. As science journalist Keith Kloor reported in “The Science Police” (Issues, Summer 2017), some scientists were concerned that climate deniers were citing studies of a short-term pause in the overall warming trend as evidence that the threat of climate change was exaggerated. Their concern ran deep enough that they urged researchers to avoid addressing the phenomenon in their research. Kloor argued that such policing of the scientific literature revealed a lack of faith in the public and could in turn eventually undermine society’s belief in scientific transparency.

A few pages from here, Roger Pielke Jr. makes a case that the IPCC is exhibiting a lack of trust in the public by building into its computer models assumptions that by design advance a particular course of action in the UNFCCC negotiations. Pielke maintains that these underlying assumptions artificially narrow both the range of potential risk that we face and the variety of policy options that we should be considering. But here’s the rub: since experts need to frame scientific disputes in a way that facilitates public participation and decision-making, are the current IPCC modeling assumptions helping or misleading policy-makers and the public?

Climate science experts see an enormous potential danger in climatic changes. Their reasonable fear is that the problem is too slow moving to motivate action and yet too large and complex to manage. Pielke provides evidence that the IPCC’s assumptions about future coal burning exaggerate the likelihood of catastrophic climate change in order to motivate policy-makers to act. At the same time, the IPCC’s assumptions about spontaneous decarbonization (economic changes that will reduce emissions without policy intervention) and about the feasibility of carbon capture and storage reduce the projected climate risk enough to make the politically feasible actions being proposed by the UNFCCC appear adequate to prevent a catastrophe.

That’s not an irrational strategy. Many people who accept the reality of climate change do not think that it requires extensive, full-steam-ahead action. And many who do want to charge ahead are wary of promoting policies so ambitious that they will be tossed out as unrealistic. To motivate action, it makes sense to present the public with a serious but solvable problem. An ill-defined problem and an overly ambitious agenda could lead people to shrug or throw up their hands in despair.

Pielke’s worry is that by framing the climate issue as it does, the IPCC unwisely narrows the possible future scenarios we should be considering and the range of policy responses we should be exploring. As he rightly points out, the assumptions being built into the computer models could be wrong, and if they are, policy actions based on them may not work. He argues that the public needs to be presented with a wider field of vision of how the climate could change in the hope that it will consider a richer mix of policy options. Moreover, when important assumptions are not openly discussed by scientists, the public will rightly want to know why—especially if those assumptions turn out to be wrong.

Scientists in many disciplines face the same challenge of deciding how to frame the information they communicate to the public. Is it possible to convey the nuances of levels of certainty, the weight of the evidence, the state of development in a field? Small-sample studies versus meta-analysis? Clinical research versus big data mining? How much detail does the general public—or a congressional staffer—need? How might experts best characterize the points of controversy within a field? Is there a point of diminishing returns in public engagement? Is there anyone out there who is actually using scientific understanding to make decisions rather than to justify the predetermined outlook of the tribe?

The answers are neither simple nor obvious, and will vary with the role of science in the debate and the political environment. But the underlying principle is that if science is to play a constructive role in public policy, scientists must have the public’s trust. And for scientists to earn and keep that trust, they must trust the public. We live in a democracy, not a technocracy, and that’s a good thing. It’s tempting in these fact-challenged days to retreat to the comfort of our scientific tribe, but wisdom, unlike knowledge, is widely distributed. We should generously share knowledge—including the sources of uncertainty—so that wisdom can put it to work.

Keeping the Lights On

In September 2017, two hurricanes struck the US island of Puerto Rico, crippling its electric power grid. Because Puerto Rico is a major manufacturing site for medical supplies, the nation’s hospitals soon developed acute shortages of the intravenous bags used to administer medicines. By early 2018 the Food and Drug Administration was cautiously optimistic that the shortages would be alleviated. Even so, at that point more than half of the people in Puerto Rico still had no electricity. The role of electricity in modern life is one we take for granted—until the power goes out, with repercussions distant as well as local.

The electric power system is a subject of basic importance to Americans because universal instant access to electricity is at once taken for granted and, it turns out, unexpectedly at risk. Mason Willrich’s Modernizing America’s Electricity Infrastructure is a sophisticated policy statement by a longtime energy sector professional and analyst of high stature and deep experience. Willrich, a former executive at Pacific Gas and Electric, California’s largest utility, calls for a comprehensive response to the system’s risks, structured within the existing complex scheme of utility regulation and organization. This is a reasonable approach in principle but implausible in the existing governmental situation. Willrich is an intelligent visionary, yet he may not avoid the fate of Cassandra, whose foresight is remembered because it was not heeded.

Electric power systems have been shaped over time by three principal forces: technology, economics (including, importantly, finance), and politics (including regulation). These forces have acted in concert, though not coherently. Electrical supply began with a start-up phase, in the last years of the nineteenth century, in which alternating current technology and service monopolies at and above the metropolitan scale emerged as dominant; both endure today.

During the twentieth century, a growing grid enjoyed a long period of declining rates, lasting until the 1970s. Falling rates and rising demand reinforced one another, as technology, regulation, and the mechanisms of cost-recovery took shape in a nationwide—but not quite national—industry. There followed a period of increasing rates, in which we still find ourselves. Rising rates have in turn contributed to a sharply slowed increase in demand. This change in economic circumstances is reshaping (at times disruptively) a capital-intensive, highly regulated industry.

The electric power system is now operated by more than 3,000 entities. The retail utilities seen by consumers include nearly 50 investor-owned companies, serving more than two-thirds of the nation’s ratepayers. In addition, there are more than 2,000 publicly owned utilities controlled by a variety of public bodies, including major cities such as Seattle, cooperatives that began by serving rural areas, and regional agencies such as the Tennessee Valley Authority. Publicly owned utilities serve fewer than one-third of the customers. (The remainder of the power system includes a diverse set of owners: independent power providers, small-scale sources such as rooftop solar, and transmission systems. These are regulated via a hodgepodge of rules and laws that vary by state.) These components of the national electric supply system are linked by transmission lines administered mainly by regional independent system operators overseen by the Federal Energy Regulatory Commission (FERC).

Customers spend, on average, somewhat under 11 cents per kilowatt-hour of electricity. This provides revenues of nearly $400 billion per year, on an asset base of over $1 trillion. Electricity accounts for about 5% of US economic output. Thanks to mobile phones charged from outlets, electricity is now a presence in every hour of most Americans’ existence. The reliability of the grid and the cost of power, as a result, matter much more than might be supposed from the quantitative contribution of the electric industry to gross domestic product.
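As a rough back-of-the-envelope check (the arithmetic here is illustrative, not taken from the book), these figures hang together:

\[
\frac{\$400\ \text{billion per year}}{\$0.105\ \text{per kWh}} \approx 3.8\ \text{trillion kWh per year},
\]

which is broadly consistent with total US retail electricity sales of a bit under four trillion kilowatt-hours annually.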

Electric power has been a network phenomenon all along, shaped by independent sources of authority and economic and political power—never unified but coherent enough to allow interconnection of different geographic provinces and technological systems. The result today is a collection of electric utilities, technologically connected to one another, that is regulated in an intricate scheme of laws and policies administered by multiple state and national regulatory bodies. Whether owned by shareholders or governmental entities, an electric utility operates under economic and political forces unlike those facing conventional businesses or government agencies. As this thumbnail sketch indicates, Willrich has undertaken a formidable task in describing and analyzing this unusually complicated industry; a reader needs some fortitude to follow the author’s guidance through this labyrinth.

Electric power began as a vertically integrated industry, with a single firm owning the wires going into customers’ homes and businesses, as well as the distant power plants that generated the current flowing in the lines. Over the past generation a series of policy and financial changes—loosely grouped under the term “deregulation”—has shifted much of the ownership of generating resources onto independent power providers.

The generation of electricity was in the midst of a technological transition by 2015. One-third of power came from coal; this was a steep decline from nearly half less than a decade earlier. The change is driven principally by the availability of low-cost, relatively clean natural gas. This transition away from coal is unfolding with unprecedented speed, in an industry where major investments are planned to last many decades. Another significant component of electricity generation is the US nuclear fleet, which is still the largest in the world, as Willrich points out, even though there have been few additions since the 1980s.

Under policy mandates adopted with strong support from environmentalists, most jurisdictions are moving toward a system of mixed but coordinated power supply that includes efficiency and renewable sources, notably wind and solar. Largely unnoticed is the challenge of integrating the new, often decentralized sources of power into a transmission and distribution system designed around large central-station power plants. The policies that promote renewable resources frequently do not include provisions for rebuilding a grid that can distribute the power those resources provide. The retail utilities are left with the mandate of providing electricity whenever it is needed, but with revenue streams poorly aligned with the technological realities of this task.

More broadly, the cost of electricity is largely that of the equipment used to supply it, and much of that capital is funded through borrowing by utilities and power producers. The borrowed money is paid back over time—but more slowly than the competitive forces of natural gas or the policy mandates for renewables seem to allow.
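To make that timing mismatch concrete, consider a purely illustrative example (the figures are hypothetical, not drawn from Willrich’s book). A $1 billion plant financed on a 40-year cost-recovery schedule that is undercut by cheaper gas, or displaced by policy, after 15 years still carries, under simple straight-line recovery and ignoring interest and salvage value, unrecovered capital of roughly

\[
\$1\ \text{billion} \times \left(1 - \frac{15}{40}\right) \approx \$625\ \text{million},
\]

a cost that must ultimately be absorbed by ratepayers, shareholders, or taxpayers.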

Challenges loom accordingly. Stagnant demand restrains the revenues that power companies can collect. This is because end users can take steps to limit their own demand, but also because the regulatory authorities are reluctant to permit rate increases. Over the longer term, global climate change seems likely to require significant technological change: the provision of electricity currently accounts for 40% of the nation’s greenhouse gas emissions. How to pay for a large-scale rebuilding of the electricity supply and transmission system in order to reduce those emissions, in light of the modest returns on capital currently allowed, is not at all clear. California is wrestling with this problem now, learning lessons that may be illuminating (as was the case with the state’s unhappy experience with deregulating electricity markets nearly two decades ago). Closer to hand, perhaps, is the range of risks that accompany an essential resource of postindustrial life that remains so low in profile that difficulties gather out of sight.

In response to the real but underappreciated fragility of the electric power industry, Willrich argues that four goals are important national priorities:

These goals are technical and intricate, even as they are important. Turning them into durable policy demands coordinated actions; so deep are the interdependencies that no simple strategy is plausible. To the discomfort of many environmentalists, addressing climate change may require long-term subsidies to nuclear power plants, which do not emit greenhouse gases in their operation. Wide-ranging changes to wholesale markets for electric power will be needed to facilitate the transition from fossil fuels to renewable sources and energy efficiency investments; consumers are unaware of the costs and institutional difficulties of doing so. Today’s populists continue to deny the economic reality that low-cost natural gas has doomed the coal industry. From all directions, the benign inattention that the public has lavished on electricity stands in the way of the changes needed to sustain the electricity we take for granted.

Willrich’s central message—although implicit—turns out to be that the institutional and technological complexity of the grid functions so smoothly on the surface, day to day, that consumers, voters, and nearly all leaders do not perceive its fragility. Neither do they understand the financial, political, and managerial burdens of conserving what is valuable. Unlike, say, national defense or health care, the energy system is a trillion-dollar nexus where public discussion is nearly impossible to marshal without a crisis to focus national attention. And it is sufficiently complex and delicate that a crisis is unlikely to be a good venue to make sensible choices: we need to rebuild the ship while it is underway, not head for the lifeboats as it sinks.

Willrich proposes policy reforms led by the federal government and involving the utility commissions of every state. Some states will continue to rely on deeply entrenched coal-fired generation, while others, such as Hawaii, aspire to a wholly renewable power supply. Many state commissions and regional operators chafe under the regulatory policies of FERC. The reform ideas that Willrich puts forth make sense conceptually, but require trust and careful choreography that currently appear to be in short supply. For instance, the decline of the coal industry has been blamed on attempts to lower greenhouse gas emissions, rather than on the technological innovations that made natural gas cheaper; this confusion has shifted debates about energy regulation in unproductive directions.

There are many, many more examples of the difficulties and confusions that must be surmounted. Doing so would be a formidable task even without the deep suspicions of government now embedded in US public life, and Willrich doesn’t acknowledge the scale of this challenge. But to citizens and leaders in business, civil society, and government who grasp the need for modernizing the nation’s electric infrastructure, Willrich’s book is a good way to begin learning about the tangle of wires and institutions behind the placid socket on the wall.

The erosion of the infrastructure of developed economies is a slow-gathering crisis whose scope and implications are hard to see until the costs of effective response become painfully high. In this electricity is not unlike other profound challenges facing the United States: streets and bridges, ever more congested, become structurally weakened and studded with potholes; public schooling is neglected until the workforce cannot keep up with economic change; central banks lower interest rates in order to prop up growth but then can no longer respond to downturns; and the electric power system weakens, invisibly, until a systemic outage exposes the limitations of its architecture.

The story of Puerto Rico is a worrying harbinger. The island’s recovery from the recent disastrous storms is hampered by an electric utility that was already bankrupt before the hurricanes struck. Puerto Rico’s government is moving to privatize the hapless public agency running the island grid. But federal assistance in rebuilding the electric power system has been limited by law to restoring what was there—which is quite different from a resilient system, in which solar and other renewables would play a significant role in a way that makes sense for users and the economy. The mismatch in Puerto Rico, between what is needed going forward and the economic and institutional structure that is responsible for doing so, resonates far beyond the island. More than 60 years ago, the musical West Side Story contained these bitter lines, sung by a Puerto Rican character: “Puerto Rico, you ugly island … always the hurricanes blowing.” The refrain that follows is “I like to be in America!” But today it is in America where the wind threatens to blow, hard.