“Science and Technology Now Sit in the Center of Every Policy and Social Issue”

In January 2021, President Biden appointed sociologist Alondra Nelson, a leading scholar of science, technology, medicine, and social inequality, to be the first deputy director for science and society in the White House Office of Science and Technology Policy (OSTP). Issues in Science and Technology editor William Kearney recently spoke with her about her role in bringing social science expertise to federal science and technology (S&T) policy and the Biden administration’s goal to make that policy fair and equitable for all members of society.

Photo by Dan Komoda.

You were writing a book about OSTP before your appointment there, and you’ve followed the ways its role in federal science policy has fluctuated over the decades. President Biden immediately heightened its role, however, when he elevated his science advisor, the OSTP director, to his cabinet. What is the significance of that move?

Nelson: I started doing the research for the book because I found it such a fascinating office for somebody who is a student of science policy. In the 1970s, the OSTP was originally imagined to be a small shop, but what’s happened over the intervening decades is that science and technology now sit in the center of every policy and social issue. And so it only makes sense—when I track the evolution of this work with my academic’s hat on—that at this moment it would be a cabinet-level office.

“What’s happened over the intervening decades is that science and technology now sit in the center of every policy and social issue.”

In answering your question, it is also important to think about the current context. Every president faces profound challenges and a unique set of historical circumstances when they come into office. For President Biden, this was a once-in-a-century pandemic combined with a climate emergency—all in the context of a growing awareness of injustice and inequity in American society, and globally. Every dimension of national and international policy, from health and education, to security, to social welfare, and everything in between, has something to do with science and technology. There’s no way to tackle the major challenges and opportunities we face without engaging science and technology. From that perspective, and given the president’s commitment to having a government that is evidence-based and informed by science, it follows that this would be a cabinet-level position. I think that the fulfillment of the aspirations and values of the Biden-Harris administration is manifest in the elevation of OSTP’s directorship to the cabinet.

OSTP is still a small shop compared to big agencies, so how do you coordinate science policy across the entire federal government so that it aligns with President Biden’s goals and vision? Is that the job of OSTP?

Nelson: Strategy and coordination are part of OSTP’s founding mission. We work in parallel with, and administer, the National Science and Technology Council (NSTC)—about which I think not enough is known by the public—to coordinate interagency alignment with the administration’s priorities. NSTC was established in 1993 and there is now a nearly 30-year infrastructure for doing exactly the kind of interagency work you suggest. NSTC is doing work on critical minerals, advanced manufacturing, scientific integrity, STEM equity, algorithmic accountability, and many of the other big issues we face. There are interagency folks at the table, sitting with OSTP colleagues, working to create strategy and policy.

On the eve of his inauguration, President-elect Biden wrote a public letter to Eric Lander, whom he had nominated as OSTP director, tasking him with answering five big strategic science and technology policy questions. Among them was, “How can we guarantee that the fruits of science and technology are fully shared across all of America and among all Americans?” How are you trying to answer that question? What would success look like?

Nelson: The question President Biden posed to Director Lander in that letter suggests what is distinctive about this OSTP—and what I find really exciting about it. The question is the foundation of the Science and Society Division, which is a new division that I have the privilege of leading. Every day we are working with public servants, researchers and scientists, policymakers across government, and sectors of the American public to answer this question.

The goal is to build a science policy that intentionally and explicitly includes the perspectives of the American public, including seeing science and technology through the eyes of folks who are marginalized or vulnerable. This approach to policy views innovation as something that has been extraordinary and offered great progress and promise to some people, but has also sometimes come at the cost of harm and damage to other communities. And in this moment in which there is diminished trust in institutions and diminished trust in science, it means bringing S&T policy development out of the shadows. A phrase I often use is “showing our work.” For the government, that means being more transparent about the past, about what we’re doing in the present, and about our goals for the future. What you’ve been hearing in the language of the administration is an explicit effort to situate science and technology policy with democratic values, including inclusion, accountability, justice, and integrity. The challenge is to drive, design, and implement policy with those values always in mind.

“What you’ve been hearing in the language of the administration is an explicit effort to situate science and technology policy with democratic values, including inclusion, accountability, justice, and integrity.”

What would success look like? A STEM workforce that really looks like all of us, that reflects all of us, in the classroom and in the boardroom. Empowering new communities to be at the table of S&T policy. I think success looks like a public that feels that it can be engaged in the work of government; a lot of work we are doing in OSTP is conducting listening sessions and using other ways of engaging the public to help us think about the work we do. Success also includes a new set of rules of the road, such as an approach to innovation that is rooted in inclusion and scientific integrity. It means having a sense of responsibility to have aspirations, safeguards, and values in place that can help ensure that folks are not abused or discriminated against as new S&T comes online—to ensure, per President Biden’s question, that it really benefits all people.

You said there’s a need to be transparent about the past. What do you mean by that?

Nelson: The Biden-Harris administration has set out to pursue racial and economic justice in every facet of our work and to address head-on disparities and inequities that exist because of things that have happened in the past and continue to happen in the present. Disparities in medicine, health, and access to education didn’t just appear overnight; they congealed over time, one generation after the next, one injustice on top of another. Even those of us who might consider ourselves technophiles and science optimists grew up hearing stories of tragedies, and indeed horrors, in the past. The story that we hear most about is the Tuskegee syphilis experiments, which I often remind people was a project of the US Public Health Service, not something that just sort of emerged or was in the private sector. That was 40 years of government research.

We need to say that we know science and technology have not equally benefited all people. We stipulate that at the beginning. As I said before, in a context of low trust in government and institutions, it’s incumbent upon government, in a very profound way, to be forthright. If we are really going to be in service to the American public, we need to have some difficult conversations. I think from honest accounting we can move into truly innovative and mutually beneficial S&T policy and outcomes.

“From honest accounting we can move into truly innovative and mutually beneficial S&T policy and outcomes.”

One example is the listening sessions, which I mentioned earlier, hosted by the Scientific Integrity Task Force. The task force was established through a memorandum from President Biden and was asked to recommend policies and practices that can prevent political interference in federal science, with the aim of restoring trust in government. Part of the work of the task force has been an accounting of lapses in scientific integrity as a necessary part of the process of suggesting a way forward. A second example is the Equitable Data Working Group that I cochair. This was established on the first day of the administration through an executive order on Advancing Racial Equity and Support for Underserved Communities Through the Federal Government. This group is attempting to identify and fill in demographic data gaps to help answer the question of whether or not government is doing its work equitably. We need to be honest that in many instances we couldn’t answer that question in the past because we didn’t have the data we needed to do so.

Almost 20 years ago you coedited a book, Technicolor, that challenged some common assumptions about the relationship between race and technology. What misconceptions persist about the so-called digital divide?

Nelson: I’ve been thinking about these issues for a long time. Technicolor was framed around early conceptions of the digital divide. A stereotype had emerged, a kind of false narrative about technological evolution, that held that progress had been forged largely by white scientists and technologists, white innovators, and white inventors, and that the other side of the coin was that people of color were somehow less capable when it came to technology. I think now we are a little more aware as a society that that framing is incorrect; there is a rich history of Black and brown scientists, inventors, and innovators who’ve achieved critical breakthroughs, often against incredible odds. In that early work, we were trying to surface some of that history and explore the idea that the digital divide, at its worst, can become this kind of self-fulfilling prophecy, a kind of fiction that people of color can’t keep pace in a high-tech world. We shouldn’t accept the notion that working-class people, or people who haven’t had certain kinds of educational benefits, are less competent, less interested, less passionate about, and less innovative in science and technology. We’ve got to think in different ways about the digital divide.

“We shouldn’t accept the notion that working-class people, or people who haven’t had certain kinds of educational benefits, are less competent, less interested, less passionate about, and less innovative in science and technology.”

In this moment what’s true and important about the digital divide is the extent to which it offers us a prism for understanding infrastructure inequality in the United States. Certainly, COVID-19 shined a light on a range of disparities, including the inability of many to get online to work remotely or to give kids access to schooling. I’ve been proud of what the administration has done to measure those disparities and to also try to address them. The National Telecommunications and Information Administration, which advises the president on telecom issues, published this incredible mapping tool where you can actually see the places and populations with more reliable or less reliable broadband coverage. The Biden-Harris administration is planning to invest $65 billion to connect Americans to high-speed internet.

How do we change the thinking about where innovation comes from?

Nelson: We know from the organizational behavior literature that it is diversity broadly—not just racial and ethnic diversity, but broad diversity of perspective and experience—that is one of the most significant drivers of innovation. When we are setting the conditions for innovation in science and technology policy, it is a shame if we are not also leveraging this one demonstrated driver of innovation. We need to get more people involved in the work of doing science and technology policy and, of course, science and technology research and development itself. The United States is this great lab of innovation, and we should be able to turn that innovation into products and practices that not only take on hard problems like climate change and pandemics but are also more equitable.

Do you see social science becoming a bigger part of the policymaking toolkit?

Nelson: I certainly hope so. This in part is why I am at OSTP. To go back to our earlier conversation, many of the tools that we need for robust government—tools for understanding the lived experiences of the American public; for assessing the equitable, successful delivery of government services; for identifying demographic trends in the economy, labor, and STEM professions; for applied data science across pressing policy areas—come from social science. How do we assess whether or not programs are serving intended communities? Is this federal program serving hard-hit communities in low-lying lands that are more likely to be exposed to climate change? These, and many others, are empirical questions that can be answered when we apply social science concepts to qualitative and quantitative data. The answers we generate can then inform policy.

I think that as government becomes more analytical, it is very important to have social scientists at the table. One of the most important reasons is that we think about answering questions with different kinds of data, produced using both quantitative and qualitative methods. And as much as the technical analysis matters, policymaking is always going to involve that social piece, that human piece, that historical piece. I hope a new way of thinking about not just S&T policymaking but policymaking more generally can be found in social science, which helps us see tensions in society, map them, understand and reconcile them, and recommend changes more conducive to equitable experiences and outcomes among all members of society. I believe as a scholar and researcher, and as a policymaker, that social science evidence, at its best, really can point us to better policy solutions.

How do you communicate to the public the urgency of climate change or other pressing issues in the midst of a still overwhelming pandemic?

Nelson: One of the lessons of COVID-19 is that, in some way, we all became social scientists. This is a moment, I think, in which all of us have had to come to terms with the profound complexity of the challenges we face right now, and in the coming years. There were times in the pandemic when all of us became armchair epidemiologists, making risk assessment calculations for our families, for our neighborhoods, for our workplaces and schools.

“As government becomes more analytical, it is very important to have social scientists at the table.”

At the same time, the science and technology around the pandemic was extraordinary: we decoded the genome of the virus in a month or so, we had a vaccine in less than a year. Yet we realize we have not conquered it. It has not been for lack of science and technology that we have not conquered it, but because of the environment in which that science and technology emerged—these are profound social questions. And when it comes to climate change, we’re living in a time when the impact is acute, urgent, and existential. I want to believe that all of us in the American public are learning to face up to the complexities of climate change, and the pandemic may have primed how we think about it. I hope that presents some opportunities for courageous possibilities for both domestic and international climate change policy and for pandemic preparedness.

Is there anything else you would like Issues’ readers to know about President Biden’s science policy priorities?

Nelson: I would like your readers to know that the federal R&D budget for the 2023 fiscal year not only puts a priority on cutting-edge science and technology, but it also puts a priority on innovation for equity. We’re proposing a new kind of social compact for S&T policy, in which it is pursued in the context of its social ecosystem, with a greater awareness of whom it’s supposed to benefit—and how.

A Revolution for Engineering Education

Kudos to Sheryl Sorby, Norman L. Fortenberry, and Gary Bertoline for trying to foment “humanistically” a revolution in engineering education. In “Stuck in 1955, Engineering Education Needs a Revolution” (Issues, September 13, 2021), they call for ending the “pipeline mindset.” Their article aligns with descriptions of structural education problems—and proposed solutions—in Educating Scientists and Engineers: Grade School to Grad School, produced in 1988 by the Office of Technology Assessment (OTA) and presented to the House Science Committee. It noted:

  • The pipeline is a model of the process that refines abundant “crude” talent into select “finished” products as signified by the award of baccalaureate, master’s, and doctorate degrees.
  • The pipeline model still treats the educational process as a black box: a dwindling supply of talent, with its composition in flux, that is sorted and guided toward future careers.
  • To the extent that the education system unduly limits the talent pool by prematurely shunting aside students or accepting society’s gender, race, and class biases in its talent selection, it is acting out a self-fulfilling prophecy of demographic determinism.

Unfortunately, the pipeline metaphor persists to this day. Yet so does a fundamental policy prescription that OTA identified: “The skills of scientists and engineers must be both specialized enough to satisfy the demands of a stable market for science and engineering faculty and industrial researchers and general enough to qualify degree-holders for special opportunities that arise farther afield from their training but grow central to the national interest.”

What was compelling to the OTA project team back then is even more so today: the more “semi-permeable” the nation’s talent development pathways, the heartier and more inclusive will engineering education and the workforce become.

Independent Consultant

Savannah, Georgia

Beyond Trust in Science

In “Trust in Science Is Not the Problem” (Issues, Spring 2021), Alan I. Leshner urges scientists to stretch outside their comfort zones to regularly engage with the people who are paying the bills (taxpayers and their elected representatives), and who have some questions. A skeptical habit of mind is normally highly valued by scientists, who are trained to wield skepticism with the precision of a scalpel, and to disdain those with lesser skill sets. I think it’s fair to say that some scientists are disdainful of nonscientists; nonscientists pick up on that, and they don’t much like it. The science community should take a pledge to stop criticizing—or, worse, condemning—nonscientists who are actually just acting like scientists, asking questions, expressing skepticism.

Not all those who are asking questions, criticizing science, are eager to learn or change. Many are not! But some are open to engagement, and that’s where the opportunity lies.

The science community should take a pledge to stop criticizing—or, worse, condemning—nonscientists who are actually just acting like scientists, asking questions, expressing skepticism.

I agree with Leshner that instead of asking the public to change, we should expect, and empower, the science community to make some changes. There are science societies and foundation-funded programs that are doing some important work, helping interested members of the science community learn how to effectively engage the public. It’s time to take these initiatives and more to scale, and to learn as we go, just as in any new field of scholarship and pedagogy. Let’s incentivize academia to modernize the training curriculum for graduate students to include public engagement and communication. Teaching these skills and expecting evidence of competence is important. So is including public engagement activities in promotion and tenure reviews. These are important steps toward earning public confidence and trust on a sustained basis.

Let’s require federally funded science training to include a public engagement component. (Who could make that happen? An individual university could, federal agencies could, or Congress could.) Over time, generations of scientists will be empowered to encourage—rather than discourage or scorn—public engagement by their peers; scientists will welcome skeptical questions from nonscientists and will model the scientific process by stimulating more questions. More and more effective public engagement by scientists will also underscore the power of science to add value to all our lives.

President and CEO

Research!America

Cognitive Ecosystems

Braden R. Allenby’s article, “Worldwide Weird: Rise of the Cognitive Ecosystem” (Issues, Spring 2021), is timely as we rush to build the cyber-human world. Cognitive ecosystems have always existed, as Allenby notes in citing Edwin Hutchins’s observations of Micronesian navigators. The difference between the old cognitive systems and the new is that the old were mainly local, and the control of resources and knowledge was also local. The printed word, the industrial revolution, and colonialism produced dramatic changes to the cognitive ecosystem over the past 400–500 years. Allenby describes the cognitive ecosystem of the future now taking shape around us as a continuation of the trajectory of increasing complexity of techno-human systems. He emphasizes the difficulty in perceiving the challenges that this new direction entails. Emergence is inherent in any complex adaptive system, but scale multiplies techno-human systems and complexity over time.

Since the industrial revolution, scaling, power amplification, and efficiency have been primary drivers of development. As we scale, complexity increases and the need for control increases, with lack of predictability leading to nonlinear effects. The sociologist Charles Perrow has warned us that complex designed systems will lead to emergent failures embedded in the design that were unknown to the designers. The challenge becomes unfathomable for open systems with lots of “intelligent” black boxes built in and for distributed cognitive ecology. Who is building them?

Emergence is inherent in any complex adaptive system, but scale multiplies techno-human systems and complexity over time.

An emerging model is China’s social credit system, which seeks to shape a cognitive ecology ordained by the party. Elsewhere, tech giants and other entities determine our ecological direction, primarily for profit. In both cases, the systems are leveraging technology to consolidate and centralize data on the physical world and the citizenry, with processing and memory afforded by the scalability of the techno-cognitive ecosystem.

Allenby points out that citizens and institutions are not oriented to absorb this rapid, mass-scale evolution of the cognitive ecosystem—Alvin Toffler’s “future shock.” Cognitive technologies enhance centralization, at the cost of reshaping local structures and making them less independent. The loss of local newspapers weakens the local cognitive ecosystem. Consolidation of power is inevitable when scaling is made possible through technology for physical or calculative power. The real question Allenby raises is whether the United States understands this well enough to compete to preserve the power of the people while not losing to China in its march to consolidation of power in an authoritarian cognitive ecology.

Technology facilitates scaling, in turn producing consolidation of power that leads to loss of local cognitive autonomy and ecology. American democracy was envisioned to flourish by providing a space for democratic experimentation. If that spirit is lost to this new consolidation of power, the United States will in effect become no different from China, with a different illusion of harmony—not of fear but unconscious subjugation. Without the democratic ability to shape this cognitive ecosystem, it will only consolidate existing social and national power relationships rather than deliver the imaginary freedom that the computational cognitive ecosystem promised. The centralization of power in this cognitive ecosystem to the state or corporate structures will be the end of social democratic innovation in a democracy.

To rephrase Allenby’s challenge: the question to be answered is how we design institutions that check the consolidation of social power and preserve the innovative and adaptive local cognitive ecosystems, without loss of freedom, while taking advantage of the global cognitive ecosystem. Justice Louis Brandeis is speaking to us again, warning of the consolidation of power in democratic societies.

Research Professor

Engineering Research Accelerator

Engineering and Public Policy

Carnegie Mellon University

Principles for US Industrial Policy

In “Design Principles for American Industrial Policy” (Issues, Spring 2021), Andrew Schrank calls for new design principles with which to anchor new innovation and industrial policies. To have a sustained positive impact, he notes, those policies must create a wide coalition of actors supporting them. This is a truly important insight, and Schrank has demonstrated it across an array of policies over several decades. It is clear that the United States will need to heed those lessons and build new policies along the lines of the targeted-universalism design principles he favors.

I would add to Schrank’s contribution by focusing on the sociopolitical ideals with which we should employ those design principles. After 50 years of growing inequality and decreasing social mobility, the United States has a dual window of political opportunity. The reality is that the majority of Americans now face significant economic insecurity and fear for their future and the future of their children. Hence, Americans want a stronger, but also fairer, nation where everyone has a real shot at the American dream.

For that reason I argue that the United States employ distribution-sensitive innovation policies (DSIPs) as its sociopolitical design principle. DSIPs are designed to reach the dual goals of increasing economic growth while enhancing economic distribution. Amos Zehavi and I have examined such policies in multiple countries of the Organisation for Economic Co-operation and Development, and our findings dovetail with Schrank’s insights. DSIPs can be successful, but their survival depends on crafting a political logic that addresses current political needs and creates a constituency that welcomes their efforts, becoming politically mobilized to ensure their survival.

After 50 years of growing inequality and decreasing social mobility, the United States has a dual window of political opportunity.

In his article Schrank mentions two modes of DSIPs: those aimed at low-skilled manufacturing workers and those aimed at the economic periphery. He showed how such programs—for example, the Manufacturing Extension Partnerships, based at the National Institute of Standards and Technology—have achieved their policy goals, but they have done so only by creating and mobilizing a political coalition to support them. Let me offer two other domains of DSIPs to consider.

Minorities. Governments intent on better integrating members of disadvantaged minorities into the workforce tend to focus on the low end of labor markets. But real progress happens when minorities get into the growing and innovative sectors of the economy. It is not enough to get disadvantaged minorities into STEM education; it is also necessary to get them into innovative activities in technology-intensive workplaces. Minority group pioneers can play a critical role by serving as role models in their communities and by becoming nodes in social-professional networks that help future generations navigate the world of technology-intensive industries. Further, the success of such programs creates its own newly empowered political supporters.

People with Disabilities (PWDs). With the United States’ rapidly aging population, the percentage of PWDs is constantly rising. Alarmingly, labor market participation rates for PWDs are very low. More than ever, new technologies hold the promise of better incorporating PWDs into the workforce. Governments can help by pushing for their development and implementation. As the political battles around Medicare demonstrated, older people comprise one of the nation’s strongest political forces, and more and more of them are becoming PWDs.

Schrank has powerfully demonstrated the need to apply targeted-universalism as the core design principle. At the same time, it will be important to ensure that more and more people can actively participate in the economy and fulfill the American dream.

University Professor and Munk Chair of Innovation Studies

University of Toronto

Codirector, CIFAR’s program in Innovation, Equity & The Future of Prosperity

Author of Innovation in Real Places: Strategies for Prosperity in an Unforgiving World (Oxford University Press, April 2021)

Andrew Schrank compares two ill-fated federal industrial policy programs to three others that are still alive and kicking. He compellingly argues that the difference between the failure of the former and the success of the latter was not in their economic effectiveness, but in their political viability. Contrary to the common wisdom that programs that evade attention are the most resilient, Schrank argues that the path to political viability depends on the different programs’ ability to foster broad constituencies.

Building broad constituencies requires federal programs to adopt a “targeting within universalism” design in which universalism guarantees that all relevant program clients get something and the economically least-developed are targeted to receive a disproportionately higher share of funding than others. However, unlike in social policy programs, targeting in industrial policy is not required to further essential program goals, but to acquire the support of actors—often the representatives of economically weaker states—that would hardly benefit from program allocations awarded according to purely competitive criteria.

While I readily agree with Schrank that universalism is required to build broad support for programs, I wonder whether targeting is necessary from a political perspective. It is likely that as long as a state receives an equal share of program funding, it will extend its support for the program. Hence, targeting—that is, allocating outsized shares to the less-developed—is unnecessary, at least from a political standpoint.

I wonder whether targeting is necessary from a political perspective.

However, as Schrank duly notes, industrial policy is not exclusively about promoting industry competitiveness. In an era of rising inequality in general, and rising spatial inequality more specifically, governments are seeking ways to narrow the gaps and jump start economic development in “left behind” regions and towns. While it is true that return on government investment tends to be higher in economic powerhouses such as California, from a social equity perspective investing in less-developed Arkansas is the higher priority. I would argue therefore that the rationale for targeting (within universalism) is primarily furthering social goals. All states should benefit from funding to create a broad constituency; targeting is required to address growing social inequities.

Regardless, Schrank’s broader message that for industrial programs to succeed they must expand their constituencies is apt. Indeed, following this reasoning, for industrial policy programs to gain and retain political viability they should be designed to be inclusive. For instance, engaging unions in these programs—as is done, for example, in Germany—could bring a significant new constituency into the fold.

Of course, doing this, or more generally initiating new programs, is no mean task in today’s politically polarized age. Nevertheless, the Senate’s recent passage of the $250 billion US Innovation and Competition Act offers hope that, given economic challenges (think China) and widespread social plight, industrial policy is on the rise again. Schrank offers sound advice about program design principles that, if followed, would increase the likelihood that these new programs would survive and thrive in the coming decades.

Chair, Department of Public Policy

Department of Political Science

Tel Aviv University (Israel)

Associate Program Director, CIFAR’s program in Innovation, Equity & the Future of Prosperity

Andrew Schrank makes a series of excellent points about the contemporary industrial policy discussion in the United States. I have considerable sympathy for what he says and regard the worries that he addresses concerning the ability of the US political system to design an effective set of policies benefitting American businesses and their workers to be of central contemporary political importance. I offer two thoughts in reaction to his argument.

Loss of industries, firms, and employment due to the China shock has left us with a surviving manufacturing sector that is relatively lean, and fairly competitive.

First, how uncompetitive are American companies that are still in business? The US decline in manufacturing employment has to do with the emergence of China and with secular improvements in productivity, more with the former than with the latter. The United States is still one of the largest manufacturers in the world. Loss of industries, firms, and employment due to the China shock has left us with a surviving manufacturing sector that is relatively lean, and fairly competitive. It is just that this surviving, relatively competitive manufacturing sector does not generate a great deal of employment. It would be important to know if Schrank wants policies that will make these already competitive companies even more competitive, which would benefit existing companies but likely have only modest employment effects, or if he wants to create more companies so that more people will be employed in manufacturing. Industrial policy will address the first problem, but it’s not clear that it is the right tool for the second problem.

Second, if multiplier effects from manufacturing on the generation of jobs are a key reason to be concerned with the health of manufacturing, wouldn’t it be important to generate an industrial policy that paid explicit attention to the ways in which employment and competitiveness are entangled across manufacturing, service, financial, and even agricultural domains? In some ways, Schrank’s premise—the nation needs a politically feasible but perhaps not so efficient industrial policy—is undermined by his focus on manufacturing alone. Industrial policy is a targeted program, not a universal one. By his own analysis, this is likely to generate opposition from those domains that are not targeted. Schrank wants those drawing up industrial policy plans to take the tension between universality and particular benefits into account, but there are a variety of universals and particulars in play. Is he focusing on the right ones?

Paul Klapper Professor in the College and Division of Social Science

Department of Political Science

University of Chicago

Democratizing Talent and Ideas

The new National Science Foundation director, Sethuraman Panchanathan, or Panch, as he encourages us to address him, shows in his Issues interview (Spring 2021) why he was chosen to lead NSF at this challenging time for the agency and the country. He has the ideal background, vision, energy, and passion to take on the task. That will all be tested as NSF moves into new territory—not uncharted, but not quite traditional either.

At a time when American leadership, its economy, and indeed the future well-being of its people are being challenged as never before, the nation’s political leaders are turning to NSF to play a particularly important role. They are asking it not only to “promote the progress of science,” as the beginning of the agency’s mission statement reads, but also to ensure that scientific discoveries and inventions are put to use by supporting translation to industrial application. Concerns have been raised about whether this is a proper role for NSF, whether this new responsibility will erode NSF’s tradition of excellence in supporting basic research in most nonbiomedical areas of science and engineering, and whether NSF can deliver.

The proposed bipartisan, bicameral Endless Frontier Act (renamed the Innovation and Competition Act), sponsored by Senate Majority Leader Chuck Schumer (D-NY), Senator Todd Young (R-IN), and Representatives Ro Khanna (D-CA) and Mike Gallagher (R-WI), is unprecedented in its proposed funding and bold challenges for NSF, including the creation of a new directorate focused on technology and innovation. It represents a major step up in NSF’s funding, responsibilities, and expectations on the part of Congress.

At a time when American leadership, its economy, and indeed the future well-being of its people are being challenged as never before, the nation’s political leaders are turning to NSF to play a particularly important role.

On the House side, the proposed National Science Foundation for the Future Act is also bold and contains many of the features of the Senate counterpart, but with more emphasis on traditional research programs and on education and human resources. President Biden has similar aspirations for NSF and is proposing a 20% increase for the agency for fiscal year 2022, which, in part, will also fund a new directorate.

At this moment I would not venture to predict the outcome. But it’s clear that NSF is likely to be challenged to expand the scope of its activities—hopefully, with substantial additional funding. So, the questions are apt: Why NSF? And, can NSF do this job?

Serving as NSF’s tenth director, I was privileged to see firsthand how its program managers and support staff work so effectively and efficiently to get the most out of the agency’s relatively small budgets. I experienced the benefits of advice and cooperation from the National Science Board, which shares policymaking authority with the director. And I had a chance to study how NSF, over seven decades since its founding, has been able to adapt its programs to changes in the science and engineering disciplines and in requirements for new experimental facilities and research modes, and also to incorporate the most effective approaches to improving STEM education and inclusiveness. And it has done this while continuing to fund the most meritorious basic research proposals, using expert peer review. I am confident that NSF can do the job.

As to why NSF? I don’t see any other independent agency that could better do what is being asked of NSF. And there is no time to create one. Congressional leaders believe that a lead agency is needed and they have turned to NSF to play that important role. But I want to be clear on this: the challenge the United States faces in the coming decades, primarily from the rapid rise of China, is larger than what any one agency can do.

Fortunately, the federal government has many mission agencies that support excellent research, and it will be necessary for all of them—the Department of Energy, the National Institutes of Health, the National Aeronautics and Space Administration, the National Institute of Standards and Technology, the National Oceanic and Atmospheric Administration, the Defense Advanced Research Projects Agency, and others—to be given additional funding so they can prioritize and coordinate their research activities in support of President Biden’s list of charges to his science advisor and Office of Science and Technology Policy director Eric Lander, who now sits on the president’s Cabinet.

Senior Fellow, Rice University’s Baker Institute for Public Policy

Former Director, National Science Foundation, 1993–1998

When asked to comment on the interview with National Science Foundation Director Sethuraman Panchanathan, I paused because I felt he had covered the subject so brilliantly and comprehensively that there was little I could add. Instead, I decided to focus on ways in which synergies within the “research triangle” (academia, industry, and government) amplify advances in science and technology to meet national objectives. Vannevar Bush, the architect of postwar US science policy, and Arthur Bueche, the influential head of research and technology at General Electric, were not only early advocates of building synergies within the triangle; they also personified the pursuit of these synergies in their own careers.

NSF nurtures synergistic research through interdisciplinary collaborations. In so doing, it draws bright graduate students from all over the world to US universities to pursue doctorate degrees in science and engineering under highly recognized faculty, many of foreign origin.

What Bush and Bueche would find amazing if they were alive today is the extent of innovation and entrepreneurship now taking place within the research triangle. Many universities have established centers to teach innovation by both learning and doing, foundries for rapid prototyping and testing, start-up centers for enterprise development, legal counseling for preparing and filing patents, and university-managed research parks for nurturing start-ups and attracting venture capital.

Technically aligned companies now locate development centers in proximity to these universities not only to gain access to unique research instruments and facilities but also to recruit top talent.

Government agencies play key roles in advancing science and technology developments within the triangle. They do so not only through their own laboratories but also through federally funded research and development centers, university-affiliated research centers, and cooperative research and development agreements by which they share their facilities and expertise with private companies to aid them in new product developments. Also included are industry technology development clusters, corridors, and parks in proximity to Department of Energy and Department of Defense laboratories and to NASA research centers.

Government agencies play key roles in advancing science and technology developments within the triangle.

By examining the types of collaborative science and technology clusters in the United States, one can appreciate the many ways in which entrepreneurial push can join with commercialization pull to build bridges across the so-called valley of death between R&D activity and commercial use. If one examines the distribution of these centers and clusters among the 50 states, one finds fewer than five states that are not significantly represented.

Finally, it is important to delineate the difference between “incubators” and “concentrators.” All of the examples mentioned above are incubators of scientific discoveries, new technologies, and economic growth. Concentrators exist primarily in large metropolitan areas that attract rapidly growing innovative enterprises because of their proximity to supply chains, transportation hubs, air- and seaports, markets, and large-enterprise services (business, legal, and financial).

Economic growth concentrates as it migrates from distributed incubators to regional concentrators. It doesn’t follow that increasing the geographic distribution of incubators will, without substantial infrastructure investment, produce a greater distribution of concentrators.

David A. Ross Distinguished Professor of Nuclear Engineering Emeritus

Purdue University

Science for Policy

In “Science for Hyper-Partisan Times” (Issues, Spring 2021), Jeffrey Warren tells the history of the North Carolina Policy Collaboratory, based at the University of North Carolina-Chapel Hill. His frame is that of the Collaboratory’s leader, cast into the role within a university whose leaders and faculty members were initially distrustful of someone moving from the NC Senate Republican staff to the university. My perspective is as the dean of a school that was a natural collaborator with the new entity; one of our departments is environmental sciences and engineering. Before the Collaboratory was launched, there was concern, as Warren recounts, that its funding was a way for Republican legislators to control environmental research and regulation. Many leaders and faculty members across the university were skeptical about the Collaboratory, fearful that accepting funding from the new entity might taint them.

My experience in leading a large National Institutes of Health division had taught me the value of bipartisanship. We are neither the Republican school of public health nor the Democratic school of public health. We are the people’s school of public health. Practically speaking, at the time the Collaboratory was launched, with Democrats out of power in the White House, the NC governor’s mansion, and the state house, we needed new friends, or, at least, allies. I was willing to give Warren a chance.

We are neither the Republican school of public health nor the Democratic school of public health. We are the people’s school of public health.

However, I also was determined that we accept funds only if they were unconstrained by politics. A few faculty members applied for pilot funds and had good experiences. I agreed to meet with Warren. We recognized that we had similar goals: creating a more environmentally healthy state, contributing to the scientific knowledge base with policy-relevant work, funding our faculty researchers, and creating a model that could be replicated in other states.

As the first grants were completed, it was clear that strong research with practical application potential had been funded. The findings and views of scientists had not been directed or stifled. Researchers and administrators saw that we were in this together, and that people on both sides of the political aisle could support an environmentally healthy state and use science as the foundation for policy and regulation. On many occasions, legislators from both parties were curious and interested in environmental science. I participated in a call in which one of our researchers gave a minitutorial to a legislator about mass spectrometry—at his request.

When the pandemic struck, thanks to the strong relationships enabled by the Collaboratory, we went to legislative leaders with a request for substantial funding to answer critical questions related to SARS-CoV-2, the virus that causes COVID-19. They listened and asked hard questions. In the end, as Warren recounts, they provided about $44 million for the Collaboratory to manage projects of researchers from multiple UNC universities to address pressing questions—from those related to COVID-19 testing and transmission, to the efficacy of wastewater systems for virus surveillance, to how to help businesses return to prosperity as the pandemic eased. I am not aware of another state that provided such generous funding to its universities to apply their scientific expertise to speed the end of the pandemic.

It was not all milk and honey. We, in academia, were quick to bristle when some of our statements were questioned by legislators. We sometimes defaulted to distrusting legislators. I suspect they may have felt the same way about us. But something positive happened over the last several years of the Collaboratory. Many faculty members and administrators have gone from hands-off to cautious appraisal to full-on partnership. Jeff Warren often acted as an effective translator and communicator between researchers and legislators. The results are good for all participants and for the environment of North Carolina. Academics and legislators should talk more, judge less, and focus on outcomes that benefit their states and regions.

Alumni Distinguished Professor

UNC Gillings School of Global Public Health

Practically Wise

In “COVID-19 Through Time” (Issues, Spring 2021), Joseph J. Fins, a physician and bioethicist, describes how “the interplay of time, knowledge, judgment, and action is an essential determinant of how science works in the real world.” Fins notes that early in the pandemic, New York guidelines offered an approach to ventilator allocation for patients with respiratory failure due to COVID-19 infection based on then-current knowledge and experience. However, with time, knowledge about management of COVID-19 patients with respiratory failure increased. It became apparent that the guidelines were inappropriate. Fortunately, he writes, “tragic choices” of prematurely removing affected patients from ventilator support were not made. To Fins, “medical progress depended on the passage of time” during which clinical observations and research informed the treatment of patients with COVID-19 disease. He notes that this quantitative aspect of time—measured in minutes and so on—is what the Greeks called chronos.

Fins also highlights kairos, a qualitative aspect of time. Kairos, he writes, “asks us to appreciate the moral significance of the timing of our decision.” Is now the right time? Of course, chronos informs kairos. Yet throughout the COVID-19 pandemic, clinicians, health systems, and others have been “forced to contemplate kairos before having the benefit of … evidence provided by chronos.”

Indeed, as a physician, bioethicist, and leader of a regional health system, I can attest to the interplay between chronos and kairos and associated uncertainty during the pandemic. The following is a small sample of questions we have addressed with minimal evidence provided by chronos: What personal protective equipment should be worn when seeing patients without symptoms of COVID-19? How do we keep our hospital safe for patients, visitors, and employees? Will we have enough ventilators? (We also grappled with the matter of ventilator allocation.) Do we restrict visitors? How and where should we conduct COVID-19 testing? Who benefits from active treatment? How do we safely care for patients with non-COVID-19 diseases? How do we safely reopen the elective practices? Who gets a vaccine first? How do we ensure that vulnerable populations are vaccinated?

I can attest to the interplay between chronos and kairos and associated uncertainty during the pandemic.

How does one make decisions in the midst of daunting uncertainty? I posit an additional Greek concept: phronesis, also known as practical wisdom or prudence. The bioethicist Edmund Pellegrino describes phronesis as “the capacity for deliberation, judgement and discernment in difficult moral situations” and the clinician’s “most valuable virtue.” It promotes wise decisionmaking. The bioethicist Lauris Kaldjian describes five elements of phronesis-based decisionmaking: worthwhile ends; accurate perception of concrete circumstances detailing the specific practical situation at hand; commitment to virtues and moral principles; and, based on these, deliberation and motivation to act in order to achieve the conclusions reached by such deliberation. Phronesis, in turn, informs praxis—doing what is best given the situation.

Our Hospital Incident Command System was activated for more than a year and met regularly to address the COVID-19 pandemic. Our deliberations, decisions, and actions have reflected Kaldjian’s framework. Communications to our patients, staff, and communities have been frequent and regular. We have articulated what we know, what we don’t know, and the rationale for decisions. Over time, chronos informed kairos and phronesis, and our confidence in managing the myriad effects of the pandemic, from patient care to public health efforts, grew.

Finally, I agree with Fins that a third dimension of time—the study of the past—should be embraced. I am also optimistic that, as he writes, “our efforts to achieve some imperfect, early measure of kairos in the present will be deemed prudent”—that is, practically wise.

Regional Vice President, Mayo Clinic Health System–Southwest Wisconsin

Professor of Medicine and Professor of Biomedical Ethics, Mayo Clinic College of Medicine & Science

Teaching the Stories of Science

In “Shining a Light on the Impacts of Our Innovations” (Issues, Spring 2021), Ainissa Ramirez makes a compelling argument for bringing the stories of scientists and the impact of their research into our textbooks and classrooms. When I went through my degrees, I was taught about science. I learned about polymers—their synthesis, properties, and uses. I learned about interfacial polymerization and the “nylon rope trick” to demonstrate the preparation of a synthetic polymer. It became one of my favorite demonstrations at outreach events. It wasn’t until I had been teaching for many years that I realized Stephanie Kwolek, the DuPont chemist known for inventing a family of synthetic fibers of exceptional strength and stiffness, developed the nylon rope trick as a teaching tool. I wish I had known more about her much earlier in my career. Being able to see someone like me in science—someone who had always been there but whose story was rarely told in classes—would have been transformational.

We owe it to our students to teach science in context not only so they might see themselves in the scientists who came before, but also so they might understand the impact science can have. When the Scottish inventor John Boyd Dunlop developed pneumatic tires, he may not have considered the impact on the raw materials’ sources or on the disposal of tires. But the desire for natural rubber exploded with the pneumatic tire, and the exploitation, torture, enslavement, and murder of Black people in the Congo and Indigenous tribes in the Amazon grew out of this invention. What started as a better tricycle tire for Dunlop’s child became the mutilation and murder of a five-year-old named Boali in the Congo. Alice Seeley Harris, a missionary, tried to stop such atrocities from continuing by documenting this horror with one of the first consumer cameras.

Being able to see someone like me in science—someone who had always been there but whose story was rarely told in classes—would have been transformational.

One can, of course, teach without the stories. Materials, sources, uses, molecular features can all be taught. But when we arm our students and colleagues with the stories and context, we arm them and ourselves with the opportunity to do better science. We have the opportunity to think about where the cobalt we use in batteries is mined by children, and we have the opportunity to think about the end of life for the cobalt-containing batteries. We can engage with the work of people such as the polymer chemist Karen Wooley and the chemical engineer Jodie Lutkenhaus and their development of new materials for recyclable lithium-ion batteries. We can learn about the work of Max Liboiron, a geographer in the field of “discard studies” who characterizes plastics in the ocean, and LaShanda Korley, a materials scientist who is developing new plastics with the end of life in mind from the start.

When we teach science without societal context, we give our students and ourselves permission to ignore our responsibilities as scientists. But that responsibility does not recede. When we teach our students about the scientists who did the work they study, they have an opportunity to see themselves in those scientists. When we teach them about the impact of science, we give our students the opportunity to think about the full ramifications of their work. When we teach science in context, we have the opportunity to be better scientists.

Associate Dean for Research and Faculty Development

College of Engineering and Information Technology

Professor of Chemical, Biochemical, and Environmental Engineering

University of Maryland, Baltimore County

New Voices in the Future of Science Policy

The latest issue of the Journal of Science Policy & Governance invited early-career authors to reimagine the next 75 years of science policy. Supported by The Kavli Foundation and published in collaboration with the American Association for the Advancement of Science, the special collection offers bold, innovative, and actionable ideas for how the US scientific enterprise can become more equitable and inclusive, helping to contribute to a brighter future for all Americans.

These articles seek to broaden the view of how scientists can participate in achieving positive social impact. Authors focused on such issues as citizen involvement in science, promoting trust in science, embracing democratic principles, and addressing the needs of the American people.

More specifically, in the issue’s three winning essays, the authors argue for making rural regions a priority in US science policy so that the benefits of research and innovation are distributed more broadly and equitably; improving global scientific communication and collaboration by translating STEM papers into languages other than English; and reframing science policy and funding to emphasize social benefits as much as knowledge generation in scientific research.

Thinking Like A Citizen

In “Need Public Policy for Human Gene Editing, Heatwaves, or Asteroids? Try Thinking Like a Citizen” (Issues, Spring 2021), Nicholas Weller, Michelle Sullivan Govani, and Mahmud Farooque call on President Biden to invest in opportunities for Americans to meaningfully participate in science and technology decisionmaking. They argue this can be done by institutionalizing participatory technology assessment (pTA) at the federal level.

I’ve worked for many years as a pTA facilitator and researcher, and have spent the last three years researching the factors that make pTA successful (or not) in federal agencies. Like the authors, I would underscore the value pTA brings to decisionmaking. Participants in pTA forums—including those who normally think of “politics,” “government,” or even “public engagement” as dirty words—learn through their participation that they can engage with their fellow citizens on topics of importance in productive and generative ways. The authors suggest that, given the tremendous potential pTA has for improving democratic discourse and decisionmaking, now is the time for federal investment in and institutionalization of these approaches. 

But such investments should be made thoughtfully. My research team’s work on pTA in federal agencies underscores three important realities. First, pTA efforts are vulnerable to shifting political and administrative priorities. Seeking ways to institutionalize engagement efforts within agencies—and not just as one-off “experiments” or as part of more ephemeral challenges, prizes, or grant-making—is more likely to lead to lasting change. The creation of Engagement Innovation Offices within agencies, for example, would increase long-term capacity and organizational learning. Such offices should be separate from communications offices, whose remit is different, and should focus on experimenting with and developing dialogic forms of public engagement. 

Second, our research shows that successful public engagement requires skilled engagement professionals who understand the importance of deliberative approaches. These are agency personnel who are technically literate but also formally trained in public engagement theory and practice. They understand agency culture, are good at collaborating across departments and directorates, know how to navigate administrative rules, get how leadership and power function in the agency, and have enough technical and political knowledge and agency clout to innovate in the face of tradition and resistance.

Third, academic programs in science and technology studies (STS) have an important role to play, and agencies should develop partnerships with them to create pipelines for these skilled professionals to move into the federal government. Academic programs should prioritize training in pTA and other public engagement tools, and provide experiences for STS students with technical training, literacy, or both to be placed in agencies as a form of experiential learning. Agencies should facilitate such placements, perhaps via the proposed Engagement Innovation Offices.

The federal Office of Science and Technology Policy will be an important player in these efforts. It can provide training, organization, funding, and influence. But running pTA efforts out of a centralized office may prove to be less robust and sustainable than embedding Engagement Innovation Offices and professionals within agencies, making them more resistant to, though not totally insulated against, political headwinds and shifting budget priorities.

Professor of Public Policy and Administration

School of Public Service

Boise State University

Like Nicholas Weller, Michelle Sullivan Govani, and Mahmud Farooque, I am encouraged by the current surge of interest in how science and technology policy couples with social policy. But I am somewhat concerned by the authors’ narrative of participatory technology assessment (pTA) as a new breakthrough of “postnormal” participatory engagement into heretofore isolated scientific-technological domains. Enthusiasm for such participatory breakthroughs has a more than 50-year history, which offers some warnings. I submit two examples from the work of historians of technology.

Jennifer Light’s book From Warfare to Welfare (2003) follows the migration of defense analysts into urban policy in the 1960s and ’70s. During that period, these analysts and their liberal allies in municipal politics readily embraced citizen participation as a means of quelling urban “alienation,” leading them to champion cable television as a way to create new avenues for local engagement. Those efforts foundered before cable later flourished in the hands of media companies uninterested in such matters.

In his book Rescuing Prometheus (1998, note the title’s implied narrative), Thomas Hughes prefigured the current authors’ interest in postnormal policy by announcing the advent of “postmodern” technological systems characterized by stakeholder engagement. Unhappily, his exemplar of such efforts was Boston’s then-in-progress Central Artery / Tunnel project, which did eventually reform the cityscape but also became notorious as the project management boondoggle known as the “Big Dig.”

Emphatically, my point is not to imply that participatory decisionmaking is a fatally flawed concept. Rather, it is to encourage a healthy awareness that, instead of displacing a perceived plague of scientific-technical solutionism, initiatives such as pTA could end up writing new chapters in the checkered history of social scientific solutionism if they are not thought through. Solutionism is no less solutionism and an expert is no less an expert simply because the innovation being proffered is social in nature rather than a gadget or an algorithm.

By portraying their pTA approach as a breakthrough social intervention, the authors arrive quickly and confidently at their recommendation of instantiating it as a new box on the federal org chart. I would challenge them to grapple more openly with the lessons of participatory decisionmaking’s long history and the difficulties facing their particular branded method.

Critical questions include: At what points in the decisionmaking process should such exercises be conducted? How much additional burden and complexity should they impose on decisionmaking processes? What sorts of decisions should be subject to alteration by public opinion, how should issues be presented to public representatives, and how should contradictory values be reconciled?

More fundamentally, we may ask if this is a true method of participatory decisionmaking, or is it an elaborate salutary regimen for cloistered program managers? With a public deluged with requests for feedback on everything from their purchases to their politicians, is feedback into the multitude of technical decisions within the federal government what the public wants, or is that notion itself a creation of the expert imagination? 

Senior Science Policy Analyst

American Institute of Physics

Author of Rational Action: The Sciences of Policy in Britain and America, 1940–1960 (MIT Press, 2015)

Is Science Philanthropy Too Cautious?

It was a delight to see Robert W. Conn’s coherent, synthetic history of the role of philanthropy in support of US science and technology, presented in “Why Philanthropy Is America’s Unique Research Advantage” (Issues, August 11, 2021). The field I was trained in, molecular biology, originated in large part through the vision of Warren Weaver at the Rockefeller Foundation, and early practitioners were supported by the Carnegie Corporation. Now philanthropies in biomedical research are hugely important complements to the National Institutes of Health and other government funders.

I have been puzzled for over two decades by a simple observation, and I would welcome Conn’s thoughts. Why has it taken so long, and relied entirely on initiative within government, to develop a DARPA-like component in biomedical research, when this was an obvious niche to fill? Conn describes how major R&D initiatives draw on a very broad array of investigator-initiated research projects—the vaunted R01 NIH grant and its equivalents. But now the convergence of science and information technology has naturally led to larger teams that require management, direction, and vision: examples being CERN, the Human Genome Project, and the BRAIN Initiative. And now there is serious talk of cloning and adapting the DARPA framework to address problem-oriented research—that is, to systematically pursue grand challenges that will necessarily entail many teams pulling in harness.

Why has it taken so long, and relied entirely on initiative within government, to develop a DARPA-like component in biomedical research, when this was an obvious niche to fill?

Yet most philanthropies have mainly cherry-picked successful science from the ranks of stars in the NIH- and National Science Foundation-funded constellations. It is an effective and successful, but very conservative, strategy. Witness the amazing contributions of Howard Hughes Medical Institute investigators or the outsize influence of the Bill & Melinda Gates Foundation in global health. But serious efforts to pool resources toward social goals that won’t be achieved otherwise seem outside the box. Witness the story of developing vaccines but failing to distribute them equitably or even effectively because the incentives to the companies that control the products do not align with public health goals.

It seems there must be some incentives within philanthropy that thwart thinking at scale or hinder cooperation among philanthropies, or that cleave to existing frameworks of intellectual property and control that nonprofit funding might be able to work around. Could the Science Philanthropy Alliance that Conn cites do better? What would that look like?

Professor, Arizona State University

Nuclear Waste Storage

In “Deep Time: The End of an Engagement” (Issues, Spring 2021), Başak Saraç-Lesavre describes in succinct and painful detail the flawed US policy for managing nuclear waste. She weaves through a series of missteps, false starts, and dead-ends that have stymied steady progress and helped to engender our present state—which she describes as “deadlocked.”

Her description and critique are not meant to showcase political blunders, but to caution that the present stasis is, in effect, a potentially treacherous policy decision. The acceptance of essentially doing nothing and consigning the waste to a decentralized or centralized storage configuration is in fact a decision and a de facto policy. To make the situation worse, this status quo was not reached mindfully, but is the result of mangled planning, political reboots, and the present lack of a viable end-state option.

Although there may be some merit to accepting a truly interim phase of storing nuclear waste prior to an enduring disposal solution, the interim plan must be tied to a final solution. As decreed in the Nuclear Waste Policy Act, and reinforced by the Blue Ribbon Commission on America’s Nuclear Future, centralized interim storage was to be the bridge to somewhere. But the bridge is now looking like the destination, and it would be naive not to view it as another disincentive to an already anemic will to live up to the initial intent.

To make the situation worse, this status quo was not reached mindfully, but is the result of mangled planning, political reboots, and the present lack of a viable end-state option.

Saraç-Lesavre seems to believe this current impasse constitutes the end of an earlier era in which shaping decisions and outcomes was once driven by a “moral obligation to present and future generations.” She sees the current unfolding scenario as a reversal of a once-prevailing ethos.

I have been involved for the past 20-plus years in just about every sector dealing with nuclear waste disposal. Beginning with the formation of a nongovernmental organization opposed to incineration of waste, I have conducted work on stakeholder engagement with the Blue Ribbon Commission, the Nuclear Regulatory Commission, the Bipartisan Policy Center, and the Department of Energy, as well as with a private utility, a private nuclear waste disposal company, and an emerging advanced reactor company. From these perspectives and experiences, my impression is that the issue of nuclear waste management is continually given consideration, but rarely commitment. Lip service is the native language and nearly everyone speaks it.

For US policymakers—and truly all stakeholders involved with nuclear waste—it will require a steely and coordinated commitment to solve the problem. This has always been the case, but now the problems are becoming more complex, the politics more partisan, and a path that once appeared negotiable is now nearly unnavigable. The reason for this is less about a lack of resolve to comprehend the “deep time” in which we need to consider the implications of nuclear waste, and more about the impediments and cheap workarounds wrought by the short cycles of “political time.”

Until we can take the politics out of nuclear waste disposal, it will not be the most sound decisions that prevail, but those with the prevailing wind in their sails.

Mary Woollen Consulting

Başak Saraç-Lesavre raises some fundamental and important issues. Spent nuclear fuel (SNF) was created over the past 40-plus years in return for massive amounts of clean nuclear-generated electricity. Are we going to begin to provide a collective answer for managing and disposing of SNF or are we going to shirk our clear moral responsibility and leave a punishing legacy to our children and future generations? More than 50 reactor sites across the United States continue to store SNF on site with no place to send it.

Saraç-Lesavre appears to support the recommendation of important but selective parties whose advice is that “spent nuclear fuel should be stored where it is used.” However, while championing consent-based siting, she does not include the views of those communities and states that now house this stranded SNF, nor the views of much of the science community that is working to provide a viable solution. When those nuclear power plants were originally sited, it was with the understanding that the federal government would take responsibility for removing SNF and disposing of it, allowing the sites to be decommissioned and returned to productive use. Siting and opening a repository for permanent disposal will take many decades even under the most optimistic scenarios; the nation needs to develop one or more interim storage sites that can be licensed, built, and opened for SNF acceptance decades earlier.

Saraç-Lesavre mentions the Obama administration’s Blue Ribbon Commission on America’s Nuclear Future conclusion that siting a nuclear waste repository or interim storage facility should be consent-based. However, the commission made eight fundamental recommendations while also making it clear that it was not a matter of picking just one or several of them; rather, they were all required to resurrect an integrated US program and strategy that had the best chances for success. Two of the commission’s recommendations called for “prompt” actions to develop both a repository for the permanent disposal of SNF (and other high-level radioactive wastes) and centralized interim storage to consolidate SNF in the meantime. The reasoning was detailed and sound, and those recommendations remain highly relevant today.

A healthy, enduring federal repository program is needed and needed promptly. Whether through government, private industry, or a private/public partnership, consent-based centralized interim storage remains needed as well.

Former Lead Advisor, Blue Ribbon Commission on America’s Nuclear Future

Başak Saraç-Lesavre’s commentary on the interim storage of commercially generated spent nuclear fuel raises a variety of important issues that have yet to be addressed. Here I would like to expand on some of her most salient points.

One of the consistent characteristics of the US strategy for the back-end of the nuclear fuel cycle has been the absence of a systematic understanding of the issues and a failure to develop an encompassing strategy. In the report on a two-year study by an international team of nuclear waste management experts, Reset of America’s Nuclear Waste Management Strategy and Policies, sponsored by Stanford University and George Washington University, one of the most important findings was that the present US situation is the product of disconnected decisions that are not consistently aligned toward the final goal—permanent geologic disposal. The isolated decision to consolidate spent fuel inventories at just a few sites is another example of this same failed approach. Any decision to go forward with interim storage needs to be part of a broader series of decisions that will guarantee the final disposal of spent fuel in a geologic repository.

Another critical issue is the meaning of “interim” storage, as interim may well become permanent in the absence of a larger strategy. The present proposal is for interim storage for some 40 years, but it will almost certainly be longer if for no other reason than it will take the United States some 40 to 50 years to site, design, construct, and finally emplace spent nuclear fuel and high-level waste at a geologic repository. The siting of an interim storage facility and the transportation of waste to that facility will be a major undertaking that will take decades. One can hardly imagine that once the waste is moved, there will be an appetite for another campaign to move the waste again to a geologic repository, particularly as time passes, funding decreases, and administrations change. One must expect that as 34 states move their nuclear waste to just a few locations, such as to Texas and New Mexico, the national political will to solve the nuclear waste problem will evaporate. 

The siting of an interim storage facility and the transportation of waste to that facility will be a major undertaking that will take decades.

What is the alternative to interim storage? There is an obvious need to secure the present sites by moving all the spent fuel into dry storage containers that should then be situated below grade or in more secure berms. There may be good reason to move the casks from closed reactor sites to those that are still operating. As reactor sites shut down and are decommissioned, there may be value in retaining pools and waste handling facilities so that spent fuel casks can be opened, examined, and repackaged as needed. As my colleagues and I have reported, even this short list of requirements reveals that the selection among alternatives will involve a difficult mix of technical issues that will have to be coordinated with the need to obtain consent from local communities, tribes, and states.

Interim storage, by itself, will not solve the United States’ nuclear waste problem. In this case, today’s solution is certain to become tomorrow’s problem.

Center for International Security and Cooperation

Stanford University

Başak Saraç-Lesavre’s article addresses the important topic of our moral obligations to future generations, but its focus only on nuclear waste is too narrow. The most important questions for our long-term obligations involve the long-term problems of all wastes generated by all energy technologies.

The focus on nuclear wastes is logical, in the same sense that it’s logical to look for lost keys under a streetlight. One of the major advantages of nuclear waste, compared with wastes that other energy technologies produce, is that it’s plausible to plan for and implement reasonable approaches to safely manage the waste for decades, centuries, and millennia into the future.

We need to shine a brighter light on the question of the very-long-term environmental and public health consequences of all the wastes produced by energy technologies, whether it be the rare-earth mill tailings in Baogang, China, thousands of coal-ash impoundments worldwide, fracking waste waters reinjected into wells, or most importantly, the thousands of gigatons of carbon dioxide and methane released from our current use of fossil fuels.

It sounds deeply dissatisfying, but our current de facto policy to use interim storage for spent nuclear fuel makes sense. Today’s lack of consensus about the permanent disposal of spent fuel is logical, because we do not now know whether it is actually waste, or is a valuable resource that should be recycled in the future to recover additional energy. We cannot predict this today, any more than in the 1970s one could predict whether shale oil could be a resource or should be left in its existing geologic isolation. Certainly with the shale technology of the 1970s, which involved mining, retorting, and generating large volumes of tailings, shale oil was not a resource. But technology has changed, and based upon statistics from the Department of Energy’s Energy Information Administration, the advent of shale fracking gets most of the credit for recent reductions in US carbon dioxide emissions from electricity generation.

Regardless of whether existing spent fuel is recycled in the future, there will still be residuals that will require deep geologic disposal. The successful development of the Waste Isolation Pilot Plant in the United States, for the deep geologic disposal of transuranic wastes from US defense programs, provides evidence that geologic disposal can be developed when societal consensus exists that materials really are wastes, and where clear benefits exist in placing these materials into permanent disposal. Today the United States needs to rethink its approach to managing nuclear wastes. Deep geologic disposal will be an essential tool, and development of multiple options, focused on disposal of materials that are unambiguously wastes, makes sense as the path forward.

William and Jean McCallum Floyd Endowed Chair

Department of Nuclear Engineering

University of California, Berkeley

He is also the Chief Nuclear Officer for Kairos Power and an early investor in and advisor to the start-up company Deep Isolation

Başak Saraç-Lesavre provides an excellent account of the failed US government effort to implement a long-term solution to manage and dispose of the country’s growing backlog of spent nuclear fuel, and makes a compelling case that the prolonged absence of a national nuclear waste strategy has led to a lack of coordination and direction that could lead to inequitable and unjust outcomes. In particular, the development of consolidated interim storage facilities—if made economically attractive for nuclear plant owners—would likely undermine any political consensus for pursuing the challenging goal of siting and licensing a geologic repository.

For this reason and others, the Union of Concerned Scientists does not support consolidated interim storage facilities, and has consistently opposed legislation that would weaken current law by allowing the government to fund such facilities without closely coupling them to demonstrated progress in establishing a geologic repository. However, Saraç-Lesavre references our organization’s position in a way that could give the misleading impression that we support surface storage of spent nuclear fuel at reactor sites for an indefinite period. Although we strongly prefer on-site interim spent fuel storage to consolidated interim storage, provided it is stringently regulated and protected against natural disasters and terrorist attacks, maintaining a network of dispersed surface storage facilities forever is in no way an adequate substitute for a deep underground repository.

Maintaining a network of dispersed surface storage facilities forever is in no way an adequate substitute for a deep underground repository.

The challenge, of course, is how to revive the defunct repository siting program and execute it in a manner that addresses both environmental justice and intergenerational equity concerns. This is not straightforward. Locating a repository site in a region far from the reactors where spent fuel was generated and the areas where the electricity was consumed may seem unjust to the host community, but it could well be the best approach for minimizing long-term public health risks. In that case, one means of redress for the affected community would be fair compensation. The likely need to provide such compensation must be factored into the total cost of the repository program.

An alternative path is offered by the company Deep Isolation, which has proposed to bury spent fuel at or near reactor sites in moderately deep boreholes. While this concept raises significant technical and safety issues and would require major changes to current law and regulations, it does have the political advantage of obviating the need to find a centralized repository location. Whether it is a more just solution, however, is an open question.

Director of Nuclear Power Safety

Union of Concerned Scientists

Başak Saraç-Lesavre’s article on nuclear waste storage offers valuable insights, as do the other two articles on nuclear energy, but none addresses two glaring energy issues.

While not directly related to nuclear power, increasing renewable power will affect the electricity market. Sophisticated market mechanisms balance power output and distribution against demand to ensure fair electricity pricing. The temporal, seasonal, and weather-dependent variations in renewable production, along with the current inability to store large amounts of surplus energy, can upset that market. Backup resources, currently provided mostly by fossil-fuel generation, are critical to electricity marketplace stability.

Excess energy from renewables can force prices to plummet, discouraging investment and exacerbating price uncertainty. A lack of backup forced price spikes in Texas in February 2021 when the industry could not produce or obtain sufficient energy to meet needs driven by an unusual cold spell.

This ties into the three nuclear articles in that they did not acknowledge the advances in energy technology that could solve two issues: what to do with nuclear waste and how to back up renewables. There are at least 50 groups developing new concepts in nuclear fission and fusion energy. Few will survive, but those that do will affect nuclear fuel storage policy, grid reliability, and uranium mining.

A nuclear power plant concept that one of us developed—the molten uranium thermal breeder reactor (MUTBR)—could reduce nuclear waste, provide backup to renewables, and reduce uranium mining.

There are at least 50 groups developing new concepts in nuclear fission and fusion energy. Few will survive, but those that do will affect nuclear fuel storage policy, grid reliability, and uranium mining.

The MUTBR is mostly fueled by molten uranium and plutonium metals reduced from nuclear waste. As liquids, these metals are pumped through a heat exchanger to give up their energy. The reactor has large fuel tubes to facilitate uranium-238 fission and is a “breed-and-burn” reactor. In operation, it “breeds” enough plutonium from the plentiful isotope, uranium-238, to fully replace the easily fissionable but scarce uranium-235 (the primary fuel in conventional reactors) and the plutonium that has fissioned (burned). The MUTBR may be a way to deal with used nuclear fuel while producing copious amounts of carbon-free power on demand.

The MUTBR could provide flexible backup power in the way hydroelectric dams do, smoothing out changes in production and demand. A dam’s reservoir storage capacity facilitates production management. The MUTBR can facilitate backup in three ways.

First, it can send excess heat (beyond what is profitable to sell) to thermal reservoirs. Its high operating temperature would enable cheap heat reservoirs using molten sodium chloride salt. That energy can be released to the grid when demand and prices are higher. Second, if this salt heat storage is depleted but electricity demand is high, biofuels could be used to generate heat for its electric generators as backup. The generators would be sized to convert substantially more heat to electricity than the maximum available from its reactor. Third, when the thermal salt reservoirs are fully charged and power demand is low, the MUTBR’s patented control mechanism provides flexibility to reduce its fission rate. This system would also improve safety by automatically throttling fission if there is a system failure.
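For readers who think in code, the three modes amount to a simple dispatch rule. The sketch below is purely illustrative and is not part of the MUTBR design; the variable names, the 45 percent heat-to-electricity efficiency, and the one-hour time step are assumptions introduced here for clarity.

```python
# Illustrative sketch only: a simplified dispatch rule for a reactor paired with
# molten-salt thermal storage and a biofuel backup, following the three modes
# described above. All names, numbers, and units are hypothetical assumptions;
# a one-hour time step is assumed so MW and MWh can be treated interchangeably.

from dataclasses import dataclass


@dataclass
class PlantState:
    demand_mw: float             # electricity the grid is asking for this hour
    reactor_heat_mw: float       # heat available from the reactor this hour
    stored_heat_mwh: float       # energy currently held in the salt reservoir
    storage_capacity_mwh: float  # maximum the salt reservoir can hold


def dispatch(state: PlantState, efficiency: float = 0.45) -> dict:
    """Decide where heat goes and where electricity comes from for one hour."""
    # Electricity generated directly from reactor heat, capped by demand.
    from_reactor = min(state.demand_mw, state.reactor_heat_mw * efficiency)
    shortfall = state.demand_mw - from_reactor
    surplus_heat = state.reactor_heat_mw - from_reactor / efficiency

    # Mode 1: bank surplus heat in the molten-salt reservoir when demand is low.
    to_storage = min(surplus_heat, state.storage_capacity_mwh - state.stored_heat_mwh)

    # Mode 2: if demand exceeds reactor output, draw down storage first,
    # then fall back on the biofuel-fired backup for whatever remains.
    heat_from_storage = min(shortfall / efficiency, state.stored_heat_mwh)
    from_storage = heat_from_storage * efficiency
    from_backup = max(0.0, shortfall - from_storage)

    # Mode 3: if the reservoir is (or becomes) full while demand is low,
    # signal the control system to throttle the fission rate.
    throttle = (shortfall == 0 and
                state.stored_heat_mwh + to_storage >= state.storage_capacity_mwh)

    return {
        "electricity_from_reactor_mw": from_reactor,
        "heat_to_storage_mw": to_storage,
        "electricity_from_storage_mw": from_storage,
        "electricity_from_backup_mw": from_backup,
        "throttle_reactor": throttle,
    }


# Example: a low-demand hour in which surplus heat is banked and the reactor throttles.
print(dispatch(PlantState(demand_mw=200, reactor_heat_mw=600,
                          stored_heat_mwh=950, storage_capacity_mwh=1000)))
```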

These features of the MUTBR design could provide an excellent non-carbon-producing complement to renewable resources, filling in when renewable production is low and reducing output when renewables are near maximum output. These economic considerations are basic to having a reliable electric power industry.

Washington, DC

Washington, DC

The Next 75 Years of Science Policy

In this special section, we will be publishing dozens of ambitious, challenging, and innovative proposals on how to structure the resources of science to enable the best possible future. Contributors will include everyone from recognized global leaders to early career researchers, policymakers, businesspeople, and our readers, creating a forum for the exchange of ideas about reinvigorating the scientific enterprise.

A Viable Nuclear Industry

In “Reimagining Nuclear Engineering” (Issues, Spring 2021), Aditi Verma and Denia Djokić self-define as intellectual anomalies. There is an unwritten rule in the nuclear sector that only people with nuclear engineering degrees are legitimized to have a valuable opinion or knowledge about anything nuclear. Thus, it is very unusual for someone from within the nuclear sector to recognize the major intellectual shortcomings of a discipline that’s increasingly insular and siloed, and that treats any knowledge coming from outside its own ranks as a threat. Verma and Djokić, as nuclear-trained people, are legitimized in the eyes of the sector, but they are also breaking the second major rule of the nuclear sector: militancy. Indeed, they are exceptional among nuclear engineers. Both are a new and most-needed type: the humanist nuclear engineer.

Having researched and written about nuclear economic history for over a decade, I have come across these two unwritten rules far more often than I would like to acknowledge. Yet I reckon that nuclear engineers (and nuclear institutions such as the International Atomic Energy Agency and the European Atomic Energy Community) are, for the most part, the victims of their own training and traditions. As Verma and Djokić expose, in the academic curricula of nuclear engineers across the globe there is little room for self-critical reflection on a sector with over half a century of history to ponder. For sure, reflection upon nuclear incidents and accidents exists, but wider introspection about nuclear impacts on society rarely occurs within nuclear training.

Those who do not understand that some scientific advances cause concerns in society express their surprise by arguing that “technology has no ideology.” Even if one could accept this premise, it is impossible to ignore that the groups, institutions, and people who promote a particular technological option do have implicit and explicit ways of understanding what society should be. Technologies are not installed in a vacuum, but rather are inserted into a specific place and time. This social context determines whether or not a technology receives a similar reception in different societies. Alternative technologies have different potentials to alter the social fabric and ways of life, generate interest or anxiety, and promote businesses (or cause the disappearance of others). In short, technology and society interact and transform each other.

In the academic curricula of nuclear engineers across the globe there is little room for self-critical reflection on a sector with over half a century of history to ponder.

When asked about these issues, many of the nuclear engineers we interviewed for our research claim that they do not concern engineers. After all, engineers are concerned with the design, building, and use of nuclear reactors and infrastructures. The impacts of those systems and the associated societal complexities are for the politicians to solve, according to most of the engineers we interviewed.

Verma and Djokić aim to build bridges that close the gap between the nuclear and social sciences. By introducing these other aspects into the academic curricula of nuclear engineering, nuclear engineers may become more aware of how their decisions have long-lasting impacts beyond the technology itself and may help to address some of the blind spots that are likely to prove problematic down the line. This is a wake-up call for creating a new curriculum for the humanist nuclear engineer of the future.

Full Professor of Economic History

Director, Institute for Advanced Research in Business and Economics

Universidad Publica de Navarra (Spain)

Aditi Verma and Denia Djokić call for rethinking our collective approach to the benefits and risks of nuclear technology—a call that is crucial and timely. As humanity confronts the catastrophic consequences of climate change, questions related to the viability of nuclear energy to achieve a decarbonized world abound. The authors, however, push the boundaries of the current conversation by arguing that what is required to make nuclear energy “viable” for the twenty-first century is much more than just an exercise in technological development.

Nuclear energy has a role to play if investments in this technology are informed and driven by a human-centered approach. This requires nations to act to mitigate the risks that the nuclear technology enterprise generates and unevenly distributes across societies. It also demands that engineers become more self-aware of their role as “servants of societies” so that in their design of complex nuclear technological systems, they also account for critical social issues including equality, environmental sustainability, and intergenerational justice.

Two critical arguments emerge as central in the authors’ essay.

First, nuclear technological decisionmaking ought to be embedded into broader multidimensional societal processes. Throughout history, technological advancements have shaped societies, cultures, and nations. Almost always, new technologies have brought about significant benefits but equally altered social norms and environmental habitats. The acceleration and disruption of technological innovation, especially in the past century, have too often taken place in the absence of strong national mitigation strategies. Nuclear power plants, for example, while contributing to economic opportunities in the communities where they operate, have also heightened local safety risks and led to the production of nuclear waste, which remains today one of the most serious intergenerational environmental issues our societies have proved incapable of solving.

The acceleration and disruption of technological innovation, especially in the past century, have too often taken place in the absence of strong national mitigation strategies.

Verma and Djokić explain how the calculation of risks in the nuclear field all too often remains the purview of a small and often homogeneous group of decisionmakers (whom the authors of a related Issues article call the gatekeepers). To make nuclear energy viable for the future, nuclear technological investments must be weighed and assessed against broader factors including intergenerational justice, environmental sustainability, and community needs for economic equity and safety.

Second, to achieve a human-centered approach to nuclear technology, future generations of nuclear engineers must be educated in both the arts and the sciences. While Verma and Djokić praise their scientific training, they also acknowledge how their exposure to other disciplines, including the social sciences, has helped them become more conscious of their social responsibility as engineers.

In redesigning and rethinking how future nuclear engineers ought to be trained, the authors point to a radical rethink of the current approach to probabilistic risk assessment that dominates the field. While probabilistic risk assessment relies on the rules of logic and plausible reasoning, it also severely limits out-of-the-box thinking, experimentation, and creativity. An interdisciplinary education will provide nuclear engineers with a full toolbox of strategies and approaches, and make them more socially aware and therefore more effective in their own work as engineers.

Ultimately, the authors’ argument is powerful and reaches beyond the nuclear field. In a time of social and racial reckoning in the United States and around the world, they call for engineers to contribute to this historical moment by embracing a broader and deeper meaning of their role for the good of their communities, nations, and the world.

Executive Director, Project on Managing the Atom, Belfer Center for Science and International Affairs

Harvard Kennedy School

At a time when addressing climate change has refocused the world on the possibilities of nuclear energy and when commercial developers envision a new wave of twenty-first century nuclear products, Aditi Verma and Denia Djokić wisely ask the nuclear energy community to pause, reflect, and reconsider their approach to deploying nuclear technology.

Deploying nuclear technology is a socio-technical challenge whose success is far less likely if it is treated solely as a technology development challenge. The authors wisely describe the task in terms of their personal stories, recognizing that acceptance of the technology is the sum of many personal stories. Their article should stand the test of history as a critical contribution to the philosophy of nuclear energy development.

Professor and Chair

Department of Nuclear Engineering & Radiological Sciences

University of Michigan

The Greatest Show on Earth

Long before Times Square blinked to light, New York City had the shimmering Crystal Palace, the centerpiece of the Exhibition of the Industry of All Nations, a World’s Fair that began in the summer of 1853. For a 25-cent ticket, throngs of people marveled at the multitiered structure of iron and glass. Poet Walt Whitman called it “Earth’s modern wonder.” Inside the palace were the technological wonders of the age. An English whaling gun was said to look as if it could “do some execution upon the monsters of the deep.” An automated tobacco roller cut and wound 18 cigars a minute, superseding—ominously, in hindsight—hand labor.

But the wonder that still resonates today began with a May 1854 demonstration by Elisha Graves Otis. The 42-year-old engineer was a bedframe maker and a tinkerer with a passion for fixing faults and frailties. In his trim Victorian suit, lush beard, and silk stovepipe hat, Otis mounted a wooden platform secured by notched guide rails. His assistant then hoisted the platform some 50 feet above the ground, grabbing the crowd’s attention.

Otis was there to correct a fault of his own making. He had developed an elegant solution to the problem of cable failure in platform elevators that made use of a hoist with a passive automatic braking system—but none had sold. It wasn’t because people didn’t need them; elevators often catastrophically broke down in granaries and warehouses, killing and maiming their passengers. Otis realized that his design, though superior and straightforward, needed showmanship. The World’s Fair was his moment to flaunt his vertical flight of fancy and function.

Otis had developed an elegant solution to the problem of cable failure in platform elevators that made use of a hoist with a passive automatic braking system—but none had sold.

When the assistant dramatically used an ax to cut the suspension cable holding the platform, the crowd gasped in shock. It appeared to be an act of lunacy—and suicide for Otis, who stood on the platform. However, the platform stopped with a jerk just a couple of feet lower as the braking system arrested the freefall. “All safe,” Otis reassured the audience, “all safe.”

And thus, the crucial safety innovation that led to the launch of the modern vertical city was enabled by a now-legendary stunt. It’s impossible to imagine urban life without it.

Otis’s demonstration exemplifies a time-honored formula that mixes technology and design with entertainment. In some fields, “demo or die” has come to supplant “publish or perish”—highlighting the fact that products or people, no matter how deserving, will not advance unless they are first noticed. From Thomas Edison’s electric theatrics to Steve Jobs’s turtlenecked stage flair, the demo culture has thrived on symbolism, spotlight, and special effects in which pomp is the essence of persuasion.

Still, magicians will tell you that a trick will fail if it lacks meaning, no matter how incredible. There must be a link between the magic and its purpose. Showmanship “brings out the meaning of a performance and gives it an importance that it might otherwise lack,” writer and magician Henning Nelms observed. “When showmanship is carried far enough, it can even create an illusion of meaning where none exists.” And, of course, meaning is something that technology often needs desperately when it has not yet attained a place in our lives.

Meaning is something that technology often needs desperately when it has not yet attained a place in our lives.

When someone asked Phineas Taylor Barnum to describe the qualifications of a showman, the brisk ballyhooer said that the person “must have a decided taste for catering for the public; prominent perceptive faculties; tact; a thorough knowledge of human nature; great suavity; and plenty of ‘soft soap.’” When asked just what “soft soap” was, he clarified: “Getting into the good graces of people.” These human factors are relevant to engineers as well.

Although showmanship is frowned upon when it is pursued too overtly, it is sometimes unavoidable. Consider the rousing words of President Kennedy in 1962: “We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard.” The showmanship in his words was apparent, and it got us to the moon. But what of “the other things?” If we are to take up Kennedy’s bold challenge, it’s time to elevate showmanship to do the other things he alluded to—perhaps the necessary things that are less sexy and more vexy.

Showmanship for prosocial needs could move people to action if the emphasis is on mindful mending rather than blank boosterism. Just imagine a prime-time commercial for roads and public works that inspires infrastructure improvements rather than promoting the latest new feature or flavor. Or a promo for ventilation, sanitation, and disease surveillance systems, all significant public health achievements made possible by invisible engineering. Or a modern-day Elisha Otis demo that draws the public’s attention to the power of safety standards, quality management, and preventive maintenance in elevators. All of these are actions and technologies that our lives—literally—depend upon.

Maintenance may seem too pedestrian to be a candidate for showmanship. As scholars Daniel Kammen and Michael Dove suggest: “Academic definitions of ‘cutting edge’ research topics exclude many of the issues that affect the largest number of people and have the greatest impact on the environment: everyday life is rarely the subject of research.” For this reason, innovation and the nonstop narrative around it have become a cultural default, as ambient as elevator music.

Innovation and the nonstop narrative around it have become a cultural default, as ambient as elevator music.

But when the dazzling prominence of innovation overshadows the subtler, kinder, and attentive acts that characterize maintenance, it leads to the collapse of everyday expectations. And these little maintenance misfortunes may ultimately put a stop to legitimate big-picture innovations. Why, after all, build a system if there is no ethic to maintain it well? Maintenance is not a static process; it builds on change, and just like innovation, it fuels change. Innovators often claim to make history, but maintainers start from and sustain the necessary continuities of history. There can be no useful innovation without a vast, invisible infrastructure of maintenance activity that keeps civilization running. 

Nestled between the duties of innovation and maintenance is a responsibility for cultural engineering that does not end when a commission or contract comes to completion. It is a perpetual effort to be attentive to future neglect and decay in our shared dependencies. Very few subjects are as relevant, and also as neglected, as care and maintenance—acts integral to our survival and progress and as crucial as creation itself. Indeed, maintenance over a system’s life cycle may consume more resources than building the system did. But the result is often a catastrophe avoided. Engineers have a half-joke for this: today’s innovations are tomorrow’s vulnerabilities. Without maintenance, failures flourish.

Moonshots and their like may inspire us to attempt the impossible. Still, far more practical value has come from suitcase wheels than Ferris wheels, no matter how flashy the latter are. Maintenance is the unsung partner that enables innovation. It is both life and—in its connection to history, present, and the future—larger than any single life. And it needs showmanship to attract the attention it requires to assume its proper place in our civic priorities.

Otis never thought he would become a showman at the Crystal Palace, but P. T. Barnum did. History records that Otis received a hundred dollars for his stunt from the man. There was no need for an elevator pitch.

A Higher Ed Maelstrom

Kumble R. Subbaswamy has provided a useful guided tour of American public universities in the wake of the pandemic wreckage. His narrative, not surprisingly titled “Public Universities,” part of the postpandemic special section (Issues, Winter 2021), reminds me of Edgar Allan Poe’s classic 1841 short story, “A Descent into the Maelström.” For Poe’s narrator, the only way to survive a furious ocean hurricane and sinking ship was to tread water, keep calm, and thoughtfully observe one’s own predicament. It’s a fitting metaphor for university presidents whose academic ships have been battered since the pandemic’s beginning last March. All constituents in American higher education would do well to read and heed this remarkable profile of what public universities are facing.

Subbaswamy’s account is enduring because he avoids polemics, opting instead to provide thoughtful analysis about the endangered residential campus model for public higher education. Even before the COVID-19 crisis exposed and increased the liabilities of the traditional residential campus, we have had new models for innovative higher education. For example, I have been intrigued by the Universities at Shady Grove, launched in 2000 by the University of Maryland system. Located in Rockville, Maryland, near Washington, DC, Shady Grove brings together nine of the state’s public universities to cooperate in offering upper-division and graduate-level degree programs, most of which are attuned to the changing national economy and demand for educated professionals. It provides an alternative to the model of the rural state land grant university campus that started to flourish in the early 1900s.

Even before the COVID-19 crisis exposed and increased the liabilities of the traditional residential campus, we have had new models for innovative higher education.

Elsewhere there are comparable signs of innovation. But what happens to public universities that are mortgaged into a traditional residential campus? The problem is pronounced because a decade ago numerous presidents, boards, and donors pursued massive building campaigns, often marked by grand structures. The price tag often was construction debt between $1 billion and $2 billion, much of which will be paid by future generations of students who are charged mandatory fees. By 2015 some ambitious universities’ expansion projects were featured in national media, a publicity meteor that was difficult to sustain—and now is difficult to afford. The high-stakes gamble by some aspiring public university presidents was that this was a way to transform a provincial institution into a prestigious architectural showcase. Less evident is whether these campaigns provided the right infrastructure for science research. So, even though the traditional grand campus may no longer be necessary or effective, the nation is stuck with these monuments that perpetuate American higher education’s “edifice complex.”

Furthermore, in communities ranging from small towns to major cities, a college or university often is the largest landowner and employer. That powerful presence brings responsibility to institutional leaders in renegotiating “town and gown” relations. If all that campus real estate and new magnificent buildings are no longer necessary, how ought these be reconfigured to appropriate new uses? What do we now identify as the essentials of a college education and degree? Thanks to Chancellor Subbaswamy’s thoughtful essay, we have an invitation to a great conversation that can generate light as well as heat in revitalizing public universities in the postpandemic era.

University Research Professor

University of Kentucky

Author of American Higher Education: Issues and Institutions and A History of American Higher Education

Data for the People!

In “The Path to Better Health: Give People Their Data” (Issues, Winter 2021), Jason Cohen makes an important contribution to the discussion of data privacy.

Data privacy, in the midst of data integration, data organization, interoperability, and advanced analytics, is table stakes for health care organizations—but challenging. The rush to commercialize personal health data represents a particular risk for underserved populations, who already suffer poor outcomes due to lack of access to health care. Any approach should be thoughtful about these populations and include critical review of algorithmic bias. Creating a framework for ownership of health data that empowers these populations is essential to ensuring that they receive the benefits of the data science revolution.

Principal and Founder, JDB Strategies

Chief Clinical Product Officer and Medical Director, Medical Home Network

Jason Cohen presents some interesting perspectives. Several points in particular jumped out at me.

It is very true that patients’ poor decisions are at the center of chronic health problems. With the application of artificial intelligence and data, health tech companies are well positioned to make a difference and deliver personalized patient engagement programs. These engagement platforms can educate patients and drive behavior change to help them adopt healthy habits.

Patients owning their own data and being able to control who gets to use them is a great concept. If such tools are developed and adopted, patients certainly will have a lot more control and power. Some of this is already happening with Apple’s iPhone, Microsoft’s Office 365, and Google’s search queries, where the phone or device is keeping track of communications happening between individuals and the world around them. Big data analysis of the tone of the messages and the time spent on various apps or the content that is consumed can provide leading indicators of a person’s mental state.

Future use of such technology seems positioned to expand.

Founder & CEO

RediMinds Inc.

Jason Cohen makes several excellent points, but he does not mention the practical importance of data context. For example, radiologic images require skilled interpretation, and even the image characteristics, or “findings,” may support only a probabilistic measure of the health or prognosis of the patient. Clinicians who use such information to guide patient management are well aware of the reasons the imaging was requested, the context in which such measurements are acquired, the ways findings might be interpreted by the local radiologist, and confounding factors specific to the patient, but the future data user doesn’t have that advantage. The data mining algorithms used by a third party years later may not be sufficiently sophisticated, or the information in the training data sets may not be available, to provide accurate support to the caregiver.

The article by Ben Shneiderman in the same issue, “Human-Centered AI,” discusses these challenges. It is not made clear how care will be better for everyone if each patient owns his or her data, but it seems obvious that countries that have nationalized patient data repositories, such as Norway, offer their citizens a better foundation for clinical practice.

Assistant Professor of Radiology, Retired

Harvard Medical School, Brigham and Women’s Hospital

Making Roads Safer for Everyone

While I might quibble with a few of the details of “New Rules for Old Roads” (Issues, Winter 2021), by Megan S. Ryerson, Carrie S. Long, Joshua H. Davidson, and Camille M. Boggan, I agree with the basic premise: the way we measure safety for pedestrians and bicyclists is inadequate and ineffective compared with a proactive approach.

For example, pedestrian safety research has consistently found that people walking are more likely to be killed on higher-speed, multilane roadways than in other environments. High Injury Networks tend to show that these roadway types are also problematic for bicyclists and motorists. Yet instead of proactively addressing known risky road types, in many cases transportation professionals wait for, as the authors note, a certain number of injuries a year or overwhelming demand in order to justify inconveniencing drivers with countermeasures that result in delay. Even when changes are made, they often occur at spot locations, rather than throughout a system.

Yet, making spot changes to a system without addressing the root cause of the problem only kicks the can down the road. Additionally, from an outside perspective, prioritizing the people already protected in climate-controlled metal boxes over those who are unprotected—particularly when the former disproportionately cause harm via air and noise pollution and injury, and the latter may be unable to drive, whether due to age, ability, income, or choice—seems questionable at best. The premise of prioritizing the driver is thick with inequity, yet it is the backbone of our current system.

The way we measure safety for pedestrians and bicyclists is inadequate and ineffective compared with a proactive approach.

The authors argue that part of the problem is a lack of consistent metrics to adequately measure the experiences of people walking and bicycling, and I welcome their data-driven examination of stress measures for bicyclists in various environments. This kind of research can augment crash data analysis and guide the design of user-responsive roadway environments and countermeasures, such as the protected bike lanes measured in the study, before additional crashes occur. At the same time, we should avoid creating rigorous requirements for research to change standards when that rigor was not met when creating the initial standard. There is power in simply asking people about the types of facilities they want for walking and bicycling and where they feel safe and unsafe, and then believing and prioritizing those perspectives, which are often consistent between studies. People inherently want safe, comfortable, and convenient access, and are clear about where those needs are met or not.

Additionally, more recent efforts to examine safety systemically, promoted by the Federal Highway Administration and aided by research from the National Cooperative Highway Research Program, have developed methods to analyze patterns in crash data that can allow for more holistic safety solutions. These efforts identify combinations of features that tend to be associated with crashes, allowing cities to proactively address them with retrofits or countermeasures before additional crashes occur.

Ultimately, the nation needs new design standards that reduce the need for studies for each city or roadway. By incorporating biometric, stated-preference, near-miss, and crash studies into a systemic effort, we can identify high-risk road types and create metrics and design standards to ensure that high-risk roadways are transformed to be safe and comfortable for all users over time.

Assistant Research Professor, School of Geographical Sciences and Urban Planning

Arizona State University

Owner, Safe Streets Research & Consulting

Missing Millions

Reflecting on William E. Spriggs’s article, titled “Economics,” part of the postpandemic special section (Issues, Winter 2021), led me to focus on his main point that “modern economics … greatly rests on a host of assumptions.” In the context of the novel coronavirus pandemic, Spriggs argues that the crisis revealed several shortcomings in the assumptions and models that economists traditionally use for decisionmaking—assumptions that led to missed opportunities and perhaps negative impacts on the health and well-being of the nation’s workforce.

Yet there are many extensions to traditional models that include the interdisciplinary work between economists and psychologists (neuroeconomics), economists and political scientists (political economy of digital media), economists and computer scientists (data science), and economists and medical practitioners (health economics and analytics). Research in these areas has led to breakthroughs that get us closer to solutions to the problems related to the human condition. However, even with these tools there is one major shortcoming beyond assumptions and models: the paucity of data representing all residents in America.

There is one major shortcoming beyond assumptions and models: the paucity of data representing all residents in America.

The “missing millions” is a concept that has emerged in the discussion about the need for greater diversity, equity, and inclusion in science, technology, engineering, and mathematics—the STEM fields. An extension of this missing millions concept in the COVID-19 pandemic era relates to the lack of access to health and communications services for millions of marginalized residents. A recent New York Times article titled “Pandemic’s Racial Disparities Persist in Vaccine Rollout” stated that “communities of color, which have borne the brunt of the Covid-19 pandemic in the United States, have also received a smaller share of available vaccines.” More importantly, the article stated that the data were inconsistent and that the full accounting of individuals of various ethnicities was unknown, noting that “in some states as much as a third of vaccinations are missing race and ethnicity data.”

Whatever the value of Spriggs’s mea culpa on behalf of economists for the gaps in economic analysis rooted in false assumptions, the more important point is that our empirical analyses, policies, and implementation of those policies are grossly inadequate because of gaps in data collection and accountability. The nation can do much better at protecting all members of the workforce if we can deploy vaccines, clean water, energy-saving technologies, job opening announcements, and other public goods, all of which rely on knowing the magnitude of these problems in underserved communities. Models and algorithms that decisionmakers rely on have limited efficacy because of the missing millions problem. The invisible people, not the invisible hand, are the problem to be solved. How can we make better economic policy if everyone isn’t counted?

Dean, Ivan Allen College of Liberal Arts

Georgia Institute of Technology

Think About Water

An Ecological Artist Collective

Think About Water is a collective of ecological artists and activists who got together to use art to elevate the awareness and discussion of water issues. Created by the painter and photographer Fredericka Foster in early 2020, the collective was intended to celebrate, as the organizers describe it, “our connection to water over a range of mediums and innovative projects that honor this precious element.” Think About Water is a call to action that invites viewers to a deeper engagement with the artwork. 

Lisa Reindorf, Tsunami City, 2020. Oil and acrylic gel on panel, 40 x 60 inches.

In her work, Lisa Reindorf combines knowledge from architecture and environmental science. Her paintings examine the environmental impact of climate change on water. In aerial-view landscapes, she creates interpretations of coastal areas, with a particular focus on rising seas.

The collective’s first group exhibition is titled Think About Water. Curated by collective member Doug Fogelson, the exhibit was presented in virtual space through an interactive virtual reality gallery. Artists included in the exhibit were Diane Burko, Charlotte Coté, Betsy Damon, Leila Daw, Rosalyn Driscoll, Doug Fogelson, Fredericka Foster, Giana Pilar González, Rachel Havrelock, Susan Hoffman Fishman, Fritz Horstman, Basia Irland, Sant Khalsa, Ellen Kozak, Stacy Levy, Anna Macleod, Ilana Manolson, Lauren Rosenthal McManus, Randal Nichols, Dixie Peaslee, Jaanika Peerna, Aviva Rahmani, Lisa Reindorf, Meridel Rubenstein, Naoe Suzuki, Linda Troeller, and Adam Wolpert.

Fredericka Foster, River Revisited, 2017. Oil on canvas, 40 x 60 inches.

Fredericka Foster has been painting the surfaces of moving water in their infinite variety for years. She believes that painting, using tools of color and composition, can be an aid to societal change: “Art accesses another way of knowing, and it takes both rationality and emotional connection to create lasting change.”
Ilana Manolson, Current, 2019. Acrylic on Yupo paper, 69 x 75 inches.

Artist and naturalist Ilana Manolson finds herself drawn to the edges of swamps, ponds, rivers, and oceans. “As water changes, it changes its environment whether through erosion, flooding, nutrition, or drought. And what we as humans do upstream, will, through the water, affect what happens downstream.”
Linda Troeller, Radon Waterfall, Bad Gastein, Austria, 2015. Photograph, 16 x 20 inches.

Linda Troeller is interested in water as a healing power. Bad Gastein, Austria’s thermal waterfall, was first referred to in writing in 1327 as “medicinal drinking water.” According to Troeller, “It is very fresh, crystal-clear—the droplets contain radon that can be absorbed by the skin or through inhalation or from drinking from fountains around the town.”
Rosalyn Driscoll, River of Fire, 2011.

Rosalyn Driscoll writes of her work, “I explore the terrain of the body and the Earth by making sculptures, installations, collages and photographs that connect people to their senses, the elements, and the natural world. My interest in bodily experience and sensory perception led to making sculptures that integrate the sense of touch into their creation and exhibition.”

For more information about the collective and the show, visit www.thinkaboutwater.com. Images courtesy of Think About Water and the individual artists. 

COVID and Disability

In her article, “Time,” part of the postpandemic special section (Issues, Winter 2021), Elizabeth Freeman observes that the COVID-19 pandemic has drawn us all into the alternate temporality that the disability community names as “crip time.” Perhaps the most relevant framework is that of chronic illness, whose very nomenclature encodes temporality, as in the Twitter hashtag coined by the activist Brianne Benness, #NEISvoid (No End In Sight Void), an apt motif for this pandemic year.

Yet some return will arrive, if not to normal, then to a world beyond the crisis stage of the pandemic. What will this new world look like? It will be profoundly shaped by disability alongside other social categories such as race, gender, and class. Disability is not a mere matter of medical defect or rehabilitative target, but a complex of cultural, economic, and biopsychosocial factors in which “disability” materializes at the point of interaction between individuals and environments. Thus, for example, a wheelchair user is perfectly able so long as the built environment includes ramps and elevators and the social environment is inclusive. This crucial truth, so often overlooked in narrowly medical understandings of disablement, must inform us moving forward.

Disability is not a mere matter of medical defect or rehabilitative target, but a complex of cultural, economic, and biopsychosocial factors in which “disability” materializes at the point of interaction between individuals and environments.

We must at last reckon with the full range of disability’s social and cultural meanings. COVID-19 has been devastating to disabled people. In early 2021, the United Kingdom’s Office for National Statistics reported that 60% of its COVID-19 deaths thus far were of people with disabilities. Yet the country’s disabled population has not been prioritized for vaccination, and disabled people were long excluded from vaccine priorities in the United States. Clearly forces are at work beyond the logics of science, as the weight of the cultural stigma of disability means that our lives are quite literally seen as less valuable.

Meanwhile, we are on the cusp of a vast explosion in the disabled population in the United States. “Long COVID,” as it is termed, is already producing a range of disabling chronic illnesses, causing such diverse disorders as cardiac damage, neurological dysfunction, chronic pain, and brain fog, often affecting previously healthy young people. As reported by JAMA Cardiology, a stunning 30% of Ohio State University football players who had mild or asymptomatic cases of COVID-19 were found to have significant heart damage afterward. And already people with long COVID in the United States are contending with the medical doubt and struggle for basic survival that typify the chronically ill experience.

As after each of the nation’s major wars, a rapid expansion in the disabled population offers both challenge and opportunity to advance new technologies, reimagined infrastructure, and cultural recognition of the range of human abilities. Such innovations benefit disabled and nondisabled people alike. Will we allow our deeply inadequate disability support structures to totally collapse under the weight of long COVID? Or will we seize this opportunity to remake those structures to benefit disabled and nondisabled people alike? Disabled people must be at the table making these decisions about our lives, but it is crucial that all who seek a more equitable and sustainable society join us there.

Associate Professor of Disability Studies, English, and Gender and Women’s Studies

University of Wisconsin-Madison

Innovating Nurses

In “Innovating ‘In the Here and Now’” (Issues, Winter 2021), Lisa De Bode shares several accounts of nurses in the United States who leveraged their own innovative behaviors to solve problems and protect their patients’ health during the COVID-19 pandemic. Nurses developed innovative workarounds at scale as a result of significant unmet needs due to a lack of sufficient and available resources for their hospitalized patients and themselves.

While workarounds are not new to nurses, as Debra Brandon, Jacqueline M. McGrath, and I reported in a 2018 article in Advances in Neonatal Care, the circumstances of the pandemic are new to us all. We have not seen a health crisis of this magnitude in over 100 years. The COVID-19 pandemic revealed the many systemic weaknesses in the nation’s health care delivery system. De Bode shares a few aspects of how those systemic weaknesses revealed unmet needs affecting nurses’ ability to provide quality care. In response to these pervasive unmet needs, nurses were left to their own devices, amplifying their own innovative behaviors to create workarounds at scale.

Nurses developed innovative workarounds at scale as a result of significant unmet needs due to a lack of sufficient and available resources for their hospitalized patients and themselves.

I am delighted to see nurses’ innovative behaviors highlighted and shared with the world. I am also grateful for how De Bode so eloquently integrates the historic role of nurses in the 1918 Spanish flu pandemic: “nursing care might have been the single most effective treatment to improve a patient’s chances of survival.” This year, 103 years later, nurses were voted the most trusted profession for the 19th year in a row. Thus, the value of nurses to the health of the public has been sustained for over a century. Yet we continue to expect nurses to work around system-level limitations within health care organizations instead of recognizing how these workarounds are placing nurses, patients, and their families at risk for suboptimal care and the potential for medical errors.

To innovate is to address the unmet needs of a population in ways that bring positive change through new products, processes, and services. De Bode’s article reveals a population, the nursing workforce, that has endured significant unmet needs for a prolonged period with no visible end in sight. As a profession, an industry, and a society, we cannot ignore that nurses are human beings, too, also in need of care and resources.

If nurses do not have what they need to provide quality care for patients, then time is unnecessarily spent solving for the unmet need before the patient can be cared for. Researchers have found that time spent on workarounds can be upward of 10% of each nurse’s shift, a factor likely contributing to symptoms of burnout. Months before the pandemic, the National Academy of Medicine reported in Taking Action Against Clinician Burnout that 34% to 54% of nurses were experiencing symptoms of burnout.

These empirical data, combined with the enduring COVID-19 pandemic, should be more than enough for our profession and the health care industry to recognize the need to reevaluate how we invest in our nurses and the environment in which they deliver care. We may be able to work around a lack of equipment and supplies, but we cannot risk working around a lack of nurses in the workforce.

DeLuca Foundation Visiting Professor for Innovation and New Knowledge

Director, Healthcare Innovation Online Graduate Certificate Program

University of Connecticut School of Nursing

Founder & CEO, Nightingale Apps & iCare Nursing Solutions

Nursing has long been a poorly respected, poorly paid, but high-risk profession. Historically in Europe and North America, nurses were volunteers from religious denominations; in other societies, nurses typically were family or community caregivers. Even as nursing professionalized and added requirements for classroom education and clinical training, it remained lower status than other medical disciplines. Numerous studies have tracked detrimental impacts of this dynamic on patient outcomes; in extreme but strikingly frequent cases, intimidation by surgeons has prevented nurses from speaking out to prevent avoidable medical errors.

As Lisa De Bode describes, nurses nevertheless have played a central role as innovators throughout history. She cites Florence Nightingale’s new guidelines on patient care and the efficacy of nursing during the 1918 flu pandemic before noting that nursing generally is considered a field of “soft” care that enables physicians and surgeons to invent “hard” tools, therapeutics, and other biomedical machinery. Yet as Jose Gomez-Marquez, Anna Young, and others in the Maker Nurse and broader nurse innovation communities have identified in recent years, nurses have been “stealth innovators” throughout history. Interestingly, this work was recognized within the profession at times. From 1900 to 1947 the American Journal of Nursing ran an “improvising” column of nurse innovations that met criteria of efficacy, practicality, and not creating new risks to patients or attendants. After 1947, the journal ran a regular series to share innovations, “The Trading Post,” which included sketches, lists of materials, and recipes. Ironically, as nursing professionalized, recognition of the tinkering mindset and peer-to-peer sharing of ideas declined.

De Bode’s article provides diverse examples of rapid response, nurse-originated innovations during the ongoing COVID pandemic. She also observes and subtly pushes against definitions of innovation that are based solely on “things,” such as pharmaceuticals and medical devices. Innovations—and inventions—that originate from nurses typically fall into vaguely classified categories of “services” and “care.” They aren’t patentable, reducible to products that can be licensed to other clinics, or the basis for making a pitch deck to present to venture capitalists. Like the invention of hip-hop, the creation of new clothing styles by individuals in the Black community, and the work of thousands of inventors who are Black, Indigenous, or people of color in low-status professions, these advances are not treated as property of the inventor and often are not archived and celebrated as breakthroughs.

Just as 80% of the mass of the universe is made up of unobserved dark matter, we ignore the majority of the innovations that ensure that hospitals function or that myriad other aspects of our daily lives actually improve year on year. Ironically, even as the United States celebrates itself as an innovation-based economy and advocates for stronger intellectual property systems worldwide, it ignores the majority of its domestic innovations. A reset in how we define “inventor” and which innovators we resource with funding and recognition is overdue.

Director, Lemelson Center for the Study of Invention and Innovation

Smithsonian Institution

Lisa De Bode has cast a critical spotlight on the role of innovation undertaken by nurses, particularly within the crisis of the COVID pandemic. Many nurses would not consider themselves as inventors or entrepreneurs, nor do many others in the health system—but in fact often they are. Nurses are commonly considered the doers, executing the plans of others, and for the most part this is true. As De Bode explains, nurses often engage in “workarounds,” tailoring approaches designed for them, not designed with them or by them.

Many nurses would not consider themselves as inventors or entrepreneurs, nor do many others in the health system—but in fact often they are.

But the fact is that many nurses devise innovative approaches and designs. As innovators, nurses can drive changes in systems and processes that impact care delivery and patient outcomes and improve the working life of nurses and other health professionals. Increasing collaborations with patients, their families, health providers, and members of other disciplines, such as engineers, demonstrate significant promise. De Bode has created a window into the working lives of nurses. Listening to their views and opinions and leveraging their expertise is vital to solving the complex problems of our health systems.

For decades nurses have been voted the most trusted profession. Clearly, our patients value us. So it is important that those who design and fund our institutions and models of care listen to the voices of nurses and their advocacy for patients. The impacts are potentially transformational.

Dean

Johns Hopkins School of Nursing

The Importance of a Computer Science Education

In “A Plan to Offer Computer Science Classes in All North Carolina High Schools” (Issues, Winter 2021), Lena Abu-El-Haija and Fay Cobb Payton make a compelling case for how to improve the way computer science (CS) is taught in high schools. Here we want to extend their insightful plans by focusing on how the authors’ well-stated goals can be achieved through a culturally responsive computing lens. We suggest four points of consideration when implementing their CS education plans for Black, brown, and other underserved students.

First, culturally responsive computing, as a frame for developing learners’ CS competencies through asset building, reflection, and connectedness, should inform barometers of success for CS programs. Thus, the proposed high school CS education initiatives should not mimic college programs—that is, they should not measure their effectiveness based on where students go (e.g., employment at top tech companies) but on what students do once they arrive there. Achievement markers should shift to focus on how students use their computing knowledge as a tool to solve challenges affecting them and their communities. We know from developing and implementing our own culturally responsive CS programs (e.g., COMPUGIRLS) that success comes only when participants have space and resources to use their newly acquired technology skills in culturally responsive and sustaining ways.

Second, the culturally responsive frame can further inform the curriculum of the proposed high school CS programs. As an early-career Black woman in computing, I (Stewart) can confirm that current CS education focuses on domain knowledge and how to build systems. Beyond an ethics course late in my undergraduate program, there was little emphasis in my training on the social context surrounding technology, including questions of when and why we build systems. Questions of whom the technology might affect and whether it yields acceptable justice-oriented outcomes are rarely posed in CS programs. Although the human-computer interaction community pursues answers to these questions, all aspects of the technology-creation pipeline, and by extension CS education, need to critically reflect on these and other contextualized interrogatives.

Achievement markers should shift to focus on how students use their computing knowledge as a tool to solve challenges affecting them and their communities.

Third, culturally responsive computing needs to be embedded in all aspects of the proposed plans. To achieve this, teacher training must include far more than increasing educators’ CS competencies. Computer scientists, educators, and social scientists should collaboratively design preparation programs (and ongoing professional development) that equip teachers with the knowledge of culturally responsive pedagogy. Computer scientists alone cannot ignite the necessary “evolution” the authors describe.

Fourth, and finally, to achieve a culturally responsive vision of CS education, equitable allocation of funding is crucial. Resources must be distributed in a way that considers the sociohistorical realities of racist policies that led to the marginalization of Black and brown students in education, in general, and in computer science, in particular. For example, districts that were disadvantaged by redlining should receive more resources to implement CS education programs than districts that benefited from the practice.

In sum, we, too, call on policymakers to apply a culturally responsive, justice-focused perspective to these CS education programs in order to empower the voices of Black and brown innovators. To do anything else will ensure the nation remains limited by its innovations.

Postdoctoral Fellow, Human-Computer Interaction Institute

Carnegie Mellon University

Professor, School of Social Transformation

Executive Director, Center for Gender Equity in Science and Technology

Arizona State University

Lena Abu-El-Haija and Fay Cobb Payton lay out both a strong argument and solid steps for why and how computer science (CS) can be a part of every student’s educational pathway. The authors share research describing the lack of quality CS education—especially for Black and Brown students—that poses problems for both the future of North Carolina’s children and the state’s economy (which depends heavily on tech companies in Research Triangle Park). Building on the momentum of recent efforts to address these issues, the authors call for actions that move beyond “episodic intervention” toward “comprehensive change” with a designated CS ambassador to oversee regional implementation, CS accountability measures for rating school success, and more.

Their suggestions are brilliant, much needed, and could set a valuable example for other states nationwide. They also inspired the following questions. First, how can statewide plans ensure buy-in across the educational landscape? I wondered whether their plan might allow space for a multistakeholder committee, consisting of students, teachers, administrators, counselors, researchers, policymakers, and industry professionals, working with the CS ambassadors. My own state, California, has formed the CSforCA multistakeholder coalition, ensuring that diverse perspectives can inform what decisions get made, for whom, and for what purpose toward sustaining long-term local implementation.

California has formed the CSforCA multistakeholder coalition, ensuring that diverse perspectives can inform what decisions get made, for whom, and for what purpose toward sustaining long-term local implementation.

Relatedly, how can we elevate students’ voices—and particularly those of populations underrepresented in computing—toward shaping more meaningful CS education experiences? Students know best about what motivates their engagement, yet rarely are they invited to shape the direction of schooling. As the authors astutely note, “cultural context, competency, and relevancy in the teaching of the subject are key.” How can youth help drive the movement toward exactly this kind of CS education?

I also believe including diverse stakeholders would ensure that the plan’s school rating system adequately accounts for the different kinds of hurdles that low-resource schools face that wealthier schools don’t, and for how those hurdles affect the implementation of CS education.

Additionally, economic drivers for CS education are valuable for gathering diverse communities behind computing education; almost everyone agrees that all people deserve to thrive professionally and financially. However, our research focused on students’ perspectives in CS education reveals that youth are thinking about more than their future careers. They are asking how computing can solve challenging problems that negatively impact communities. CS is a form of power; technology shapes how we communicate, think, purchase goods, and so on, while social media and newsfeeds influence our mental health, ethical convictions, voting habits, and more. Yet CS continues to be controlled by a population that does not reflect the diversity of experiences, values, and perspectives of the nation’s low-income, Black, Brown, Indigenous, female, LGBTQ, disabled, and minoritized communities.

The authors emphasize that a CS education plan is needed to ensure greater diversity in computer science fields. But we also have a moral imperative to question the ethical implications of our increasingly computerized world and prepare our youth to do the same, regardless of whether they pursue computing careers.

Director of Research

UCLA Computer Science Equity Project

Lena Abu-El-Haija and Fay Cobb Payton offer a compelling and comprehensive plan for moving forward. Too few girls and too few Black and Latino students have access to and participate in computer science courses and pursue college majors and careers in the field. Increasing piecemeal access to computer science one school or district at a time won’t reverse these trends, and the authors are right to call for more comprehensive action. To extend their important argument, I offer one additional rationale for this course of action.

Abu-El-Haija and Cobb Payton rightly observe that there is substantial economic opportunity available to young people with computer science skills. This argument fits the dominant conception of schools in American public discourse: schools as sites of workforce training and labor market preparation. But the earliest arguments from public school advocates such as Thomas Jefferson and Horace Mann were not fundamentally economic, but civic. Communities fund public schools because a common preparation for all young citizens offers the brightest possible future for our shared democracy.

US democracy is strongest when it most comprehensively represents its citizenry, and right now, the field of computer science is woefully out of sync with the broader population. Multiple studies of the demographic makeup of the largest technology companies in the United States reveal that these companies are overpopulated with white and Asian men, and these concentrations are more pronounced when analyses focus on engineering jobs. When the digital infrastructure of society is developed by people who fail to represent the full breadth and diversity of the nation, we cannot be surprised when problems and disasters ensue.

A rapidly growing body of scholarship reveals a wide variety of ways that new technologies fail to serve all users: facial recognition tools that can’t identify dark-skinned faces, language technologies trained on datasets filled with bias and prejudice, pregnancy tracking apps with no mechanism for responding meaningfully and compassionately to miscarriages. The list goes on and on.

When the digital infrastructure of society is developed by people who fail to represent the full breadth and diversity of the nation, we cannot be surprised when problems and disasters ensue.

It isn’t the case that women or people of color are not interested in opportunities in computing. Indeed, in their earliest days, computing and programming were seen as “women’s work,” requiring attention to detail, organization, and persistence. But then, throughout the 1980s, especially as personal computers entered the marketplace, computing was deliberately marketed to white boys, the composition of graduate programs changed dramatically, and the conditions for our current industry became locked in place: women and minoritized people who tried to enter the computing field faced the twin challenges of learning a complex and demanding field while simultaneously overcoming their outsider status.

We will live in a better society when our computational tools—increasingly essential to our markets, democracy, and social lives—are built by people from all backgrounds and all walks of life. The most promising pathway to that better future, as Abu-El-Haija and Cobb Payton suggest, involves giving all young people an early introduction to computer science and supporting diverse students beyond that introduction.

Associate Professor of Digital Media

Massachusetts Institute of Technology

Director, MIT Teaching Systems Lab

Why Buy Electric?

It is true that the United States, once the global leader in electric vehicles, is falling behind China and Europe, as John Paul Helveston writes in “Why the US Trails the World in Electric Vehicles” (Issues, Winter 2021). The policies the author references in China and Norway have an underlying theme: they make gasoline vehicles more expensive and less convenient to own compared with electric vehicles. In the United States, where vehicle purchase taxes and gas prices are very low, buyers have no reason not to purchase a gasoline car. Each new gasoline vehicle a household purchases is more comfortable, more efficient, cheaper to run, safer, and better equipped than the one before. There is nothing pushing car buyers away from gasoline vehicles, and therefore consumers do not seek alternatives such as electric vehicles.

On the issue of US car dealerships not selling or promoting electric vehicles, we should look to automakers, not dealerships, to get to the source of this issue. Dealerships sell the vehicles that automakers produce; if automakers don’t produce electric vehicles in large numbers, dealerships cannot sell them in large numbers, and therefore won’t be motivated to train salespeople on selling electric vehicles.

There is nothing pushing car buyers away from gasoline vehicles, and therefore consumers do not seek alternatives such as electric vehicles.

On the issue of government regulation, the increase in electric vehicle sales in Europe is largely attributed to European emissions standards, which are difficult to comply with without selling electric vehicles. In the United States, federal fuel economy standards may not be stringent enough to have the same effect, and the zero emission vehicle (ZEV) sales mandate, which is often credited with the commercialization of electric vehicle technology, needs to be updated to encourage more electric vehicle sales.

Without more progressive fuel economy standards and more ambitious ZEV sales targets coupled with higher gasoline prices and higher vehicle taxes, the United States may continue to lag behind Europe and China. As Helveston notes, a more aggressive approach is certainly needed.

Plug-in Hybrid and Electric Vehicle Research Center

Institute of Transportation Studies

University of California, Davis

Maintaining Control Over AI

A pioneer in the field of human-computer interaction, Ben Shneiderman continues to make a compelling case that humans must always maintain control over the technologies they create. In “Human-Centered AI” (Issues, Winter 2021), he argues for AI that will “amplify, rather than erode, human agency.” And he calls for “AI empiricism” over “AI rationalism,” by which he means we should gather evidence and engage in constant assessment.

In many respects, the current efforts to develop the field of AI policy reflect Shneiderman’s intuition. “Human-centric” is a core goal in the OECD AI Principles and the G20 AI Guidelines, the two foremost global frameworks for AI policy. At present, more than 50 countries have endorsed these guidelines. Related policy goals seek to “keep a human in the loop,” particularly in such crucial areas as criminal justice and weapons. And the call for “algorithmic transparency” is simultaneously an effort to ensure human accountability for automated decisionmaking.

There is also growing awareness of the need to assess the implementation of AI policies. While countries are moving quickly to adopt national strategies for AI, there has been little focus on how to measure success in the AI field, particularly in the areas of accountability, fairness, privacy, and transparency. In my organization’s report Artificial Intelligence and Democratic Values, we undertook the first formal assessment of AI policies taking the characteristics associated with democratic societies as key metrics. Our methodology provided a basis to compare national AI policies and practices in the present day. It will provide an opportunity to evaluate progress, as well as setbacks, in the years ahead.

The current efforts to develop the field of AI policy reflect Shneiderman’s intuition.

Information should also be gathered at the organization level. Algorithmic Impact Assessments, similar to data protection impact assessments, require organizations to conduct a formal review prior to deployment of new systems, particularly those that have direct consequences for the opportunities of individuals, such as hiring, education, and the administration of public services. These assessments should be considered best practices, and they should be supplemented with public reporting that makes possible meaningful independent assessment.

In the early days of law and technology, when the US Congress first authorized the use of electronic surveillance for criminal investigations, it also required the production of detailed annual reports by law enforcement agencies to assess the effectiveness of those new techniques. Fifty years later, those reports continue to provide useful information to law enforcement agencies, congressional oversight committees, and the public as new issues arise.

AI policy is still in the early days, but the deployment of AI techniques is accelerating rapidly. Governments and the people they represent are facing extraordinary challenges as they seek to maximize the benefits for economic growth and minimize the risks to public safety and fundamental rights during this period of rapid technological transformation.

Socio-technical imaginaries have never been more important. We concur with Ben Shneiderman in his future vision for artificial intelligence (AI): humans first. But we would go further with a vision of socio-technical systems: human values first. This includes appreciating the role of users in design processes, followed by the identification and involvement of additional stakeholders in a given, evolving socio-technical ecosystem. Technological considerations can then ensue. This allows us to design and build meaningful technological innovations that support human hopes, aspirations, and causes, whereby our goal is the pursuit of human empowerment (as opposed to the diminishment of self-determination) and the use of technology to create the material conditions for human flourishing in the Digital Society.

We must set our sights on the creation of those infrastructures that bridge the gap between the social, technical, and environmental dimensions that support human safety, protection, and constitutive human capacities, while maintaining justice, human rights, civic dignity, civic participation, legitimacy, equity, access, trust, privacy, and security. The aim should be human-centered value-sensitive socio-technical systems, offered in response to local community-based challenges that are designed, through participatory and co-design processes, for reliability, safety, and trustworthiness. The ultimate hope of the designer is to leave the outward physical world a better place, but also to ensure that multiple digital worlds and the inner selves can be freely explored together.

With these ideas in mind, we declare the following statements, affirming shared commitments to meeting common standards of behavior, decency, and social justice in the process of systems design, development, and implementation:

As a designer:

  1. I will acknowledge the importance of approaching design from a user centered perspective.
  2. I will recognize the significance of lived experience as complementary to my technical expertise as a designer, engineer, technologist, or solutions architect.
  3. I will endeavor to incorporate user values and aspirations and appropriately engage and empower all stakeholders through inclusive, consultative, participatory practices.
  4. I will incorporate design elements that accept the role of individuals and groups as existing within complex socio-technical networks, and are sensitive to the relative importance of community.
  5. I will appreciate and design for evolving scenarios, life-long learning and intelligence, and wicked social problems that do not necessarily have a terminating condition (e.g., sustainability).
  6. I will contribute to the development of a culture of safety to ensure the physical, mental, emotional, and spiritual well-being of the end user, and in recognition of the societal and environmental implications of my designs.
  7. I will seek to implement designs that maintain human agency and oversight, promote the conditions for human flourishing, and support empowerment of individuals as opposed to replacement.
  8. I will grant human users ultimate control and decisionmaking capabilities, allowing for meaningful consent and providing redress.
  9. I will seek continuous improvement and refinement of the given socio-technical system using accountability (i.e., auditability, answerability, enforceability) as a crucial mechanism for systemic improvement.
  10. I will build responsibly with empathy, humility, integrity, honor, and probity and will not shame my profession by bringing it into disrepute.

As a stakeholder:

  1. You will have an active role and responsibility in engaging in the design of socio-technical systems and contributing to future developments in this space.
  2. You will collaborate and respect the diverse opinions of others in your community and those involved in the design process.
  3. You will acknowledge that your perspectives and beliefs are continually evolving and refined over time in response to changing realities and real-world contexts.
  4. You will be responsible for your individual actions and interactions throughout the design process, and beyond, with respect to socio-technical systems.
  5. You will aspire to be curious, creative, and open to developing and refining your experience and expertise as applied to socio-technical systems design.
  6. You will appreciate the potentially supportive role of technology in society.

As a regulator:

  1. You will recognize the strengths and limitations of both machines and people.
  2. You will consider the public interest and the environment in all your interactions.
  3. You will recognize that good design requires diverse voices to reach consensus and compromise through dialogue and deliberation over the lifetime of a project.
  4. You will strive to curate knowledge, and to distinguish between truth and meaning; and will not deliberately propagate false narratives.
  5. You will act with care to anticipate new requirements based on changing circumstances.
  6. You will be objective and reflexive in your practice, examining your own beliefs, and acting on the knowledge available to you.
  7. You will acknowledge the need for human oversight and provide mechanisms by which to satisfy this requirement.
  8. You will not collude with designers to install bias or to avoid accountability and responsibility.
  9. You will introduce appropriate enforceable technical standards, codes of conduct and practice, policies, regulations, and laws to encourage a culture of safety.
  10. You will take into account stakeholders who have little or no voice of their own.

Professor, School for the Future of Innovation in Society and the School of Computing and Decision Systems Engineering

Arizona State University

Director of the Society Policy Engineering Collective and the founding Editor in Chief of the IEEE Transactions on Technology and Society

Lecturer, School of Business, Faculty of Business and Law

University of Wollongong, Australia

Coeditor of IEEE Transactions on Technology and Society

Professor of Intelligent and Self-Organising Systems, Department of Electrical & Electronic Engineering

Imperial College London

Editor in Chief of IEEE Technology and Society Magazine

Reaping the Benefits of Agricultural R&D

In “Rekindling the Slow Magic of Agricultural R&D” (Issues, May 3, 2021), Julian M. Alston, Philip G. Pardey, and Xudong Rao focus on a critical issue: the decline in funding of agricultural research and development for the developing world. I believe, however, that they give too much credit to the public response to COVID-19. An equally proactive response to the climate crisis or the crisis of agricultural production/food security would in the end save more lives. That said, there are two further issues to note.

First, despite the scientific and long-term human importance of the Green Revolution, the experience taught us a great deal about the potential negative social consequences of new technologies. It taught us to distinguish development ideology from development technology; to apply the latter carefully in light of local power relations; and to think of rural development not just as raising farm production but also as increasing rural populations’ quality of life. In many countries, for example, wealthy farmers or absentee landowners took advantage of labor-reducing, better-yielding technologies to increase productivity and production, but also to push smallholders and tenants off the land. (Although in part overcome over time, this lament was often heard in India and Africa.) We need to bear these lessons in mind as we go at it again so that new technologies do not have the same disruptive, inequality-increasing impact today as Green Revolution technologies had in earlier decades.

By the same token, a key weakness in the entire system has been its continued (and continuing) dependence on local governments. In a great many cases, rural problems—e.g., low farm-gate prices, lack of access to technology and knowledge, food insecurity itself—are the direct result of government policies. Today, countries are paying the price of such policies, as rural areas empty and onetime farmers give up in the face of increasing personal food insecurity. The loss of these farmers only increases national and international food insecurity in a world where food reserves are shrinking.

Governments and intergovernmental organizations deal with governments, so this may be beyond reach. But to the extent that external research designs can focus on the rural poor majority, or can constrain governments to put investment where it will help those in real need rather than just those in power, or can do both, it would be wonderful.

Cofounder and Codirector

Warm Heart Foundation

Phrao District, Thailand

Reliable Infrastructure

There’s much to applaud in Mikhail Chester’s “Can Infrastructure Keep Up With a Rapidly Changing World?” (Issues, April 29, 2021). I’d like to offer two reservations and one alternative from a different perspective—that of real-time operators in the control rooms of large critical infrastructures such as those for water and energy.

My first reservation is that the premature introduction of so-called innovative software has plagued real-time systemwide operations of key infrastructures for decades. Indeed, as long as there are calls for more and better software and hardware, there will be the need for control operators to come up with just-in-time workarounds for the inevitable glitches. System reliability, at least in large systemwide critical infrastructures, requires managing beyond design and technology.

Second, talk about trade-offs when it comes to the design and operation of these large systems is ubiquitous. Control operators and their wraparound support staff see real-time system demands differently.

As long as there are calls for more and better software and hardware, there will be the need for control operators to come up with just-in-time workarounds for the inevitable glitches.

Reliability in real time is nonfungible: it can’t be traded off against cost or efficiency or anything else when the safe and continuous provision of the critical service matters, right now. No number of economists and engineers insisting that reliability is actually a probability estimate will change the real-time mandate that some systemwide disasters must be prevented from ever happening. That disasters do happen only reinforces the public’s and the operators’ commitment to the precluded-event standard of systemwide reliability.

What do these reservations (and others for that matter) add up to? Remember the proposed congressional legislation—introduced in 2007 and reintroduced in 2020—for the creation of a National Infrastructure Reinvestment Bank to fund major renovations of the nation’s infrastructure sectors? What was needed then, and is needed now, is something closer to a National Academy for Reliable Infrastructure Management to ensure that the tasks and demands of rapidly changing infrastructures match the skills available to manage them in real time.

Coauthor of High Reliability Management (Stanford University Press, 2008) and Reliability and Risk (Stanford University Press, 2016)

AI and Jobs

During my tenure as program manager at the Defense Advanced Research Projects Agency, I watched with admiration the efforts of John Paschkewitz and Dan Patt to explore human-AI teaming, and I applaud the estimable vision they set forth in “Can AI Make Your Job More Interesting?” (Issues, Fall 2020). My intent here is not to challenge the vision or potential of AI, but to question whether the tools at hand are up to the task, and whether the current AI trajectory will get us there without substantial reimagining.

The promise of AI lies in its ability to learn mappings from high dimensional data and transform them into a more compact representation or abstraction space. It does surprisingly well in well-conditioned domains, as long as the questions are simple and the input data don’t stray far from the training data. Early successes in several AI showpieces have brought, if not complacency, a lowering of the guard, a sense that deep learning has solved most of the hard problems in AI and that all that’s left is domain adaptation and some robustness engineering.

But a fundamental question remains—whether AI can learn compact, semantically grounded representations that capture the degrees of freedom we care about. Ask a slightly different question than the one AI was trained on, and one quickly observes how brittle its internal representations are. If perturbing a handful of pixels can cause a deep network to misclassify a stop sign as a yield sign, it’s clear that the AI has failed to learn the semantically relevant letters “STOP” or the shape “octagon.” AI both overfits and underfits its training data, but despite exhaustive training on massive datasets, few deep image networks learn topology, perspective, rotations, projections, or any of the compact operators that give rise to the apparent degrees of freedom in pixel space.
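To make that brittleness concrete, here is a minimal sketch of the kind of pixel-level perturbation alluded to above, using the standard fast gradient sign method against an image classifier. This is an illustrative assumption, not anything from the article: model, x, and y stand in for a hypothetical PyTorch network, an input image batch, and its correct labels.

  import torch
  import torch.nn.functional as F

  def fgsm_perturb(model, x, y, eps=0.01):
      # Nudge every pixel by eps in the direction that most increases the loss.
      x = x.clone().detach().requires_grad_(True)
      loss = F.cross_entropy(model(x), y)
      loss.backward()
      return (x + eps * x.grad.sign()).detach()

  # A perturbation this small is invisible to a person, yet it can flip the
  # network's prediction, e.g., from "stop sign" to "yield sign":
  # x_adv = fgsm_perturb(model, x, y)
  # print(model(x).argmax(dim=1), model(x_adv).argmax(dim=1))

That such a tiny, semantically meaningless nudge can change the answer is exactly the evidence that the network never learned the letters or the octagon.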

Ask a slightly different question than the one AI was trained on, and one quickly observes how brittle its internal representations are.

To its credit, the AI community is beginning to address problems of data efficiency, robustness, reliability, verifiability, interpretability, and trust. But the community has not fully internalized that these are not simply matters of better engineering. Is this because AI is fundamentally limited? No, biology offers an existence proof. But we have failed our AI offspring by not being the responsible parents it needs to learn how to navigate the real world.

Paschkewitz and Patt’s article poses a fundamental question: how does one scale intelligence? Except for easily composable problems, this is a persistent challenge for humans. And this, despite millions of years of evolution under the harsh reward function of survivability in which teaming was essential. Could an AI teammate help us to do better?

Astonishingly, despite the stated and unstated challenges of AI, I believe that the answer could be yes! But we are still a few groundbreaking ideas short of a phase transition. This article can be taken as a call to action to the AI community to address the still-to-be-invented AI fundamentals necessary for AI to become a truly symbiotic partner, for AI to accept the outreached human hand and together step into the vision painted by the authors.

Former Program Manager

Defense Sciences Office

Defense Advanced Research Projects Agency

Technologists—as John Paschkewitz and Dan Patt describe themselves—are to be applauded for their ever-hopeful vision of a “human-machine symbiosis” that will “create more dynamic and rewarding places for both people and robots to work,” and even become the “future machinery of democracy.” Their single-minded focus on technological possibilities is inspiring for those working in the field and arguably necessary to garner the support of policymakers and funders. Yet their vision of a bright, harmonious future that solves the historically intractable problems of the industrial workplace fails to consider the reality seen from the office cubicle or the warehouse floor, and the authors’ wanderings into history, politics, and policy warrant some caution.

While it is heartening to read about a future where humans have the opportunity to use their “unique talents” alongside robots that also benefit from this “true symbiosis,” contemplating that vision through the lens of the past technology-driven decades is a head-scratcher. This was an era that brought endless wars facilitated by the one-sided safety of remote-control battlefields, and though there was an increase in democratic participation, it came in reaction to flagrant, technology-facilitated abuses that provoked outrage about political corruption (real and imagined) and motivated citizens to go to the ballot box—itself thought to be secure only when unplugged from the latest technology. We should also consider Facebook’s promises to unite the global community in harmony, or the Obama administration’s e-government technology initiative expanding access and participation to “restore public faith in political institutions and reinvigorate democracy.”

Their vision of a bright, harmonious future that solves the historically intractable problems of the industrial workplace fails to consider the reality seen from the office cubicle or the warehouse floor.

As to the advances in the workplace, they did produce the marvel of near-instant home delivery of everything imaginable. But those employing the technology also chose to expand a workforce that drew low pay and few benefits and relied on longer hours and multiple jobs to pay the rent—all while transferring ever-greater wealth to the captains of industry, enabling them to go beyond merely acquiring yachts to purchasing rockets for space travel.

Of course, it might be different this time. But it will take more than the efforts of well-meaning technologists to transform the current trajectory of AI-mediated workplaces into a harmonious community. Instead, the future now emerging tilts to the dystopian robotic symbiosis that the Czech author Karel Čapek envisioned a century ago. Evidence tempering our hopeful technologists’ vision is in the analyses of the two articles between which theirs is sandwiched—one about robotic trucks intensifying the sweatshops of long-haul drivers, and the other about how political and corporate corruption flourished under the cover of the “abstract and unrealizable notions” of Vannevar Bush’s Endless Frontier for science and innovation.

For technologists in the labs, symbiotic robots may be a hopeful and inspirational vision, but before we abandon development of effective policy in favor of AI optimization, let us consider the reality of Facebook democracy, Amazonian sweatshops, and Uber wages that barely rise above the minimum. We’d be on the wrong road if we pursue a technologist’s solution to the problems of power and conflict in the workplace and the subversion of democracy.

Professor of Planning and Public Policy, Edward J. Bloustein School

Senior Faculty Fellow, John J. Heldrich Center for Workforce Development

Rutgers University

John Paschkewitz and Dan Patt provide a counterpoint to those who warn of the coming AIpocalypse, which happens, as we all know, when SkyNet becomes self-aware. The authors make two points.

First, attention has focused on the ways that artificial intelligence will substitute for human activities; overlooked is that it may complement them as well. If AI is a substitute for humans, the challenge becomes one of identifying what AI can do better and vice versa. While this may lead to increases in efficiency and productivity, perhaps the greater gains are to be had when AI complements human activity as an intermediary in coordinating groups to tackle large-scale problems.

The degree to which AI will be a substitute or complement will depend upon the activity as well as the new kinds of activities that AI may make possible. Whether the authors are correct, time will judge. Nevertheless, the role of AI as intermediary is worth thinking about, particularly in the context of the economist Ronald Coase’s classic question: what is a firm? One answer is that it is a coordinating device. Might AI supplant this role? It would mean the transformation of the firm from employer to intermediary, as with ride-sharing platforms.

Greater gains are to be had when AI complements human activity as an intermediary in coordinating groups to tackle large-scale problems.

The second point is more provocative. AI-assisted governance anyone? Paschkewitz and Patt are not suggesting that Plato’s philosopher king be transformed into an AI-assisted monarch. Rather, they posit that AI has a role in improving the quality of regulation and government interventions. They provide the following as illustration: “An alternative would be to write desired outcomes into law (an acceptable unemployment threshold) accompanied by a supporting mechanism (such as flowing federal dollars to state unemployment agencies and tax-incentivization of business hiring) that could be automatically regulated according to an algorithm until an acceptable level of unemployment is again reached.”

This proposal is in the vein of the Taylor rule in economics, which was proposed to reduce discretion over how interest rates are set. Under the rule, the policy rate responds to the gap between actual inflation and the desired inflation rate (and, in its original form, to the output gap as well). AI would allow one to implement rules that are more complex than this and contingent on far more factors.
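To make the analogy concrete, here is a minimal sketch, not drawn from the authors, of what rule-based feedback of this kind might look like. The function names, coefficients, and thresholds are all hypothetical, chosen only for illustration.

```python
# Illustrative only: a Taylor-style interest-rate rule alongside a rule-based
# analogue of the authors' unemployment example. Every name, coefficient, and
# threshold here is hypothetical.

def taylor_rate(inflation, target_inflation=0.02, neutral_rate=0.02,
                output_gap=0.0, a_pi=0.5, a_y=0.5):
    """Taylor (1993) form: the policy rate rises when inflation exceeds its
    target or when output exceeds potential."""
    return (neutral_rate + inflation
            + a_pi * (inflation - target_inflation)
            + a_y * output_gap)

def unemployment_support(unemployment, threshold=0.05, base_outlay=0.0,
                         responsiveness=10e9):
    """Hypothetical analogue of the quoted proposal: federal dollars flow to
    state agencies in proportion to how far unemployment sits above an
    'acceptable' threshold written into law, and taper to zero below it."""
    gap = max(unemployment - threshold, 0.0)
    return base_outlay + responsiveness * gap

if __name__ == "__main__":
    print(f"Policy rate at 4% inflation: {taylor_rate(0.04):.3f}")
    print(f"Outlay at 7% unemployment: ${unemployment_support(0.07):,.0f}")
```

The point of the sketch is that the political choices live in the parameters (the target, the threshold, the responsiveness), not in the arithmetic that applies them.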

We have examples of such things “in the small”: for example, whose income tax returns should be audited and how public housing should be allocated. Although these applications have had problems (with bias, for example), the problems are not fundamental, in that we know how to correct for them. However, for things “in the large,” I see three fundamental barriers.

First, as the authors acknowledge, AI-assisted policy does not eliminate political debate but shifts it from the ex post (what should we do now) to the ex ante (what should we do if). It is unclear that we are any better at resolving the second kind of debate than the first. Second, who is accountable for outcomes under AI-assisted policy? Even for the “small” things, this issue is unresolved. Third, the greater the sensitivity of regulation to the environment, the greater the need for accurate measurements of that environment, and the greater the incentive to corrupt those measurements.

George A. Weiss and Lydia Bravo Weiss University Professor

Department of Economics & Department of Electrical and Systems Engineering

University of Pennsylvania

Building a Better Railroad

Carl E. Nash’s article, “A Better Approach to Railroad Safety and Operation” (Issues, Fall 2020), reflects an incomplete understanding of positive train control (PTC) technology, leading to misstatements about the PTC systems that have been put in place. Importantly, Nash’s assertion that full implementation of PTC is in doubt is simply false. The railroad industry met Congress’s December 31, 2020, deadline for implementing PTC systems as mandated by the Rail Safety Improvement Act of 2008.

The act requires that PTC systems be able to safely bring a train to a stop before certain human-error-caused incidents can occur. Recognizing that trains often operate across multiple railroads, the law also requires that each railroad’s PTC system be fully interoperable with the systems of any other railroad over which a train might travel.

Nash believes money is the reason PTC was not completed earlier. It is not. The nation’s largest railroads have invested about $11 billion in private capital to develop this first-of-its-kind technology. PTC had to be designed from scratch to be a failsafe technology capable of operating seamlessly and reliably. This task was unprecedented. It took as long as it did to implement PTC because of the complexity of delivering on the promise of its safety benefits.

Nash falsely equates rail operations with highways and implies that a system similar to Waze or Google Maps could work for rail operations. The two modes are not the same, and the level of precision necessary for a fully functioning PTC system is far more exacting than what helps you find the fastest route home.

“PTC had to be designed from scratch to be a failsafe technology capable of operating seamlessly and reliably. This task was unprecedented.”

Contrary to what Nash would have you believe, the predominant PTC system used by freight railroads and passenger railroads outside the Northeast Corridor does use GPS. Also contrary to what he stated, locomotives that travel across the nation are equipped with nationwide maps of PTC routes. The transponder system that Nash referred to is a legacy system limited to Amtrak’s Northeast Corridor and some commuter railroads operating in the Northeast, and it is used because the transponders were already in place.

Nash asserts that each railroad has its own PTC system. In fact, the freight railroads have collaborated on PTC, with the Association of American Railroads adopting PTC standards to ensure compatibility as locomotives move across the railroad network.

Railroads are proud of their work to make PTC a reality and know that it will make this already safe industry even safer. What Nash does get right, though, is that PTC systems must be dynamic. They will continue to require maintenance and evolve to fulfill additional needs. Meeting the congressional deadline was not the end for PTC; it marked the beginning of a new, disciplined phase that promises to further enhance operations and improve efficiency. Armed with PTC and other cutting-edge technologies, the rail industry is poised to operate safely and efficiently, delivering for us all.

Senior Vice President-Safety and Operations

Association of American Railroads

On September 12, 2008, a Union Pacific Railroad freight train and a Metrolink commuter train collided in Chatsworth, California, resulting in 135 injuries and 25 fatalities. In response, Congress passed the Rail Safety Improvement Act of 2008, which mandated that each Class I railroad (comprising the nation’s largest railroads) and each entity providing regularly scheduled intercity or commuter rail passenger transportation must implement a positive train control (PTC) system certified by the Federal Railroad Administration (FRA). Each railroad was to install a PTC system on: (1) its main line over which 5 million or more gross tons of annual traffic and poison- or toxic-by-inhalation hazardous materials are transported; (2) its main line over which intercity or commuter rail passenger transportation is regularly provided; and (3) any other tracks the secretary of transportation prescribes by regulation or order.

On January 15, 2010, FRA issued regulations that require PTC systems to prevent train-to-train collisions, over-speed derailments, incursions into established work zones, and movements of trains through switches left in the wrong position, in accordance with prescribed technical specifications. The statutory mandate and FRA’s implementing regulations also require a PTC system to be interoperable, meaning the locomotives of any host railroad and tenant railroad operating on the same main line will communicate with and respond to the PTC system, including uninterrupted movements over property boundaries.
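For readers trying to picture what preventing an over-speed derailment or an overrun of movement authority means in practice, here is a minimal conceptual sketch, not any railroad’s or vendor’s actual implementation, of the kind of braking-curve check an onboard PTC unit continuously runs. The constant deceleration rate, safety margin, and function names are illustrative assumptions only.

```python
# Simplified conceptual sketch of a PTC-style enforcement check: compare the
# distance needed to slow to a target speed against the distance remaining to
# the restriction, and apply a penalty brake before the crew can overrun it.
# Real systems model train consist, grade, brake propagation, and
# communication latency; the constants below are illustrative.

def braking_distance_m(speed_mps: float, target_mps: float,
                       decel_mps2: float = 0.5) -> float:
    """Distance to slow from speed to target at a constant deceleration."""
    if speed_mps <= target_mps:
        return 0.0
    return (speed_mps**2 - target_mps**2) / (2.0 * decel_mps2)

def must_enforce(speed_mps: float, target_mps: float,
                 distance_to_restriction_m: float,
                 safety_margin_m: float = 200.0) -> bool:
    """True if the train cannot meet the upcoming restriction without an
    automatic (penalty) brake application."""
    needed = braking_distance_m(speed_mps, target_mps) + safety_margin_m
    return needed >= distance_to_restriction_m

# Example: a train at 30 m/s (~67 mph) approaching a stop target 1,000 m ahead.
if must_enforce(speed_mps=30.0, target_mps=0.0, distance_to_restriction_m=1000.0):
    print("Penalty brake: train cannot stop short of its movement authority.")
else:
    print("Within braking curve; no enforcement needed.")
```

Doing this reliably for every consist, grade, and operating railroad, and making the result interoperable across property boundaries, is part of why nationwide implementation was such a large undertaking.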

FRA has worked with all stakeholders, including host and tenant railroads, railroad associations, and PTC system vendors and suppliers, to help ensure railroads fully implement PTC systems on the required main lines as quickly and safely as possible. Since 2008, the Department of Transportation has awarded $3.4 billion in grant funding and loan financing to support railroads’ implementation of PTC systems.

Currently, 41 railroads are subject to the statutory mandate, including seven Class I railroads, Amtrak, 28 commuter railroads, and five other freight railroads that host regularly scheduled intercity or commuter rail passenger service. Congress set a deadline of December 31, 2020, by which an FRA-certified and interoperable PTC system must govern operations on all main lines subject to the statutory mandate.

As of December 29, 2020, PTC systems govern operations on all 57,536 route miles subject to the statutory mandate. In addition, as required, FRA has certified that each host railroad’s PTC system complies with the technical requirements for PTC systems. Furthermore, railroads have reported that interoperability has been achieved between each applicable host and tenant railroad that operates on PTC-governed main lines. The Federal Railroad Administration congratulates the railroads, particularly their frontline workers, as well as PTC system suppliers/vendors and industry associations, on this transformative accomplishment.

Director, Office of Railroad Systems and Technology

Federal Railroad Administration